Few-shot prompting
Giving a language model a small set of completed examples (input plus desired output) so it infers tone, format, and constraints on its own, instead of you trying to describe them with adjectives alone.
Michal Juhas · Last reviewed May 2, 2026
Who this is for
Recruiters and sourcers who have good artifacts already and want the model to match them without writing a novel of rules.
In practice
- Show, do not lecture: one bad and one good example often teaches faster than ten bullet rules.
- Lock the schema: if you need columns, show a tiny table or JSON in the examples.
- Name the role family: "senior backend EU remote" narrows vocabulary more than "engineer".
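The three bullets above can be sketched as plain string assembly: role framing first, then labeled good/bad examples, then a locked output schema. Everything below (the example texts, the JSON keys, the function name) is illustrative, not a prescribed template.

```python
# Minimal few-shot prompt builder. All example content and schema keys
# here are placeholders, not real candidate data or a required format.

EXAMPLES = [
    {
        "label": "bad",
        "input": "Candidate: senior backend engineer, EU remote",
        "output": "Hi! We have an AMAZING opportunity you can't miss!!!",
    },
    {
        "label": "good",
        "input": "Candidate: senior backend engineer, EU remote",
        "output": (
            "Hi Alex — your Kafka work at a 40-person fintech maps to the "
            "ingestion problems our team is hiring for. Fully remote in the EU. "
            "Open to a 20-minute call?"
        ),
    },
]

# Lock the schema by showing it, not describing it.
SCHEMA = '{"subject": "...", "body": "...", "cta": "..."}'

def build_prompt(role: str, candidate_notes: str) -> str:
    """Assemble: role framing, labeled examples, locked schema, new input."""
    parts = [f"You write recruiter outreach for: {role}."]
    for ex in EXAMPLES:
        parts.append(
            f"Example ({ex['label']}):\n"
            f"Input: {ex['input']}\n"
            f"Output: {ex['output']}"
        )
    parts.append(f"Reply ONLY as JSON matching: {SCHEMA}")
    parts.append(f"Input: {candidate_notes}\nOutput:")
    return "\n\n".join(parts)

prompt = build_prompt("senior backend, EU remote", "Candidate: Jana, 8y Go, Berlin")
print(prompt)
```

Note the role line ("senior backend, EU remote") doing the vocabulary-narrowing work: it costs a few tokens and replaces a paragraph of tone instructions.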
Where it breaks
Homogeneous examples teach bias (for example only one demographic in outreach). Sparse examples teach guesswork. Stale examples teach outdated tone after a brand refresh.
From recent workshops
AI-in-recruiting sessions keep returning to MJDS-style demos: the model mirrors what you put on the canvas. Participants who brought anonymized threads got to credible drafts faster than those who asked for "something compelling" with no anchor.
Few-shot versus long instructions
| Style | When it wins | Watch out |
|---|---|---|
| Few-shot | Tone, format, micro-patterns | Hidden bias in samples |
| Long rubric | Legal must-nots, compliance | Token cost, skim risk |
| Hybrid | Production prompts | Needs an owner to edit both |
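The hybrid row amounts to prepending a short, non-negotiable rubric to the few-shot block, so compliance lives in explicit rules while tone lives in examples. A minimal sketch, assuming made-up rule text and names:

```python
# Hybrid style: hard must-nots (long-rubric style) stacked on top of
# few-shot examples. Rule wording and names below are illustrative.

COMPLIANCE_RUBRIC = [
    "Never reference age, family status, or nationality.",
    "No salary figures without a range approved in the req.",
]

def hybrid_prompt(few_shot_block: str) -> str:
    """Prepend hard rules to an existing few-shot example block."""
    rules = "\n".join(f"- {r}" for r in COMPLIANCE_RUBRIC)
    return f"Hard rules (non-negotiable):\n{rules}\n\n{few_shot_block}"

examples = "Example (good):\nInput: ...\nOutput: ..."
combined = hybrid_prompt(examples)
print(combined)
```

This split also gives the "needs an owner" column a concrete shape: one person edits the rubric list, the examples rotate with the brand.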
Related on this site
- Blog: How to use AI in recruiting
- Tools: Gemini
- Guides: Recruiters
- Membership: Become a member