AI with Michal

Few-shot prompting

Giving a language model a small set of completed examples (input plus desired output) so it infers tone, format, and constraints instead of you describing them only with adjectives.

Michal Juhas · Last reviewed May 2, 2026

Who this is for

Recruiters and sourcers who have good artifacts already and want the model to match them without writing a novel of rules.

In practice

  • Show, do not lecture: one bad and one good example often teaches faster than ten bullet rules.
  • Lock the schema: if you need columns, show a tiny table or JSON in the examples.
  • Name the role family: "senior backend EU remote" narrows vocabulary more than "engineer".

Where it breaks

Homogeneous examples teach bias (for example, outreach samples that all address one demographic). Sparse examples teach guesswork. Stale examples teach outdated tone after a brand refresh.

From recent workshops

AI-in-recruiting sessions keep returning to MJDS-style demos: the model mirrors what you put on the canvas. Participants who brought anonymized threads got to credible drafts faster than those who asked for "something compelling" with no anchor.

Few-shot versus long instructions

| Style | When it wins | Watch out |
| --- | --- | --- |
| Few-shot | Tone, format, micro-patterns | Hidden bias in samples |
| Long rubric | Legal must-nots, compliance | Token cost, skim risk |
| Hybrid | Production prompts | Needs an owner to edit both |

Frequently asked questions

How many examples is "few" in practice?
Usually two to five pairs. Diminishing returns kick in fast; quality of examples matters more than count. In workshops we see sourcers paste three good messages and get usable fourth variants in seconds.
Where do few-shot prompts help recruiting most?
Outbound sequences, intake summaries, scorecard rationales, and JD cleanup where you already have gold-standard rows in a sheet. Pair with how to write better AI prompts for structure.
What is the main downside of few-shot prompting?
You can overfit to the examples: the model copies quirks you did not mean to canonize, or it ignores edge cases your samples never covered. Refresh examples when your bar moves.
How is this different from a saved system prompt or Gem?
Few-shot is the teaching set inside a single call or thread turn. A Gem or custom GPT stores that teaching persistently as system instructions. They stack: persistent system rules plus fresh few-shots for a new req.
Can few-shot reduce hallucinations?
It can reduce style drift and missing fields, but it does not fix factual invention. You still verify employers, dates, and URLs. See the hallucination entry for checks.
Which tools support few-shot well?
Any chat UI that accepts long prompts, plus Claude and ChatGPT when you pin examples at the top. For a guided path, join a workshop or take Starting with AI: the foundations in recruiting.
Should I anonymize real candidate examples?
Yes. Strip names, emails, and identifiable employers before you paste into third-party models. Treat few-shot packs like internal documentation with the same retention rules as your CRM.
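Stripping names and emails before pasting can be partly automated. A hedged sketch of a pre-paste scrub, where the regexes and placeholder tokens are assumptions and deliberately incomplete: names and employers still need a manual pass or an explicit list, as below.

```python
import re

# Illustrative redaction pass for candidate threads used as few-shot examples.
# Patterns cover emails, phone numbers, and names you list explicitly; they are
# not a complete PII scrubber.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text, known_names=()):
    """Replace emails, phone numbers, and listed names with placeholders."""
    text = EMAIL.sub("[email]", text)
    text = PHONE.sub("[phone]", text)
    for name in known_names:
        text = re.sub(re.escape(name), "[name]", text, flags=re.IGNORECASE)
    return text

sample = "Hi Priya Sharma, reach me at priya@example.com or +44 7700 900123."
clean = redact(sample, known_names=["Priya Sharma"])
print(clean)
```

Running the scrub before every paste keeps the few-shot pack under the same retention discipline as the CRM rows it came from.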
