AI with Michal

Prompt chain

A sequence of model calls (or human checkpoints) where each step consumes the previous step's output, for example intake notes → outline → JD → outreach, with review gates between steps.

Michal Juhas · Last reviewed May 2, 2026

Who this is for

Recruiters and TA ops people turning messy intakes into repeatable JD and outreach packages without one 4,000-token gamble.

In practice

  • Define inputs per step: what fields must exist before the next model call runs.
  • Freeze intermediate artifacts: save the outline as Markdown before you ask for full prose.
  • Add a human gate before anything with legal or DEI sensitivity.
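The three practices above can be sketched in plain Python. The required field names, the artifact name, and the `model_call` / `approve` callables are illustrative assumptions, not a fixed API; swap in whatever client and review process your team actually uses.

```python
import json
from pathlib import Path

# Assumed intake contract: these names are examples, not a standard.
REQUIRED_INTAKE_FIELDS = ["role_title", "location", "seniority", "must_haves"]

def validate_intake(intake: dict) -> list[str]:
    """Return the intake fields that are missing or empty (step-zero check)."""
    return [f for f in REQUIRED_INTAKE_FIELDS if not intake.get(f)]

def freeze_artifact(name: str, text: str, outdir: Path) -> Path:
    """Save an intermediate artifact as Markdown before the next model call."""
    path = outdir / f"{name}.md"
    path.write_text(text, encoding="utf-8")
    return path

def chain(intake: dict, model_call, approve, outdir: Path) -> str:
    """Intake -> outline -> JD, with a defined-input check and a human gate."""
    missing = validate_intake(intake)
    if missing:
        # Do not run the next model call until the inputs exist.
        raise ValueError(f"Ask the hiring manager first; missing: {missing}")
    outline = model_call(f"Outline a JD from this intake: {json.dumps(intake)}")
    freeze_artifact("outline", outline, outdir)  # frozen before full prose
    jd = model_call(f"Write the full JD from this outline:\n{outline}")
    if not approve(jd):  # human gate before anything candidate-facing
        raise RuntimeError("JD rejected at gate; fix and rerun this step only")
    return jd
```

In a manual chain, `approve` can be as simple as a person reading the draft and answering yes or no; the point is that the gate is a named step, not an afterthought.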

Where it breaks

Skipping clarification, chaining too many creative steps without checks, or mixing languages mid-chain without telling the model.

From recent workshops

Live sessions on AI in recruiting keep coming back to the intake → JD → outreach flow as the canonical teaching chain. Participants see that fixing the second prompt does little if the hiring manager never answered location and seniority in step zero.

Chain versus single-shot

Approach             Best for                 Risk
Single-shot          Tiny tasks               Hidden assumptions
Chain                Multi-artifact hiring    More handoffs to own
Chain + automation   High volume              API and GDPR review

Frequently asked questions

Why use a chain instead of one mega-prompt?
Smaller steps are easier to debug, cheaper on tokens, and let you insert human approval between candidate-facing stages. When one step drifts, you do not throw away the whole run.
What is a simple recruiting chain that works in real teams?
Hiring-manager intake as a bullet list, clarifying questions back to the HM, a draft JD, HM edits, then outbound variants. Workshops often demo clarify-before-draft so the model does not invent scope.
How does this relate to workflow automation?
Chains can stay manual in chat, or you automate handoffs with tools like Make or n8n once the prompts stabilize. See workflow automation for when that is worth it.
Where do structured outputs fit?
Between steps: step one emits JSON or a table (score, rationale), step two writes prose from those fields. That reduces format drift. See structured output.
What breaks first in prompt chains?
Ambiguous inputs early in the chain. Garbage in step one compounds politely because the model tries to be helpful. Invest in the first step’s checklist.
Who owns maintenance when the chain lives in automation?
Name a product owner for prompts and a separate owner for credentials and data mapping. Unowned chains rot when APIs change.

← Back to AI glossary in practice