Structured output
Asking a language model to return machine-parseable shapes (JSON, CSV columns, or rigid tables) instead of prose alone, so you can sort, filter, and automate the next step reliably.
Michal Juhas · Last reviewed May 2, 2026
Who this is for
Recruiters and ops people who want sortable screening assists and cleaner handoffs to automation.
In practice
- Show one complete JSON example in the prompt, then ask for more rows.
- Constrain enums (senior, mid, junior, unknown) instead of free text wherever possible.
- Write rationales as quotes from the resume snippet when you need auditability (a prompt sketch follows this list).
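A minimal prompt sketch in Python combining all three habits; the schema, field names, and wording are illustrative, not a fixed contract:

```python
# A prompt sketch pairing one worked JSON example with a small enum and
# quoted rationales. Every field name here is illustrative.
EXAMPLE = """{
  "candidate": "J. Doe",
  "seniority": "senior",
  "fit_score": 8,
  "rationale": "\\"Led a 4-person data team\\" (resume, line 12)"
}"""

prompt = (
    "Screen each resume snippet below and return one JSON object per candidate.\n"
    "Use exactly the keys in the example. seniority must be one of: "
    "senior, mid, junior, unknown.\n"
    "Quote the resume verbatim in every rationale.\n\n"
    f"Example:\n{EXAMPLE}\n\nResume snippets:\n..."
)
```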
Where it breaks
Models fill every field even when the data is missing, unless you prompt them to return an explicit null or unknown. JSON with trailing commas also breaks strict parsers.
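A sketch of the parser-side guardrail, assuming Python downstream; the field names are hypothetical:

```python
import json

raw = '{"fit_score": 7, "seniority": "senior",}'  # trailing comma from the model

try:
    record = json.loads(raw)
except json.JSONDecodeError as err:
    # Python's strict parser rejects trailing commas: re-prompt instead of guessing
    print(f"Reject and re-prompt: {err}")
    record = {}

# Make gaps explicit so missing data cannot masquerade as an answer
for field in ("fit_score", "seniority", "rationale"):
    record.setdefault(field, "unknown")
```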
From recent workshops
Automation-heavy sessions love the fit-score-plus-rationale pattern because it tees up filters. The cautionary tale is the same: API keys, storage, and GDPR responsibilities do not disappear because the column looks tidy.
Prose versus structured handoff
| Output | Human readability | Machine readability |
|---|---|---|
| Long paragraph | Easy | Fragile |
| Table in Markdown | Medium | Medium |
| JSON / CSV | Harder to skim | Robust |
Related on this site
- Blog: AI candidate screening
- Blog: How to write better AI prompts
- Tools: n8n
- Guides: Recruiters
- Course: Starting with AI: the foundations in recruiting
Frequently asked questions
Why bother with JSON for recruiting if we are not engineers?
Because downstream tools (Sheets, Make, n8n, ATS imports) need predictable fields. A score, a short rationale, and three tags parse cleanly; a blob paragraph does not.
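A sketch of that handoff, assuming one model reply per candidate; the field names are illustrative:

```python
import csv
import json
import sys

# One screening record as a model might return it (field names illustrative)
raw = ('{"candidate": "A. Rivera", "fit_score": 8, '
       '"rationale": "Quoted: 5 yrs Python", "tags": ["python", "remote", "senior"]}')
record = json.loads(raw)

writer = csv.writer(sys.stdout)  # swap sys.stdout for a file or Sheets import
writer.writerow(["candidate", "fit_score", "rationale", "tags"])
writer.writerow([
    record["candidate"],
    record["fit_score"],
    record["rationale"],
    ";".join(record["tags"]),  # flatten the tag list into one cell
])
```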
What is a minimal JSON shape for screening assist?
Fields like fit_score (integer), confidence (low, medium, or high), must_have_hits, gaps, and next_question. Keep enums small so reviewers can scan.
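A sketch of that shape with a matching check, assuming a Python step before import; adjust field names to your rubric:

```python
import json

ALLOWED_CONFIDENCE = {"low", "medium", "high"}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means safe to import."""
    problems = []
    if not isinstance(record.get("fit_score"), int):
        problems.append("fit_score must be an integer")
    if record.get("confidence") not in ALLOWED_CONFIDENCE:
        problems.append("confidence must be low, medium, or high")
    for key in ("must_have_hits", "gaps"):
        if not isinstance(record.get(key), list):
            problems.append(f"{key} must be an array")
    if not isinstance(record.get("next_question"), str):
        problems.append("next_question must be a string")
    return problems

sample = json.loads(
    '{"fit_score": 6, "confidence": "medium", "must_have_hits": ["SQL"], '
    '"gaps": ["no Kafka"], "next_question": "Which pipelines did you own?"}'
)
print(validate(sample))  # [] means the record is clean
```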
How does this connect to workshop demos with scoring 0 to 10?
Live sessions show sheets with numeric scores plus rationales, then apply filter thresholds. Structured output is how you get those columns without hand-typing every cell, but humans still own what happens after the filter.
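The filter step itself is small; a sketch, assuming scored rows that have already been validated:

```python
# Threshold filter over scored rows; the cutoff is a team decision, not a default
rows = [
    {"candidate": "A", "fit_score": 8, "rationale": "Quoted: led migrations"},
    {"candidate": "B", "fit_score": 4, "rationale": "Quoted: internship only"},
]
THRESHOLD = 7
shortlist = [row for row in rows if row["fit_score"] >= THRESHOLD]
# Humans review the shortlist; the filter only orders the queue
```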
What breaks with structured output?
Schema drift (the model invents new keys), overconfident scores without evidence, and silent truncation on long inputs. Validate JSON with a quick script or Sheets formula before you trigger automations.
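A sketch of a drift gate, assuming the minimal shape above; anything unexpected blocks the automation:

```python
import json

EXPECTED_KEYS = {"fit_score", "confidence", "must_have_hits", "gaps", "next_question"}

def gate(raw: str):
    """Return a record only if it parses and matches the schema exactly."""
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed JSON: re-prompt rather than automate
    invented = set(record) - EXPECTED_KEYS
    missing = EXPECTED_KEYS - set(record)
    if invented or missing:
        return None  # schema drift: route to human review
    return record
```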
Is structured output the same as a scorecard?
Scorecards define what to measure; structured output is the transport. Pair both so the model fills rubric-aligned fields. See scorecard.
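A sketch of the pairing, assuming a hypothetical three-criterion scorecard; the criteria names are illustrative:

```python
# Each scorecard criterion becomes one required field, so the model's
# output stays aligned with what the rubric says to measure.
RUBRIC = {
    "python_depth": "integer 0-10 with a quoted evidence line",
    "stakeholder_comm": "integer 0-10 with a quoted evidence line",
    "domain_fit": "integer 0-10 with a quoted evidence line",
}

fields = ", ".join(f'"{name}": <{rule}>' for name, rule in RUBRIC.items())
prompt_fragment = f"Return JSON with exactly these fields: {{{fields}}}"
print(prompt_fragment)
```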
Which tools support JSON mode well?
Most major model APIs now offer a JSON or structured-output mode, and the downstream tools covered on this site (Sheets, Make, n8n) can all consume the result. Treat JSON mode as a syntax guarantee, not a quality guarantee: it keeps the braces balanced but will not stop invented values, so keep a validation step regardless of vendor.
When should we avoid automation on structured scores?
Until calibration is done and legal agrees on how scores are used. Structured fields are not neutral; they speed up whatever bias the rubric encodes.