AI with Michal

Hallucination

When a language model produces fluent but false or ungrounded details (employers, dates, URLs, policy claims) that look credible until you verify them.

Michal Juhas · Last reviewed May 2, 2026

Who this is for

Anyone who pastes profiles, resumes, or policies into a model and might ship the output externally without a second read.

In practice

  • Ground the task: paste the source text or use retrieval; forbid the model from inventing metrics you did not supply (see the sketch after this list).
  • Ask for uncertainty: prompt for "unknown" instead of guessing when a field is missing.
  • Separate draft from send: keep AI output in a staging doc until a named reviewer approves.
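
A minimal sketch in Python of the first two points, assuming the profile is available as plain pasted text; the function name, field names, and wording are illustrative, not a tested template.

    # Illustrative only: build a grounded prompt that forbids invented facts
    # and asks for "unknown" when a field is missing from the pasted source.
    def build_grounded_prompt(source_text: str, fields: list[str]) -> str:
        field_list = "\n".join(f"- {f}" for f in fields)
        return (
            "Use ONLY the source text below. Do not invent employers, dates, "
            "metrics, or URLs that are not in it.\n"
            "For each field, answer from the source or write exactly 'unknown':\n"
            f"{field_list}\n\n"
            f"SOURCE TEXT:\n{source_text}"
        )

    # Hypothetical profile text and fields, for illustration only.
    profile = "Jana Novak. Backend engineer at ExampleCorp since 2021, based in Brno."
    print(build_grounded_prompt(profile, ["Current employer", "Start year", "Certifications"]))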

Where it breaks

Under time pressure, reviewers skim, and polished Markdown makes a wrong seniority level look credible. Multilingual profiles increase mismatch risk when the model assumes English-only job titles.

From recent workshops

Participants often discover hallucinations first on outbound: wrong office city, wrong product name at the employer. The fix is rarely "a better model"; it is usually verify-before-send plus shorter prompts that restate only verified facts.
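
One way to read "shorter prompts that only restate verified facts" as code: the prompt is assembled from a small set of facts a reviewer has already checked, so the model has nothing unverified to embellish. The candidate, employer, and field names below are made up for illustration.

    # Illustrative sketch: restate only human-verified facts in the prompt.
    verified = {
        "name": "Jana Novak",               # hypothetical candidate, checked against the profile
        "current_employer": "ExampleCorp",  # hypothetical employer
        "office_city": "Brno",
        "role": "Backend Engineer",
    }

    outreach_prompt = (
        "Write a three-sentence outreach message using ONLY these facts, "
        "and make no other claims about the candidate or the employer:\n"
        + "\n".join(f"{k}: {v}" for k, v in verified.items())
    )
    print(outreach_prompt)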

Hallucination risk by task

Task                       Relative risk   Mitigation
Outreach personalization   High            Facts from profile only
Intake summary             Medium          Quote hiring manager
Boolean string             Lower           Still test in tool
Policy interpretation      High            Legal, not LLM

Frequently asked questions

What do hallucinations look like in recruiting work?
Wrong company tenure, invented certifications, broken profile links, or "they led X team" when the profile never said that. Co-pilot style drafting over a blank profile is especially risky.
Why do models hallucinate if they are "trained on the internet"?
They optimize for plausible continuation, not database lookup. Without retrieval or tools, they cannot know your private candidate truth. Even with retrieval, wiring mistakes can still surface wrong snippets.
What is the minimum viable verification loop?
For candidate-facing text: open the source profile or ATS field, spot-check employers and dates, and paste URLs instead of trusting the model's memory. For internal drafts, mark every claim that needs a citation.
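
A rough sketch of the URL part of that loop in Python, assuming the draft and the source are available as plain strings; it only flags URLs the model introduced, it does not confirm that any link works or that dates and employers match.

    import re

    # Rough sketch: flag any URL in the AI draft that was not copied verbatim
    # from the source profile or ATS export. Link-checking stays a manual step.
    URL_RE = re.compile(r"https?://\S+")

    def urls_not_in_source(draft: str, source: str) -> list[str]:
        return [u for u in URL_RE.findall(draft) if u not in source]

    source = "Profile: https://example.com/in/jana-novak"  # hypothetical source text
    draft = "Loved your talk at https://example.com/talks/jana - happy to connect."
    print(urls_not_in_source(draft, source))  # the talk URL is unverified and needs a manual check
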
Does RAG eliminate hallucinations?
It reduces unsupported claims when retrieval is correct, but models can still misread chunks or stitch two people together. See RAG, and treat retrieval as an assist, not an oracle.
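
One sketch of "assist, not oracle": filter retrieved chunks to a single candidate ID before they ever reach the model, so two people's histories cannot be stitched into one summary. The chunk shape, IDs, and retriever output below are stand-ins, not any particular RAG library's format.

    # Stand-in sketch: keep only chunks tagged with the candidate being written about.
    def chunks_for_candidate(retrieved: list[dict], candidate_id: str) -> list[dict]:
        kept = [c for c in retrieved if c.get("candidate_id") == candidate_id]
        if not kept:
            raise ValueError("No chunks for this candidate; do not let the model guess.")
        return kept

    retrieved = [  # hypothetical retriever output mixing two candidates
        {"candidate_id": "cand_0042", "text": "Backend engineer at ExampleCorp since 2021."},
        {"candidate_id": "cand_0077", "text": "Led the data platform team at OtherCo."},
    ]
    print(chunks_for_candidate(retrieved, "cand_0042"))
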
How do workshops talk about this with hiring managers?
Plain language: AI drafts save time on structure and phrasing; humans own facts and fairness. That framing prevents "the computer said so" approvals in debriefs.
When should we avoid generative models entirely?
High-stakes compliance narratives, redundancy selections, or anything you cannot audit. Prefer deterministic templates or official legal review paths there.
