AI with Michal

Large language model (LLM)

A neural network trained to predict the next token across broad text. Wrapped in product guardrails, it powers chat assistants, drafting, classification, and tool-using agents.

Michal Juhas · Last reviewed May 2, 2026

Who this is for

TA leaders choosing defaults and practitioners who need a shared vocabulary with engineering, legal, and vendors.

In practice

  • Match model depth to risk: public job ads versus candidate-specific messages get different review bars.
  • Teach the ladder: chat for exploration, saved prompts for repeat work, APIs only when the process is stable.
  • Instrument quality: spot-check weekly, not only at go-live.
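The first and third bullets can be sketched together: route each content type to a review bar, then sample this week's drafts for spot-checks. A minimal sketch; the tier names and sample rates below are illustrative assumptions, not values from this article.

```python
import random

# Hypothetical review tiers; names and rates are illustrative assumptions.
REVIEW_BARS = {
    "public_job_ad":     {"human_review": "spot-check", "sample_rate": 0.10},
    "candidate_message": {"human_review": "always",     "sample_rate": 1.00},
    "internal_notes":    {"human_review": "optional",   "sample_rate": 0.05},
}

def drafts_to_review(drafts, content_type, seed=None):
    """Sample this week's drafts for QA according to the content type's bar."""
    bar = REVIEW_BARS[content_type]
    if bar["sample_rate"] >= 1.0:
        return list(drafts)  # candidate-facing text: review everything
    rng = random.Random(seed)
    k = max(1, round(len(drafts) * bar["sample_rate"]))
    return rng.sample(list(drafts), k)

ads = [f"ad-{i}" for i in range(50)]
print(len(drafts_to_review(ads, "public_job_ad", seed=1)))  # 5 of 50 sampled
print(len(drafts_to_review(ads, "candidate_message")))      # all 50 reviewed
```

The point is the shape, not the numbers: candidate-specific messages always get a human pass, while public job ads get a recurring sample, so quality checks happen weekly rather than only at go-live.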

Where it breaks

Teams buy "AI access" but skip knowledge bases, so every req reinvents prompts. Or they jump to APIs before prompts are stable and blame the model for bad wiring.

From recent workshops

Recruiting workshops emphasize proficiency ladders: skills versus one-off scripts, and when "good enough" drafting is enough versus when you need deterministic code. That is an LLM strategy conversation, not a model pick list.

Chat versus API depth

| Layer | You get | You still need |
| --- | --- | --- |
| Chat UI | Fast drafts | Copy-paste hygiene |
| Saved skills / Gems | Consistency | Owners and updates |
| API + automation | Scale | Keys, monitoring, GDPR |
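The middle rung of the table, a "saved skill," is at minimum a versioned prompt template with a named owner. A minimal sketch, assuming a homegrown dict schema; the fields and example text are illustrative, not a vendor format.

```python
from string import Template

# Illustrative "saved skill" record: a versioned template plus an owner.
OUTREACH_SKILL = {
    "owner": "ta-ops",
    "version": "2026-05",
    "template": Template(
        "Draft a short, friendly outreach message for a $role candidate.\n"
        "Highlight: $highlight\n"
        "Tone: professional, no salary claims, under 120 words."
    ),
}

def render_skill(skill, **fields):
    """Fill the saved template; substitute() raises KeyError on missing fields."""
    return skill["template"].substitute(**fields)

prompt = render_skill(OUTREACH_SKILL, role="data engineer",
                      highlight="hybrid work and a modern stack")
print(prompt.splitlines()[0])
```

Because the template lives in one owned record instead of in each recruiter's chat history, updates propagate once, which is the "Owners and updates" cost the table names.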

Frequently asked questions

What should recruiters actually know about how an LLM works?
Enough to calibrate trust: it completes patterns, it does not query your database unless wired to tools, and it has a context window limit. That mental model prevents magical thinking in debriefs.
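The context-window part of that mental model can be made concrete with a rough guard before pasting long candidate histories into a chat. A minimal sketch; the 4-characters-per-token heuristic and the window size are illustrative assumptions, since real tokenizers and limits vary by model.

```python
# Rough context-window guard; heuristic and limits are assumptions.
def rough_token_count(text: str) -> int:
    return max(1, len(text) // 4)  # crude English-text approximation

def fits_context(prompt: str, history: list[str], window_tokens: int = 8000,
                 reserve_for_reply: int = 1000) -> bool:
    """Check the prompt plus pasted history leaves room for the model's reply."""
    used = rough_token_count(prompt) + sum(rough_token_count(m) for m in history)
    return used + reserve_for_reply <= window_tokens

print(fits_context("Summarize this CV.", ["short note"] * 3))  # True
print(fits_context("x" * 40000, []))  # ~10k tokens, over an 8k window: False
```

Even this crude check captures the debrief-proof intuition: past a certain paste size, material silently falls out of scope, and the model is not "remembering" your database in the first place.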
How do I choose between vendors (OpenAI, Anthropic, Google, and others)?
Start from product needs: EU data handling, SSO, audit logs, and whether you need API access for automation. Model benchmarks help, but workflow fit and governance beat a leaderboard for TA teams.
Is a bigger model always better for recruiting text?
Not always. Smaller models with a strong prompt, retrieval, and few-shot examples often beat a frontier model with a vague ask. Cost and latency matter when you scale to many reqs.
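The "strong prompt plus few-shot examples" part of that answer can be sketched as simple prompt assembly. The role notes and ad texts below are invented for illustration; the point is the structure, not the copy.

```python
# Few-shot prompt assembly: pair rough notes with finished ads as examples.
def few_shot_prompt(examples, task):
    lines = ["Rewrite rough role notes into a clear job-ad intro."]
    for notes, ad in examples:
        lines += [f"Notes: {notes}", f"Ad: {ad}"]
    lines += [f"Notes: {task}", "Ad:"]
    return "\n".join(lines)

EXAMPLES = [
    ("Backend dev, fintech, Berlin, hybrid",
     "Join a Berlin fintech building payments at scale, hybrid-friendly."),
    ("Recruiter, agency, remote",
     "Remote recruiting role with an agency that values candidate experience."),
]
p = few_shot_prompt(EXAMPLES, "Data analyst, e-commerce, Prague")
print(p.count("Notes:"))  # 3: two examples plus the new task
```

With two or three curated examples, a smaller, cheaper model has a concrete pattern to complete, which is often worth more than raw model size for repetitive drafting across many reqs.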
What is the difference between an LLM and automation?
The LLM proposes text or labels; automation (Make, n8n, webhooks) moves data between systems. Workshops separate "just prompting" from skills in project folders and APIs because integration depth changes risk and GDPR surface area.
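The propose/move split above can be shown in two functions: an LLM step that only suggests a label, and an automation step that packages it for a webhook. A minimal sketch under stated assumptions: the model call is stubbed with a keyword check, and the payload fields are invented, not an ATS, Make, or n8n schema.

```python
import json

# LLM step: stands in for a model call that classifies a candidate note.
def llm_step(candidate_note: str) -> str:
    return "follow_up" if "call back" in candidate_note.lower() else "archive"

# Automation step: builds the JSON a webhook would forward between systems.
def automation_step(candidate_id: str, label: str) -> str:
    return json.dumps({
        "candidate_id": candidate_id,
        "proposed_label": label,
        "needs_human_review": True,  # keep a human gate before CRM writes
    })

body = automation_step("c-123", llm_step("Asked us to call back in June."))
print(body)
```

Keeping the two steps separate is what limits the GDPR surface area the answer mentions: only the automation step touches identifiers and external systems, so that is where keys, logging, and review gates belong.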
Where do maturity models help?
They align TA, HRBPs, and sourcing on how deep you go this quarter. Read AI adoption maturity levels and map owners per stage.
Which on-site tools should we standardize on first?
Most teams pick one chat assistant plus one automation path. Compare ChatGPT, Claude, and n8n in the directory before you fork into five stacks.
Do we need an engineer to use LLMs responsibly?
For chat-first workflows, no. For synced CRM writes and webhooks, yes, or at least a strong ops partner. The Starting with AI: the foundations in recruiting course stays on the recruiter-native side first.
