AI with Michal

AI adoption ladder

A simple maturity map for TA and recruiting teams: from no AI use, through ad-hoc chat and saved instructions, to workflow automation and fully redesigned AI-native processes.

Michal Juhas · Last reviewed May 2, 2026

Who this is for

TA leaders, recruiters, and sourcers who want a shared vocabulary for “where we are” with AI so roadmaps, budgets, and training line up.

In practice

  • Name the rung in kickoffs: intake, sourcing, and scheduling benefit when everyone agrees you are systemizing before you automate sends.
  • Pair the ladder with owners: who maintains instructions, who approves candidate-facing text, who holds API keys for workflow automation.
  • Tie progress to artifacts: shared Markdown, versioned prompt packs, and a short list of automated scenarios beat slide decks alone.
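To make the artifact point concrete, here is a sketch of what a small, versioned Markdown playbook can look like. The file name, owner, rules, and examples are illustrative only, not a standard format:

```markdown
# Outreach playbook (v3, owner: TA ops)

## System instructions
- Write in UK English, maximum 120 words, no emojis.
- Never quote salary figures; link to the published band instead.

## Examples
- Good: "Hi {{first_name}}, your Kafka migration work at {{company}} caught our eye."
- Bad: "Dear candidate, we have an exciting opportunity for you."

## Change log
- v3 (2026-04): tone rules updated after brand refresh.
- v2 (2026-01): added good/bad example pair.
```

Because the file is versioned and has a named owner, drift after a brand or policy change becomes a change-log entry rather than a silent failure.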

Where it breaks


Treating the ladder as a vanity label invites "we are AI-native" claims without hallucination checks or GDPR discipline. Comparing teams by tool count alone ignores prompt quality and data hygiene.

Chatting versus systemizing versus automating

  • Chatting: faster drafts, still manual context. Typical risk: inconsistent tone, no audit trail.
  • Systemizing: saved rules and examples. Typical risk: drift when brand or policy changes.
  • Automating: rows and stages move without retyping. Typical risk: leaked keys, duplicate sends, bad filters.
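The duplicate-sends risk of the automating rung has a simple mechanical guard: filter the batch against a contacted list before anything goes out. This is a minimal Python sketch; the `plan_sends` helper and the data shape are assumptions for illustration, not a real Make or n8n API:

```python
# Hypothetical guard for an automated outreach step: drop candidates
# already contacted, so re-running the scenario cannot double-send.
def plan_sends(candidates, already_contacted):
    """Return only candidates not yet contacted (case-insensitive email match)."""
    seen = {email.lower() for email in already_contacted}
    to_send = []
    for candidate in candidates:
        email = candidate["email"].lower()
        if email in seen:
            continue  # already contacted in an earlier run, or a duplicate row
        seen.add(email)  # also guards against duplicates within this batch
        to_send.append(candidate)
    return to_send

batch = [
    {"name": "Ana", "email": "ana@example.com"},
    {"name": "Ben", "email": "BEN@example.com"},   # contacted earlier
    {"name": "Ana", "email": "ana@example.com"},   # duplicate row
]
print([c["name"] for c in plan_sends(batch, ["ben@example.com"])])  # → ['Ana']
```

The same idea applies to any "rows and stages move without retyping" step: make the run idempotent before you make it frequent.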

Frequently asked questions

What are the usual rungs on the ladder?
A practical version is: no AI, chatting in a general assistant, systemizing (Gems, custom GPTs, Markdown playbooks, Claude projects), automating (sheets, ATS, email via tools like Make or n8n), then AI-native work where the process itself is designed around models and checks.
Why does the order matter?
Automation copies whatever quality you feed it. If system instructions and examples are weak, you scale junk outreach or noisy scores instead of fixing the root cause. Live workshops spend time on chat and systemizing before wiring APIs.
How do I know which rung we are on?
Look for artifacts: is context retyped every time, or saved in a shared doc? Do handoffs live in one person’s chat history? Are there owners for prompts and for automation keys? Honest answers map you more reliably than job titles.
Can we skip straight to automation?
You can stand up a demo fast, but production teams regret skipping stable prompts and Markdown knowledge files. Stabilize one workflow first, then automate it.
Where is AI-native on this map?
It is the top end: workflows assume models, data, and QA hooks by design, not as an afterthought. Read What is AI-native work? and the AI-native glossary entry for the operating style.
What should we do next after reading this?
Pick one recurring task (for example outbound for one role family), write system instructions, then decide if frequency justifies automation. For guided practice, join an AI in recruiting workshop or take Starting with AI: the foundations in recruiting.