AI with Michal

GPT in recruiting

Using OpenAI's Generative Pre-trained Transformer model family (GPT-4o, GPT-4, and successors) as the AI layer behind recruiting work, whether accessed through the ChatGPT interface, the OpenAI API, Azure OpenAI Service, or embedded inside ATS, sourcing, and scheduling tools that run GPT without advertising it.

Michal Juhas · Last reviewed May 5, 2026

What is GPT in recruiting?

GPT (Generative Pre-trained Transformer) is OpenAI's model family: the technology behind ChatGPT and the engine that powers AI features in dozens of third-party ATS platforms, sourcing tools, and screening systems. In recruiting, the term covers the full range of ways this model family shows up in a hiring workflow, from a recruiter typing directly into ChatGPT to a vendor's AI-powered button that calls the OpenAI API in the background.

The term is broader than ChatGPT for recruiters, which describes the chat interface specifically. GPT sits within the wider AI for recruiters category alongside other model families: Claude in recruiting, Gemini in hiring, and DeepSeek in recruiting each address a different underlying model from a different provider.

Illustration: GPT model node powering both a direct chat interface and embedded ATS and sourcing tool tiles, outputting a structured draft card through a human review gate before reaching an ATS pipeline and a candidate message channel

In practice

  • A TA coordinator clicks the "Generate JD" button in their ATS and receives a first draft in three seconds. The vendor licenses GPT-4o through the OpenAI API; the coordinator does not know which model version is running unless they ask.
  • A sourcer pastes a role brief into ChatGPT (running GPT-4o on the Teams tier) and asks for five Boolean search strings. They review each string for false positives before running them in LinkedIn Recruiter.
  • A TA lead asks a new sourcing platform vendor: "Which GPT model version does your candidate ranking use, and what is your data processing agreement with OpenAI?" because any AI feature that touches candidate records needs a documented legal basis.
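
The review step in the sourcing example above can be partly automated. This is a minimal sketch (the function name and checks are illustrative, not from any tool mentioned here) that flags structural problems in a GPT-generated Boolean string before it goes into LinkedIn Recruiter:

```python
def boolean_string_issues(query: str) -> list[str]:
    """Flag common structural problems in a generated Boolean search string."""
    issues = []
    depth = 0
    for ch in query:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if depth < 0:
            issues.append("closing parenthesis without a matching opener")
            break
    if depth > 0:
        issues.append("unclosed parenthesis")
    if query.count('"') % 2 != 0:
        issues.append("unpaired double quote")
    return issues

print(boolean_string_issues('("talent acquisition" OR recruiter) AND (SaaS'))
# → ['unclosed parenthesis']
```

A structural check like this catches truncated output; it does not replace the human review for false positives that the example describes.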

Quick read, then how hiring teams use it

This is for recruiters, sourcers, TA, and HR partners who need the same vocabulary in debriefs, vendor calls, and policy reviews. Skim the first section when you need a fast shared picture. Use the second when you are deciding how GPT fits your daily workflow, your ATS, or your sourcing stack.

Plain-language summary

  • What it means for you: GPT is the model family OpenAI builds. When you use ChatGPT or click an AI button in a recruiting tool, GPT is often what runs behind it. Understanding this helps you ask the right questions about data handling and output quality.
  • How you would use it: Directly through ChatGPT for ad hoc drafting, or indirectly through ATS and sourcing tools that embed GPT. In either case, treat the output as a draft that requires a human review before it reaches a candidate or an ATS record.
  • How to get started: Pick one text-heavy task you repeat weekly: job description drafts, outreach messages, or call summaries. Write a structured prompt for it, run it for two weeks alongside your normal process, and note where GPT saves time and where editing overhead is high.
  • When it is a good time: When you have a stable, repeatable task, a documented prompt, and 60 seconds to review the output. Not when the process changes weekly or when the output would reach a candidate without a review step.

When you are running live reqs and tools

  • What it means for you: GPT may be running inside tools you already pay for, not just in ChatGPT. Understanding which model version and which data routing your vendors use is part of responsible stack management.
  • When it is a good time: After you have confirmed your vendor's data processing agreement covers candidate personal data and you have written two or three stable prompts for the task. Before that point, the review overhead can match or exceed the time saved.
  • How to use it: Open each ChatGPT session with a system-instructions-style message: your company name, the role, tone expectations, and any must-avoid phrases. For vendor-embedded GPT features, ask the vendor for the model version and the prompt template they use, then test after any platform update. Log which model version produced each output.
  • How to get started: Check whether your team uses ChatGPT Free, Plus, Teams, or Enterprise. Move candidate data only to Teams or Enterprise (signed DPA in place). For Azure OpenAI deployments inside vendor tools, request the data processing agreement before submitting named candidate documents. Review AI outreach drafting for the outreach-specific prompt pattern.
  • What to watch for: Hallucinations on company names, dates, and credentials when you ask GPT to research rather than draft from provided context. GDPR risk if personal candidate data enters a consumer-tier account. Model drift when OpenAI releases a new GPT version and vendor tools silently upgrade, changing previously reliable prompt behavior.

Where we talk about this

On AI with Michal live sessions, GPT comes up in the first conversation because it is the model family most participants are already using through ChatGPT before they join. The AI in recruiting track covers model tiers, prompt structure, and data handling obligations. The sourcing automation track moves toward the point where stable GPT prompts get embedded in light automations via API or no-code tools. If you want the full room conversation with a practitioner cohort, start at Workshops and bring a prompt you are already using so the feedback is grounded in real output.

Around the web (opinions and rabbit holes)

Third-party creators move fast on this topic. Treat these as starting points, not endorsements, and double-check anything before you wire candidate data through a workflow you found in a tutorial.

YouTube

  • GPT recruiting prompts for practitioner walkthroughs of prompt-to-draft flows across GPT-4o and earlier model versions, including before-and-after comparisons of output quality
  • ChatGPT GPT-4o sourcing Boolean search for sourcing-specific prompt patterns and Boolean string generation demos used by full-cycle recruiters
  • Azure OpenAI GDPR HR recruiting for compliance-focused discussions on data residency options and what enterprise deployment changes for teams processing personal candidate data

Reddit

  • r/recruiting: GPT ChatGPT surfaces candid practitioner feedback on which GPT tasks save time, which produce slop, and where human editing still carries the load
  • r/humanresources: ChatGPT GDPR covers the compliance side, including threads on enterprise tiers, DPA obligations, and how HR teams document AI use for audits
  • r/RecruitmentAgencies: AI drafting GPT for agency-side views on volume, personalisation limits, and client expectations when GPT drafting is part of the delivery model

Quora

  • How is GPT used in recruiting? collects practitioner answers from sourcers and TA leaders (read critically; quality varies and not all contributors have deep recruiting backgrounds)

GPT versus other AI models for recruiting

| Dimension | GPT (OpenAI) | Claude (Anthropic) | Gemini (Google) |
| --- | --- | --- | --- |
| Context window | GPT-4o: 128K tokens | Up to 200K tokens | Up to 1M tokens (Gemini 1.5+) |
| Enterprise data tier | ChatGPT Enterprise, Teams, or API with DPA | Claude for Work with DPA | Google Workspace with DPA |
| Azure deployment option | Yes (Azure OpenAI Service) | No | No (Google Cloud Vertex AI instead) |
| ATS integration | Manual copy-paste or API; some vendors embed it | Manual copy-paste | Manual copy-paste or Workspace sidebar |
| Audit trail | None by default; your team must create one | None by default | None by default |
| Best fit | Broad task range; most vendor tool integrations | Long multi-document tasks; large context needs | Google Workspace users; very long context |

Frequently asked questions

What is GPT and how is it different from ChatGPT?
GPT stands for Generative Pre-trained Transformer, OpenAI's foundational model family. ChatGPT is the chat interface built on top of GPT models. When recruiters open chat.openai.com, they are using ChatGPT, which runs GPT-4o by default on paid tiers. The distinction matters because GPT also powers third-party recruiting tools through OpenAI's API: many sourcing platforms, ATS writing assistants, and scheduling copilots license the same model family without surfacing the OpenAI branding. Knowing this helps when vendors claim AI-powered features: ask which model version and whether OpenAI's data processing terms apply to your candidate records.
Which GPT model versions matter for recruiting, and should teams care?
GPT-3.5-turbo, GPT-4, and GPT-4o represent generational jumps in reasoning quality and instruction-following. For recruiting tasks, the practical difference is reliability: GPT-4o follows structured output formats more consistently, which matters when the model needs to fill a scorecard template or return structured data for an ATS integration. GPT-3.5-turbo is cheaper and faster, but drifts more on nuanced instructions. The relevant risk for teams is vendor-side model swaps: a tool built on GPT-3.5-turbo that silently upgrades to GPT-4o may produce different behavior. Log which model your prompts were written for and test after any vendor update.
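
The log-and-test habit described above can start as a one-function logger that records a pinned model identifier with every output. A minimal sketch; the field names are illustrative, and the dated snapshot string is an example of OpenAI's snapshot naming, so confirm the current identifier before pinning:

```python
import datetime
import json

# A dated snapshot pins behavior; a bare alias like "gpt-4o" floats with upgrades.
PINNED_MODEL = "gpt-4o-2024-08-06"  # example snapshot name; verify against OpenAI's model list

def log_output(task: str, prompt_version: str, output: str) -> dict:
    """Record which model and prompt version produced a given draft."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": PINNED_MODEL,
        "task": task,
        "prompt_version": prompt_version,
        "output_preview": output[:120],
    }
    # In practice, append to a shared log your team can diff after a vendor update.
    print(json.dumps(record))
    return record
```

When a vendor tool swaps models underneath you, a log like this is the baseline that tells you whether the prompt or the model changed.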
How do GPT-powered features end up inside the recruiting tools teams already use?
Sourcing platforms, ATS writing assistants, interview scheduling copilots, and resume screeners often license GPT through the OpenAI API and surface the capability under their own brand name. When a job description tool offers a generate button or a sourcing platform ranks candidates by relevance, there is often a GPT model handling the text behind the scenes. This matters for three reasons: candidate personal data you enter may be processed under OpenAI's API terms rather than staying on the vendor's own servers; output quality depends on the vendor's prompt design rather than your own; and model upgrades or deprecations happen on OpenAI's timeline, not the vendor's release schedule.
Is GPT compliant with GDPR for processing candidate data?
Compliance depends on the tier and deployment model. Consumer ChatGPT tiers (Free and Plus) can use conversation data for model training, which creates significant GDPR risk for candidate personal data. The ChatGPT Enterprise, Teams, and API tiers contractually exclude customer data from model training and include a signed Data Processing Agreement, satisfying most lawful basis requirements. Azure OpenAI Service provides GPT access hosted inside Microsoft's infrastructure with EU data residency options, which some legal teams prefer for sensitive hiring workflows. In all cases, confirm data routing with your legal or IT team before submitting named candidate documents, and strip direct identifiers where possible.
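
The "strip direct identifiers where possible" advice above can be sketched as a simple redaction pass before text leaves your systems. The regexes here catch only obvious emails and phone-like numbers; names, addresses, and IDs need dedicated tooling, and none of this substitutes for a signed DPA:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def strip_direct_identifiers(text: str) -> str:
    """Replace obvious emails and phone numbers with placeholders.
    A naive first pass only; not a complete anonymisation step."""
    text = EMAIL.sub("[email]", text)
    text = PHONE.sub("[phone]", text)
    return text
```

Run this on resume text or call notes before pasting them into any tier you have not cleared with legal, and keep the original on your own systems.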
How do you write prompts that work consistently with GPT-powered recruiting tools?
Prompts that produce consistent GPT output in recruiting share a few patterns: start with a role declaration ("You are a recruiting coordinator drafting a structured job description"), specify the output format explicitly with numbered sections and word counts, supply examples from past approved content, and end with a constraint on what to exclude. GPT-4o follows structured instructions more reliably when the format is defined before the content task, not after. For teams using GPT embedded in an ATS tool, document the exact prompt template and model version so that when a vendor upgrade changes behavior, you have a baseline to compare against rather than diagnosing from scratch. See system instructions for the pattern.
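
The four-part pattern in the answer above (role declaration, output format, approved examples, exclusions) can be captured as a template. A sketch with placeholder content; the section names and word counts are illustrative:

```python
def build_jd_prompt(role_title, sections, examples, exclusions):
    """Assemble a prompt following the role / format / examples / constraints pattern.
    Format is declared before the content task, per the pattern above."""
    parts = [
        "You are a recruiting coordinator drafting a structured job description.",
        "Output format:",
    ]
    parts += [
        f"  {i}. {name} ({words} words max)"
        for i, (name, words) in enumerate(sections, 1)
    ]
    parts.append(f"Role to draft: {role_title}")
    if examples:
        parts.append("Approved examples for tone:")
        parts += [f"- {ex}" for ex in examples]
    parts.append("Do not include: " + "; ".join(exclusions))
    return "\n".join(parts)

prompt = build_jd_prompt(
    "Data Engineer",
    [("Summary", 60), ("Responsibilities", 120), ("Requirements", 100)],
    ["We build boring, reliable pipelines."],
    ["salary guesses", "buzzwords like rockstar"],
)
```

Templating the prompt this way is also what makes version logging workable: `prompt_version` changes only when this function does.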
What are GPT's limits for recruiting work?
Three limits matter most in practice. First, hallucination: GPT produces confident, plausible-sounding text that can include invented job titles, dates, or credentials when the model lacks sufficient input context. Second, model drift: OpenAI updates GPT models on its own schedule, and a prompt that reliably produces a formatted scorecard note today may behave differently after the next API version bump, affecting any tool in your stack that licenses GPT without pinning a specific model version. Third, no native candidate database: GPT has no memory of past hiring decisions, company policies, or ATS records unless you provide that context in every prompt.
Where can I learn to use GPT for recruiting with peers?
The fastest path is a structured cohort where you test GPT-powered prompts on real req briefs alongside other practitioners. Live sessions in the AI in recruiting workshop include hands-on prompt exercises for job descriptions, outreach, and screening summaries, with peer review of outputs and immediate feedback on what makes a prompt useful versus generic across models including GPT-4o. For self-paced grounding, the Starting with AI: foundations in recruiting course builds practical prompt habits from first principles. Membership office hours give you a space to share a specific prompt you are trying to stabilise and hear what other full-cycle recruiters and sourcers are using in production right now.
