AI with Michal

Talent acquisition metrics

The KPIs and measurements TA teams track to understand recruiting performance: how fast roles fill, what each hire costs, where qualified candidates come from, and whether hires succeed after joining.

Michal Juhas · Last reviewed May 3, 2026

What are talent acquisition metrics?

Talent acquisition metrics are the numbers TA teams use to understand how recruiting is performing: how long roles take to fill, what each hire costs, which sources deliver candidates who make it to offer, and whether people hired are succeeding six months in.

Illustration: a talent acquisition metrics dashboard showing time-to-fill, cost-per-hire, offer acceptance rate, and source of hire as interconnected data points for TA teams

In practice

  • A TA leader at a 400-person tech company opens the weekly pipeline review with four numbers on a shared slide: time-to-fill by department, offer acceptance rate, cost-per-hire this quarter, and source-of-hire breakdown. Nothing else goes on the slide.
  • After a bad quarter of offer declines, a recruiter pulls stage-by-stage drop-off data and finds that candidates who reached the final round waited an average of 11 days for feedback. No AI tool flagged it; the metric did.
  • "Quality of hire" comes up every board cycle, but the definition shifts unless someone has pinned it to a 90-day manager survey from the HRIS. Ambiguity is the norm until TA names an owner for the definition.

Quick read, then how hiring teams use it

This is for recruiters, sourcers, TA, and HR partners who need the same vocabulary in debriefs, vendor calls, and policy reviews. Skim the first section when you need a fast shared picture. Use the second when you are deciding how metrics show up in dashboards, ATS configuration, or executive reporting.

Plain-language summary

  • What it means for you: Talent acquisition metrics are the numbers that tell you and your stakeholders whether recruiting is working: how fast, how much, from where, and with what result after hire.
  • How you would use it: Pick four or five metrics, define them once with Finance and HR Ops, and review them at a regular cadence so the team reacts to trends rather than surprises.
  • How to get started: Pull time-to-fill and offer acceptance rate from your ATS for the last six months. Plot by department. The outliers will tell you where to look first (a minimal sketch follows this list).
  • When it is a good time: Before any budget conversation or headcount planning cycle, and immediately after a spike in offer declines or longer-than-usual fill times.
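If you want to see what that pull looks like in code, here is a minimal pandas sketch. The file name ats_export.csv and the columns req_opened, offer_signed, and department are placeholders, not real ATS field names; rename them to match whatever your export actually uses.

```python
import pandas as pd

# Assumed columns: req_opened, offer_signed, department -- rename to
# match your real ATS export before running.
df = pd.read_csv("ats_export.csv", parse_dates=["req_opened", "offer_signed"])

# Time-to-fill in days for each closed req.
df["time_to_fill"] = (df["offer_signed"] - df["req_opened"]).dt.days

# Median per department; the outliers are where you look first.
by_dept = df.groupby("department")["time_to_fill"].median().sort_values()
print(by_dept)
by_dept.plot(kind="barh", title="Median time-to-fill by department (days)")
```

Offer acceptance rate follows the same pattern: average a boolean accepted column per department instead of taking a date difference.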

When you are running live reqs and tools

  • What it means for you: Metrics are only as reliable as your stage definitions in the ATS. Inconsistent use of "active," "on hold," or "offer extended" stages corrupts every downstream calculation.
  • When it is a good time: When TA is being asked to defend headcount, justify tool spend, or connect recruiting output to business results.
  • How to use it: Configure your ATS stages to match agreed definitions, build a shared data dictionary with Ops, and schedule a quarterly calibration where Finance and TA reconcile cost-per-hire numbers from different source systems.
  • How to get started: Audit how your team currently moves candidates through stages (see the audit sketch after this list). If two recruiters define "offer extended" differently, your average time-to-hire is wrong. Fix the process before you fix the dashboard.
  • What to watch for: Vanity metrics (applications received, LinkedIn impressions) crowding out outcome metrics (offer acceptance, quality of hire, retention at 12 months). High application volume with low interview rate is a sourcing quality problem, not a success signal.
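One way to run that audit, sketched under assumptions: the file ats_stage_events.csv and the columns recruiter and stage are hypothetical, and the agreed stage list below is an example, not a recommendation.

```python
import pandas as pd

# Example stage dictionary -- replace with the definitions your team agreed on.
AGREED_STAGES = {"applied", "screen", "interview", "offer extended", "hired", "rejected"}

# Assumed columns: candidate_id, recruiter, stage.
events = pd.read_csv("ats_stage_events.csv")

# Stage labels appearing in the data that have no agreed definition.
unknown = set(events["stage"].str.lower().unique()) - AGREED_STAGES
print("Stages with no agreed definition:", unknown)

# How each recruiter uses stages -- divergence here corrupts every average.
usage = events.groupby("recruiter")["stage"].value_counts().unstack(fill_value=0)
print(usage)
```

If the unknown set is non-empty, or two recruiters' rows in the usage table look nothing alike, fix that before trusting any dashboard built on the same data.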

Where we talk about this

AI with Michal Workshops cover talent acquisition metrics in the context of AI-assisted recruiting: which numbers to surface in model prompts, how to structure ATS exports for analysis, and when AI-generated insights about pipeline health are trustworthy versus when they are guessing from bad input data. Come with your real ATS export and a metric your leadership does not agree on yet.

Around the web (opinions and rabbit holes)

Third-party creators move fast. Treat these as starting points, not endorsements, and double-check anything before you connect it to real candidate data.

YouTube · Reddit · Quora

Core metrics quick reference

| Metric | What it measures | Common pitfall |
| --- | --- | --- |
| Time-to-fill | Req open to signed offer | Inconsistent "open" definition across teams |
| Time-to-hire | First contact to signed offer | Mixing with time-to-fill in the same report |
| Cost-per-hire | All spend divided by hires | Excluding internal recruiter time |
| Offer acceptance rate | Offers accepted / offers extended | Not segmenting by role level or function |
| Quality of hire | Post-hire performance indicators | No agreed definition between TA and HR Ops |
| Source of hire | Channel that produced the hire | Crediting last touch instead of first touch |

Frequently asked questions

What are the core talent acquisition metrics every TA team should track?
Time-to-fill, time-to-hire, cost-per-hire, offer acceptance rate, and source of hire are the five metrics most TA teams start with because they tell you how fast you hire, what it costs, and where candidates come from. Add candidate NPS if you run post-process surveys, and quality of hire (usually manager rating at 90 days) if leadership wants outcome data. Start with four or five numbers your ATS already surfaces without a custom report. Tracking twenty metrics nobody acts on creates noise instead of signal. In workshops, teams consistently underestimate offer acceptance rate as the earliest warning sign that comp, JD copy, or process length is misaligned with the market.
What is time-to-hire versus time-to-fill and why does the difference matter?
Time-to-hire measures from a candidate's first application (or sourcer's first contact) to signed offer. Time-to-fill measures from req approval to a signed offer. The gap between them tells you how much of the delay lives inside your process (approvals, scheduling, debrief turnaround) versus outside it (sourcing volume, passive candidate pipelines). Most ATSs track both if you configure stages consistently. Where teams get into trouble: mixing definitions across business units so the average becomes meaningless. Standardize what "open" and "closed" mean before you benchmark against industry data, and name one person who owns the definition so it does not drift after a reorg or a new ATS.
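A sketch of both calculations side by side, assuming hypothetical date columns req_approved, first_contact, and offer_signed in your export:

```python
import pandas as pd

# Assumed date columns: req_approved, first_contact, offer_signed.
df = pd.read_csv("ats_export.csv",
                 parse_dates=["req_approved", "first_contact", "offer_signed"])

df["time_to_fill"] = (df["offer_signed"] - df["req_approved"]).dt.days
df["time_to_hire"] = (df["offer_signed"] - df["first_contact"]).dt.days

# The gap is delay that lives before the candidate entered the pipeline
# (sourcing volume, passive pipelines) rather than inside your process.
df["gap"] = df["time_to_fill"] - df["time_to_hire"]
print(df[["time_to_fill", "time_to_hire", "gap"]].describe())
```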
How do you measure quality of hire?
Quality of hire is the hardest metric to standardize because it combines outcomes that live in different systems: performance review scores, ramp time, manager satisfaction at 30/60/90 days, and retention at one year. Pick two or three indicators you can actually retrieve without a manual survey blast. The most defensible version ties a new hire's scorecard traits at offer to their first performance rating, so TA can see which sourcing channels and interview signals actually predicted success. Run the analysis quarterly, share findings with hiring managers, and treat the result as a calibration signal for your scorecard anchors, not as a performance review of recruiters.
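A hedged sketch of that scorecard-to-rating analysis. The two input files and every column name are hypothetical; the point is the join, not the schema.

```python
import pandas as pd

# Hypothetical inputs: offers.csv (candidate_id, source_channel, scorecard_avg)
# and reviews.csv (candidate_id, first_perf_rating).
offers = pd.read_csv("offers.csv")
reviews = pd.read_csv("reviews.csv")

merged = offers.merge(reviews, on="candidate_id", how="inner")

# Did scorecard signal at offer predict the first performance rating?
print(merged["scorecard_avg"].corr(merged["first_perf_rating"]))

# Which sourcing channels produced hires who ramped well?
print(merged.groupby("source_channel")["first_perf_rating"].agg(["mean", "count"]))
```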
What is cost-per-hire and what does it actually include?
Cost-per-hire adds up every dollar spent to make one hire: recruiter salaries and overhead, job board fees, agency fees if any, tool subscriptions, assessment costs, and interview panel time valued at hourly rates. Most teams only count the easy line items (job postings, agency invoices) and undercount internal recruiter time by 30 to 50 percent. Run the full calculation at least once with Finance so the board-level number and the operating number match. High cost-per-hire is not always bad: a senior leadership role or a hard-to-fill technical position warrants more spend. What matters is whether the spend aligns with the value the hire brings in year one.
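The full calculation is simple arithmetic once the line items are gathered. In the sketch below, every figure is an illustrative placeholder, not a benchmark:

```python
def cost_per_hire(hires, recruiter_comp, job_board_fees, agency_fees,
                  tool_subscriptions, assessment_costs,
                  panel_hours, panel_hourly_rate):
    """Total recruiting spend for a period divided by hires made in it."""
    total = (recruiter_comp + job_board_fees + agency_fees
             + tool_subscriptions + assessment_costs
             + panel_hours * panel_hourly_rate)
    return total / hires

# Illustrative placeholder figures, not benchmarks. Note how much of the
# total sits in recruiter comp and panel time -- the items most teams skip.
print(cost_per_hire(hires=25, recruiter_comp=180_000, job_board_fees=20_000,
                    agency_fees=45_000, tool_subscriptions=15_000,
                    assessment_costs=5_000, panel_hours=600,
                    panel_hourly_rate=75))
```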
How do offer acceptance rate and candidate NPS diagnose pipeline health?
Offer acceptance rate dropping below 80 percent is a clear signal: comp is off-market, the process took too long, or something the candidate saw in the interview loop changed their read on the role or the company. Candidate NPS after rejection is a less common but high-value signal that tells you whether people who did not get the job would still refer a friend or apply again. In async screening flows where candidates have less human contact, NPS often reveals friction that no one on the team noticed because nothing obviously broke. Pair both metrics with stage-level drop-off data to find where you are losing candidates to the process versus the market.
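Stage-level drop-off is a straightforward funnel count. In this sketch the events file and the stage names are assumptions; adjust STAGE_ORDER to your own pipeline. The offer-to-accepted conversion in the output is your offer acceptance rate.

```python
import pandas as pd

# Assumed columns: candidate_id, stage -- one row per stage a candidate reached.
events = pd.read_csv("ats_stage_events.csv")

STAGE_ORDER = ["applied", "screen", "interview", "final", "offer", "accepted"]

# Unique candidates who reached each stage, plus conversion from the prior stage.
reached = events.groupby("stage")["candidate_id"].nunique().reindex(STAGE_ORDER)
conversion = reached / reached.shift(1)
print(pd.DataFrame({"reached": reached, "conversion_from_prior": conversion}))
```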
How do you use metrics to make sourcing decisions?
Source-of-hire data tells you which channel produced each hire, but the more useful read is source-to-interview rate and source-to-offer rate: not all applicants are equal, and a job board producing 200 applicants at a 2 percent interview rate costs more per qualified candidate than a referral channel producing 20 at 40 percent. Combine with candidate data enrichment tools to understand channel overlap before adding spend. Run the analysis at role family level, not overall, because engineering sources often look nothing like sales sources in the same company. Share source quality reports with hiring managers so sourcing budget conversations are grounded in data rather than opinions about which platform feels active.
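A sketch of that source-quality read, with hypothetical column names throughout; the role_family grouping keeps engineering and sales sources from averaging into one meaningless number.

```python
import pandas as pd

# Assumed columns: candidate_id, source, role_family, reached_interview,
# reached_offer (booleans) -- rename to match your export.
df = pd.read_csv("candidates.csv")

rates = (df.groupby(["role_family", "source"])
           .agg(applicants=("candidate_id", "nunique"),
                interview_rate=("reached_interview", "mean"),
                offer_rate=("reached_offer", "mean")))

# A board sending 200 applicants at a 2% interview rate vs. a referral
# channel sending 20 at 40% is visible immediately in this view.
print(rates.sort_values("interview_rate", ascending=False))
```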
Where can we learn to run these metrics with real recruiting teams?
Join a workshop where TA teams walk through live ATS dashboards, debate which metrics their leadership actually reads, and practice sourcing-channel analysis with real data hygiene constraints. The Starting with AI: the foundations in recruiting course covers how to structure data for model inputs alongside the operational context that prevents AI-generated metric summaries from misleading decision-makers. Bring your ATS export or a sample anonymized data set; the group will surface gaps you would not catch reviewing slides alone. After the session, assign one person to own the metric definitions document and a quarterly review with Finance so numbers mean the same thing in TA and in the board pack.

← Back to AI glossary in practice