AI with Michal

California rules on AI in employment decisions

California state laws, CPPA draft regulations, and FEHA obligations that govern how employers use AI tools to screen, score, or rank candidates, including bias testing, data rights, and transparency requirements for automated decision-making.

Michal Juhas · Last reviewed May 5, 2026

What are California rules on AI in employment decisions?

California has more active AI employment regulation than any other US state. Three frameworks overlap for employers using AI in hiring: the Fair Employment and Housing Act (FEHA), the California Privacy Rights Act (CPRA) with its CPPA-led rulemaking on automated decision-making technology, and a growing set of legislative transparency requirements. Each framework places obligations on the employer, not the vendor; a vendor claiming its tool is compliant does not shift that liability.

Illustration: California AI employment decision compliance showing a candidate pipeline intersected by a layered regulatory band with bias audit, transparency disclosure, and data rights shields, a human review gate before the final hiring decision, and an audit log strip beneath

In practice

  • A California-based tech startup running async video screening through an AI vendor still carries FEHA liability if that tool screens out more candidates from one protected group than another, even though the vendor built and runs the model.
  • A recruiter might say "our vendor is EEOC-compliant," but no such certification exists; vendor marketing language often papers over a gap the employer must audit and document itself.
  • Talent leaders at companies scaling quickly in California often discover during a compliance review that they added three AI tools in a year with no adverse impact logging, no vendor data processing agreement for candidate data, and no privacy notice update.

Quick read, then how hiring teams use it

This is for recruiters, sourcers, TA, and HR partners who need the same vocabulary in debriefs, vendor calls, and policy reviews. Skim the first section when you need a fast shared picture. Use the second when you are deciding how it shows up in the ATS, sourcing tools, or candidate communications.

Plain-language summary

  • What it means for you: If you use any AI tool to filter, score, or rank candidates for a California role, you own the compliance. Vendor terms do not substitute for employer obligations under FEHA or CPRA.
  • How you would use it: Run an annual adverse impact check on any AI-assisted stage. Update your candidate privacy notice each time you add a tool that touches applicant data.
  • How to get started: List every tool in your hiring stack that uses AI in any way. For each one, confirm you have a data processing agreement, bias test results from the vendor, and a named internal owner (one way to structure that record is sketched after this list).
  • When it is a good time: Before your next vendor renewal or RFP, not after a regulator sends a letter.
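
If it helps to make that inventory concrete, here is a minimal Python sketch of a per-tool record; the field names are illustrative assumptions, not a prescribed schema.

    # Minimal sketch of one row in an AI tool compliance inventory.
    # Field names are illustrative, not a prescribed schema.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AIToolRecord:
        tool_name: str                      # e.g. an async video screener
        hiring_stage: str                   # which stage the tool influences
        dpa_on_file: bool                   # data processing agreement signed
        vendor_bias_test_date: date | None  # most recent vendor bias test results
        internal_owner: str                 # named person accountable for remediation

A spreadsheet works just as well; the point is that every tool has all five fields filled in before it touches California applicant data.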

When you are running live reqs and tools

  • What it means for you: FEHA's adverse impact standard applies at every AI-assisted stage: resume screening, async video scoring, Boolean-generated shortlists, and chatbot pre-screening. Run cohort outcome comparisons at least quarterly for high-volume funnels.
  • When it is a good time: After adding any new AI feature to your ATS or sourcing stack, and before filing a response if a DFEH complaint surfaces.
  • How to use it: Log hiring outcomes by stage and protected class using ATS exports. Apply the four-fifths rule: flag any group whose selection rate falls below 80 percent of the highest group's rate (see the sketch after this list). Escalate flagged stages to legal and HR ops immediately.
  • How to get started: Pull last quarter's application-to-phone-screen conversion rates split by demographic, if your ATS captures it. If it does not, add that tracking before the next quarter closes.
  • What to watch for: Vendors changing their underlying model without notice (look for version-change clauses in your MSA), CPPA rulemaking updates, and sourcing tools that enrich candidate profiles with social or third-party data that may include protected-class proxies.
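
The four-fifths comparison itself is simple arithmetic once you have stage-level counts. Below is a minimal Python sketch, assuming you can export selected and applied counts per group from your ATS; the group labels and numbers are made up for illustration.

    # Four-fifths (80%) adverse impact check for one hiring stage.
    # Input: {group_label: (selected_count, applied_count)} from an ATS export.

    def four_fifths_flags(stage_counts: dict[str, tuple[int, int]]) -> list[str]:
        """Return groups whose selection rate is below 80% of the highest group's rate."""
        rates = {
            group: selected / applied
            for group, (selected, applied) in stage_counts.items()
            if applied > 0
        }
        benchmark = max(rates.values())  # highest-selected group sets the benchmark
        return [group for group, rate in rates.items() if rate < 0.8 * benchmark]

    # Illustrative quarter of application-to-phone-screen outcomes.
    counts = {
        "group_a": (120, 400),  # 30.0% selection rate (benchmark)
        "group_b": (45, 210),   # ~21.4%, below 0.8 * 30% = 24%, so flagged
    }
    print(four_fifths_flags(counts))  # -> ['group_b']

Small samples make these ratios noisy, so treat a flag as a trigger for review with legal, not as proof of a violation.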

Where we talk about this

In AI with Michal live sessions we work through real scenarios: which AI tools trigger California compliance obligations, how to read vendor bias testing claims critically, and when to loop in legal versus when a TA ops fix is enough. The AI in recruiting track connects FEHA and CPRA obligations to the day-to-day hiring stack. If you want to walk through a compliance inventory for your own tools, start at Workshops and bring your vendor list.

Around the web (opinions and rabbit holes)

Third-party creators move fast. Treat these as starting points, not endorsements, and double-check anything before you wire candidate data to a new tool.

YouTube

  • Search "California AI employment law FEHA" on YouTube for employment attorney explainers on how FEHA applies to algorithmic decisions, released since late 2023 when EEOC guidance arrived.
  • Search "CPPA automated decision-making rulemaking" for public CPPA board session recordings where staff walk through the draft ADMT regulations in detail.

Reddit

  • r/humanresources has recurring threads on AI screening tools where California-based HR teams share what their legal teams are telling them to audit.
  • r/recruiting surfaces sourcer-level discussions on what vendors actually disclose about model training and how teams are handling bias audit requests.

Quora

  • Search "California AI hiring compliance" on Quora for practitioner and attorney perspectives on FEHA and CPRA obligations (verify dates before acting on any answer, as rulemaking moves quickly).

California AI employment law at a glance

Framework | Who it covers | Key obligation
FEHA | Employers with 5 or more CA employees | Adverse impact testing for AI-assisted decisions
CPRA / CPPA ADMT rules | Any employer processing CA resident data | Disclosure, opt-out, risk assessment (draft rules)
AB 2013 | AI developers; employers indirectly via vendor due diligence | Training data transparency
Federal EEOC guidance | All US employers | Title VII applies to algorithmic screening tools

Frequently asked questions

What is California's automated decision-making framework for hiring?
The California Privacy Protection Agency (CPPA) is developing regulations under the CPRA that cover automated decision-making technology (ADMT) in employment contexts. Draft rules proposed in 2024 would require employers to disclose when AI influences significant decisions including hiring, promotion, and termination; offer candidates a right to opt out or request human review; and conduct risk assessments for high-risk systems. Final rules had not passed as of mid-2025, but employers running AI screening in California should monitor CPPA filings and build opt-out mechanisms and transparency disclosures into their hiring tech stack now, before enforcement begins.
Does FEHA apply to AI screening tools used by California employers?
Yes. The California Fair Employment and Housing Act prohibits discrimination based on protected characteristics and applies regardless of whether the decision was made by a person or an algorithm. If an AI screening tool produces adverse impact against a protected group, for example one group's selection rate falling below four-fifths of another group's, the employer carries the burden to demonstrate job-relatedness. California employers must either validate the tool or stop using it for covered decisions. Run adverse impact analyses at least annually and log cohort outcomes so you have data if a regulator or claimant asks.
What candidate data rights exist under CPRA for AI-assisted hiring?
The California Privacy Rights Act gives California residents the right to know what personal information is collected and how it is used, the right to correct inaccurate data, and the right to limit use of sensitive personal information. For hiring this means candidates can ask what data your AI sourcing or screening tool holds on them, request corrections, and restrict sharing with third parties. Employers should map which vendors receive candidate PII, confirm those vendors have compliant data processing agreements, and publish a privacy notice that names AI-assisted decisions. See candidate data enrichment for where enrichment data typically flows and how to document lawful basis.
How does California's AI transparency requirement affect sourcing tools?
AB 2013, signed in September 2024, requires developers of generative AI systems made available to Californians to publish summaries of the data used to train them. For talent teams this matters at vendor selection: ask sourcing platform vendors whether their models fall under AB 2013, what training data the model used, and whether any training signals included protected-class proxies. A vendor that cannot answer is a compliance risk as CPPA rulemaking and legislative enforcement expand. Add these questions to your ATS and sourcing tool RFPs before contract renewal. Read AI bias audit for the evaluation framework to apply after vendor selection.
What should an employer document when using AI for employment decisions in California?
Maintain a record of which AI tools touch each hiring stage, what decision the tool influences, what bias testing was done before deployment, and who owns remediation when an issue surfaces. For each automated decision touching a California applicant, log the tool name, version, model type, and the date range of use. Under draft CPPA rules you may also need a risk assessment for high-risk ADMT categories. Keep records for at least four years to cover FEHA statute of limitations and anticipated CPPA enforcement windows. Pair documentation with a human-in-the-loop review gate before final decisions reach candidates.
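
As a sketch, here is one way a per-decision log entry could look in Python; the keys mirror the items above and every value is hypothetical.

    # Illustrative per-tool log entry for a California AI-assisted stage.
    # Keys mirror the documentation items above; all values are hypothetical.
    log_entry = {
        "tool_name": "resume_screener",             # which AI tool
        "tool_version": "2025-04-rev3",             # vendor model/version identifier
        "model_type": "ranking",                    # what the tool outputs
        "decision_influenced": "application_review",
        "in_use_from": "2025-01-06",                # date range of use
        "in_use_to": "2025-03-31",
        "bias_test_ref": "Q1-2025-adverse-impact",  # pre-deployment test artifact
        "remediation_owner": "ta_ops_lead",         # who fixes issues that surface
    }
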
How does California's approach compare to federal EEOC AI guidance?
Federal EEOC guidance released in 2023 confirms that Title VII applies to algorithmic hiring tools and that employers remain liable for adverse impact even when a vendor operates the model. California goes further: FEHA has a lower employer threshold of five employees versus fifteen under Title VII, the CPPA is building specific ADMT disclosure rules with no federal equivalent yet, and California plaintiffs have access to state courts with broader remedies. Employers covered by both must meet the stricter standard. If you are building AI hiring compliance for a US-wide team, California rules are effectively your compliance floor, not a regional edge case.
Where do California TA teams typically get stuck with AI compliance?
In live sessions the sticking points are: assuming vendor compliance transfers automatically to the employer (it does not), skipping adverse impact logging because application volume seems low, and treating privacy notices as a one-time legal task rather than a document that must update each time you add a new AI tool. Teams also underestimate how quickly a new sourcing add-on or AI resume screener triggers FEHA obligations. The fix is a short vendor intake form that asks about training data, bias testing results, and data processing agreement terms before any AI tool touches California applicant data. Join a workshop for a live walkthrough, or read adverse impact to understand the four-fifths test in practice.
