EP. 014 2026-01-01 38:12

The AI Hiring Tools No One Audits

AI hiring tools are sold as objective, efficient, and bias-free. The reality is more complicated: these systems make consequential decisions about people's livelihoods while operating as black boxes with minimal oversight. Most people stop at the headline. This episode follows the incentives, checks the claims against primary sources, and shows what the first-watch narrative leaves out.

Key Receipts

  • NYC Local Law 144 — The first AI hiring audit requirement (2023)
  • EEOC Guidance — Federal position on algorithmic discrimination
  • Harvard "Hidden Workers" Study — Primary data on ATS rejection rates
  • Amazon Recruiting AI — The case that proved bias can be baked in

Quick Verdict

  • AI resume screening tools are used by over 75% of large employers, often without candidate knowledge
  • Vendors claim bias reduction but rarely provide auditable evidence
  • The NYC AI hiring law (Local Law 144) requires bias audits but enforcement remains limited
  • Video interview AI that claims to assess "personality" or "culture fit" has particularly weak empirical support
  • Candidates can request information about AI use in hiring in some jurisdictions—but most don't know this
  • The accountability gap exists because no one party has both the incentive and ability to demand better

The Audit

Claim: AI hiring tools reduce human bias in candidate screening
  Supports: Vendor white papers; some academic studies showing reduced variance; the theoretical argument that algorithms apply criteria consistently
  Weakens: Amazon's scrapped recruiting AI (discriminated against women); EEOC settlements; studies showing algorithms can amplify historical bias; lack of pre-deployment testing requirements
  Confidence: Low

Claim: Over 75% of large employers use AI or automation in hiring
  Supports: SHRM survey data (2022); Harvard Business School "Hidden Workers" study; vendor market research
  Weakens: Definition of "AI" varies widely across surveys; some responses may count basic ATS systems; response rates may skew toward tech-forward companies
  Confidence: Medium

Claim: Video interview AI can accurately assess candidate personality and job fit
  Supports: Vendor claims citing internal validation studies; some I/O psychology research on structured interviews
  Weakens: Independent research showing poor generalization; concerns about proxy discrimination via accent, lighting, and background; FTC scrutiny; lack of peer-reviewed validation
  Confidence: Low
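For context on what a Local Law 144 audit actually reports: the core metric is an impact ratio, each group's selection rate divided by the most-selected group's rate, which echoes the EEOC's long-standing four-fifths rule of thumb. A minimal sketch, with made-up applicant counts (the group names and numbers here are illustrative, not from any real audit):

```python
# Hedged sketch of the impact-ratio calculation behind LL144-style bias
# audits. The four-fifths rule flags ratios below 0.80 as potential
# adverse impact; the counts below are hypothetical.

def impact_ratios(selected, applicants):
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rates[g] / top for g in rates}

applicants = {"group_a": 400, "group_b": 350}  # hypothetical counts
selected = {"group_a": 120, "group_b": 70}

for group, ratio in impact_ratios(selected, applicants).items():
    flag = "  <-- below 0.80 threshold" if ratio < 0.80 else ""
    print(f"{group}: impact ratio {ratio:.2f}{flag}")
```

The arithmetic is trivial; the hard part an audit can't sidestep is everything around it: which groups get measured, at which stage of the funnel, and whether a flagged ratio triggers any consequence.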

Who Benefits

Vendors: Sell tools at scale to HR departments. Revenue depends on perceived efficiency gains and compliance marketing. Strong incentive to claim bias reduction without funding rigorous independent validation.

Employers: Want to reduce hiring costs and legal liability. AI tools offer both—or at least the appearance of both. Less incentive to audit tools they've already purchased and deployed.

Candidates: Want fair consideration. But individual candidates have almost no leverage or visibility into what systems rejected them. Collective action is difficult when you don't know who else was rejected.

Regulators: Want to ensure fair employment practices but often lack technical expertise and resources. NYC's law was a step forward but enforcement mechanisms remain underdeveloped.

So What?

Here's what's actually happening: companies are using AI tools to filter job applicants, and most candidates never know an algorithm decided they weren't worth a human conversation.

The tools might be better than humans at some things. They might be worse at others. The problem is that nobody is systematically checking which is which. Vendors have incentives to claim their tools work. Employers have incentives to believe the claims. Candidates have no visibility.

This isn't a story about evil AI or evil corporations. It's a story about an accountability gap—a situation where no one has both the incentive and the ability to demand evidence that these systems actually do what they claim.

If you only remember one thing

For job seekers: Know that AI screening exists, ask about it when possible, and don't take automated rejections as a meaningful signal about your qualifications.

For employers: Demand audit evidence from your vendors, not just marketing materials. If they can't provide it, that tells you something.

Receipts

  • Primary New York City Local Law 144 (2023). "Automated Employment Decision Tools."
  • Primary EEOC (2023). "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures."
  • Primary Harvard Business School (2021). Fuller, J. & Raman, M. "Hidden Workers: Untapped Talent."
  • Primary SHRM (2022). "Automation & AI in HR" survey data.
  • Secondary Reuters (2018). "Amazon scraps secret AI recruiting tool that showed bias against women."
  • Academic Raghavan, M. et al. (2020). "Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices." ACM FAT* Conference.

Transcript

Full transcript available in two formats.

Update Log

2026-01-01 Initial publication