Is your AI recruiting tool compliant?

A checklist for recruiting leaders

Hey, it’s Jason Zoltak. 👋 If this is your first time reading The Final Interview, here’s where you can subscribe so you don’t miss future breakdowns on navigating AI in recruiting and talent management. 

Talent sourcing will look different in 10 years because AI drastically multiplies human capabilities. It’s already changing how we work. In recruiting, that means it’s also starting to change how we evaluate human potential.

AI isn’t new. But it has become so accessible and pervasive that it’s infiltrating our work at lightning speed, moving from the fringes into mainstream adoption. AI in recruiting is no longer a question—it's a necessity.

And it's already here.

The Complicated Reality of AI in Hiring

Mainstream adoption of AI brings with it a complicated web of questions:

  • Is AI discriminatory by nature?

  • How much can human prejudice impact AI decision-making?

  • Is human connection or human data more important in hiring?

  • How can we protect sensitive information when AI is everywhere?

Countries worldwide are scrambling to answer these questions and regulate AI in hiring, and the rules are changing rapidly as a result. Many recruiting leaders find themselves caught between a rock and a hard place: retain a competitive edge AND stay compliant. 

How do you do that? 

I wish I could give you a simple answer. 

Unfortunately, it’s a moving target. 

So I can’t. 

(Please don’t shoot the messenger. 😅)

What I can do is share what we’ve learned from the research we did to build Tofu, and from helping our clients stay ahead of the curve as they bring only the highest-quality AI tools and systems into their hiring process. 

So here’s the low-down on the current state of AI regulation (plus, a few tips on how to stay ahead of the curve as regulation continues to shift).

What's Actually Happening in AI Regulation
Here’s What Recruiters Need to Know*

Local Law 144 

This law, the first of its kind, went into effect in NYC back in mid-2023. It requires employers to conduct bias audits of automated employment decision tools (AEDTs) and disclose their use to job candidates to ensure fairness and transparency in the hiring process.
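At the heart of a Local Law 144-style bias audit is the "impact ratio": each demographic category's selection rate divided by the selection rate of the most-selected category. Here's a minimal sketch of that calculation in Python — the categories and data are hypothetical, and a real audit must be performed by an independent auditor, not an in-house script.

```python
# Sketch of the impact-ratio calculation used in Local Law 144-style
# bias audits. Category labels and sample data are illustrative only.
from collections import defaultdict

def impact_ratios(candidates):
    """candidates: list of (category, selected) tuples, e.g. ("group_a", True)."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for category, was_selected in candidates:
        totals[category] += 1
        if was_selected:
            selected[category] += 1
    # Selection rate per category
    rates = {c: selected[c] / totals[c] for c in totals}
    best = max(rates.values())
    # Impact ratio: each category's rate relative to the highest rate
    return {c: rates[c] / best for c in rates}

# Hypothetical sample: group A selected 8/10, group B selected 4/10
sample = ([("A", True)] * 8 + [("A", False)] * 2
          + [("B", True)] * 4 + [("B", False)] * 6)
print(impact_ratios(sample))  # {'A': 1.0, 'B': 0.5}
```

A large gap between ratios (here, B at 0.5 versus A at 1.0) is the kind of disparity an audit is designed to surface and a regulator would ask about.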

This set the stage for generalized AI regulation in hiring across the United States. And it’s just the beginning: 

While the…law covers only one jurisdiction, legal experts and HR technology analysts say it’s only a matter of time before other states and jurisdictions enact similar—if not more sweeping—legislation that will include stipulations to conduct AI bias audits. Some attorneys believe, for example, that future laws may require audits for potential age and disability bias, not just the more-narrow gender and race discrimination covered by Local Law 144 in New York City.

SHRM 

More State Laws Impacting How Teams Hire With AI 

Maryland recently introduced legislation similar to NYC’s Local Law 144, requiring bias audits of AI tools used in hiring. While still in its early stages, the bill marks a growing trend toward regulating the use of AI in recruitment.

Illinois’ Artificial Intelligence Video Interview Act, which took effect in 2020, requires companies to notify candidates when AI technology will be used to assess them during video interviews. Candidates must provide explicit written consent before the AI can analyze the video interview.  

In California, a new bill is under review, aiming to mandate greater transparency and accountability in how AI-based hiring tools function. Employers using AI to assist in recruitment may soon be required to disclose their use and ensure these tools don’t perpetuate bias. This would go into effect in early 2026 if signed into law. 

In the 2024 legislative session, as many as 45 states proposed AI legislation, and 31 adopted resolutions or enacted laws around AI. Some of these bills explicitly mention recruiting; others have nothing to do with the hiring process. Much of this legislation (like bills on algorithmic discrimination or facial recognition) won’t directly affect recruiters today, but it could shape future laws related to hiring.  

Regardless, the growing number of new proposed bills will have an implicit (if not direct) impact on future compliance guidelines for AI used in the recruiting process. 

More Recruiting-adjacent AI bills to watch 👀 

US Federal Governance on AI 

More and more, the federal government is taking an interest in the AI revolution. Most recently, the Biden-Harris Administration set in motion an AI task force: 

The Biden-Harris Administration is hiring dedicated people who want to help leverage AI responsibly to improve government services, make smart policies and regulations around AI to protect people’s rights, safety, and privacy, and build our research and development (R&D), so the United States continues to lead the world in cutting-edge AI innovation.

Don’t forget EEOC compliance. 

While the EEOC hasn’t issued specific regulations that govern the use of AI in hiring, existing employment laws still apply to your hiring process–regardless of whether an AI tool is used in the process or not. The EEOC has, however, made it clear that employers are responsible for making sure any AI-driven hiring tools they use do not violate federal anti-discrimination laws. 

Back in May, the Office of Federal Contract Compliance Programs (OFCCP) released new guidance that sheds light on “the use of AI systems…to perpetuate unlawful bias and automate unlawful discrimination.” These guidelines require federal contractors not only to inform candidates that they’re using AI in the hiring process, but also to ensure every AI tool they use during hiring is OFCCP compliant. 

If you’re in charge of evaluating your company’s hiring-related AI tools, this means the buck stops with you. More on that after we expand on governance outside the U.S. 

Beyond the US: The Impact of Global Governance on Recruiting Teams 

For talent leaders in charge of hiring global teams, the nuances of AI regulation in hiring become even more complicated. However, the state of AI regulation across the world will by default have a trickle-down effect on teams of all sizes. 

The EU’s AI Act, which is on track to be fully enacted in August 2025, classifies hiring and employment-related AI systems as "high-risk" and outlines a strict list of requirements for anyone using AI to recruit across the EU (including risk-mitigation systems, high-quality data sets, clear user information, human oversight, and more). 

Governments in Canada, the U.K., Australia, and more regions are actively working on legislation that could impact anyone recruiting or hiring in these countries. 

In addition, the World Economic Forum has been closely tracking the development of AI in the talent selection process, and has called out the potential flaws related to discrimination and bias in the algorithms companies use to select (or reject) candidates. 

While there is currently no active regulation on these issues by the World Economic Forum, AI was one of the biggest topics discussed at Davos in 2024—and those conversations aren’t going to slow down anytime soon. 

Trust me: this is just the beginning. 

Here’s why it should matter to you: 

Things to Keep in Mind When Evaluating AI Tools

While the laws are becoming clearer by the day, many AI companies find loopholes or skirt the issue of compliance completely. 

Despite those companies’ sneaky shenanigans, every recruiting leader carries the responsibility to evaluate the tools and processes they use in hiring to ensure compliance.

Don’t let that scare you: recruiters who understand the impact of their hiring algorithms will be forces to be reckoned with. 

Here’s what you need to know:  

4 steps to ensure you remain compliant (now and in the future) 

1️⃣ Ask the tough questions: Make sure you’re asking your AI vendor about their understanding of potential future bias audits and how their algorithms operate. If the tool is a “black box” that can’t be explained, move on. If they dance around the subject, move on.

🚩 More Red Flags to Look out for in AI recruiting tools:

  • Auto-rejecting candidates without clear reasons

  • Using zip codes or school names as ranking factors

  • Screening out candidates with employment gaps

  • Hidden bias in "culture fit" algorithms

  • Lack of transparency in decision-making

  • Teams that cannot fully describe how their technology presents a list of candidates to you

2️⃣ Prepare for future audits: Keep a detailed record of how AI tools are selected, configured, and applied. Document decision-making processes, including why certain candidates were screened out and how AI impacted those decisions. This record-keeping will be invaluable when audit time comes.
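One lightweight way to start the record-keeping above is an append-only log with one structured entry per AI-assisted decision. This is a sketch, not a legal standard — the field names and file format here are assumptions, so adapt them to whatever your counsel and auditors actually require.

```python
# Minimal sketch of an AI-decision audit log (JSON Lines format).
# Field names are illustrative assumptions, not a regulatory schema.
import json
from datetime import datetime, timezone

def log_ai_decision(tool_name, candidate_id, stage, outcome, reason,
                    log_file="ai_decisions.jsonl"):
    """Append one AI-assisted hiring decision to an audit log and return it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,             # which AI tool made the recommendation
        "candidate_id": candidate_id,  # internal ID, not personal data
        "stage": stage,                # e.g. "resume_screen", "video_interview"
        "outcome": outcome,            # e.g. "advanced", "screened_out"
        "reason": reason,              # human-readable rationale
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

An append-only, one-entry-per-decision format makes it easy to answer the two questions an auditor will ask: which decisions did the tool touch, and why was each candidate screened out.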

Here’s how to implement an AI policy + documentation for your company’s recruitment process.

3️⃣ Stay informed: Ensure that you and your team are up to date on the latest AI regulations at the local, state, and federal levels. 

Resources for staying updated on AI compliance and regulations:

4️⃣ Build custom AI resume review bots: Instead of relying on generalized review tools, build custom tools around your specific parameters and legal requirements to avoid bias. Find a platform (like ours at Tofu) where you can work with a team educated in compliance and build a custom system you understand, one that complies with laws and regulations at both the local and federal level.

Let me be clear:

AI regulation isn’t a roadblock for recruiters in 2024—it’s a competitive advantage.

I expect that within the next decade, bias audits and compliance checks will be as routine as financial audits. For now, though, the laws and regulations surrounding AI are still being developed, and the next few years are likely to bring some unique challenges for recruiting teams to overcome. 

The key to navigating the road ahead? Stay informed, stay agile, and use that to get ahead of the curve. 

In a world where the rules are still being written, recruiters who can see ahead and be quick on their feet will lead the way. 🚀

Jason 

*Obligatory Disclaimer:
The information provided in this newsletter is for general informational purposes only and is not intended to be legal advice. While we aim to provide valuable insights on recruiting compliance, we recommend consulting a qualified legal professional before making any decisions. I’m an information curator, not an attorney 😅 and nothing in this newsletter should be interpreted as legal guidance.