A note on the state of applicant fraud

Co-authored by Matt Hoffman, Partner and Head of Talent @ M13

Hey, it’s Jason Zoltak. 👋 If this is your first time reading The Final Interview, here’s where you can subscribe so you don’t miss future breakdowns on navigating AI in recruiting and talent management. 

From Matt and Jason

AI is transforming the way we do recruiting, but it also comes with new challenges, and the one everyone seems to be discussing lately is fraudulent applications. The scope of the issue may be surprising. While the problem has been emerging for a few years, only recently has it gained significant attention. Depending on the industry, as many as 70% of applicants for remote engineering roles can be fake, making applicant fraud one of recruiting's most pressing challenges. Even CISOs are becoming increasingly involved in recruiting conversations as bad actors attempt to infiltrate companies through deceptive job applications. This growing threat poses serious risks to security, recruitment efficiency, and company integrity.

This has also been a big topic of discussion among our portfolio companies at M13. So, while researching the topic and evaluating potential solutions, I was impressed with Tofu’s approach and the solution they designed to attack the problem at scale.

Together, we looked at the data to shed light on this growing issue and discuss potential remediations. In this post we pull back the curtain on applicant fraud and why it is an issue all Talent Acquisition leaders should take seriously. Below we share what applicant fraud is, what Tofu is seeing across its customer base, why it is occurring, and how recruiters can identify and respond to these threats.

Applicant fraud overview

If you’re new to this phenomenon, applicant fraud occurs when job candidates deceive employers during the hiring process. Remote hiring has exacerbated this issue, making it easier for applicants to falsify resumes, impersonate others, or have someone else take interviews on their behalf. Companies hiring remotely now regularly encounter fraudulent applicants, and it’s starting to take over their ATS and consume a significant amount of recruiter time.

What does applicant fraud look like in practice?

Tofu has over 250 ATSs connected to its agents, which has helped it develop an educated opinion on all the ways applicant fraud can manifest. Below are a few common types of applicant fraud Tofu is seeing across its customer base, and that we are seeing across the M13 portfolio:

  • Resume Fabrication: Fraudulent candidates often use AI tools to create resumes tailored precisely to job descriptions, including fake work experiences, skills, and references.

  • Proxy Interviewing: Fraudsters sometimes arrange for more qualified individuals to complete technical interviews on their behalf or use deepfake technology to disguise their identity during video calls.

  • Organized Operations: Fraudulent activities may be coordinated by organized groups creating convincing online profiles, LinkedIn accounts, and even fake companies to enhance credibility.

  • Geographic Misrepresentation: Candidates may falsely claim to be based in the U.S. or another targeted country while actually operating from abroad, often using accomplices to appear as local hires.

  • Security Risks: Fake candidates may successfully access sensitive IT systems, posing direct threats to company security.

Fraudulent applicants often combine multiple methods, making detection more challenging.

Who is being impacted?

Data-rich companies and their engineering teams are being hit the hardest: think cybersecurity, healthcare, fintech, and tech companies managing significant customer, financial, or personal information. Fraudulent applicants target these companies to gain unauthorized access to confidential details, trade secrets, and sensitive financial data. Most of the time they apply for engineering roles, since those functions have the greatest access to sensitive infrastructure and data. Most B2B SaaS companies sit on important customer data that is valuable to someone with malicious intent. We seldom see it in GTM roles, although that will likely change given the relative ease of impersonation and application.

Why is this happening?

The prevalence of remote work + the explosion of AI has sent applicant fraud spiking through interview funnels. Without early in-person interviews, it’s easier for candidates to misrepresent their identity or qualifications. Powerful generative AI (deepfakes, voice-altering tools, AI application tools, and more) helps create believable fake identities that slip past recruiting teams and into hiring decisions.

The culprits are often foreign actors, such as North Korean operatives working out of countries like Russia and China. Though it may sound like something out of a spy novel, their goal is often to infiltrate companies, gain unauthorized system access, and steal sensitive information to support espionage or criminal activities.

Just how bad is it?

Industry professionals have shared concerns about applicant fraud. For example, CNBC recently highlighted a hiring manager's experience:

"We've had multiple cases where candidates passed technical interviews, got hired, and then couldn't perform any of the tasks. Upon investigation, we discovered they had completely faked their identity and work history."

Tofu analyzed the engineering reqs of 10 different customers, covering thousands of applications. Healthcare and fintech/blockchain are being hit the hardest, followed by security, then traditional SaaS, which includes industries like data warehousing and other verticalized companies. We even saw one fintech customer where 70% of applicants scored below a likelihood score of 4 (out of 10).

Distribution of fake applicant likeliness by industry

The problems it creates for recruiters

Without a good, scalable process for identifying fraud, teams are left manually reviewing thousands of fake applications or relying on gut feel once they’re on Zoom. That puts pressure on the entire process, and once it snaps, bad actors slip through the cracks. Here’s how:

  • Wasted Time, Lost Talent: Recruiters waste valuable hours processing fraudulent applicants, limiting their ability to engage genuine candidates.

  • Hiring Mistakes: Fraudulent hires who lack the necessary skills waste resources and force the team to repeat the recruitment process.

  • Security Risks: Fraudulent employees pose severe threats, potentially exposing sensitive company information and undermining customer trust.

  • Increased Costs: Fraudulent hiring results in additional costs associated with hiring, onboarding, and managing unqualified or deceptive employees.

Jason recently wrote a post on LinkedIn to which one person commented, “if you can’t tell someone is a fake candidate on a call, you shouldn’t be in recruiting.” Unfortunately, by that point it’s too late, and regardless, that doesn’t get to the root of the real problem. It’s interesting that people are trying to get really good at detecting, on a call, who is pretending to be someone they’re not. Because if you value time, by that point you’re already behind.

Detecting this at the earliest point in the funnel with good scalable systems should be everyone’s top priority. Here’s an example of why it matters:

Imagine a typical process:

  • 1,000 resumes

  • 43% fake -> 430 resumes

  • 25 seconds to review each (10 seconds to scan the resume + 15 seconds to validate the applicant online, a generous estimate)

  • 10,750 seconds -> 180 minutes -> ~3 hours wasted reviewing junk

If you have more than even one engineering role, multiply that time by the number of open roles. It can get overwhelming quickly!
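The back-of-envelope math above can be sketched as a quick script, useful for plugging in your own volumes and fake rates (the numbers below are the assumed figures from the example, not measured data):

```python
# Back-of-envelope estimate of recruiter time lost to fake applications.
# Inputs mirror the assumed example above: 1,000 resumes, 43% fake,
# 25 seconds of review time per fake resume.

def wasted_review_hours(resumes: int, fake_rate: float,
                        seconds_per_review: int, open_roles: int = 1) -> float:
    """Hours spent reviewing fraudulent resumes across all open roles."""
    fake_resumes = int(resumes * fake_rate)              # 1,000 * 0.43 = 430
    wasted_seconds = fake_resumes * seconds_per_review * open_roles
    return wasted_seconds / 3600                         # convert to hours

hours = wasted_review_hours(resumes=1000, fake_rate=0.43, seconds_per_review=25)
print(f"{hours:.1f} hours wasted per role")              # ~3.0 hours
```

Scaling `open_roles` up shows how quickly the burden compounds across an engineering hiring plan.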

That overwhelming burden can then have disastrous effects. We’ve heard countless stories where companies made offers to fake applicants, found their laptops shipped overseas to countries in Southeast Asia, had IT detect information being sent to remote areas in the East, and discovered that new employees who showed up to work were not the people they interviewed. The anecdotes are unbelievable, literally.

How can AI help?

In late fall of 2024, Tofu started thinking about how to solve this issue for its customers by taking a first principles approach to the problem.

It needed to be non-invasive, thorough, and seamless. So how does Tofu filter out fraud?

  • Tofu conducts a full background check on the applicant up front, at the earliest point in the funnel

  • Requires zero human interaction to avoid alerting fake applicants while not upsetting real ones

  • It validates against a network of fake applicants we’ve been building for nearly nine months, which has become our differentiating factor.

The world is changing, and it’s important to be ahead of the curve. In the absence of a dedicated solution, here are several warning signs of applicant fraud we have identified that you can use to sniff out bad actors:

  • Inconsistencies between resumes, LinkedIn profiles, and verbal accounts.

  • Candidates refusing to turn on cameras or share screens during interviews.

  • Unusual requests, such as shipping equipment to a different address or insisting on remote-only work.

  • Lack of a digital footprint or social media presence.

  • Accents or personal details not aligning with claimed locations or histories.
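The warning signs above could even be wired into a simple automated pre-screen. Below is a hypothetical sketch of one way to flag applicants against that checklist; the field names and signals are illustrative assumptions, not Tofu's actual method or scoring model:

```python
# Hypothetical rule-based pre-screen built from the warning signs listed above.
# All fields and signals are illustrative assumptions, not a real vendor's logic.

from dataclasses import dataclass

@dataclass
class Applicant:
    resume_matches_linkedin: bool    # do resume and LinkedIn accounts agree?
    camera_on_in_interviews: bool    # willing to appear on video?
    unusual_shipping_request: bool   # e.g., asked to ship laptop elsewhere
    digital_footprint_hits: int      # profiles/search results found online
    location_consistent: bool        # claimed location matches other signals

def fraud_warning_flags(a: Applicant) -> list[str]:
    """Return the list of warning signs an applicant triggers."""
    flags = []
    if not a.resume_matches_linkedin:
        flags.append("resume/LinkedIn inconsistency")
    if not a.camera_on_in_interviews:
        flags.append("refused camera during interviews")
    if a.unusual_shipping_request:
        flags.append("unusual equipment shipping request")
    if a.digital_footprint_hits == 0:
        flags.append("no digital footprint")
    if not a.location_consistent:
        flags.append("location mismatch")
    return flags

suspect = Applicant(False, False, True, 0, False)
print(fraud_warning_flags(suspect))  # triggers all five warning signs
```

A checklist like this is no substitute for upfront verification, but it gives recruiters a consistent way to triage before anyone gets on a call.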

But for those looking to apply a scalable solution to an ever-changing problem, we hope this can be helpful in guiding you through it.

Matt and Jason