AI Fraud in Employment Screening with Vetting Hub

The person who interviewed so well last month may not have been the person you hired.

March 10, 2026 · 7 min read

That is not a hypothetical. It is happening to UK employers right now, and the research published at the start of this year makes it impossible to ignore. According to data released in January 2026 by background screening firm First Advantage, 69% of UK hiring leaders identify AI-enabled impersonation and deepfake technology as the most sophisticated emerging threat to recruitment integrity. Not a concern for the future. The most pressing threat they face today.

If you are an HR Director, Compliance Manager or business owner responsible for bringing people into your organisation, you need to understand what this looks like in practice, what your current screening process is likely missing, and what you need to do about it.

What a deepfake candidate actually looks like

The word deepfake still conjures up images of badly stitched celebrity videos from a few years ago. That framing is dangerously outdated. What employers are dealing with in 2026 is entirely different.

A fraudster using today's tools can attend a live video interview with an AI-generated digital overlay on their face and voice, responding in real time to your questions, presenting the appearance and sound of a completely different person. They can submit application documents that were never issued by any institution, rendered so accurately that manual review will not catch them. They can produce supporting credentials, qualification certificates and employment history records that look, to any standard check, exactly as they should.

Experian's 2026 Future of Fraud Forecast, released in January, explicitly identifies deepfake job candidates as one of the five most significant fraud threats of the year, warning that employers are at risk of unknowingly onboarding individuals who are not who they claim to be, and handing them access to systems, clients and sensitive data from their first day in the role.

The scale is already significant. Research published by People Management in January 2026 found that one in four UK companies has reported identity fraud among new hires. Gartner is projecting that by 2028 one in four candidate profiles globally will be fake. And deepfake-related activity, according to Pindrop research cited in that same report, increased by 1,300% year on year.

These are not edge cases in the financial sector or Silicon Valley. This is the operating environment for any UK employer hiring through remote or hybrid processes right now.

Why your current screening process has a gap you may not have seen

The standard pre-employment screening process was designed for a different threat landscape. References, employment history verification, DBS checks, Right to Work documents, qualification certificates. These are all the right things to check. The problem is that the fraud has moved around them.

A candidate using AI tools can produce a synthetic identity that passes your Right to Work check because the document itself has been generated to meet the specification. They can provide references through paid referee services that generate convincing responses to any question. They can submit a CV built entirely from fabricated but plausible employment history that no standard verification will unravel.

What the process does not typically check is whether the person attending the interview is the same person whose documents you hold. For organisations screening remotely, that gap is where the fraud lives.

Two decades of processing real vetting files across regulated sectors has shown us exactly how fraud exploits process gaps. The pattern is consistent: the fraud targets the join between what an organisation thinks its process covers and what it actually verifies. In 2026 that join is the identity of the person in front of the camera.

The sectors most at risk

Every organisation hiring remotely has exposure. But the sectors that face the most sophisticated and determined fraud attempts are the ones where access has the most value.

Financial services. A rogue hire with access to client accounts, payment systems or sensitive financial data represents a very specific and very lucrative target. FCA-regulated firms have screening obligations precisely because the stakes are understood. The deepfake threat does not remove those obligations; it increases the pressure to meet them rigorously.

Healthcare and care provision. CQC safer recruitment requirements exist for very good reason. An unsuitable person in a patient-facing or vulnerable adult-facing role, who passed your screening because the screening did not actually verify who they were, is a safeguarding failure with consequences that go far beyond a regulatory finding.

Security. BS7858 sets out the standard for screening in the security sector, and the standard exists because of the access security personnel have. The combination of physical access rights and a fraudulently obtained identity is a threat that every security employer needs to take seriously.

Technology and government contracting. The Pindrop research cited in People Management found that sectors with the highest rates of sensitive system access, including banking, finance and technology, report the highest frequency of deepfake fraud attempts. Government contractors holding BPSS obligations are not exempt.

For a full picture of your Right to Work obligations under the current framework, including what the Border Security, Asylum and Immigration Act 2025 now requires of organisations engaging contractors and sub-contractors, our guide Right to Work Checks: Employer Compliance Guide 2026 sets out the current position in detail.

What a more robust process looks like

The response to this threat is not to abandon remote hiring. It is to build a process that treats identity verification as a distinct and deliberate step, separate from document checking, and one that is not satisfied by a video call alone.

Several things make a meaningful practical difference.

Requiring at least one in-person identity verification step before onboarding, where the person attending is compared against the documents you hold and a photograph taken at the time, closes the most significant gap in remote hiring.

Using a certified digital identity verification service that includes active liveness detection, rather than passive document scanning, creates a meaningful barrier to AI-generated identity fraud. The Digital Verification Services regime introduced under the Data Use and Access Act 2025 sets a framework for which services can be trusted for this purpose.

Comparing the person who attended the interview with the person whose documents you verified, formally and as a documented step in your process, creates an audit trail that demonstrates due diligence and gives you a defensible position if a fraudulent hire is later identified.

Training everyone involved in recruitment to understand what the current fraud landscape looks like is not optional. The Pindrop data cited in January's People Management research found that 62% of hiring professionals admitted that candidates are now better at using AI to deceive than recruiters are at detecting it. That is a knowledge problem with a practical solution.

The compliance dimension you cannot ignore

This is not just a fraud risk. It is a compliance risk.

Under the Economic Crime and Corporate Transparency Act 2023, large organisations now have a legal obligation to take reasonable steps to prevent fraud. Knowingly operating a hiring process that leaves a significant and well-documented gap in identity verification is not a neutral position. It is one that becomes very difficult to defend when something goes wrong.

For regulated sector employers, the obligation is more direct still. Organisations under FCA, CQC or BS7858 obligations are already required to screen to a defined standard. If your process is not keeping pace with the fraud landscape it is supposed to be protecting against, the standard is not being met in practice, whatever your procedure documents say on paper.

The honest picture

Two-thirds of UK hiring leaders have already identified this as their most urgent screening threat. A quarter of companies have already reported identity fraud among new hires. Experian has placed deepfake employment fraud on its official list of the top five fraud threats of 2026.

This is not something to monitor. It is something to act on now, before the next hire, not after one has already gone wrong.


Vetting Hub is a subscription-based consultancy founded by Graham and Vivianne Johnson, drawing on twenty years of operational experience in employment screening across every regulated sector in the UK. We give organisations the knowledge to understand every aspect of screening and vetting compliance, the practical tools to implement it correctly, and direct access to Graham and Vivianne when a specific question needs a specific answer. Everything works together as one ongoing consultancy relationship, not a course you buy once and forget. Find out more at www.vettinghub.co.uk.

Graham and Vivianne Johnson are the Founders of Vetting Hub, Empowering Your Business to Get Employment Screening Right Every Time
