
The Person Sitting in That Interview May Not Exist
The CV arrived looking polished and plausible. The employment history read well. The qualifications checked out on paper. The video interview went smoothly — confident answers, professional appearance, a candidate who seemed to know exactly what they were talking about. You made the offer.
The person you hired does not exist.
Not in the way you think they do, anyway. The CV was generated by AI in under ten minutes. The qualifications were fabricated. The face on the interview screen belonged to a deepfake filter layered over someone sitting in a different country entirely. The employment history was constructed to pass a superficial check and nothing more.
This is not a hypothetical. It is happening right now, to organisations that consider themselves careful, in sectors that consider themselves well-regulated. Experian's 2026 fraud forecast identified deepfake job candidates and AI-generated CVs among the top five fraud threats of the year, warning that employers will unknowingly onboard individuals who are not who they claim to be. Research from First Advantage found that 69% of UK hiring leaders now cite AI-enabled impersonation and deepfake technologies as the most sophisticated emerging threat to their recruitment process.
Most of those organisations do not yet have a process capable of catching it.
Why 2026 is different from every year before it
Employment fraud is not new. Two decades of processing vetting files across security, healthcare, financial services and government taught us that people have always been willing to stretch a date, inflate a job title or quietly omit a role they would rather you did not examine. What is new is the scale, the quality and the sheer accessibility of what is now available to anyone who wants to deceive.
Until very recently, building a convincing fraudulent application required effort, contacts and a reasonable tolerance for risk. Generating a fake CV that reads like a real human wrote it, complete with consistent dates, plausible progression and language tailored precisely to the job description, used to require either skill or a complicit colleague. Creating a professional-looking photograph for a fraudulent application meant stealing one or having one taken. Passing a live video interview as someone other than yourself was, for almost everyone, simply not possible.
None of those barriers now exists in any meaningful way.
What began as isolated incidents has become a structural problem. A survey of hiring managers found that 59% suspect candidates of using AI tools to misrepresent themselves, one in three has discovered a candidate using a fake identity or proxy in an interview, and 62% of hiring professionals admitted that job seekers are now better at deceiving with AI than recruiters are at detecting it.
That last figure deserves a moment. The people whose job it is to identify fraud are, by their own admission, losing the detection race.
The three layers of AI-assisted candidate fraud
It is worth being precise about what you are dealing with, because AI-assisted fraud in recruitment now operates across three distinct layers. Treating them as one problem leads to an incomplete response.
The CV and credentials layer
This is the most common form and the one most organisations encounter without realising it. A YouGov survey for the Hedd qualification fraud service revealed a sharp rise in CV fraud, with employers warning that AI makes it easier than ever for candidates to falsify applications. AI tools have eliminated the effort that fabrication used to require. A candidate wanting to claim a qualification they do not hold, extend employment dates to cover a gap, invent a role that never existed, or generate a fluent explanation for six months of nothing can do all of that in minutes — and the output no longer carries the obvious hallmarks of something made up. Grammar is perfect. Formatting is professional. The story holds together.
What makes this particularly difficult is that the candidate can be briefed by the same AI tool that wrote the CV. Ask them to walk you through their experience in interview and they will do it convincingly, because they have rehearsed a version that is internally consistent. The only reliable response at this layer is independent verification against sources the candidate cannot control. Our published guide on CV fraud and HMRC PAYE verification sets out in detail how HMRC employment records provide the one tamper-proof employment check available to UK employers. For qualifications, verification must go directly to the awarding institution — nothing the candidate provides themselves is verification.
The identity layer
Beyond CV fraud, AI is now being used to fabricate or manipulate identity documents and generate entirely synthetic candidate personas at scale. Fraud networks generate multiple fake identities, each built to appear legitimate and complete with documents, visuals and interview-ready responses, then flood job boards and application portals in coordinated waves, rotating between personas and tweaking content to avoid detection.
The most organised example of this operating against UK employers involves North Korean IT workers, and it is worth naming it directly because there is still a widespread assumption that this is an American problem. It is not. Google's Threat Intelligence Group has observed North Korean IT workers increasingly targeting European organisations as US enforcement action has disrupted their operations stateside. These operatives build fraudulent identities, apply through mainstream platforms including LinkedIn, and use facilitators to assist with background checks once an offer has been made. The practice of North Korean nationals using stolen or falsified identities to obtain employment with Western organisations has been documented in the US, UK and Australia for several years — and recent cases have extended beyond salary diversion into active data theft and extortion.
For regulated sector employers, the compliance exposure is severe. An organisation unknowingly employing someone subject to international sanctions while failing its sector's identity verification standard is in a position that is very difficult to defend in an audit or an enforcement conversation.
The live interview layer
Deepfake-related activity increased by 1,300% year on year according to Pindrop research, with AI-generated or replayed audio appearing in approximately one in every 106 company meetings monitored. Applied to recruitment, this means a candidate attending a video interview can present a different face, a different voice and a different persona in real time — and the technology required to do so is freely available, inexpensive and improving rapidly.
Companies including Google and McKinsey have responded to the surge in AI interview fraud by reintroducing mandatory in-person interviews. That is not a practical first step for every organisation, but the principle it reflects — that remote-only hiring processes have become structurally vulnerable to impersonation in a way they were not two years ago — is one that every employer needs to take seriously now rather than after the event.
What your current process almost certainly cannot catch
This is the uncomfortable part. Standard pre-employment checks were not designed for this threat environment, and several assumptions about what a thorough process looks like no longer hold.
Reading a CV carefully is not verification. A fluently written, AI-generated employment history does not contain the inconsistencies a careful reader would traditionally spot. Asking a candidate to talk through their experience in interview is not verification. A coherent and confident answer is precisely what the same AI tool that built the CV was designed to produce.
Checking that a LinkedIn profile exists and matches the application is not verification. Fake LinkedIn profiles, built using stolen identities and generative AI, are a routine part of the fraud process — not an afterthought. Accepting a digital or paper qualification certificate from the candidate is not verification. Neither is a photograph.
The gap between what most organisations believe their screening process catches and what it actually catches has always existed. AI has widened it considerably, quietly and without most organisations noticing until there is already a problem inside their building.
What a process fit for this environment actually looks like
Closing these gaps does not require technology that most organisations cannot access. It requires systematic verification against independent sources, combined with some specific procedural adjustments that reflect what the current threat landscape actually is.
For employment history, HMRC PAYE records remain the most robust tool available to UK employers. Every UK employer who has paid someone through PAYE is required to submit a Full Payment Submission recording that employment. That data sits with HMRC, cannot be altered by the candidate, and provides a direct comparison between what someone has declared and what the government actually holds on record. Our published guide on employment history verification covers exactly how to build this into a standard process rather than treating it as a specialist step.
For qualifications, verification goes directly to the awarding institution or through an accredited verification service. Reviewing a certificate the candidate has produced is not sufficient and has not been for some time.
For identity, digital identity verification through a certified Digital Verification Service under the UK's GPG45 framework provides a level of assurance that document inspection alone cannot achieve. For roles involving video-based hiring, require candidates to perform a live, unscripted action on camera during the interview — something as simple as holding up a piece of paper with a word written on it. No pre-recorded or AI-filtered video can respond to a real-time instruction it has not anticipated. For higher-risk or regulated roles, in-person identity verification at some stage should be standard practice, not an exception reserved for suspicion.
For right to work, the checks must be conducted exactly as the Home Office requires. Our right to work employer compliance guide sets out the current requirements in full, including the implications of the BRP phase-out and the expanded obligations introduced by the Border Security, Asylum and Immigration Act 2025. If you are still running a process built around accepting physical documents at face value, that process has two problems now, not one.
For employers in security, healthcare, financial services and government contracting, the baseline standard for identity verification and employment history confirmation under BS7858, CQC, FCA and BPSS is already higher than many organisations realise they are required to meet. If your current process would not hold up under your sector's audit standard, it will not hold up against this threat environment either. The pre-audit vetting file review that Vetting Hub offers exists precisely to show you where those gaps are before an inspector finds them instead.
The mindset shift that matters most
Two decades of doing this work operationally taught us one consistent lesson: the organisations that caught fraud reliably were not the ones with the most aggressive or suspicious cultures. They were the ones that had stopped treating what a candidate told them as evidence.
A CV is a claim. A certificate shown on a video call is a claim. A LinkedIn profile is a claim. An employment date on an application form is a claim. None of it becomes fact until it has been checked against a source the candidate does not control.
That principle has always been true. What AI has done is make the claims more convincing, more consistent, more professional in appearance and far harder to challenge on instinct. The response is not to become hostile to every applicant. It is to build a process that does not depend on spotting the tells — because in 2026, the tells are disappearing.
A leading research firm predicts that within three years, 25% of job candidates globally could be fake. Whether that figure proves accurate or not, the direction of travel is clear. The organisations that handle this well will be the ones that verify independently, systematically and as a matter of routine. The rest will find out there was a problem when the problem is already inside their organisation.
Working with Vetting Hub
At Vetting Hub, we work with HR teams, compliance managers and business owners who want to handle employment screening with complete confidence in a landscape that is moving faster than most internal processes can keep pace with.
Our subscription gives your organisation CPD-certified practical training built from two decades of real operational experience, the compliance tools and frameworks to put that knowledge to work immediately, and direct access to Graham and Vivianne Johnson when a specific case or question needs a specific answer. Whether you are reviewing a process that has grown without being tested, preparing for an audit, or trying to understand exactly where your gaps are, we are here as an ongoing resource rather than a one-off course.
Visit us at www.vettinghub.co.uk
