
Deepfakes and Synthetic Identity Fraud: How to Protect Your Organisation During Remote Onboarding
The Research That Should Make Every HR Manager Stop and Think
Research published yesterday by GetReal Security should stop every HR manager and compliance officer in their tracks. Their Deepfake Readiness Benchmark Report found that 41% of organisations with 1,000 or more employees have already hired and onboarded a fake job candidate. Not almost hired. Actually brought someone into the business who was not who they said they were.
Eight out of ten organisations report encountering AI deepfake or impersonation attempts at least occasionally. Only 1.5% say they have never encountered one. If your organisation is hiring remotely and you are confident this has not happened to you, the data suggests it is worth looking much more carefully.
What Organisations Keep Getting Wrong
In 18 years of running employment screening, I saw identity fraud at the point of hire become steadily more sophisticated. What changed in the last couple of years was not the intent behind it. It was the technology enabling it at scale.
The mistake most organisations make is treating this as a technology problem. It is not. It is a process problem. Remote hiring processes were built for a world where the person on the video call was the person in the documents. That world no longer exists. The process has not caught up.
A video interview is not identity verification. It confirms that someone showed up to the call. It does not confirm who that person is. That distinction has always mattered. Right now, it matters more than ever before.
What Deepfake Hiring Fraud Actually Involves
The Two Elements Working Together
The first is a synthetic identity: a fabricated or composite profile built from real and invented information. A genuine name combined with a manufactured employment history. A real date of birth attached to invented qualifications. Documents that pass visual inspection because they were generated using the same AI tools used to analyse them.
The second is the deepfake itself: AI-generated video that animates a synthetic persona in real time. The person on the video call looks real. They respond naturally. Their face moves convincingly. In some cases, an entirely different individual conducts the interview while AI generates a face overlay that matches the photo on the application.
The candidate gets the job. They pass onboarding checks because the documents they submit match the identity constructed to pass them. They now have access to your systems, your data and your clients.
This is not a future scenario. The GetReal Security report published yesterday confirms it is already happening inside organisations of every size.
Three Vulnerabilities to Close Right Now
The video call being treated as verification. It is not. If you are not cross-referencing the person on screen against verified identity documents through a separate, structured step, you have a gap.
Ask candidates to perform unscripted physical actions during the call. Hold a specific object up to the camera. Turn their head at an angle you specify in the moment. One security firm recently identified a deepfake candidate simply by asking them to wave their hand in front of their face. The synthetic video could not replicate it. The candidate disconnected immediately.
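For teams building a structured interview workflow, the property that makes these challenges work is that they are chosen at the moment of the call, so a pre-rendered or rehearsed deepfake cannot anticipate them. A minimal sketch of a challenge picker follows; the action list and wording are illustrative examples, not an established standard:

```python
import secrets

# Pool of physical challenges that current real-time face-swap
# pipelines tend to handle poorly: occlusion, profile views and
# fast, unpredictable motion. The list is illustrative only.
ACTIONS = [
    "wave your hand slowly in front of your face",
    "turn your head to show your left profile",
    "turn your head to show your right profile",
    "hold up {n} fingers close to the camera",
    "cover one eye with your palm for three seconds",
]

def pick_challenges(count: int = 2) -> list[str]:
    """Select challenges at interview time using a CSPRNG so the
    sequence cannot be predicted or pre-recorded."""
    chosen = []
    pool = list(ACTIONS)
    for _ in range(count):
        action = secrets.choice(pool)
        pool.remove(action)  # avoid issuing the same challenge twice
        if "{n}" in action:
            action = action.format(n=secrets.choice([2, 3, 4]))
        chosen.append(action)
    return chosen
```

Whatever form this takes in practice, record which challenges were issued and how the candidate responded: the documented process is what later demonstrates that reasonable action was taken.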
Document acceptance at face value. AI-generated documents can now pass visual inspection. Verification needs to go further: checking metadata embedded in digital files, cross-referencing against source databases and identifying the pixel-level inconsistencies that manipulation software often leaves behind.
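As an illustration of the metadata step, the sketch below assumes document metadata has already been extracted (for example with a tool such as ExifTool) into a plain dictionary, and flags common warning signs. The field names and tool list are assumptions for the example, not a comprehensive fraud ruleset:

```python
from datetime import datetime

# Producer/creator strings that warrant closer inspection.
# Purely illustrative; a real ruleset would be maintained and reviewed.
SUSPECT_TOOLS = {"stable diffusion", "midjourney", "dall-e"}

def flag_document_metadata(meta: dict) -> list[str]:
    """Return human-readable warnings for a document's metadata.
    Assumed keys: 'producer', 'created', 'modified' (both timestamps
    as 'YYYY-MM-DD HH:MM:SS' strings)."""
    warnings = []
    producer = (meta.get("producer") or "").lower()
    if not producer:
        warnings.append("no producer/software field: metadata may be stripped")
    elif any(tool in producer for tool in SUSPECT_TOOLS):
        warnings.append(f"producer '{meta['producer']}' is a generative tool")
    created, modified = meta.get("created"), meta.get("modified")
    if created and modified:
        fmt = "%Y-%m-%d %H:%M:%S"
        if datetime.strptime(modified, fmt) < datetime.strptime(created, fmt):
            warnings.append("modified timestamp precedes creation timestamp")
    return warnings
```

Metadata can itself be forged or absent, so an empty result is never clearance on its own; it is one signal to weigh alongside source-database checks.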
The gap between your written process and what happens in practice. Who actually sees and verifies original identity documents during remote onboarding? If the honest answer is that nobody does because the process relies on a candidate uploading a scan, that is precisely the gap synthetic identity fraud is constructed to exploit.
The Consequences of Getting It Wrong
Onboarding a fake candidate is not simply an embarrassing hiring mistake. The consequences depend entirely on what access that person has been given.
Access to personal data creates data breach exposure and ICO enforcement action under UK GDPR. Access to financial systems creates direct and immediate fraud risk. If the person is working with vulnerable groups and is on a barred list that the process failed to identify because the identity they presented was not their own, the safeguarding liability is serious and it sits with the employer.
There is also the negligent hiring question. If harm results and your process relied on a video call and a scanned document upload, the question in any tribunal or civil proceeding is straightforward. What did you do, and was it reasonable?
Charlotte Is the Answer to This Problem
Charlotte is the United Kingdom's first AI compliance advisor built entirely on specialist employment screening and vetting expertise. She is not a search engine and she is not a guidance document. Built on 18 years of real operational experience, she covers 65 specialist topic areas across the full scope of UK pre-employment screening, vetting, compliance and risk.
On AI fraud, synthetic identity and what defensible remote onboarding looks like, Charlotte goes deeper than any generic resource. She can walk through exactly what your process should include, where the gaps typically appear and what reasonable practice looks like at the point of hire.
To give you a sense of what Charlotte can do on this topic, here is the question I put to her and the answer she came back with.
Question put to Charlotte: We carry out all our interviews by video call. What practical steps can we take to detect whether a candidate is using a deepfake during the interview?
Charlotte's answer: A video call is not identity verification, and treating it as one is one of the most common gaps I see in remote hiring processes. Build in unscripted physical actions during the call: ask the candidate to hold a specific object up to the camera or turn their head at an angle you specify in the moment. Current deepfake technology struggles with spontaneous, unpredictable movement. Run a separate document verification step entirely independent of the call, one that checks the document itself rather than relying on a scan the candidate has uploaded. Look carefully at consistency too: does the person on screen match the identity document in the detail a deepfake overlay might miss, such as ear shape, hairline and natural skin texture? None of these steps is foolproof on its own, but together they create a documented, defensible process that shows reasonable action was taken.
Charlotte provides expert guidance based on 18 years of real operational experience in UK employment screening and vetting. She does not provide legal advice. For legal matters specific to your organisation, always consult a qualified solicitor.
Path 1: If you use a vetting platform, HR system or recruitment tool, this is worth raising with your provider. Charlotte can be embedded into any authenticated software environment with a single CSS code snippet. No technical complexity. No data risk. Ask your platform whether Charlotte is something they offer or are considering.
Path 2: If your organisation operates its own internal software or system, you can trial Charlotte directly. Software platforms and organisations with their own internal systems can access Charlotte free for seven days at https://vettinghub.co.uk/trial. One user. Full access. No commitment and nothing to cancel if she is not right for you.
Getting Charlotte deployed requires a one-time setup of £500 and an ongoing monthly licence of £995. There are no per-user or per-seat charges. Multiple authorised users across the organisation or platform access Charlotte at no additional cost. Access runs month to month with no long-term commitment.
Read More on This Topic
The broader AI fraud threat landscape is covered in full in an earlier post on this blog. It sets out how AI is being used across the entire hiring process, not just at the interview stage, and what every organisation needs to understand. Read it here: https://vettinghub.co.uk/post/ai-fraud-employment-screening-threat-landscape-uk-2026
The identity question also connects directly to right to work compliance. Gaps in how you verify who you are dealing with during remote onboarding apply equally to your right to work process, where a civil penalty of up to £60,000 per illegal worker can apply regardless of whether the failure was deliberate. That full guide is here: https://vettinghub.co.uk/post/right-to-work-checks-employer-compliance-guide-2026
Frequently Asked Questions
What is synthetic identity fraud in hiring?
Synthetic identity fraud involves constructing a fake candidate profile from a combination of real and invented information. It is not simply stealing an existing identity. A genuine name or date of birth is typically combined with a fabricated employment history, invented qualifications and AI-generated supporting documents, producing a candidate who looks credible at every stage of a standard screening process.
Can a deepfake candidate really pass a live video interview?
Yes, and it is already happening. Current deepfake technology can animate a synthetic face in real time, synchronise lip movements with speech and respond naturally to unscripted questions. Research published in May 2026 found that 41% of large organisations have already onboarded someone who was not who they claimed to be. A video call confirms someone showed up. It is not identity verification.
What should a remote hiring process include to reduce deepfake risk?
Three things matter most. Treat the video interview and the identity verification step as entirely separate processes. Include unscripted physical actions in video calls that deepfake technology cannot reliably replicate. Verify identity documents through a structured process that examines the document itself, not just a scan the candidate has uploaded.
Who is responsible if a fake candidate is onboarded and causes harm?
The employer is responsible for the adequacy of the hiring process. If harm occurs and the process relied on a video call and a scanned upload with no further verification, that is unlikely to be considered reasonable or defensible. The question is not only whether fraud occurred. It is whether the process in place was adequate to detect it.
The Two Ways to Access Charlotte
The research published yesterday is not comfortable reading. Four in ten large organisations have already brought someone into their business who was not who they claimed to be. If your remote onboarding process has not been reviewed recently, that needs to change.
Charlotte can help you work out exactly what your process should include, where the current gaps are and what defensible practice requires. She is available every hour of every day.
If you use an existing HR or vetting platform, ask your provider whether Charlotte is available or being considered. She can be deployed into any authenticated software environment through a single CSS code snippet with no data risk and no technical complexity.
If your organisation runs its own internal system, trial Charlotte free for seven days at https://vettinghub.co.uk/trial. One user. Full access. Nothing to cancel if she is not the right fit.
A one-time setup of £500 deploys Charlotte securely into your environment. Ongoing access is £995 per month, with no per-user charges, no long-term contracts and no limits on authorised users. She covers 65 specialist topic areas across pre-employment screening, vetting, compliance and risk, every hour of every day.
Fraud does not wait for your next policy review. Neither should your process.
Graham Johnson, Founder, Vetting Hub
