The employee who doesn't exist
Not long ago, the idea of a fake employee sounded far-fetched. Resume fraud? Sure. Outsourced interviews? Occasionally. But a completely synthetic person (face, voice, work history, and identity) getting hired, onboarded, and trusted inside a company used to feel like science fiction.
That era is over. Gartner predicts that by 2028, one in four candidate profiles worldwide could be fake. The firm also reports that 6% of job candidates admit to interview fraud, including impersonation or having someone else interview for them. Hiring teams are already seeing face-swapping and synthetic identities appear in real interview workflows.
Taken together, the pattern is clear: companies are increasingly interviewing, and in some cases hiring, people who don't exist. These "employees" can pass screening, ace remote interviews, and start work with legitimate credentials. Then, once inside, they steal data, map internal systems, divert funds, or quietly set the stage for a larger attack.
Hiring is now an initial access vector
Remote work didn't create this problem, but it made it scalable. Generative artificial intelligence (AI) finished the job.
Attackers can now fabricate convincing human identities at low cost and high speed. What used to take months of social engineering can now be assembled in hours:
- Synthetic resumes polished and optimized for applicant tracking systems (ATS)
- Synthetic LinkedIn profiles that look lived-in
- Voice cloning and real-time video deepfakes that allow a "normal" interview with the wrong person on camera
On the surface, everything looks normal. Internally, the risk profile changes completely.
This matters because the goal isn't to trick one person into clicking a link. The goal is access. A job offer provides:
- Legitimate credentials
- Trusted system permissions
- Time, often weeks or months, to operate without suspicion
Law enforcement has confirmed that some of these campaigns are tied to state-backed operations. In 2025, the U.S. Department of Justice announced coordinated nationwide actions targeting North Korean remote IT worker schemes, including indictments, arrests, searches of laptop farms, and seizures of financial accounts and fraudulent websites used to launder funds.
It's repeatable, and it scales. That makes it a playbook for getting inside companies.
Why common defenses fail
Most security programs aren't built to defend against employees, especially those who arrive through legitimate channels. The hiring pipeline assumes trust by default, and attackers exploit that assumption.
The gap shows up in familiar places:
- Background checks verify documents, not lived identity
- Identity verification is often a one-time event
- Hiring teams are optimized for speed and candidate experience, not adversarial pressure
That mismatch is now a security risk. Federal guidance from the National Security Agency (NSA), Federal Bureau of Investigation (FBI), and Cybersecurity and Infrastructure Security Agency (CISA) warns that synthetic media is already being used for deception and social engineering, and it emphasizes verification, planning, and training rather than assuming perfect detection.
Detection tools help, but they don't scale as a single line of defense. Two hard realities keep showing up:
- Detection performance is inconsistent. NIST evaluations show deepfake detection varies significantly by deepfake type and media conditions, which means what works today may fail tomorrow.
- Attackers adapt and impersonations are hard to distinguish. Interpol has warned that synthetic media can enable highly convincing impersonations that are difficult to distinguish from genuine content, reinforcing that "spot the fake" isn't dependable at scale.
Controls that actually work
Defending against deepfake job hires starts with accepting an uncomfortable reality: this is a trust problem first, and a tooling problem second. You can't firewall your way out of synthetic identity. A video interview is not proof of personhood. And treating hiring as outside security just because it sits under Human Resources is no longer defensible.
If you're a Chief Information Security Officer (CISO) or security leader, your hiring pipeline is part of your attack surface. It needs the same layered verification you apply everywhere else.
1) Make interviews harder to fake
Most interview loops were designed for speed and repeatability. That worked when deception was expensive. With synthetic media, predictability helps the attacker.
The fix is controlled unpredictability – moments that require a live human and are hard to execute reliably with pre-recorded video or real-time overlays. Shift from rehearsed prompts to real follow-ups that force context and reasoning, like:
- Why did you make that tradeoff?
- What broke during the project, and what did you learn?
- What would you do differently with hindsight?
- How did you handle disagreement on the team?
Then add a quick liveness prompt that fits remote work: adjust camera/lighting, move the webcam to show the room, or read a randomly generated sentence aloud. These checks take seconds, but they anchor the interaction in the physical world, exactly where many synthetic systems still struggle. This approach aligns with federal guidance emphasizing real-time verification and trained humans as practical mitigations for synthetic-media deception.
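To make that randomness concrete, here's a minimal sketch in Python of generating a one-time read-aloud prompt at interview time. The word lists and prompt format are illustrative assumptions, not a standard; the only point is that a pre-recorded or scripted feed can't anticipate a sentence that didn't exist until the interviewer asked for it.

```python
# A minimal sketch (not a product feature) of a one-time "read this aloud"
# liveness prompt. The word lists and format are illustrative assumptions;
# what matters is that the sentence is random and generated at interview time,
# so a pre-recorded or rehearsed video cannot anticipate it.
import secrets

ADJECTIVES = ["quiet", "orange", "rapid", "hollow", "bright", "careful"]
NOUNS = ["harbor", "ladder", "pencil", "glacier", "window", "compass"]
VERBS = ["follows", "ignores", "measures", "repaints", "borrows", "signals"]

def liveness_prompt() -> str:
    """Return a short, randomly assembled sentence plus a numeric nonce."""
    sentence = (
        f"The {secrets.choice(ADJECTIVES)} {secrets.choice(NOUNS)} "
        f"{secrets.choice(VERBS)} the {secrets.choice(ADJECTIVES)} "
        f"{secrets.choice(NOUNS)}."
    )
    nonce = secrets.randbelow(9000) + 1000  # 4-digit number to read back
    return f'Please read aloud: "{sentence}" Then say the number {nonce}.'

if __name__ == "__main__":
    print(liveness_prompt())
```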
Takeaway: Add controlled unpredictability, enough that only a live human can pass.
2) Add identity friction earlier than you're comfortable with
Most organizations verify identity after they've already decided a candidate is strong. That's backward. Identity verification should happen before trust is extended and before access is granted.
A good north star is how digital identity systems think about identity proofing. NIST's Digital Identity Guidelines describe identity proofing and enrollment as evidence-based processes designed to establish identity at defined assurance levels. Hiring doesn't need to become a federal identity program, but the mindset carries over: treat identity as a control, not a formality.
In practice, that means adding small, deliberate friction that is hard to fake and easy to validate:
- Verified identity checks (for example, government-issued identification with validation steps)
- A process designed to make it expensive to present a manufactured person
- For remote roles, add a real-world checkpoint: require at least one in-person identity confirmation before onboarding is complete – either a brief meeting, an office visit, or a verified local partner location for equipment pickup and ID verification. A single physical handoff breaks many synthetic workflows and raises the attacker's cost dramatically.
Biometric checks and liveness checks can help when they're used correctly, as one signal in a layered system. The trap is treating them as a single gate and assuming the problem is solved. Federal guidance points the same way: don't bet on one detector – build verification, training, and response readiness into the process.
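As a sketch of what "identity as a control" can look like in practice, the snippet below gates access provisioning on a set of completed identity checks, so no credentials exist until every checkpoint clears. The check names and the Candidate structure are hypothetical; a real program would pull these states from HR and identity provider systems rather than hard-coding them.

```python
# A minimal sketch of treating identity verification as a hard gate before any
# access is provisioned. The check names and Candidate structure are hypothetical;
# real programs would source these states from HR and IdP systems.
from dataclasses import dataclass, field

REQUIRED_CHECKS = {
    "government_id_validated",   # documentary evidence checked
    "liveness_check_passed",     # live video or biometric signal
    "in_person_confirmation",    # one physical checkpoint before onboarding
}

@dataclass
class Candidate:
    name: str
    completed_checks: set = field(default_factory=set)

def can_provision_access(candidate: Candidate) -> bool:
    """Grant credentials only if every required identity check is complete."""
    missing = REQUIRED_CHECKS - candidate.completed_checks
    if missing:
        print(f"Blocked: {candidate.name} is missing {sorted(missing)}")
        return False
    return True

if __name__ == "__main__":
    hire = Candidate("new.hire", {"government_id_validated", "liveness_check_passed"})
    print(can_provision_access(hire))  # False until the physical checkpoint happens
```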
3) Treat resumes as claims, not facts
AI-generated resumes are often perfectly written, aligned to job descriptions, and optimized for ATS. That polish is exactly why they work.
Security-minded hiring requires a shift: treat the resume as a set of claims to validate, not a reliable narrative. Push for specificity, then test it. Ask for details around timelines, real systems used, constraints, trade-offs, team dynamics, and measurable outcomes. The goal is to force the candidate into lived specifics, because that's what synthetic narratives struggle to reproduce consistently.
Reference checks are also becoming more important, not less, because they help verify the candidate's claims. But fake references and "reference houses" exist, so favor a live conversation that verifies the relationship and probes for concrete examples over automated reference workflows, which are easier to game.
4) Bring security into recruiting workflows
Recruiters don't need to become forensic analysts, but they do need basic adversarial awareness: what synthetic applicant tradecraft looks like in real workflows.
Train recruiters and hiring managers to recognize patterns that consistently show up:
- Interview signals: flat answers, evasiveness, resistance to live prompts
- Profile signals: over-polished but vague timelines, thin professional footprints
- Post-onboarding signals: persistent camera failures, refusal to join live video, identity evasiveness, or unexplained access anomalies
Federal guidance emphasizes training, verification, and preparedness as key mitigations for synthetic deception. Recruiting needs the same escalation path security already relies on: early, clear, and stigma-free.
5) Monitor new hires like you monitor identities
Many organizations treat onboarding as a one-time gate: background check completed, identity verified once, access granted. That's not how modern security works anywhere else.
Zero trust is an identity-centered security model that assumes no implicit trust based solely on network location or initial authentication. NIST's Zero Trust Architecture frames this shift as continuous evaluation of users, assets, and access decisions, rather than automatic trust for internal actors.
That model applies cleanly to synthetic hires. Monitor access patterns, privilege requests, and unusual data movement, especially in the first 30-90 days. Establish baselines for new roles and flag anomalies early. The point isn't surveillance; it's catching identity risk before it becomes persistence.
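A minimal sketch of what that early-tenure monitoring might look like, assuming hypothetical role baselines, thresholds, and telemetry fields; a real deployment would source this from IdP, SIEM, or DLP data rather than a hard-coded table.

```python
# A minimal sketch of baselining new-hire access and flagging early anomalies.
# The baseline numbers, event fields, and thresholds are illustrative assumptions;
# in practice these would come from IdP, SIEM, or DLP telemetry.
from dataclasses import dataclass
from datetime import date

ROLE_BASELINE = {
    # role: (typical distinct systems touched per week, typical GB downloaded per week)
    "software_engineer": (12, 2.0),
    "recruiter": (6, 0.5),
}

@dataclass
class WeeklyActivity:
    user: str
    role: str
    hire_date: date
    systems_touched: int
    gb_downloaded: float

def flag_new_hire_anomaly(activity: WeeklyActivity, today: date,
                          window_days: int = 90, multiplier: float = 3.0) -> bool:
    """Flag new hires (first `window_days`) whose activity far exceeds role norms."""
    tenure = (today - activity.hire_date).days
    if tenure > window_days:
        return False  # outside the heightened-scrutiny window
    systems_norm, gb_norm = ROLE_BASELINE.get(activity.role, (10, 1.0))
    return (activity.systems_touched > multiplier * systems_norm
            or activity.gb_downloaded > multiplier * gb_norm)

if __name__ == "__main__":
    week = WeeklyActivity("new.hire", "recruiter", date(2025, 6, 2), 31, 4.2)
    print(flag_new_hire_anomaly(week, today=date(2025, 6, 20)))  # True: escalate for review
```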
The economics: why this scales
This threat is growing because the economics work. Deepfake hiring fraud is cheap to run and easy to scale: build one convincing identity, reuse it, and push applications at volume. The projections point the same way: access to powerful tools is getting easier, which is exactly how niche tradecraft turns into an industrialized process.
The Citi Institute projects that up to 8 million deepfakes will be shared online by the end of 2025, up from about 500,000 in 2023, driven by easy access to powerful tools and abundant data. Once deception becomes cheap, attackers can run high-volume campaigns until one gets through.
The fraud economy is already moving in the same direction. Deloitte estimates that generative AI-enabled email fraud losses could reach about $11.5 billion by 2027 under an aggressive adoption scenario, and its research has also been cited as projecting that U.S. fraud losses could rise to $40 billion by 2027, up from $12.3 billion in 2023.
Conclusion: trust is no longer implicit
For most of the modern security era, we treated hiring as an administrative function and security as a technical one. That separation no longer holds.
Deepfake job hires collapse the boundary between human trust and system access. Federal guidance from the NSA, FBI, and CISA makes clear that synthetic media is already being used for deception and social engineering and that organizations should plan around verification, training, and response – not perfect detection.
The fix is a posture change. The same evolution that brought multi-factor authentication, zero trust, and continuous monitoring now needs to reach hiring. Identity has to be verified. Trust has to be earned. And access has to be monitored for as long as it exists.
At Adaptive Security, we built our platform around the idea that AI-powered social engineering isn't a future problem. It's a present one. When a hacker's first day can be your breach, delay sends the wrong signal.
Brian Long — CEO & Founder at Adaptive Security