
Digital Identity | Breaking Cybersecurity News | The Hacker News

Category — Digital Identity
Deepfake Job Hires: When Your Next Breach Starts With an Interview

Jan 05, 2026
The employee who doesn't exist

Not long ago, the idea of a fake employee sounded far-fetched. Resume fraud? Sure. Outsourced interviews? Occasionally. But a completely synthetic person (face, voice, work history, and identity) getting hired, onboarded, and trusted inside a company used to feel like science fiction. That era is over. Gartner predicts that by 2028, one in four candidate profiles worldwide could be fake. The firm also reports that 6% of job candidates admit to interview fraud, including impersonation or having someone else interview for them. Hiring teams are already seeing face-swapping and synthetic identities appear in real interview workflows. Taken together, the pattern is clear: companies are increasingly interviewing, and in some cases hiring, people who don't exist. These "employees" can pass screening, ace remote interviews, and start work with legitimate credentials. Then, once inside, they steal data, map internal systems, divert funds, or quietly set the...
Purdue University’s Real-World Deepfake Detection Benchmark Raises the Bar for Enterprise Models

Dec 22, 2025
Deepfakes aren't just viral clips or political media anymore — they're appearing in enterprise workflows where a camera feed is treated as proof: onboarding, account recovery, remote hiring, privileged access, and partner verification. That shift forces security teams to ask not just, "Does this look fake?" but, "Can we verify in real time that the capture is authentic and the channel isn't compromised — without disrupting the workflow?" A new benchmark from Purdue University addresses that question. Instead of testing detectors on clean, lab-style samples, Purdue evaluated tools on real incident content pulled from social platforms — the kind of compressed, low-resolution, post-processed material that tends to break models tuned to ideal conditions.

What Purdue tested — and why it matters

Purdue built its benchmark around the Political Deepfakes Incident Database (PDID), which focuses on deepfake incidents circulating on X/Twitter, YouTube, TikTok, and Instagram. Real-world distri...
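To make that gap concrete, here is a minimal illustrative sketch (not Purdue's benchmark code) of what "in-the-wild" input looks like compared with a lab-clean sample: it downscales and re-compresses a frame the way social platforms typically do before the frame is handed to a detector. It assumes the Pillow imaging library; the file name and the `score_frame` hook are hypothetical placeholders for whatever detector a team actually runs.

```python
# Minimal sketch: approximate social-platform post-processing (downscale +
# heavy JPEG re-compression) so a detector can be scored on degraded input
# as well as the clean original. `score_frame` is a hypothetical stand-in.

import io
from PIL import Image


def degrade_like_social_platform(img: Image.Image,
                                 max_side: int = 480,
                                 jpeg_quality: int = 40) -> Image.Image:
    """Downscale and JPEG-recompress a frame to mimic platform re-encoding."""
    scale = max_side / max(img.size)
    if scale < 1.0:
        img = img.resize((round(img.width * scale), round(img.height * scale)),
                         Image.LANCZOS)
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    return Image.open(buf)


def score_frame(img: Image.Image) -> float:
    """Placeholder detector hook: replace with a real model's fake-probability."""
    return 0.5  # dummy value so the sketch runs end to end


if __name__ == "__main__":
    frame = Image.open("suspect_frame.png")  # hypothetical local file
    print("clean score:   ", score_frame(frame))
    print("degraded score:", score_frame(degrade_like_social_platform(frame)))
```

A detector that only ever sees the clean branch of this comparison during evaluation can look far stronger than it is; benchmarks like Purdue's effectively force models through the degraded branch.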