Deepfakes aren't just viral clips or political media anymore — they're appearing in enterprise workflows where a camera feed is treated as proof: onboarding, account recovery, remote hiring, privileged access, and partner verification. That shift forces security teams to ask not just, "Does this look fake?" but, "Can we verify in real time that the capture is authentic and the channel isn't compromised — without disrupting the workflow?"

A new benchmark from Purdue University addresses that question. Instead of testing detectors on clean, lab-style samples, Purdue evaluated tools on real incident content pulled from social platforms — the kind of compressed, low-resolution, post-processed material that tends to break models tuned to ideal conditions.

What Purdue tested — and why it matters

Purdue built its benchmark around the Political Deepfakes Incident Database (PDID), which focuses on deepfake incidents circulating on X/Twitter, YouTube, TikTok, and Instagram. Real-world distribution shifts are where detectors tend to fail, so Purdue designed the test to reflect what security teams encounter in practice.

The dataset intentionally includes "messy" characteristics common in the wild:

  • Heavy compression and re-encoding
  • Sub-720p resolution
  • Short, social-media-style clips
  • Heterogeneous generation pipelines and post-processing

False-acceptance rate (FAR) – the fraction of fakes mistakenly accepted as real – is often more critical than accuracy alone: a high FAR lets fraud through silently, while a detector that instead triggers too many false alarms on legitimate captures is impractical at scale.

PDID contains 232 images and 173 videos. Detectors were evaluated end-to-end using standard metrics (accuracy, AUC, and FAR), covering academic, government, and commercial approaches. These realistic inputs reveal how models are likely to perform in production, not just in the lab.
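As a concrete illustration, accuracy and FAR can be computed directly from a detector's labeled outputs. A minimal sketch in Python (the labels and predictions below are made-up toy values, not PDID data):

```python
def accuracy_and_far(y_true, y_pred):
    """Compute overall accuracy and false-acceptance rate.

    y_true / y_pred: 1 = fake, 0 = real.
    FAR = fakes the detector accepted as real / total fakes.
    """
    assert len(y_true) == len(y_pred)
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    fakes = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    false_accepts = sum(1 for _, p in fakes if p == 0)
    accuracy = correct / len(y_true)
    far = false_accepts / len(fakes) if fakes else 0.0
    return accuracy, far

# Toy run: 8 samples, 4 fakes, one fake slips through.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
acc, far = accuracy_and_far(y_true, y_pred)
print(acc, far)  # 0.75 0.25
```

Note that accuracy and FAR can diverge sharply: the run above is 75% accurate overall, yet one in four fakes was accepted, which is the number an identity workflow actually cares about.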

How Deepsight performs

With real-world data exposing the limits of many detectors, the question becomes: which tools can actually deliver in production? In Purdue's evaluation, Deepsight, a deepfake detection platform from Incode Technologies, posted results directly relevant to identity and trust workflows:

  • Lowest image FAR: 2.56% with 91.07% accuracy
  • Best commercial video accuracy: 77.27% with 10.53% FAR

PDID notes that Deepsight is designed for identity verification, not political content, yet it performed strongly on this benchmark. That result underscores why resilience across sources, compression, lighting, and formats matters.

Why deepfake defense is a model-vs-model security battleground

The industry conversation often frames deepfakes as a content problem. In enterprise security, it's increasingly a systems problem. Attackers aren't just generating convincing faces — they're targeting the capture path to scale attacks:

  • Injecting manipulated content through virtual cameras
  • Using rooted or jailbroken devices to hijack camera feeds
  • Running sessions in emulators designed to appear legitimate
  • Automating probes to optimize attacks

Even strong detectors can fail if attackers control the input path.

Layered defense in Deepsight

Teams responsible for verification care not just about detection accuracy but about real-world outcomes. Deepsight uses three real-time layers to protect both the media and the systems around it:

  • Perception: multi-modal signals across video, motion, and depth
  • Integrity: device and camera checks to detect tampering or spoofed feeds
  • Behavioral: risk signals to flag automation and non-human patterns

This architecture extends protection from the device all the way to the decision, rather than guarding the media alone.
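To make the layered idea concrete, here is a hedged sketch of how signals from three such layers might feed one decision. Every name and threshold here (`SessionSignals`, `decide`, the 0.5/0.7 cutoffs) is an invented illustration of the general pattern, not Deepsight's actual logic:

```python
from dataclasses import dataclass


@dataclass
class SessionSignals:
    perception_score: float  # 0..1, likelihood the media is synthetic
    integrity_ok: bool       # device/camera checks passed
    behavior_score: float    # 0..1, likelihood of automation


def decide(signals: SessionSignals,
           perception_threshold: float = 0.5,
           behavior_threshold: float = 0.7) -> str:
    # Integrity failures short-circuit: if the capture path is
    # compromised (virtual camera, emulator), media-level scores
    # cannot be trusted no matter how clean they look.
    if not signals.integrity_ok:
        return "reject"
    if signals.perception_score >= perception_threshold:
        return "reject"
    if signals.behavior_score >= behavior_threshold:
        return "review"  # escalate suspected automation to a human
    return "accept"


print(decide(SessionSignals(0.1, True, 0.2)))   # accept
print(decide(SessionSignals(0.1, False, 0.2)))  # reject
```

The design point the sketch illustrates: the integrity layer is evaluated first and fails closed, because a perfect perception score is meaningless when the attacker controls the input path.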

Real-world deployment results

According to internal Incode testing across 1.4M identity verification sessions, Deepsight:

  • Reduced false-acceptance rate by 68×
  • Identified 10× more deepfakes than trained human reviewers
  • Caught 24,360 fraudulent sessions missed by other systems

What security teams should know

PDID provides rare, real-world comparison data. When evaluating deepfake detection tools, teams should consider:

  • FAR and false-positive rates at recommended thresholds
  • Performance under compression, low resolution, and post-processing
  • Coverage for capture-path tampering (virtual cameras, emulators, compromised devices)
  • Frequency of updates against new-generation techniques
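The first checklist item deserves emphasis: FAR depends heavily on where the operating threshold sits, and a vendor's "recommended threshold" trades false acceptances against false rejections of legitimate users. A minimal threshold-sweep sketch (the scores below are invented, and `sweep_thresholds` is an illustrative helper, not any tool's API):

```python
def sweep_thresholds(scores, labels, thresholds):
    """For each candidate threshold, report FAR (fakes scored
    below it, i.e. accepted as real) and FRR (real captures
    scored at or above it, i.e. wrongly rejected).

    scores: higher = more likely fake; labels: 1 = fake, 0 = real.
    """
    fakes = [s for s, l in zip(scores, labels) if l == 1]
    reals = [s for s, l in zip(scores, labels) if l == 0]
    results = []
    for t in thresholds:
        far = sum(s < t for s in fakes) / len(fakes)
        frr = sum(s >= t for s in reals) / len(reals)
        results.append((t, far, frr))
    return results


# Invented scores for 4 fakes and 4 real captures.
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2, 0.1, 0.05]
labels = [1,   1,   1,   1,   0,   0,   0,   0]
for t, far, frr in sweep_thresholds(scores, labels, [0.3, 0.5, 0.7]):
    print(f"threshold={t:.1f}  FAR={far:.2f}  FRR={frr:.2f}")
```

Running a sweep like this on your own compressed, low-resolution samples, rather than trusting lab numbers at a vendor's default threshold, is the most direct way to see the FAR/FRR trade-off you will actually operate under.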

The bottom line

Deepfake detection is an arms race, and real-world conditions are what matter most. Purdue's benchmark makes this clear, while platforms like Deepsight demonstrate how layered, end-to-end defenses help enterprises stay ahead. For teams relying on camera-based verification, resilience against emerging threats is now a necessity, not an option.

About the Author: Ricardo Amper is the founder and CEO of Incode Technologies, launched in 2015 in San Francisco to transform the digital identity space. Under his leadership, Incode develops AI- and ML-powered, privacy-centric solutions that help banks, governments, retailers, and other industries reduce fraud, increase revenue, and deliver seamless user experiences. A serial entrepreneur with over 20 years of experience, Ricardo previously founded La Burbuja Networks, co-founded Amco Foods (acquired by Grupo Bimbo), and led Grupo Amco before selling it to Brenntag. Born in Mexico and based in San Francisco, he continues to advance Incode's vision of "One Identity Everywhere," enabling broader access to services while empowering users to control their identity information.

Ricardo Amper — Founder & CEO, Incode Technologies
This article is a contributed piece from one of our valued partners.