Imagine joining a video call with your CEO, only to find out later the CEO participant was actually an AI-generated fake. Welcome to the new digital battlefield.
Adversarial AI and deepfakes have created an identity attack surface that is no longer merely digital; it extends to our perception of reality itself. These technologies are no longer science fiction or theoretical. They are actively being used to spoof identities, manipulate political perceptions, and circumvent even the best cybersecurity training initiatives.
If your cybersecurity defenses rely solely on human perception, voice recognition, or even visual evidence, you are vulnerable to an attack.
From Cat and Mouse to Machine vs. Machine
Cybersecurity has always been a game of cat and mouse. As defenders (the mice), we have historically been able to adapt our defenses to phishing, malware, ransomware, and insider threats. Today, we're also strategizing against emerging threats, from Artificial Intelligence (AI) to quantum decryption. The cat in this case is no longer just clever, though. Today's threat actors can be synthetic-driven, self-learning, and manipulative in ways we never imagined.
Envision receiving a voice message from your CEO authorizing a wire transfer, or a live video call from your IT administrator asking you to disable multi-factor authentication (MFA). When threat actors leverage a deepfake of an actual employee as part of a social engineering attack, you are not just being tricked; you become an accessory. A mistake here means you are permitting an unauthorized transaction or authentication to occur within your own organization.
What Adversarial AI and Deepfakes Mean for Security
To gauge the scope of this threat, let's break down a few key concepts.
Adversarial AI
Adversarial AI refers to techniques that deliberately manipulate AI models, such as image and video generation or language processing, into producing content or decisions with a malicious intent or outcome. As an example, consider image recognition software that is tricked into classifying a turtle as a gun based on the way the image is constructed. A human sees a turtle, but the AI software sees a weapon because of the colors, style, pixelation, and layout of the image.
In cybersecurity, adversarial AI content can bypass spam filters, confuse malware detection engines, or evade fraud detection by feeding precisely crafted inputs designed to confuse AI systems.
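To make the idea concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one well-documented adversarial technique, written in PyTorch. The model choice and perturbation size are illustrative assumptions, not details from this article.

```python
# A minimal FGSM sketch: a tiny, human-imperceptible perturbation is added
# to an image so a classifier changes its prediction. Model and epsilon are
# illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image: torch.Tensor, true_label: int, epsilon: float = 0.03):
    """Return a perturbed copy of `image` (shape [1, 3, H, W], values 0-1)."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Step in the direction that increases the loss for the true label.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# To a person, `adversarial` still looks like the original photo;
# to the model, the predicted class can flip entirely.
```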
Deepfake Attacks
Next, consider deepfakes. Deepfakes are synthetic or altered audio and video recordings of real people, generated by machine learning models (typically GANs, or Generative Adversarial Networks) and often used with malicious intent. A deepfake can make someone appear to say or do something they never did. What began as a novelty of social media filters is now weaponized for social engineering, fraud, bribery, sextortion, and blackmail.
Five Real-World Deepfake Attacks
Consider the following well-documented deepfake attack vectors:
1. Impersonating Executives
Threat actors use deepfake technology to clone an executive or political leader's voice from short audio samples obtained from public sources. They then use the cloned voice to call employees, requesting urgent fund transfers or gift cards, issuing inappropriate instructions, or asking them to relay sensitive information. These attacks are an extension of voice phishing (vishing) and have a higher chance of success because, over the limited bandwidth of a phone call, the cloned voice sounds authentic.
2. Faking Live Video Calls
Real-time rendering is now possible on modern systems. Threat actors are applying this capability to pose as trusted colleagues or contractors via Zoom, Teams, FaceTime, or Slack video. Deepfakes generated this way look convincingly real and mimic human mannerisms, including blinking eyes, moving lips, raised eyebrows, and even some hand movements. The results look frighteningly real, especially over low bandwidth or on devices with small screens.
This attack technique was behind the notorious Hong Kong deepfake CFO scam, where threat actors faked an entire video conference, impersonating a prominent CFO and other meeting participants, to social engineer the victim into transferring $25.6 million into five different Hong Kong bank accounts.
3. Embedding Malware in Media
Malware can be embedded in images, PDFs, web pages, and other deepfake content, potentially allowing it to avoid antivirus tools and evade anomaly detection systems. Threat actors need not brute force their way in; they simply exploit human curiosity to crack the door open for infostealers and other attacks. These attacks generally exploit vulnerabilities in the applications used to display the content, or use spoofed file names that actually launch executables. One simple defensive check is sketched below.
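As a rough illustration of one mitigation, the sketch below flags files whose actual content (identified by magic bytes) does not match the extension they display. The signature list is a small illustrative subset, and a real scanner would be far more thorough.

```python
# A minimal sketch (not a production scanner): flag files whose real content
# type does not match their displayed extension, e.g. an "image" or "PDF"
# that is actually an executable.
from pathlib import Path

MAGIC_BYTES = {
    b"MZ": "windows-executable",     # PE/EXE
    b"\x7fELF": "linux-executable",  # ELF
    b"%PDF": "pdf",
    b"\x89PNG": "png",
    b"\xff\xd8\xff": "jpeg",
}

def detect_type(path: Path) -> str:
    header = path.read_bytes()[:8]
    for magic, file_type in MAGIC_BYTES.items():
        if header.startswith(magic):
            return file_type
    return "unknown"

def is_suspicious(path: Path) -> bool:
    actual = detect_type(path)
    claimed = path.suffix.lower().lstrip(".")
    # A "document" or "image" that is really an executable is a red flag.
    return actual.endswith("executable") and claimed in {"jpg", "jpeg", "png", "pdf", "docx"}
```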
4. Deploying AI Chatbots for Phishing
Attackers deploy AI-based phishing bots that simulate customer support representatives on watering hole websites (sites that appear to represent a real company but are fraudulent). The goal is to manipulate users into handing over login credentials (including MFA codes) or disabling security settings while they engage with the fake site.
5. Forging Digital Biometrics
Synthetic fingerprints, 3D face models, and AI-replicated iris scans can all be digitally duplicated if the technology hosting the original biometrics has been compromised. This serves as a warning to all: even if the biometric scanner itself is secure, a digital representation of the biometrics stored insecurely, whether locally or elsewhere, can be attacked, extracted, and used in a future attack. Remember, you can change your password, but not your biometrics. Understanding how your biometrics are stored is key to determining whether a risk exists that could be leveraged against you in the future using deepfakes.
Actionable Steps for Defending Against Today's Digital Adversaries
So, how do we protect ourselves in a world where perceived reality can be spoofed?
1. Implement Multi-Modal Verification
Deepfakes often focus on a single sensory modality, with voice or video as the primary AI-generated component. Humans, however, interact through multiple channels.
Therefore, pay attention to all sensory cues. Inspect the voice, face, behavioral cues, hands, tonal inflections, background, and mannerisms to determine whether the content is fake. If a request comes via video, validate it through a secondary, known-good channel, such as a Teams or Slack message, or even a code word. Trust, but always verify using out-of-band communications, especially if the request is questionable, like a wire transfer. A minimal sketch of this rule as an approval gate follows.
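The sketch below encodes the out-of-band rule as a simple approval gate: a high-risk request arriving over one channel is only executed after confirmation on a second, known-good channel. The action names, channel labels, and data structure are hypothetical, not part of any specific product.

```python
# A minimal sketch of "verify out-of-band" as an approval gate.
from dataclasses import dataclass

HIGH_RISK_ACTIONS = {"wire_transfer", "disable_mfa", "grant_admin"}

@dataclass
class Request:
    action: str
    origin_channel: str      # e.g. "zoom_video_call"
    confirmations: set[str]  # channels that independently confirmed the request

def may_proceed(req: Request) -> bool:
    if req.action not in HIGH_RISK_ACTIONS:
        return True
    # Require at least one confirmation on a different channel than the origin.
    out_of_band = req.confirmations - {req.origin_channel}
    return len(out_of_band) >= 1

# Example: a "CFO" on video asks for a wire transfer; nothing moves until a
# callback to the CFO's known number (or a signed chat message) confirms it.
print(may_proceed(Request("wire_transfer", "zoom_video_call", set())))         # False
print(may_proceed(Request("wire_transfer", "zoom_video_call", {"callback"})))  # True
```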
2. Adopt AI to Detect AI
Defensive AI tools can be trained to detect deepfakes through frame-rate analysis, pixel inconsistencies, and unnatural blinking or voice modulation. This includes liveness detection, which looks for unnatural patterns, or the absence of natural ones, in the way the subject moves, speaks, or blinks. These tools continuously evolve alongside deepfake generators and are available as plugins for most major unified communications platforms. In the end, it's not about a block-and-tackle defense, but rather active detection and response. One such liveness cue is sketched below.
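For a sense of how one such cue works, here is a minimal blink-rate heuristic. It assumes an upstream face tracker already supplies an eye-aspect-ratio (EAR) value per video frame, and the thresholds are illustrative rather than calibrated; real detectors combine many cues with learned models.

```python
# Humans blink roughly 15-20 times per minute; many deepfake pipelines
# produce too few, or unnaturally regular, blinks.
def count_blinks(ear_series: list[float], closed_threshold: float = 0.2) -> int:
    blinks, eyes_closed = 0, False
    for ear in ear_series:
        if ear < closed_threshold and not eyes_closed:
            eyes_closed = True           # eye just closed
        elif ear >= closed_threshold and eyes_closed:
            eyes_closed = False          # eye reopened: one blink completed
            blinks += 1
    return blinks

def looks_synthetic(ear_series: list[float], fps: float) -> bool:
    minutes = len(ear_series) / fps / 60
    blinks_per_minute = count_blinks(ear_series) / max(minutes, 1e-6)
    # Far too few (or absurdly many) blinks is one signal among many.
    return blinks_per_minute < 5 or blinks_per_minute > 40
```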
3. Incorporate Deepfake Scenarios into Employee Education
Periodic cybersecurity training should include scenarios that require employees to question "authentic" interactions. This implies that your simulated phishing campaigns should now include video and audio on corporate and private communication channels. The best scenarios include fake video calls and deepfake voicemails.
Finally, encourage a culture of challenge, where even senior staff understand and accept security roadblocks, like callbacks, voice-to-text verification, or multi-approver authentication to prevent fraud.
4. Digitally Sign Sensitive Communications
Use cryptographic signatures for video, audio, and documents, especially in regulated industries or where sensitive C-level communication occurs. Trusted verification processes matter and can be built into training as well. If a CEO sends a video, it should be signed and verifiable, like any piece of digitally signed software, and it should only come from authorized business communication channels. A minimal signing sketch follows.
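As a sketch of what "signed and verifiable" can mean in practice, the example below signs a media file with an Ed25519 key using the Python cryptography package; the file name is hypothetical and key management is intentionally omitted.

```python
# A minimal sketch of signing a media file so recipients can verify it came
# from an authorized channel. Key distribution and storage are out of scope.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # in practice, held in an HSM/KMS
public_key = private_key.public_key()

with open("ceo_statement.mp4", "rb") as f:  # hypothetical file name
    media_bytes = f.read()

signature = private_key.sign(media_bytes)   # distribute alongside the video

# Recipient side: verify() raises InvalidSignature if the video was altered
# or was not produced by the holder of the private key.
public_key.verify(signature, media_bytes)
print("Signature valid: the video is authentic and untampered.")
```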
5. Limit Your Public Media Footprint
The less deepfake AI training data a threat actor can consume, the harder it is to create convincing fakes. Limit the availability of high-quality images, videos, and voice samples, especially if you are a public figure or have a public persona. Avoid uploading unnecessary media, remove metadata when possible, and restrict access to downloadable video content.
In addition, consider adding hidden and visible variable watermarks to any published content to stymie malicious training efforts, as in the sketch below.
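One simple way to apply a visible, per-copy ("variable") watermark is sketched below using Pillow; the text, tiling, and opacity are illustrative assumptions.

```python
# A minimal sketch of a tiled, semi-transparent watermark applied before
# publishing an image, making the copy traceable and harder to scrape cleanly.
from PIL import Image, ImageDraw

def watermark(src: str, dst: str, text: str) -> None:
    base = Image.open(src).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Tile the text so cropping cannot easily remove it.
    for y in range(0, base.height, 120):
        for x in range(0, base.width, 240):
            draw.text((x, y), text, fill=(255, 255, 255, 64))
    Image.alpha_composite(base, overlay).convert("RGB").save(dst)

watermark("headshot.jpg", "headshot_marked.jpg", "ExampleCorp copy #1042")
```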
Looking Ahead When "Reality" Is Uncertain
Adversarial AI and deepfakes have fundamentally altered the threat landscape. These are not just tools for amusement; they can automate, amplify, and disguise attacks, making them extremely difficult to detect.
Trust, but verify everything. That core essence of zero trust doesn't just apply to networks and devices anymore, but also to interactions, communications, and even identities. Cybersecurity professionals must embrace AI not only to detect and respond, but also to anticipate and outmaneuver. It's a new age of the old cat and mouse game for the security of our organizations.
Defense today is no longer about stopping a foreign IP address or a malicious packet; it also entails questioning the reality of what we are experiencing. Therefore, don't believe everything you see or hear, especially without proof. Skepticism is no longer cynicism; it's the mindset you need to navigate a world where reality itself can be faked.
About the Author: Morey J. Haber is the Chief Security Advisor at BeyondTrust. In this role, Morey is the lead identity and technical evangelist at BeyondTrust. He has more than 25 years of IT industry experience and has authored four books: Privileged Attack Vectors, Asset Attack Vectors, Identity Attack Vectors, and Cloud Attack Vectors. Morey previously served as BeyondTrust's Chief Security Officer, Chief Technology Officer, and Vice President of Product Management during his nearly 12-year tenure. In 2020, Morey was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board, assisting the corporate community with identity security best practices. He originally joined BeyondTrust in 2012 as part of the acquisition of eEye Digital Security, where he had served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as a Reliability and Maintainability Engineer for a government contractor building flight and training simulators. Morey earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook.