How Deepfakes Bypass Liveness Checks in Real Identity Verification Systems

Liveness checks confirm a face looks real. Injection attacks bypass the camera entirely. Here is why those are different problems, and why most verification systems are only solving the first one.
By Segolene Ayosso
March 17, 2026
6 min read

Introduction

In 2024, half of all businesses worldwide reported experiencing deepfake fraud, according to a survey of over 500 fraud and identity decision-makers across financial services, fintech, crypto, healthcare, and law enforcement. The financial sector carried the heaviest share: average losses per company reached $603,000, with one in ten financial services firms reporting losses above $1 million.

Those figures reflect a specific problem. Deepfakes are not being used only to defraud individuals. They are being used to defeat the systems that financial institutions built to prevent exactly that. The FATF Horizon Scan on AI and Deepfakes, published in December 2025, stated plainly that fraud detection technology has not kept pace with generative AI, and that deepfakes can pass through liveness and biometric checks while only triggering alarms later, creating a window for fund diversion.

Key Takeaways

  • Liveness detection was built to stop presentation attacks. Generative AI now produces reactive faces that pass behavioral challenges in real time.
  • Injection attacks bypass the camera entirely, feeding synthetic biometric data directly into the verification pipeline before any detection can occur.
  • A liveness check can return a passing result on completely synthetic input if the attack happened upstream of where detection operates.
  • CEN/TS 18099 and the forthcoming ISO 25456 define injection attack detection as its own standardized discipline, formally acknowledging that PAD alone is insufficient.
  • FinCEN, NYDFS, and FATF have all issued formal guidance requiring institutions to detect, log, and document deepfake-related fraud attempts.
  • Adequate defense requires transport integrity verification, continuously retrained content analysis, and explainable outputs that satisfy audit requirements.

What Liveness Detection Was Actually Built to Solve

Liveness detection was developed to answer one specific question: is the face being presented to the camera a real, physically present person, or some kind of substitute? Early fraud was blunt: printed photographs held up to a webcam, or a screen playing a looped video. Presentation Attack Detection, standardized under ISO/IEC 30107-3, was built to catch those techniques. It was meaningful progress at the time.

Active liveness systems went further by prompting users to perform unpredictable actions: blink, turn your head, say a phrase. The assumption was that a static image or pre-recorded clip could not respond dynamically to a challenge it had not seen before. That held until generative AI made it possible to produce reactive faces on demand.

Face-swap models now run on consumer hardware, mirroring liveness prompts with realistic micro-expressions and appropriate response latency. The behavioral challenges that active liveness systems use to prove presence can now be met by AI generating a passing response to each one in real time. This is the limitation the FATF Horizon Scan on AI and Deepfakes confirmed: deepfakes can pass through liveness and biometric checks and only trigger alarms later.

The Deeper Problem: Injection Attacks Skip the Camera Entirely

Beyond face-swapping, there is a second and structurally distinct attack type that most liveness discussions skip over. Injection attacks do not try to fool the camera. They replace what the camera sends before the system ever gets to analyze it.

In a presentation attack, a fraudster shows something to the lens. In an injection attack, a fraudster feeds synthetic or pre-rendered biometric data directly into the verification pipeline using virtual camera software, browser plugins, or API-level manipulation. The biometric module receives what looks like a legitimate camera stream and has no reliable mechanism for knowing it is not.
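To make the distinction concrete, here is a deliberately naive sketch of the kind of transport-layer check a verification pipeline could run before content analysis: flagging capture devices whose reported name matches a known virtual-camera driver. The device names and the `is_device_suspect` helper are hypothetical illustrations, and string matching like this is trivially evadable, which is exactly why the article argues injection attack detection needs deeper signals than this.

```python
# Illustrative heuristic only: a real deployment would rely on OS-level
# device and driver attestation, not name matching, because a reported
# device name can be spoofed. All names below are examples, not a
# definitive list.

KNOWN_VIRTUAL_CAMERA_NAMES = {
    "obs virtual camera",
    "manycam virtual webcam",
    "snap camera",
    "e2esoft vcam",
}

def is_device_suspect(device_name: str) -> bool:
    """Flag capture devices whose reported name matches a known
    virtual-camera driver. A miss here proves nothing; a hit is
    one signal among many that the stream may be injected."""
    return device_name.strip().lower() in KNOWN_VIRTUAL_CAMERA_NAMES

print(is_device_suspect("OBS Virtual Camera"))  # virtual device -> True
print(is_device_suspect("Integrated Webcam"))   # physical camera -> False
```

The point of the sketch is the layering, not the heuristic: this check runs on stream provenance, before any liveness model ever sees a frame.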

Group-IB's Weaponized AI report, published in January 2026, documented 8,065 attempts to bypass a single financial institution's liveness checks for digital KYC loan applications using biometric injection attacks with AI-generated deepfake images, all recorded between January and August 2025. That figure covers one institution across eight months. It is not an industry aggregate.

Liveness detection checks whether the content it receives looks real. Injection attacks control what content gets delivered. Those are different problems, and most deployed systems are only solving the first one.

FinCEN made this visible at a regulatory level in its November 2024 alert, FIN-2024-Alert004, which described a spike in Suspicious Activity Reports tied to deepfake media circumventing identity verification. The alert specifically documented the use of third-party webcam plugins during live verification checks as an active fraud pattern appearing in Bank Secrecy Act filings.

The same alert noted a behavioral dimension worth attention: actors attempting to bypass live verification were observed repeatedly claiming technical difficulties or requesting alternative verification routes. The fraud is not purely technical. It is designed to exploit the process responses that apparent failures trigger.

Why a Passing Liveness Score Does Not Mean the Input Was Real

The core issue is architectural. Liveness detection is a content-layer analysis. It examines what the media shows: texture, motion, depth, light reflection patterns. A well-designed system doing all of that correctly can still return a passing result on completely synthetic input, because the attack happened at the transport layer, upstream of where detection operates.

A presentation attack and an injection attack are not variations of the same problem. They require different defenses. A PAD system, however well-designed, is not interrogating the provenance of the stream it receives. It evaluates whether that stream contains the expected biological signals. Feed it a synthetic stream engineered to contain those signals, and it passes.

CEN/TS 18099, the European technical specification published in 2024, was written to address this gap directly. It defines Injection Attack Detection as a distinct and complementary layer to ISO 30107-3, formally acknowledging that PAD and IAD cover different parts of the attack surface. The forthcoming ISO 25456 standard will establish global testing procedures for injection-resistant systems. That two separate standards are now required to cover what was previously assumed to be handled by one is itself a clear signal that the attack landscape has moved.

What Regulators Are Now Requiring

The compliance question has shifted in a way that matters practically for procurement and audit decisions.

The EU AI Act classifies remote biometric verification systems as high-risk, requiring documented transparency, safety testing, and audit trails that show how each decision was reached. A system that returns a verdict without being able to explain which signals drove it does not satisfy that requirement. Explainability here is a compliance condition, not a product feature.

FinCEN went further than issuing a warning. FIN-2024-Alert004 asks financial institutions to reference deepfake-related suspicious activity using the specific term FIN-2024-DEEPFAKEFRAUD in SAR field 2, creating a formal documentation requirement tied directly to this threat category. Institutions that cannot show they are detecting and logging this activity carry a regulatory exposure alongside the operational fraud risk.

NYDFS reinforced the same point in its October 2024 industry letter on generative AI cybersecurity risks, identifying deepfake-enabled social engineering as a primary concern requiring active organizational response, not just monitoring. Between that letter, the FinCEN alert, and the FATF horizon scan, the regulatory direction across three distinct jurisdictions and standard-setting bodies is the same: the question is no longer whether an organization has a liveness check in place, but whether that check is producing evidence that can be examined, explained, and filed.

Three Things a Detection Stack Actually Needs

Transport integrity verification has to come first. Detecting deepfakes in a biometric stream is only meaningful if the stream itself is trustworthy. That means confirming the capture device is a real camera and not virtual camera software, checking device integrity signals, and where possible establishing cryptographic attestation of the media pipeline. NIST's Face Recognition Vendor Testing program has documented consistently that real-world adversarial conditions produce meaningfully different results from controlled benchmarks, a gap that applies directly to injection testing, where most current evaluations do not simulate transport-layer manipulation.
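One way to picture cryptographic attestation of the media pipeline is frame signing: a capture component holding a device-bound key tags each frame, and the server recomputes the tag before analysis. The sketch below is a simplified assumption, not a description of any vendor's implementation; in practice the key would be hardware-backed and the scheme would cover replay and session binding as well.

```python
import hashlib
import hmac

# Hypothetical sketch of pipeline attestation. DEVICE_KEY stands in for
# a hardware-backed, device-bound secret; key provisioning and replay
# protection are omitted for brevity.

DEVICE_KEY = b"device-bound-secret"

def sign_frame(frame_bytes: bytes, key: bytes = DEVICE_KEY) -> str:
    """Client side: tag the frame so the server can verify it came
    from the attested capture path."""
    digest = hashlib.sha256(frame_bytes).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_frame(frame_bytes: bytes, tag: str, key: bytes = DEVICE_KEY) -> bool:
    """Server side: reject frames whose tag does not verify, i.e.
    frames injected downstream of the attested capture point."""
    expected = sign_frame(frame_bytes, key)
    return hmac.compare_digest(expected, tag)

frame = b"\x00\x01\x02raw-frame-data"
tag = sign_frame(frame)
print(verify_frame(frame, tag))        # untampered frame -> True
print(verify_frame(b"injected", tag))  # substituted frame -> False
```

Notice what this does and does not give you: a frame that fails verification was injected, but a frame that passes is only as trustworthy as the key storage on the client device. Transport integrity narrows the attack surface; it does not replace content analysis.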

Content-layer detection still matters for presentation attacks and for synthetic media that reaches the system through legitimate channels. The practical requirement is generalization: models trained only on known generation engines will miss content produced by tools not in the training set, and that list of tools grows continuously. Retraining is not a periodic activity. It has to be continuous.

Explainability of outputs is the third requirement, and the one most often absent. When a session is cleared or flagged, compliance teams need to know which signals contributed to that outcome, what artifacts were examined, and how the system reached its verdict. That is what produces audit-ready records for regulatory filing and what lets fraud analysts investigate edge cases without needing to interpret a raw confidence score.
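A minimal shape for that kind of audit-ready output might look like the record below. The field names and signal scores are illustrative assumptions, not a standard schema; the point is that the verdict travels together with the signals that produced it, in a form a compliance team can file.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical record shape: field names are illustrative, not drawn
# from any standard. Each verdict carries the per-signal contributions
# that drove it, rather than a single opaque confidence score.

@dataclass
class VerificationRecord:
    session_id: str
    verdict: str                                  # "pass" or "flag"
    signals: dict = field(default_factory=dict)   # signal name -> contribution
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_json(self) -> str:
        """Serialize the full decision context for regulatory filing."""
        return json.dumps(asdict(self), indent=2)

record = VerificationRecord(
    session_id="sess-001",
    verdict="flag",
    signals={"transport_integrity": -0.9, "texture_analysis": 0.2},
)
print(record.to_audit_json())
```

A record like this is what lets an analyst see that a session was flagged on transport integrity rather than content artifacts, which is a materially different investigation.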

When a verification system cannot explain how it reached a decision, a compliance team cannot document that decision. Those are the same problem.

The Gap Between Certification and Actual Coverage

A 2024 industry survey covering businesses across financial services, fintech, crypto, and law enforcement found that 56 percent of businesses claimed confidence in their ability to detect deepfakes, while only 6 percent reported having avoided financial losses from them. That gap between stated confidence and actual outcomes has a specific cause: organizations are assessing their readiness against the standards they have certified to, not against the attacks currently being used against them.

The DHS assessment on deepfake identity threats stated explicitly that existing identity verification infrastructure was designed for a threat model that preceded modern generative AI, and that compliance with older standards does not transfer to newer attack categories. ISO 30107-3 is rigorous for what it covers. It does not cover injection.

An organization can hold ISO 30107-3 certification, deploy a well-regarded liveness vendor, and still have a verification pipeline that returns passing results on injected synthetic input. That is not a failure of the certification or the vendor. It is a consequence of evaluating against one standard when the attack has moved to a different surface.

The useful questions to put to any identity verification provider are direct. What does the system do when the camera is bypassed entirely? How is the provenance of the biometric stream verified? Can it produce an explanation of each decision in a form that satisfies a regulatory audit? What is the retraining cadence, and against what attack types?

Where This Leaves Identity Verification Teams

Europol's EU Serious and Organised Crime Threat Assessment 2025 described deepfake tools as accessible, requiring no high technical skills, and already deployed by organized crime groups in CEO fraud and identity theft at scale. Group-IB's 2026 research found Deepfake-as-a-Service platforms advertising synthetic identity creation for as little as $15 per identity on Telegram and dark web channels, meaning the barrier to mounting a liveness bypass campaign is now negligible.

Liveness detection remains a necessary part of any identity verification stack. The issue is that it was never sufficient on its own for the current attack environment, and the distance between what it covers and what it does not has grown large enough to show up in loss figures and regulatory alerts across multiple jurisdictions.

Closing that gap requires building at the transport layer as well as the content layer, requires detection outputs that can be explained and filed, and requires an honest assessment of whether existing certifications actually cover the attack surface being exploited. That is a procurement question, a compliance question, and an architecture question. The organizations working through all three simultaneously are the ones whose verification infrastructure will hold.

Close the Gap Liveness Can't See

DuckDuckGoose detects what liveness checks miss, with explainable outputs built for regulatory audit.


About the author

Segolene Ayosso
DuckDuckGoose AI

Discover the Power of Explainable AI (XAI) Deepfake Detection

Schedule a free demo today to experience how our solutions can safeguard your organization from fraud, identity theft, misinformation, and more.