Liveness Detection vs Deepfake Detection: What's the Real Difference?

Most identity verification stacks treat liveness detection as deepfake protection. Here is why that distinction matters, what the standards actually say, and what a verification stack built for today's threat landscape looks like.
By Segolene Ayosso
March 19, 2026
6 min read

Introduction

There is a question embedded in almost every identity verification procurement happening right now: when a vendor says their system handles deepfakes, do they mean it stops someone holding a printed photograph up to a camera, or do they mean it detects an AI-generated face being injected into the video data stream from a virtual camera application running on the attacker's device? These are not the same attack. They require different technology to stop. And in most conversations taking place across fraud operations and compliance teams today, the distinction is not being made.

The scale of the mismatch is visible in the data. The WEF’s Cybercrime Atlas, published in January 2026, analysed 17 face-swapping tools and eight camera injection tools and found that even moderate-quality face-swap models, when combined with camera injection techniques, can deceive certain biometric systems. Group-IB recorded 8,065 attempts to bypass a single financial institution’s liveness checks in just eight months of 2025, using biometric injection attacks with AI-generated deepfakes. Gartner predicted in February 2024 that by 2026, 30 percent of enterprises would no longer consider identity verification and authentication solutions reliable in isolation because of AI-generated deepfakes. That deadline is now weeks away.

The gap between those figures is not primarily a technology problem. It is a vocabulary problem. Organisations are procuring liveness detection believing it addresses deepfakes. In most cases, it does not. Here is why.

Key Takeaways

  • Deepfake fraud drained $1.1 billion from U.S. corporate accounts in 2025, tripling year-on-year.
  • Liveness detection (PAD) was designed to stop physical presentation attacks at the camera sensor. ISO/IEC 30107-3 explicitly states that digital injection attacks are outside its scope.
  • Deepfake detection analyses media content for AI-generation artefacts: GAN fingerprints, frequency domain anomalies, temporal coherence failures, and physiological signal absence.
  • A system can carry full PAD certification and remain entirely vulnerable to an injection attack that never touches the camera sensor. Both statements are true because they describe different attack surfaces.
  • NIST SP 800-63-4 (2025) encodes PAD and injection attack detection as two separate normative requirements. One ISO 30107-3 certification cannot satisfy both.
  • CEN/TS 18099, MAS Singapore, and FATF now treat injection attack detection as a distinct, mandatory control alongside liveness.

What liveness detection was actually built to do

Liveness detection, in its technical definition, answers one question: is a real, physically present human being in front of this camera right now? The international standard that governs it is ISO/IEC 30107, which defines the domain as Biometric Presentation Attack Detection, or PAD. Its purpose is to detect presentation attacks: physical objects placed in front of a camera sensor to fraudulently pass a biometric check. Printed photographs, screen replays, silicone masks, 3D face models, partial overlays placed over a live face.

Passive liveness analyses a single captured frame, looking for the texture signatures and light reflection inconsistencies that distinguish real skin from a flat or manufactured surface. Active liveness adds a challenge layer: the user blinks, turns slightly, opens their mouth. The system verifies that the physical object in front of the camera responds naturally to those prompts.
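The challenge layer of active liveness can be sketched as a simple challenge-response loop: issue an unpredictable sequence of prompts, then check that the observed actions match it. This is an illustrative sketch only, not any vendor's implementation; the `CHALLENGES` list and function names are hypothetical.

```python
import secrets

# Hypothetical prompt set; real systems use vendor-specific challenges.
CHALLENGES = ["blink", "turn_left", "turn_right", "open_mouth"]

def issue_challenge(n: int = 3) -> list[str]:
    """Pick a random, unpredictable sequence of prompts for the user."""
    return [secrets.choice(CHALLENGES) for _ in range(n)]

def verify_challenge(issued: list[str], observed: list[str]) -> bool:
    """Pass only if the observed actions match the issued prompts in order.
    The randomness is the point: a pre-recorded video cannot anticipate
    which sequence will be asked for."""
    return issued == observed

seq = issue_challenge()
assert verify_challenge(seq, list(seq))      # live user following the prompts
assert not verify_challenge(seq, ["blink"])  # replay with the wrong actions
```

Note what this check does and does not establish: it confirms that whatever is in front of the camera responds to prompts, not that the responding face is real or that the stream is trustworthy.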

ISO/IEC 30107-3:2023, the testing and reporting standard against which vendors are evaluated, is explicit about what it covers. It states that the attacks considered in the document take place at the biometric capture device during presentation, and that any other attacks are considered outside the scope of this document. ISO 30107 does not claim to address digital injection. It never did. The assumption that PAD certification covers the full deepfake threat comes from buyers, not from the standard itself.

A system can carry PAD certification with a perfect test score and remain entirely vulnerable to an attack that never touches the camera sensor. These are not competing claims. Both are true because they describe different attack surfaces.

This is not a flaw in liveness detection. It was designed to address a real and persistent class of attacks, and it does so rigorously. The problem is that the dominant attack class has shifted, and a certification designed for physical artefacts is being relied on to cover digital ones.

What deepfake detection actually does

Deepfake detection answers a different question: is this media content authentic, or does it carry the forensic signatures of AI generation? It does not make assumptions about how the content was captured or delivered. It analyses the content itself.

Generative models leave traceable artefacts. Generative adversarial networks produce characteristic patterns in the frequency spectrum of generated images, sometimes called GAN fingerprints, which appear in the mid-to-high frequency bands in ways that do not occur in natural photographs. Research published at AAAI in 2024 by Tan et al. introduced a frequency-aware detection approach that exploits these spectral patterns and achieved a 9.8 percentage point improvement over prior state-of-the-art performance across 17 different GAN architectures. Diffusion models leave different artefacts: Ricker et al. found a mismatch toward higher frequencies caused by diffusion training objectives that is detectable even when GAN-specific fingerprints are absent.
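The frequency-domain idea can be illustrated with a minimal sketch: compute an azimuthally averaged power spectrum of an image, the kind of radial spectral profile from which frequency-aware detectors build richer features. This is a simplified illustration under generic assumptions, not the method of Tan et al. or any production detector.

```python
import numpy as np

def radial_spectrum(img: np.ndarray, bins: int = 32) -> np.ndarray:
    """Azimuthally averaged power spectrum of a grayscale image.
    GAN-generated images often carry anomalous energy in the
    mid-to-high frequency bins relative to natural photographs."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(y - cy, x - cx)           # distance from spectrum centre
    r_max = r.max()
    profile = np.zeros(bins)
    for i in range(bins):                  # average power in concentric rings
        ring = (r >= i * r_max / bins) & (r < (i + 1) * r_max / bins)
        profile[i] = power[ring].mean() if ring.any() else 0.0
    return profile / (profile.sum() + 1e-12)  # normalise for comparison

# A detector would feed such profiles (or richer spectral features)
# to a classifier trained on real versus generated faces.
```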

Beyond frequency analysis, deepfake video detection also analyses temporal coherence failures between frames, the absence of remote photoplethysmography signals (the subtle colour variation in facial skin caused by blood flow that real faces exhibit but synthesised faces do not), and the upsampling artefacts that appear near hairlines, eye edges, and teeth in AI-generated faces. Research by Hernandez-Ortega et al. demonstrated rPPG-based detection achieving above 98 percent AUC on standard deepfake benchmark datasets. As Wu et al. note in The Visual Computer journal, the face forgery process disrupts periodic facial colour changes in a way that functions as a reliable biological indicator of manipulation.
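A toy version of the rPPG idea: track mean green-channel intensity over a face region across frames, then look for a dominant frequency in the plausible heart-rate band. This is a deliberately simplified sketch; the published rPPG detectors cited above use far more robust signal extraction and classification.

```python
import numpy as np

def rppg_peak_bpm(green_means: np.ndarray, fps: float = 30.0) -> float:
    """Estimate the dominant pulse frequency from per-frame mean green
    intensity over a face region. Real skin shows a periodic component
    in roughly the 0.7-4 Hz (42-240 bpm) band; synthesised faces tend
    not to carry this blood-flow signal."""
    signal = green_means - green_means.mean()       # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)          # plausible heart rates
    if not band.any():
        return 0.0
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                              # Hz -> beats per minute

# Synthetic check: a 1.2 Hz oscillation (72 bpm) should be recovered.
t = np.arange(300) / 30.0                           # 10 s of 30 fps frames
demo = 0.5 * np.sin(2 * np.pi * 1.2 * t)
assert abs(rppg_peak_bpm(demo) - 72.0) < 0.5
```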

Where liveness detection is a question of physics applied to the capture moment, deepfake detection is a question of digital forensics applied to the content. One operates at the sensor. The other operates on what was captured. They are not interchangeable, and no amount of PAD optimisation closes the gap that deepfake detection is designed to address.

The gap between them, and why it is being exploited at scale

Once the technical boundary is understood, the attack is obvious. A digital injection attack does not place anything in front of a camera. It bypasses the camera entirely by inserting synthetic video into the data stream at the software layer, between the sensor and the application processing the feed. The liveness system receives data that appears to originate from a live camera. It may confirm, under its own operational logic, that the stream resembles a responsive human face. What it cannot confirm is whether that stream reflects reality, because it was never designed to inspect the data layer where the attack occurred.
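One weak client-side signal against injection is checking whether the reported capture device matches a known virtual camera driver. The denylist below is a hypothetical illustration; as the text notes, a compromised device can spoof device metadata entirely, which is why server-side stream integrity analysis (the CEN/TS 18099 territory) is the more reliable layer.

```python
# Hypothetical denylist for illustration only. Real injection attack
# detection combines device attestation, stream forensics, and
# server-side integrity checks rather than name matching.
KNOWN_VIRTUAL_CAMERAS = {
    "obs virtual camera",
    "manycam virtual webcam",
    "snap camera",
    "droidcam",
}

def looks_like_virtual_camera(device_name: str) -> bool:
    """Flag capture devices whose names match known virtual camera drivers.
    A weak client-side heuristic: malware can report any name it likes."""
    name = device_name.strip().lower()
    return any(v in name for v in KNOWN_VIRTUAL_CAMERAS)

assert looks_like_virtual_camera("OBS Virtual Camera")
assert not looks_like_virtual_camera("Integrated Webcam (HD)")
```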

NIST formalised this distinction in its updated Digital Identity Guidelines, SP 800-63-4, finalised in August 2025. The document defines an injection attack as one in which an attacker supplies untrusted biometric information or media into a program or process, which could include injecting a falsified image of identity evidence, a forged video of a user, or a morphed image to defeat biometric comparisons. This definition explicitly separates injection from presentation. NIST then encodes both as distinct normative requirements: Section 3.11 mandates PAD, citing ISO 30107-3 as the conformance standard; Section 3.14 is a separate normative requirement covering digital injection prevention and forged media detection. Both are mandatory at Identity Assurance Level 2. One ISO 30107-3 certification cannot satisfy both.

Academic work has made the same point empirically. Carta et al. demonstrated in 2022 that a complete face recognition system secured with both passive and active liveness detection could be bypassed by injecting pre-prepared deepfake video at the application layer using freely available instrumentation tools. The injection did not need to defeat the liveness algorithm. It fed the liveness algorithm synthetic input that the algorithm accepted, because the input arrived from a layer the algorithm was never designed to inspect.

According to Group-IB’s threat intelligence, between January and August 2025 alone, researchers recorded 8,065 attempts to bypass a single financial institution’s liveness checks using biometric injection attacks with AI-generated deepfakes. In a separate investigation, Group-IB identified over 1,100 deepfake fraud attempts bypassing digital KYC at an Indonesian financial institution, with an estimated $138.5 million in potential losses over three months. These are not proof-of-concept scenarios. They are production fraud operations running at volume against live verification systems today.

What the attacks look like in practice

In February 2024, Group-IB documented the first iOS trojan observed harvesting victims’ biometric data to defeat banking facial recognition. The malware, which they named GoldPickaxe and attributed to a Chinese-speaking cybercrime group, prompted victims to record their own liveness responses: nodding, blinking, opening their mouths. Attackers fed this harvested footage through AI face-swap services, then used the resulting deepfake videos to pass Thai banking facial recognition systems that had recently been mandated for transactions above 50,000 baht. One confirmed victim lost the equivalent of $40,000 through accounts opened using this method. The attack did not fail the liveness check. The liveness check confirmed that the submitted video resembled a responsive human face. It had no mechanism to determine that the face had been processed through an AI pipeline.

A criminal operation uncovered by Shanghai prosecutors, documented by the South China Morning Post, registered shell companies and issued fraudulent tax invoices worth 500 million yuan, approximately $76.2 million, by defeating China’s State Taxation Administration facial recognition system. The system required active liveness responses: nodding, shaking, blinking, opening the mouth. The attackers used a modified mobile phone costing 1,650 yuan to hijack the device camera and feed pre-prepared deepfake video simulating all required movements. The liveness check confirmed it was receiving responsive input. The injection had occurred below that detection layer.

In April 2025, Hong Kong police arrested eight people for using AI-merged facial features on stolen identity cards to open fraudulent bank accounts. The syndicate submitted deepfake video fusing fraudsters’ facial geometry with stolen cardholders’ appearances to pass banks’ digital facial verification. According to South China Morning Post reporting, the operation targeted 44 account applications across multiple institutions. The broader enforcement operation connected to those arrests recorded losses exceeding HK$1.5 billion.

In each of these cases, liveness detection performed exactly as designed. It confirmed the presence of a responsive face. What it could not confirm was whether that face was real. That is deepfake detection’s job, and deepfake detection was not there.

The standards are drawing the same boundary

The technical distinction between PAD and injection attack detection has existed in academic literature for years. What changed in 2024 and 2025 is that standards bodies and regulators encoded it as a formal, separately testable requirement.

NIST SP 800-63-4’s structural separation of Section 3.11 and Section 3.14 is the clearest articulation. But it is not alone. CEN/TS 18099, published by the European Committee for Standardization in late 2024, is the first technical specification designed specifically to evaluate injection attack detection. Its development was explicitly framed around the recognition that ISO 30107-3 covers attack point 1, presentation at the capture device, while injection attacks occur at attack point 2, between the sensor and signal processing. Gartner VP Analyst Akif Khan responded to its publication by noting that the absence of a standard for injection attack detection had been a persistent issue in the identity verification market, making it difficult to compare vendors and their solutions.

MAS Singapore’s September 2025 circular on deepfake risks made the same distinction in regulatory language. It requires financial institutions to implement liveness detection for biometric authentication and, separately, to implement endpoint-level protection to prevent injection attacks as a distinct additional control. The two requirements appear as separate bullet points because they address separate threat surfaces. An ISO 30107-3 PAD certification satisfies one. It does not satisfy the other.

A global ISO standard for injection attack detection, ISO/IEC 25456, is now under development using CEN/TS 18099 as its baseline document. The FIDO Alliance’s Face Verification Certification, launched in May 2024, tests PAD with quantitative thresholds; for injection attack protection, the current version requires vendors to document their controls rather than submitting to active penetration testing, with more extensive evaluation listed as a future consideration.

Why the confusion persists and what it costs

Part of why this distinction has not been made consistently is structural. Procurement requirements for identity verification have historically specified liveness detection as a generic requirement without separating PAD, injection attack detection, and synthetic media analysis. Vendors have had no obligation to volunteer the distinction and some commercial incentive not to. Buyers have often lacked the technical framing to ask for it.

The cost of that ambiguity is now measurable. Trend Micro’s 2024 research into underground marketplaces for identity verification bypass found bypass services advertised daily at prices starting at approximately $30 per session, with more sophisticated toolkits priced between $180 and $600. The market exists at scale precisely because liveness detection in isolation has a known boundary. Attackers have mapped it, priced it, and built a service industry around it.

DataVisor’s 2026 Fraud and AML Executive Report found that 67 percent of senior fraud and AML leaders say their organisations lack the infrastructure to deploy effective AI defences, a condition they describe as the AI Readiness Gap. Javelin Strategy’s 2025 research put total U.S. identity fraud and scam losses at $47 billion in 2024, with new account fraud alone reaching $6.2 billion, the category most directly exposed to deepfake-powered onboarding attacks. According to the WEF Cybercrime Atlas January 2026 report, criminals are now routinely combining AI-generated identity documents, advanced face swaps, and camera injection to bypass live verification simultaneously, a coordinated multi-vector attack that no single liveness check is designed to address.

FATF’s Horizon Scan on AI and Deepfakes, published in December 2025, stated directly that fraud detection technology has not kept pace with generative AI, and that deepfakes can pass through liveness and biometric checks and only trigger alarms later, creating a dangerous window for fund diversion. Their recommendation was to invest in tools that detect synthesised media in real time, treating deepfake detection as complementary to, and distinct from, biometric verification.

Three layers, three questions, three attack surfaces

Once the vocabulary is clear, the architecture of an effective defence follows logically. There are three distinct layers, each answering a different question about the same verification event.

Presentation attack detection answers whether a physical object is being presented to the camera sensor. It remains necessary. Physical presentation attacks have not disappeared, and ISO 30107-3 provides a rigorous baseline for evaluating this layer. What it does not answer is whether the camera feed itself has been compromised before PAD analysis runs.

Injection attack detection verifies the integrity of the data stream before content analysis occurs. It detects virtual cameras, emulators, jailbroken device tools, and function-level interception that routes synthetic video into the verification pipeline. Because device compromise can undermine client-side checks entirely, server-side injection attack detection is architecturally more reliable. CEN/TS 18099 provides a testable framework for evaluating this capability. NIST SP 800-63-4 Section 3.14 makes it a regulatory requirement.

Deepfake and synthetic media detection analyses the content itself: GAN fingerprints, frequency domain anomalies, temporal coherence failures, physiological signal absence, and diffusion model artefacts. This is the layer that catches synthetic faces on fraudulent identity documents, face-swapped identities in video KYC flows, and fully AI-generated personas attempting onboarding. According to the WEF Cybercrime Atlas research, deepfakes now account for a substantial and growing share of biometric fraud attempts, with the greatest KYC risk found where high-fidelity, real-time face swaps are delivered directly into a verification pipeline. Those attacks pass liveness. They fail deepfake detection, when deepfake detection is present.
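The three layers amount to three independent checks on one verification event, each of which must pass. A schematic sketch (the type and function names are illustrative, not any product's API):

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    pad_passed: bool       # ISO 30107-3 presentation attack detection
    stream_trusted: bool   # injection attack detection (CEN/TS 18099)
    media_authentic: bool  # synthetic media / deepfake analysis

def decision(r: VerificationResult) -> str:
    """Each layer answers a different question about the same event;
    a failure names the layer, which supports audit-ready reasoning."""
    if not r.stream_trusted:
        return "reject: untrusted capture stream (possible injection)"
    if not r.pad_passed:
        return "reject: presentation attack suspected at the sensor"
    if not r.media_authentic:
        return "reject: AI-generation artefacts detected in media"
    return "accept"

assert decision(VerificationResult(True, True, True)) == "accept"
assert "injection" in decision(VerificationResult(True, False, True))
```

The ordering reflects the architecture: stream integrity is checked first because PAD and content analysis are only meaningful if the data they analyse actually came from the sensor.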

The question worth asking

The practical question for any fraud or compliance team reviewing their verification architecture is not whether they have liveness detection. Most do. The question is what happens when an attacker bypasses the camera entirely and submits AI-generated video at the data layer. What in the current stack detects the injection before liveness analysis runs? What analyses the media content for digital generation artefacts after capture? These are not the same check. They do not overlap. The data from Group-IB, the WEF Cybercrime Atlas, Trend Micro, Gartner, and DataVisor all describe a threat environment where both gaps are being found and used.

Gartner’s prediction about enterprise trust in biometric verification is not a warning about any single technology failing. It is a warning about the system-level consequence of using a tool designed for one attack surface to cover a broader threat landscape. ISO 30107-3 is a rigorous standard that certifies exactly what it says it certifies. CEN/TS 18099 and NIST SP 800-63-4 Section 3.14 certify something else entirely. All three are now part of the same compliance conversation. Understanding what each one covers, and what each one does not, is the starting point for building a verification stack that reflects where the threat actually is.

At DuckDuckGoose AI, we build deepfake detection that covers all three layers of the verification stack, not just the camera-facing one. Our explainable AI architecture addresses presentation attacks, injection vectors, and synthetic media artefacts, with audit-ready reasoning behind every decision. If you are reviewing how your organisation approaches the gap between liveness certification and deepfake resilience, we would be glad to show you what that looks like in practice.

Is Your Liveness Check Actually Detecting Deepfakes?

Most stacks have a gap between what PAD certifies and what today's injection attacks exploit. See where yours stands with DuckDuckGoose.


About the author

Segolene Ayosso
DuckDuckGoose AI

Discover the Power of Explainable AI (XAI) Deepfake Detection

Schedule a free demo today to experience how our solutions can safeguard your organization from fraud, identity theft, misinformation, and more.