Introduction
Every year, organisations invest heavily in onboarding controls. They add document checks, selfie verification, liveness detection, and sanctions screening. They interrogate the moment a customer walks in the door. And every year, the fraud loss data comes back telling the same uncomfortable story: most of the damage was done after that door had already opened.
In 2024, account takeover fraud alone cost U.S. consumers $15.6 billion, a 23% increase year-on-year, according to Javelin Strategy and Research. New account fraud, the category most onboarding controls are designed to catch, came to $6.2 billion. Roughly 85% of all identity fraud losses that year occurred in accounts that had already passed onboarding.
The gap is not a coincidence. It reflects a structural reality about how modern identity fraud actually works, one that the industry has been slow to fully absorb into its detection architecture.
Key Takeaways
- Account takeover fraud cost $15.6 billion in 2024, roughly two and a half times the losses from new account fraud, according to Javelin Strategy and Research.
- Roughly 85% of identity fraud dollar losses occur in accounts that have already passed onboarding.
- Synthetic identities cultivate trust for months or years before busting out, with 70% exhibiting normal consumer payment patterns during that period, per Federal Reserve research.
- Deepfakes are now more commonly used to impersonate existing account holders than to bypass onboarding checks.
- Every major regulator, including FATF, EBA, FCA, MAS, and the Basel Committee, now explicitly requires ongoing identity monitoring beyond the point of onboarding.
- Only 36% of financial institutions are enhancing post-onboarding account management systems, despite 85% of losses occurring there.
Why Identity Fraud Happens After Onboarding, Not During It
Onboarding is a checkpoint, fraud is a campaign
The implicit assumption behind most fraud prevention investment is that fraudsters are trying to break in at the front door. Catch the bad actor at onboarding and the problem is solved. It is a logical assumption, but it treats fraud as a single event rather than what it increasingly is: a patient, multi-stage operation.
Synthetic identity fraud illustrates this dynamic better than almost any other attack type. Fraudsters combine a real Social Security number, typically belonging to a child, elderly person, or someone with limited credit activity, with fabricated personal details to construct a plausible identity. That identity then applies for credit. The first application is usually rejected, but the application itself creates a credit file at the major bureaus. From that moment, the clock starts on what fraud researchers call the cultivation phase.
The Federal Reserve's research on synthetic identity payments fraud found that 70% of suspected synthetic identity accounts temporarily exhibit typical consumer payment patterns during this phase. They look, to every monitoring system, like ordinary customers. TransUnion's data shows that the cultivation period typically spans two years. Equifax found that traditional detection methods take up to 14 months on average to identify a synthetic identity, and many remain undetected far longer.
Then comes the bust-out. Every available credit line is maxed out simultaneously. Loans go unpaid. The identity is abandoned. Because no real person was harmed directly, many institutions classify the loss as a credit write-off rather than a fraud event, which means the true scale of the problem is systematically undercounted.
The Federal Reserve Bank of Boston estimates that synthetic identity fraud has now crossed $35 billion in annual losses in the United States, making it the fastest-growing type of financial crime in the country.
The numbers behind the shift
Javelin's 2025 Identity Fraud Study found that total U.S. identity fraud losses reached $27 billion in 2024, affecting 18 million victims, up from $23 billion and 15 million victims the year before. When researchers broke down those losses by category, the picture was stark. Account takeover: $15.6 billion. Existing card fraud: $11.6 billion. Existing non-card account fraud: $9.3 billion. New account fraud, the category onboarding controls are built to stop: $6.2 billion.
Alloy's 2025 State of Fraud Benchmark Report, drawing on responses from roughly 500 fraud decision-makers at banks, credit unions, and fintechs, found that only 33% of financial organisations most commonly detect fraud at the onboarding stage. The majority, 56%, catch it at the point of transaction, well after a customer relationship has been established.
Alloy's 2026 report, published in December 2025, showed the trajectory continuing upward: 67% of financial institutions saw fraud rates rise in 2025, and 22% reported direct losses exceeding $5 million. Ninety-one percent of fraud decision-makers said criminals are using AI more intensively, particularly for synthetic identities and document manipulation. Among the tactics surveyed, synthetic identity fraud was rated the most concerning by 89% of respondents.
Account takeover and the deepfake inflection point
Account takeover has always been a post-onboarding problem by definition. But it has become dramatically more dangerous as generative AI has lowered the cost and technical threshold for impersonation.
Federal Reserve Vice Chair Michael Barr noted in April 2025 that deepfake attacks have seen a twentyfold increase over the last three years. The WEF's Cybercrime Atlas analysis of face-swapping and camera injection tools, published in January 2026, found that criminals are now combining AI-generated identity documents, advanced face swaps, and live camera injection to bypass verification at any point in the customer lifecycle, not just at onboarding. The primary target of these tools is the established account relationship, not the registration moment.
Gartner's September 2025 survey of 302 cybersecurity leaders found that 43% had experienced at least one deepfake audio call incident and 37% had encountered deepfake video in the prior year, with account takeover and post-authentication impersonation identified as the dominant threat vectors. Gartner also projects that by 2027, AI agents will halve the time needed to exploit account takeovers once credentials have been compromised.
The fraud is no longer breaking down the door. It is already inside, and it is waiting for the right moment.
Group-IB's threat intelligence documented that deepfake-related fraud attempts in Asia-Pacific surged 194% in 2024 compared to 2023, with voice-based post-onboarding scams leading the increase. More than 10% of financial institutions surveyed had suffered deepfake voice fraud losses exceeding $1 million per incident, with average losses running to approximately $600,000 and fewer than 5% of those funds ever recovered. These are not attempts at registration. They are post-onboarding attacks on trust, designed to exploit a relationship that onboarding controls already approved.
A case that shows how the fraud actually unfolds
In December 2025, Oluwaseun Adekoya was sentenced to 20 years in federal prison for leading a bank fraud and money laundering conspiracy that stole more than $2 million across multiple credit unions. The case, prosecuted in the Northern District of New York, is worth examining in detail because it shows exactly how post-onboarding fraud is structured.
Adekoya and his co-conspirators did not try to open new accounts. They targeted existing Home Equity Lines of Credit held by real customers who had already been through identity verification, passed all checks, and been extended significant credit. Using publicly available property records to identify HELOC holders and stolen personal data purchased through encrypted messaging platforms, the ring manufactured high-quality forged identity documents and sent workers into credit union branches to impersonate the legitimate account holders.
Every victim had a real, legitimate account. Every account had passed onboarding. The fraud was entirely invisible until post-onboarding transaction monitoring flagged anomalous withdrawal patterns. The scheme was ultimately detected not at the point of identity verification but in the account activity that followed.
It is a pattern regulators and fraud researchers have documented repeatedly. The sophistication of the attack is calibrated to the value of the existing relationship, not the ease of breaching onboarding controls.
The regulatory direction is already set
Regulators in every major market have reached the same conclusion: onboarding verification is necessary but not sufficient. The obligation to maintain ongoing due diligence is now codified in law and supervisory guidance across multiple jurisdictions.
FATF Recommendation 10 requires that financial institutions conduct ongoing due diligence on customer relationships and maintain scrutiny of transactions throughout the course of that relationship. FATF's December 2025 Horizon Scan on AI and Deepfakes added operational urgency to that requirement, explicitly calling out deepfake-enabled identity fraud as a mechanism that can pass through onboarding checks and only trigger detection later.
The European Banking Authority's Draft Regulatory Technical Standards on Customer Due Diligence, currently in consultation under Regulation (EU) 2024/1624, set out maximum timelines for updating customer records and require ongoing transaction monitoring aligned to risk classification. The FCA's November 2024 Financial Crime Guide update, which for the first time integrated Consumer Duty obligations into financial crime controls, requires firms to implement transaction monitoring systems capable of detecting suspicious activity well after a customer relationship begins. The Monetary Authority of Singapore's revised Notice 626, updated in July 2025, sets parallel expectations on ongoing monitoring and risk profile maintenance.
The Basel Committee's discussion paper on digital fraud characterises fraud detection as a proactive, continuous activity rather than a point-in-time control. The direction from regulators could not be clearer.
Why the detection gap persists
If the data is this clear, why does the gap persist? The answer is partly architectural and partly organisational.
Most identity verification infrastructure was built to answer a binary question at a specific moment: is this person who they claim to be? The tools, workflows, and compliance processes were all designed around that moment. Extending the question forward through the lifecycle of a customer relationship requires different architecture, different signals, and a different mental model of what identity assurance actually means.
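To make the architectural difference concrete, here is a minimal sketch that models identity assurance as a signal that decays between verification events rather than a flag set once at onboarding. The class, the half-life, and the thresholds are illustrative assumptions, not a description of any particular vendor's system.

```python
# A minimal sketch of identity assurance as a decaying signal rather than a
# one-time gate. The class, half-life, and thresholds are illustrative
# assumptions, not a reference implementation.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IdentityAssurance:
    last_verified: datetime        # when identity was last re-proven
    base_score: float = 1.0        # assurance at verification time
    half_life_days: float = 180.0  # assumed decay half-life

    def current_score(self, now: datetime) -> float:
        """Assurance decays exponentially since the last verification."""
        age_days = (now - self.last_verified).days
        return self.base_score * 0.5 ** (age_days / self.half_life_days)

    def refresh(self, now: datetime) -> None:
        """A successful re-verification event resets the signal."""
        self.last_verified = now
        self.base_score = 1.0

def requires_step_up(assurance: IdentityAssurance, action_risk: float,
                     now: datetime) -> bool:
    """Sensitive actions demand more residual assurance than routine ones."""
    return assurance.current_score(now) < action_risk

# Example: eleven months after verification, a credit line increase (high
# risk) triggers re-verification while a balance enquiry (low risk) passes.
account = IdentityAssurance(last_verified=datetime(2025, 1, 1))
today = datetime(2025, 12, 1)
print(requires_step_up(account, action_risk=0.9, now=today))  # True
print(requires_step_up(account, action_risk=0.2, now=today))  # False
```

The point of the model is not the decay curve itself but the shift it forces: every sensitive action becomes a question about current assurance, not a lookup of a verification flag set months earlier.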
TransUnion's data on synthetic identities captures this mismatch precisely. Eighty-three percent of financial institutions surveyed reported plans to upgrade detection capabilities at the onboarding stage. Only 36% were enhancing account management systems to detect synthetic identities already inside the portfolio. The investment allocation does not match the loss distribution.
There is also a classification problem. When a synthetic identity busts out, the loss is typically written off as a credit default. It goes to the credit risk team, not the fraud team. The fraud system never gets the signal that the account it cleared was a bad actor all along. That data silence makes it harder to build better models and easier to continue underestimating the exposure.
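One way to break that data silence, sketched below under assumed field names and thresholds, is to screen charge-offs for a bust-out signature before they disappear into the credit ledger, so the fraud model receives a label as well as the credit book receiving a loss.

```python
# A sketch of closing the label gap: before a charge-off is filed purely as
# a credit loss, screen it for a bust-out signature and, on a match, emit a
# fraud label too. Field names and thresholds here are hypothetical.
def looks_like_bust_out(account: dict) -> bool:
    """Crude bust-out signature: a young credit file, every line maxed out
    almost simultaneously, then total non-payment."""
    return (
        account["file_age_months"] <= 30
        and account["utilisation_at_default"] >= 0.95
        and account["days_to_max_all_lines"] <= 14
        and account["payments_after_max_out"] == 0
    )

def classify_charge_off(account: dict) -> list[str]:
    labels = ["credit_write_off"]  # the default routing today
    if looks_like_bust_out(account):
        labels.append("suspected_synthetic_fraud")  # signal the fraud model
    return labels
```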
What post-onboarding protection actually requires
Closing the gap between where fraud is detected and where it actually occurs requires treating identity not as a one-time gate but as an ongoing signal.
Behavioural anomaly detection is part of this, but it operates downstream, on an account that has already been established and may have spent months or years building a pattern designed to look normal. It catches the bust-out, not the cultivation. For fraud that operates on multi-year timelines, transaction monitoring alone is not sufficient.
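An illustrative version of that downstream check, assuming a per-account series of daily drawdowns, shows why. The minimum-history window and z-score threshold below are arbitrary choices; what matters is that the check fires on the spike, which is precisely why it arrives too late for the cultivation phase.

```python
# A crude downstream anomaly check: compare today's drawdown against the
# account's own trailing distribution. Window size and threshold are
# assumptions for illustration only.
from statistics import mean, stdev

def drawdown_anomaly(daily_drawdown: list[float], threshold: float = 4.0) -> bool:
    """Flag when today's drawdown sits far outside the account's trailing
    distribution. This fires on the bust-out spike; a patient cultivation
    phase, by design, never trips it."""
    if len(daily_drawdown) < 31:
        return False  # too little history to judge
    baseline, today = daily_drawdown[:-1], daily_drawdown[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and (today - mu) / sigma > threshold
```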
The more fundamental requirement is the ability to re-verify identity claims at meaningful points in the customer lifecycle, not just at onboarding. This includes account modification events, credential changes, address updates, and any interaction that creates access to new credit or elevated permissions. These are the moments where synthetic identities and taken-over accounts tend to make their moves, and they are the moments where biometric and deepfake detection capability needs to be applied.
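A hedged sketch of that trigger model follows; the event taxonomy and handler names are invented for illustration. High-risk lifecycle events route through a step-up verification, such as a biometric and deepfake screen, before the change is committed.

```python
# A sketch of event-driven re-verification. The event taxonomy and handler
# names are illustrative assumptions, not a standard interface.
from typing import Callable

STEP_UP_EVENTS = {
    "credential_change",
    "address_update",
    "credit_line_increase",
    "new_device_enrolment",
    "beneficiary_added",
}

def on_account_event(event_type: str,
                     step_up_check: Callable[[], bool]) -> str:
    """Route high-risk lifecycle events through re-verification before
    committing the change; low-risk events pass straight through."""
    if event_type in STEP_UP_EVENTS and not step_up_check():
        return "blocked_pending_review"
    return "committed"

# Example: an address update with a failed step-up check is held for review.
print(on_account_event("address_update", step_up_check=lambda: False))
```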
The EU's eIDAS 2.0 framework, which entered into force in May 2024 and mandates Digital Identity Wallets across member states by end of 2026, provides some of the technical infrastructure for this model. Institutions will be able to re-verify customer identity attributes at any point using wallet-level credentials, effectively enabling what is sometimes called perpetual KYC. The regulatory architecture for continuous identity assurance is being built. The question is whether detection capabilities keep pace with it.
The fraud is already inside
The most important thing to understand about post-onboarding fraud is that it was often not detectable at the point of onboarding. Synthetic identities are built to pass verification checks. Deepfake tools are improving faster than liveness detection benchmarks. Stolen credentials belong to real people who have already been verified. The fraud enters through controls that were working exactly as designed.
That is not an argument against strong onboarding verification. It is an argument for extending the same rigour into what happens next. Fraud is not a door-breach problem. It is a lifecycle problem. The institutions that treat it accordingly are the ones that will see the loss distribution improve.
At DuckDuckGoose AI, we build deepfake detection that works across the full customer lifecycle, not just at the front door. Whether the risk is in onboarding flows, re-authentication steps, video calls, or account modification events, our explainable AI gives your team the visibility to act with confidence. If you are reviewing how your organisation approaches post-onboarding identity risk, we would be glad to show you what that looks like in practice.
Verify Identity Beyond the Front Door
Move from one-time onboarding checks to continuous, explainable identity assurance with DuckDuckGoose