Tuesday, May 5, 2026

Businesses Are Struggling to Combat AI-based Fraud, a Study Finds

As online traffic powered by artificial intelligence explodes, businesses are struggling to differentiate legitimate from fraudulent agentic-commerce agents, says a study by AI-based fraud-prevention platform provider Darwinium.

Some 97% of businesses report an increase in AI-based fraud attacks in the past year, the report says. Among these businesses, 45% say the attacks were powered by improved fraud-as-a-service technology. In addition, 42% say AI is improving the precision with which criminals launch fraud attacks, and 41% say AI is being used to help fraudsters avoid detection.

As a result, 95% of businesses surveyed have made agentic AI a top-five security priority for 2026, while 46% list AI-based fraud as a top-three threat, Darwinium says.

The study, conducted in February, surveyed 500 senior professionals across the fintech, e-commerce, gaming and gambling, banking and financial services, and travel-and-hospitality industries in the United States and the United Kingdom. All companies surveyed had revenue of at least $30 million.

As AI-based fraud attacks increase, they are exposing cracks in businesses’ cyber defenses. While 95% of respondents claim they are prepared for AI-based threats, just 36% say they can stop fraud as it arises across the customer journey. Most businesses are limited to catching threats at isolated checkpoints, such as login or checkout, according to the report. In addition, 52% of respondents cannot track or label AI-assisted fraud. Instead, those businesses rely on broad security measures that can trigger mass customer churn, the report says.

While 89% of businesses surveyed expect non-human digital traffic to increase, they are divided over how to handle it, with 48% saying they will monitor and evaluate non-human digital traffic, and 31% saying they will block such traffic outright.

The top tactic for blocking non-human digital traffic is authentication and identity binding (46%), followed by the use of automation that distinguishes legitimate from illegitimate non-human digital traffic.

The growth in AI-based fraud is proving costly for businesses. On average, AI-based fraud costs businesses $4.5 million, while 62% of respondents estimate that false positives cost them more than $1 million. The cost of blocking good customers and good agentic traffic is nearing parity with the cost of letting bad actors through, according to the report.

Fraudsters’ increasing use of AI has also made deepfake scams more common. These scams use AI to produce realistic content that mimics a person’s voice and facial features. Some 93% of businesses have encountered deepfake fraud attacks in the past 12 months, while more than 45% say they encountered multiple deepfake attacks during that period.

Favored entry points for deepfake attacks include payments/checkout (22%), customer support/call centers (16%), and onboarding/identity verification (15%).

“Our research shows that AI traffic is surging, but businesses can’t tell the difference between fraudulent and legitimate agentic commerce,” Darwinium chief executive and co-founder Alisdair Faulkner says in a statement. “When [businesses] can’t identify a bot’s intent, they resort to blunt-force measures that either approve fraudulent transactions or block legitimate transactions, both of which cause millions in lost revenue and damaged relationships.”

Combating AI-based fraud will require end-to-end visibility into AI traffic and the ability to track a customer’s or bot’s intent across the entire customer journey. “If you can’t see the full picture, you can’t protect your business,” Faulkner says.


Digital Transactions