Monday, December 8, 2025

Fighting Fire With Fire

Bad actors are using AI to commit financial fraud. Financial institutions need to leverage AI tools to fight back.

Bank fraud is at an all-time high. Roughly one in three U.S. adults was the victim of financial fraud or a scam in 2024, and nearly 37% of those victims lost money. Even more troubling, nine out of 10 victims report that a fraudster accessed or attempted to access their personal financial information, and in nearly half of those cases the fraudster succeeded in stealing it.

Part of that success comes from using AI to find personal financial data. The data are used to launch phishing attacks and account takeovers, as well as to create fake identities and the deepfakes used in social-engineering scams. Once fraudsters are in the digital channel of a financial institution (FI), they can change personally identifiable information (PII) or generate a transaction within 30 seconds.

But FIs are using AI, too. In a recent report, fraud detection was respondents’ top choice (33%) when they were asked to rank the five most important ways their organizations currently use AI.

For the last decade, the industry has focused on catching and stopping fraud at the time of a transaction, relying on a risk score determined at a single point in time from a few basic behavioral signals. Anomaly detection and batch-transaction review, meanwhile, were handled manually and produced next-day reports. But with fraudsters leveraging AI, it’s become impossible to combat fraud at that scale at the speed of human effort alone.

The Changing Face of Detection

As a result, banks and credit unions are turning to AI to identify threat patterns by ingesting many behavioral signals about the person logging in: how they hold their phone, whether it’s in their dominant hand, whether they’re walking normally, and whether the correct face is presented for face recognition. A multitude of behavioral signals must be evaluated in real time to produce a risk score.
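
To make the idea concrete, here is a minimal sketch of how several behavioral signals might be reduced to a single score. The signal names, weights, and simple weighted sum are illustrative assumptions for this article, not any vendor’s actual model; a production system would learn its weighting from data.

```python
# Illustrative sketch: combining behavioral signals into one risk score.
# Signal names and weights are hypothetical, not a real vendor's model.

SIGNAL_WEIGHTS = {
    "device_grip_anomaly": 0.30,   # phone held differently than usual
    "non_dominant_hand": 0.15,     # interacting with the other hand
    "gait_anomaly": 0.20,          # walking pattern differs from baseline
    "face_match_failure": 0.35,    # presented face does not match enrollment
}

def risk_score(signals: dict[str, float]) -> float:
    """Combine per-signal anomaly values (0.0 = normal, 1.0 = highly
    anomalous) into a single score between 0 and 1."""
    total = sum(
        SIGNAL_WEIGHTS[name] * value
        for name, value in signals.items()
        if name in SIGNAL_WEIGHTS
    )
    return min(total, 1.0)

# Example session: slightly unusual grip, clean face match.
print(risk_score({"device_grip_anomaly": 0.4, "face_match_failure": 0.0}))
# -> roughly 0.12
```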

FIs use that score either to stop a transaction or to ask the user for additional levels of authentication, and all of this must happen concurrently with the session. That is only possible using analytics and machine-learning (ML) models. The next iteration will use more advanced AI tools, such as large language models (LLMs) and agentic AI, to add further sophistication and speed to detection.
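
A sketch of what that score-to-action policy could look like is below. The threshold values and action names are hypothetical assumptions; real values would be tuned by each institution.

```python
# Hypothetical decision policy driven by the real-time risk score.
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    STEP_UP = "step_up_authentication"   # e.g., prompt for a second factor
    BLOCK = "block_transaction"

def decide(score: float, step_up_threshold: float = 0.3,
           block_threshold: float = 0.8) -> Action:
    """Map a risk score to an action on the live session.
    Thresholds here are illustrative, not tuned values."""
    if score >= block_threshold:
        return Action.BLOCK
    if score >= step_up_threshold:
        return Action.STEP_UP
    return Action.ALLOW

print(decide(0.12))  # Action.ALLOW
print(decide(0.55))  # Action.STEP_UP
```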

Taking it one step further, AI models can also look for pathways and patterns of user behavior across a consortium of data and instantly report whether the person logging in is exhibiting the same behavior seen in other sessions that ultimately resulted in fraud. For example, the finding could be that this observed pattern of behavior resulted in account takeover 80% of the time.
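
Conceptually, that consortium check can be thought of as a lookup of historical outcomes for a given behavior pattern, as in the illustrative sketch below. The patterns and counts are invented for the example.

```python
# Illustrative consortium lookup: how often has this behavior pattern
# preceded confirmed fraud across pooled data? Counts are invented.
from typing import Optional

CONSORTIUM_STATS = {
    # pattern -> (sessions with this pattern, sessions that ended in fraud)
    ("pii_change", "new_payee", "max_transfer"): (1000, 800),
    ("new_device", "normal_transfer"): (5000, 50),
}

def fraud_rate(pattern: tuple) -> Optional[float]:
    """Return the historical fraud rate for a behavior pattern,
    or None if the consortium has never seen it."""
    stats = CONSORTIUM_STATS.get(pattern)
    if stats is None:
        return None
    seen, fraudulent = stats
    return fraudulent / seen

# The "80% of the time" finding from the text, expressed as a lookup:
print(fraud_rate(("pii_change", "new_payee", "max_transfer")))  # 0.8
```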

Currently, a sophisticated risk model built on machine learning can be programmed to hold certain cases for human review, at which point the reviewer can give the ML model a new policy or procedure to implement.
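
One plausible shape for that human-in-the-loop flow is sketched below: scores in a gray zone are held for review, and a reviewer’s ruling becomes a rule the system applies going forward. The thresholds and the blocklist-style rule store are assumptions made for illustration.

```python
# Hypothetical human-in-the-loop triage. Thresholds and the rule
# representation are illustrative assumptions.

review_queue: list = []
blocklist_rules: set = set()   # rules added by human reviewers

def triage(case_id: str, score: float, payee: str) -> str:
    if payee in blocklist_rules or score >= 0.8:
        return "blocked"
    if 0.5 <= score < 0.8:                    # gray zone: hold for a human
        review_queue.append({"case": case_id, "payee": payee})
        return "held_for_review"
    return "allowed"

def reviewer_confirms_fraud(payee: str) -> None:
    """A confirmed-fraud ruling becomes an automatic rule."""
    blocklist_rules.add(payee)

print(triage("c1", 0.6, "acct-999"))   # held_for_review
reviewer_confirms_fraud("acct-999")
print(triage("c2", 0.2, "acct-999"))   # blocked by the new rule
```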

In the future, ML models will be able to make policy changes or offer security recommendations to the FI. Eventually, as the tools get better and faster, agentic AI will help the FI make faster decisions and retrain the model on the information gathered that day.

AI Tools Are Critical

Given the time and effort fraud consumes and the imminent risk it presents, these solutions are becoming a critical piece of infrastructure for any financial institution. However, it’s not feasible for FIs to build these solutions themselves. They need to partner with a specialized vendor that deeply understands financial-institution fraud and has a sophisticated risk model that enables real-time interception within workflows.

The solution should allow FIs to observe and interact with it, and it must have APIs that can integrate with all other payment channels inside and outside the digital environment.

It’s also critical to make sure the solution can ingest many different signals, not just one, with the ability to add more over time. Some vendors will push only their own proprietary signals, but that’s not enough. Financial institutions should also ensure that the solution is fully integrated with their digital channel; that full integration is what allows the solution to interdict workflows in real time.
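
As a rough illustration of that extensibility requirement, the sketch below registers new signal extractors without touching the code that consumes them. All names here are hypothetical, not any product’s API.

```python
# Illustrative extensible signal pipeline: new signal sources can be
# registered over time without changing the downstream scoring code.
from typing import Callable

signal_sources: dict = {}

def register_signal(name: str):
    """Decorator that adds a new signal extractor to the pipeline."""
    def wrap(fn: Callable):
        signal_sources[name] = fn
        return fn
    return wrap

@register_signal("typing_cadence")
def typing_cadence(session: dict) -> float:
    return session.get("keystroke_anomaly", 0.0)

@register_signal("geo_velocity")          # added later, no core changes
def geo_velocity(session: dict) -> float:
    return session.get("impossible_travel", 0.0)

def collect(session: dict) -> dict:
    """Gather every registered signal for one session."""
    return {name: fn(session) for name, fn in signal_sources.items()}

print(collect({"keystroke_anomaly": 0.7}))
# {'typing_cadence': 0.7, 'geo_velocity': 0.0}
```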

Before signing a contract, ensure the solution can truly interdict in the ways described. It should also have a UI that lets an FI create policies and rules simply, along with a case-management tool for handling the events those signals generate.

While working to find and integrate the right solution, FIs can begin to fight fraud through customer communication, specifically pop-ups in the digital channel. When an accountholder logs in, a pop-up can remind them that the bank or credit union will never ask for their passcode. Subsequent pop-ups can advise the accountholder on how scams work.

When the user initiates a transaction, another pop-up should ask whether they are sure about the recipient and the amount they’re transferring. The pop-ups should be educational and active throughout the journey.

It’s important to remember that when it comes to fighting bank fraud, there is no silver bullet. Fraudsters have a host of AI tools in their arsenal, and financial institutions need an equally dynamic and powerful arsenal of their own to fight back.

—Jeff Scott is vice president, fraudtech solutions, at Q2 Holdings.
