
Battling the Bots

There are plenty of good bots out there, but the bad ones are making life difficult for financial institutions and merchants. And it’s only getting worse. What’s to be done?

To paraphrase Glinda, the good witch in L. Frank Baum’s Oz novels and the 1939 film The Wizard of Oz: Are you a good bot or a bad bot?

There are plenty of good bots out there that perform useful tasks in the Internet age. They’re bits of software code, artificial intelligence really, that are programmed to react to human inputs—typed messages or voice commands—and respond fast with information or advice (“The Age of Bots,” January 2017). An example is an automated assistant that facilitates e-commerce purchases.

The trouble for merchants, banks, and the payments industry is that there are too many bad bots—malicious software applications designed to run repetitive tasks on their own. They can unleash massive attacks on the login pages of retailers, banks, and credit unions, or any organization with personal or financial data accessible through the Internet.

“With bots, they get all of these credentials from data breaches, and they just hammer until they find one that matches,” says Shirley Inscoe, senior analyst at Boston-based research and consulting firm Aite Group LLC.

Account takeovers, a type of fraud in which a criminal gains control of a legitimate credit card, bank, or other type of financial account, are a frequent result of successful bot attacks.

Bad bots can pull data from a database, for example, a retailer’s customer list with valid passwords and usernames, and, in a type of attack dubbed credential stuffing, attempt to get into a consumer’s online account without much operator action.
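To make the mechanics concrete, the short sketch below is a hypothetical illustration, not any vendor's actual logic, of one common defensive heuristic: an address that tries many distinct usernames with a high failure rate in a short window looks far more like a credential-stuffing bot than a forgetful customer. The thresholds are invented for illustration.

```python
from collections import defaultdict

# Invented thresholds, for illustration only; real systems tune these per
# site and combine many more signals than the two used here.
MAX_DISTINCT_USERNAMES = 20   # distinct usernames tried from one address
MAX_FAILURE_RATE = 0.90       # share of attempts that fail

def flag_stuffing_sources(login_attempts):
    """login_attempts: iterable of (ip, username, succeeded) tuples collected
    over a short time window. Returns the IP addresses whose pattern of
    attempts looks like credential stuffing rather than forgetful customers."""
    stats = defaultdict(lambda: {"usernames": set(), "total": 0, "failed": 0})
    for ip, username, succeeded in login_attempts:
        entry = stats[ip]
        entry["usernames"].add(username)
        entry["total"] += 1
        if not succeeded:
            entry["failed"] += 1

    flagged = []
    for ip, entry in stats.items():
        failure_rate = entry["failed"] / entry["total"]
        if (len(entry["usernames"]) > MAX_DISTINCT_USERNAMES
                and failure_rate > MAX_FAILURE_RATE):
            flagged.append(ip)
    return flagged
```

In practice, attackers spread their attempts across many addresses precisely to slip under thresholds like these, which is why the behavioral techniques described later in this article matter.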

A Swelling Stream

Bots typically run from servers, while some attacks rely on connected computers or Internet of Things devices surreptitiously recruited into the attacking swarm, or botnet.

These botnet attacks are generating a swelling stream of new business for a small army of specialist vendors with anti-bot technology. And most informed observers agree the scale of bot attacks is huge.

Ninety percent of login attempts may now come from bots, estimates Colin Sims, chief financial officer at New York City-based fraud-prevention firm Forter Inc. “One of the ways you try to brute-force your way into an account is using a bot,” he says.

The attacks are only growing more numerous. Bots made 8.3 billion malicious login attempts in May and another 8.3 billion in June, according to Akamai Technologies, a Cambridge, Mass.-based Web-services company.

In the eight months from November 2017 through June 2018, Akamai tracked more than 30 billion malicious login attempts, says the company’s 2018 State of the Internet report released in September.

Only a relative few of the bot attacks actually breach the defenders’ electronic walls, tech executives say, but more are succeeding.

Based on a late-2017 survey of more than 5,000 U.S. adults about their experiences with identity fraud, Javelin Strategy & Research estimates that account takeovers tripled over the preceding year to hit a four-year high. The Pleasanton, Calif.-based firm estimates losses reached $5.1 billion.

Like Going to the Bank

Bot-deploying criminals are devoting plenty of attention nowadays to retailer Web sites, which tend to have weaker defenses and are subject to fewer data-protection regulations than banks and credit unions, security experts say. Plus, fraudsters try to take advantage of the proclivity of consumers to use the same passwords across multiple sites.

“It’s something [retailers] haven’t dealt with before,” says Al Pascual, senior vice president of research and head of fraud and security at Javelin. “For criminals, this is almost as good as going to the bank.”

While retailers are popular targets nowadays, there are many others, including health-insurance providers, and the more skilled fraudsters continue to probe banks for weaknesses.

“We are seeing a massive number of account-takeover attempts,” says Robert Capps, vice president and authentication strategist at NuData Security, a Vancouver, British Columbia-based antifraud specialist owned by Mastercard Inc. that uses behavioral biometrics to spot suspicious activity. “We see a lot around retail, we see a lot around payment services. We’re seeing this drumbeat pretty much around anyone who has value behind that login.”

In an October report about data protection, Aite said 89% of financial-institution executives it surveyed stated that account takeover is a top-three cause of fraud losses in digital channels, and 42% said application fraud also is a top-three source of losses (see the Endpoint column for more about account-takeover fraud).

‘Unsatisfied Customers’

Even if they’re not causing actual fraud, bot attacks can wreak mayhem because of the sheer volume of traffic directed at target Web sites.

The traffic can resemble a distributed denial-of-service (DDoS) attack, in which the goal is not so much to steal as to disrupt by causing a site to slow down or crash, leading to “unsatisfied customers,” says Rich Bolstridge, chief strategist at Akamai Technologies.

“The botnets keep trying and trying,” he says.

One retailer client of Cequence Security was hit with a 10-fold increase in Web traffic, 90% of it malicious, when it ran a sale over the Memorial Day weekend, notes Larry Link, president and chief executive of the Sunnyvale, Calif.-based fraud-control technology firm.

“I was surprised at the level of sophistication that hit them,” Link says. “It’s anybody that’s got a big retail presence.”

Link adds that another weak spot bots try to exploit involves application programming interfaces (APIs), the communication protocols and tools developers use in creating their programs. APIs help outside software developers work with a particular program, but they’re often vulnerable from a security standpoint, he says.

‘That’s a Problem’

Thwarting malicious bots could soon become harder because of the rise of so-called open banking in parts of the world, including the European Union, according to Bolstridge. Open banking refers to regulations that allow financial-technology firms to access some of the customer payment data held by banks.

The intent is to enable fintechs to offer a broader array of services to consumers. But these intermediaries now represent a new group of targets for criminals.

“It’s going to make [fraud control] even more challenging,” Bolstridge says.

What to Do?

Merchants, banks, credit card issuers, insurance companies, and others increasingly are looking to tech firms to help sort out bad bots from legitimate traffic, all the while trying to minimize the risk of rejecting honest transactions.

One of the anti-bot technologies being brought to the front lines is behavioral biometrics, which involves software programs that can measure hundreds of variables, everything from the strength of the person’s keyboard tap to the width of fingertips on a touch screen to typing patterns.

“People, when they enter [data into] a machine, they don’t have even typing patterns,” says Frances Zelazny, chief strategy and marketing officer of BioCatch, a 5-year-old firm based in New York City that monitors six billion transactions per month, including banking and credit card applications. Bot-supplied data “is going to look like a machine, but a human will have many, many different nuances.”

Zelazny describes one trick BioCatch has used to thwart bot-driven credit card applications. Banks, card issuers, and others that need a customer’s birth date often display “wheels” containing days, months, and years from which the applicant is supposed to select his or her birthday. BioCatch’s technology can make the wheels spin faster or slower, a change that humans adjust to much more easily than bots, Zelazny says.

“They weren’t able to react, they expect the wheel at a certain speed,” she says.

Bots have improved over the years, data-security executives admit, but they still often betray themselves with robotic behavior, however subtle.

“If all those touches are uniform, they’re all the same size, that might be automation,” says NuData’s Capps. “If something’s too perfect, that’s a problem. If something’s too random, that’s a problem.”

Adds Zelazny: “There are certainly bots that are trying to behave like humans, but they can’t react because they’re scripts.”
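Capps’s “too perfect, too random” test can be boiled down to a toy example. The sketch below is a simplified illustration of the idea, not how NuData or BioCatch actually score sessions: it measures how much the gaps between keystrokes vary and flags input that is implausibly uniform, or implausibly erratic, for a human typist. The cutoffs are invented.

```python
import statistics

def keystroke_verdict(intervals_ms):
    """intervals_ms: gaps between successive keystrokes, in milliseconds.
    Returns a rough label based on how much the gaps vary.
    The cutoffs are invented for illustration."""
    if len(intervals_ms) < 5:
        return "not enough data"
    variation = statistics.pstdev(intervals_ms) / statistics.mean(intervals_ms)
    if variation < 0.05:
        return "suspicious: too uniform, possibly a scripted replay"
    if variation > 1.5:
        return "suspicious: too erratic, possibly randomized automation"
    return "plausibly human"

# A script replaying keystrokes on a fixed 80-millisecond cadence:
print(keystroke_verdict([80, 80, 81, 80, 79, 80]))        # too uniform
# A person typing a password from memory:
print(keystroke_verdict([120, 95, 210, 140, 88, 300]))    # plausibly human
```

Real behavioral-biometrics engines combine hundreds of such signals, as Zelazny notes, rather than relying on any single one.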

‘Low And Slow’

Despite their scripted behavior, some bots have their own tricks, and these can help them evade attention. One is to slow down their normally very high rates of login attempts.

In a recent report about credential stuffing, Akamai describes the unpleasant situation a large credit union found itself in: under attack by three separate botnets at once.

The first sign of trouble was a more than tenfold increase in malicious login attempts per hour, from about 800 under normal conditions to a spike of 8,723. But Akamai calls the botnet responsible for this attack “dumb” because, among other telltale characteristics, all of its traffic came from just two Internet Protocol (IP) addresses hosted on a cloud platform.

The second “bot herder,” as Akamai’s report calls it, “was impatient and attacked at such a high rate it couldn’t escape notice.” Over the course of three days, this botnet generated more than 190,000 malicious login attempts from thousands of IP addresses. This one needed more work to defuse than the first.

But the third bot proved to be the most dangerous and difficult to detect. “This bot used a ‘low-and-slow’ approach to attacking the site, averaging one malicious login attempt every other minute,” the report says. It used 1,500 IP addresses, but the average number of login attempts per address over the course of the attack was very low.
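A quick back-of-the-envelope comparison, using only the figures Akamai cites, shows why the third botnet was so much harder to spot than the second.

```python
# Figures cited in Akamai's report, as described above.
baseline_per_hour = 800                          # normal background of malicious attempts per hour
noisy_attempts, noisy_days = 190_000, 3          # the "impatient" second botnet
slow_attempts_per_minute, slow_ips = 0.5, 1_500  # one attempt every other minute, 1,500 addresses

noisy_per_hour = noisy_attempts / (noisy_days * 24)  # ~2,639 per hour: hard to miss
slow_per_hour = slow_attempts_per_minute * 60        # 30 per hour: buried in the baseline
slow_per_ip_per_day = slow_per_hour * 24 / slow_ips  # ~0.48 attempts per address per day

print(f"impatient botnet: {noisy_per_hour:,.0f} attempts/hour vs. ~{baseline_per_hour} baseline")
print(f"low-and-slow botnet: {slow_per_hour:.0f} attempts/hour vs. ~{baseline_per_hour} baseline")
print(f"low-and-slow, per address: {slow_per_ip_per_day:.2f} attempts/day")
```

At 30 attempts an hour, the low-and-slow bot sat far below the site’s normal background of roughly 800 malicious attempts per hour, and roughly half an attempt per address per day looks no different from a customer occasionally mistyping a password.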

This third, more subtle attack “does highlight the increased sophistication of the botnets,” says Bolstridge.

Another technique gaining favor among account-takeover fraudsters is to lie low after capturing an account, according to Forter’s Sims. Why use credentials good for a relatively low-ticket food-delivery service when, with a little time and effort, you might find that those same credentials also work at a high-end retailer?

“A popular one today: take over the account and not transact because they’re trying to get other data points because they want to take the information to commit a bigger type of theft,” Sims says.

‘Sinkholing’

It’s attacks like these that are prompting data-security vendors to roll out new anti-bot products. One of the most recent comes from Cequence, which in November unveiled its Cequence ASP, for application security platform.

The platform uses artificial intelligence, machine learning, and other technology to first identify the Web assets that a client needs to protect. It then monitors the client’s Web and mobile applications, as well as its APIs, for signs of attack, Cequence CEO Link says.

A point of differentiation for Cequence ASP from older anti-bot technology is its ability to work with clients’ existing Web applications, says Link. Older approaches built on JavaScript, a popular Web-coding technology, require code injections and software-development-kit changes for each Web or mobile application, according to Cequence.

“We do this with absolutely no change to the application environment,” Link says.

Another differentiator, Link says, is an open architecture. That means the service can be deployed on premises or in the cloud and can easily exchange data with other systems and devices.

The service is tailored for very high volumes—it comes with a tiered, subscription-based pricing model starting at $150,000 a year based on analyzing 10 million transactions per day.

Once a botnet is identified, defenders have various options to neutralize the attack. A common one has been “sinkholing” the traffic, re-routing it to a so-called negative address where the bot’s credentials can’t be tested. Such an address is one “where there’s basically nothing,” says Javelin’s Pascual.
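Conceptually, sinkholing is a routing decision made before a login request ever reaches the real authentication system. The sketch below is a deliberately simplified, hypothetical illustration; production deployments divert traffic at the network or DNS layer rather than in application code.

```python
# A deliberately simplified, hypothetical illustration of sinkholing.
# Real deployments divert traffic at the network or DNS layer, not in
# application code, and the blocklist comes from the bot-detection system.
KNOWN_BOTNET_IPS = {"203.0.113.7", "198.51.100.42"}  # example addresses only

def sinkhole_response():
    """A 'negative address' in miniature: it looks like a login endpoint,
    but the submitted credentials are never checked against the user store."""
    return {"status": 401, "body": "Invalid username or password."}

def route_login(request_ip, username, password, authenticate):
    """Divert traffic from known botnet addresses before it reaches the
    real authentication function."""
    if request_ip in KNOWN_BOTNET_IPS:
        return sinkhole_response()           # the bot learns nothing either way
    return authenticate(username, password)  # legitimate traffic proceeds normally
```

Because the sinkhole returns the same generic failure every time, the attacker learns nothing about which stolen credential pairs are actually valid.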

When the new Cequence ASP confirms a bot attack, the system attempts to squelch it through blocking, limiting traffic, deception, and other techniques. Cequence says it has tested the new service in several deployments, including a Fortune 100 multinational financial-services provider and a Fortune 500 cosmetics retailer.

‘Raising the Bar’

Apart from using this or that anti-fraud product, merchants, financial institutions, and others with data to defend need to take a broader approach to fighting botnets, one that focuses on more than just transactions, according to Sims at Forter.

“You really need to monitor every stage of the customer lifecycle,” he says. “If you just look at the point of the transaction, you’re setting yourself up to fail. Avoid the traditional rules-based analyses, try to monitor as many different touch points in the customer lifecycle as possible, not just the checkout.”

Good bot defense also goes beyond technology and defense strategy to factor in consumers’ perceptions of how well companies guard their data, according to Pascual. That’s especially true for retailers, who are newer to the data-protection game than banks.

“If my competitors offer better security … that could be a competitive advantage,” he says.

—With additional reporting by Kevin Woodward

 

Bombarded by Bots

There were 8.3 billion malicious login attempts in May and an equal number in June.

Account-takeover losses hit $5.1 billion.

Account takeovers, often linked to bots, tripled in 2017.

89% of financial-institution executives say account takeovers are a top-three cause of fraud losses.

Sources: Akamai Technologies, Javelin Strategy & Research, Aite Group

Fallback Fraud Falls

While merchants and financial institutions are fighting a pitched battle against bots, they are winning in another arena of fraud. Fallback fraud, an offshoot of the counterfeit fraud that EMV chip cards are meant to reduce, declined over the past year, according to new findings from Auriemma Consulting Group.

Fallback fraud refers to dollar losses resulting from would-be EMV payments resorting to the credit or debit card’s back-up magnetic stripe because of a problem with the chip. Such transactions typically occur when the fraudster damages the chip, covers it with clear film, or otherwise renders it inoperable.

That forces the point-of-sale terminal to read the card’s mag stripe, which likely has been counterfeited. Incorrect insertion of an EMV card into a POS terminal occasionally initiates a fallback transaction, too.

In 2017, fallback fraud made up more than 20% of counterfeit fraud and 4.5% of total credit card fraud, according to New York City-based ACG’s Card Fraud Control Benchmark Study released in November. Fallback fraud was rising even as fallback transactions, including legitimate ones, made up less than 2% of overall purchase authorizations, ACG reports.

But in 2018’s second quarter, fallback fraud made up just 11.5% of counterfeit fraud and 3.2% of total credit card fraud, respective declines of 45% and 30% year-over-year, according to ACG.

Auriemma gets its data from its quarterly Fraud Control Roundtables with representatives of 34 financial institutions, including 14 of the 15 largest U.S. credit card issuers, says Ira Goldman, senior director of the Roundtables operation. The firm also collects fraud data from issuers through monthly and quarterly surveys.

Fallback fraud is an activity that typically comes and goes fairly quickly after a nation converts to EMV chip card payments, though it has stuck around longer than usual in the United States, ACG said. But issuers are getting smarter about identifying and thwarting it, according to Goldman.

“They’re looking at dollar amounts, they’re looking at velocity thresholds [the number of transactions in a given time period], any sort of prior fallback activity on the same account,” Goldman says.

Fraudsters often try to get the most bang for their buck by buying TVs and other pricey consumer-electronics goods, hence issuers’ increased emphasis on dollar limits for fallback transactions. “The fraudster is looking to purchase an expensive item,” says Goldman.

Issuers also are looking more closely at fallback history as they try to sort the good from the bad. “There are legitimate fallback transactions,” he notes.
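Goldman’s description translates naturally into rules applied at authorization time. The sketch below is a hypothetical illustration of that kind of rule set, with invented thresholds; issuers calibrate the real ones against their own portfolios.

```python
from dataclasses import dataclass

@dataclass
class FallbackAuthorization:
    amount: float               # transaction amount, in dollars
    fallbacks_past_day: int     # fallback transactions on this account in the last 24 hours
    prior_fallback_fraud: bool  # confirmed fallback fraud on this account in the past

# Invented thresholds, for illustration only; issuers tune the real ones.
MAX_FALLBACK_AMOUNT = 500.00
MAX_FALLBACKS_PER_DAY = 2

def review_fallback(auth: FallbackAuthorization) -> str:
    if auth.prior_fallback_fraud:
        return "decline"              # prior fallback activity on the account
    if auth.amount > MAX_FALLBACK_AMOUNT:
        return "decline"              # the big-ticket-electronics pattern Goldman describes
    if auth.fallbacks_past_day >= MAX_FALLBACKS_PER_DAY:
        return "step-up review"       # velocity threshold exceeded
    return "approve"                  # legitimate fallback transactions do occur
```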

Banks that have implemented new fallback-fraud policies are reporting minimal disruption to customers, according to Auriemma. Fallback transactions and declines fell 12.6% and 20% year-over-year, respectively, the firm reported.
