The big costs of bot-driven card testing fraud

By James Sherlow, Systems Engineering Director, EMEA at Cequence Security

Automated bot attacks are the scourge of the financial industry, with account takeover (ATO) fraud the dominant type of attack. These attacks use bots to identify and take control of existing user accounts, or to create new accounts based on valid credentials, and resulted in more than 300 million ATO attacks being blocked last year.

But another major problem is card verification fraud. Sometimes referred to as “carding” or “card checking”, this involves the attacker mass-testing stolen card details through small, low-risk transactions: batches of stolen credit card numbers are run against ecommerce apps and websites to make small purchases and see which cards still work.

The purchases are carried out by bots, which run the card payments and record the validated cards. The card details can then be sold to other cybercriminals online, commanding a higher value, or used to make larger fraudulent purchases. A highly lucrative attack, it’s used to validate credit cards, gift cards and loyalty cards, and resulted in 69 million attempts being blocked in 2024.

Bot-driven card fraud can be difficult to detect and block because of the small payments involved and the fact that many of the targeted endpoints do not require authenticated users; prime examples are sites that allow mobile top-ups and ecommerce platforms with guest checkouts.

In one recent case, a business was subjected to high volumes of payments as the attackers quickly cycled through credit card details to see which were still active; because the payments were so low, they were not immediately flagged. Blanket banning these payments was not an option. The business needed to be able to determine which purchases were bot driven and adjust the security policies on its platform accordingly to put a stop to them.
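To illustrate the sort of signal defenders look for, the sketch below shows one simple heuristic: flag any source that cycles through many distinct card numbers via small payments within a short window. The schema, thresholds and field names are illustrative assumptions, not a description of any particular platform’s detection logic.

```python
from collections import defaultdict
from datetime import timedelta

# Illustrative thresholds; real values would be tuned to the platform's normal traffic.
WINDOW = timedelta(minutes=10)
MAX_DISTINCT_CARDS = 5   # distinct card numbers per source within the window
MAX_AMOUNT = 5.00        # "small" test payments, in the site's currency

def flag_card_testing(transactions):
    """Flag sources (e.g. an IP or device fingerprint) that cycle through many
    distinct card numbers via small payments in a short window.

    Each transaction is assumed to be a dict with 'source', 'card_fingerprint',
    'amount' and 'timestamp' (a datetime) keys; a hypothetical schema.
    """
    by_source = defaultdict(list)
    for tx in sorted(transactions, key=lambda t: t["timestamp"]):
        if tx["amount"] <= MAX_AMOUNT:
            by_source[tx["source"]].append(tx)

    flagged = set()
    for source, txs in by_source.items():
        window = []
        for tx in txs:
            window.append(tx)
            # Keep only transactions inside the sliding time window.
            window = [t for t in window if tx["timestamp"] - t["timestamp"] <= WINDOW]
            if len({t["card_fingerprint"] for t in window}) > MAX_DISTINCT_CARDS:
                flagged.add(source)
                break
    return flagged
```

A single rule like this would not be enough on its own, which is why the analysis described below also looks at session identifiers and token reuse.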

Stopping the bots

The only way to differentiate between a bot and a legitimate user in this type of scenario is to closely analyse the attributes of the attack. In this case, there were repeated transaction patterns as well as inconsistencies. By tracking the session identifiers and bearer tokens associated with the process, it was possible to trace the activity of the bots and for those monitoring the attack to better understand how it was being executed.

Session identifiers are unique strings generated by the server during user login and are used to keep track of user activities, while bearer tokens are used for authentication. Monitoring the use of these tokens and detecting replay activities (that is, instances where the same token was reused across multiple transactions) revealed which requests were automated and which came from legitimate customers.
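As a rough illustration of that replay check, the sketch below counts how often the same bearer token or session identifier appears across payment requests; heavy reuse is a strong hint of automation. The field names and threshold are assumptions made for the example, not any specific product’s implementation.

```python
from collections import Counter

REPLAY_THRESHOLD = 3  # hypothetical cut-off: tokens seen on more requests than this are suspect

def find_replayed_tokens(requests):
    """Return bearer tokens and session identifiers reused across an
    unusually large number of payment requests.

    Each request is assumed to be a dict with optional 'bearer_token' and
    'session_id' keys (a hypothetical schema).
    """
    token_counts = Counter(r["bearer_token"] for r in requests if r.get("bearer_token"))
    session_counts = Counter(r["session_id"] for r in requests if r.get("session_id"))

    return {
        "tokens": [t for t, n in token_counts.items() if n > REPLAY_THRESHOLD],
        "sessions": [s for s, n in session_counts.items() if n > REPLAY_THRESHOLD],
    }
```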

This kind of analysis cannot be carried out by automated detection and response alone. It requires a human team to interpret the attack, using real-time monitoring and analysing the specific attack patterns and behaviours associated with the bots’ activities. The defence mechanisms of the business can then be fine-tuned to focus on those specific activities and parameters, avoiding any disruption to normal transaction traffic.

Automation and analysis

Such a response respects the ecosystem of the business. Automated tools detect and block threats in real time, while human experts conduct further in-depth analyses to better understand complex attacks and make the necessary adjustments to defences, including adding in other preventative measures. It’s the least disruptive approach but also the most successful.

Small as these transactions are, they can still have numerous detrimental effects. The scale of these attacks means they can quickly add up to substantial sums even if no card is maxed out. They’re costly to service, because handling a large number of disputed transactions can place customer support teams under strain and increase operational costs. And then there’s the matter of reputational damage: if fraudulent charges appear on customer statements, those customers may lose trust in the business and potentially churn. Plus, credit card companies will often expect businesses to take steps to prevent this type of attack or risk losing their services.

These attacks underscore how important it is for merchants and others handling credit card payments to address card verification fraud by combining automated security detection and response with behaviour-based analysis. Identifying attack patterns and behaviours provides the insight needed to ensure that the response is appropriate, proportionate and effective. Defences can then be tuned to target and thwart the offending bots.

Looking to the future, all the evidence points to these bot-driven card verification attacks becoming AI-enabled, allowing attackers to see where an attack has failed and refine their approach. Without behaviour-based bot detection and mitigation, businesses will struggle not only to detect these attacks but to keep deflecting them as attackers pivot and attempt to sidestep defences.