The Big, Bad Bot Problem
Roughly half of today’s internet traffic is non-human, i.e., generated by bots. While some are good, like those that crawl websites for indexing, content aggregation, and market or pricing intelligence, others are “bad.” These bad bots (roughly 26% of internet traffic) disrupt service, steal data and commit fraud. And they target all channels, including websites, APIs and mobile applications.
Bad Bots = Bad Business
Bots represent a problem for businesses in every industry, though travel and e-commerce see the highest share of “bad” bot traffic. Nonetheless, many organizations, especially large enterprises, focus on conventional cyber threats and solutions and underestimate the impact bots can have on their business, which is broad and goes well beyond security.
Indeed, the far-ranging business impacts of bots mean “bad” bot attacks aren’t just a problem for IT managers, but for C-level executives as well. For example, consider the following scenarios:
- Your CISO must defend against account takeover, web scraping, DoS attacks, fraud and inventory hold-ups;
- Your CRO is concerned when bots act as faux buyers, holding inventory for hours or days and causing a direct loss of revenue;
- Your COO must invest in additional capacity to accommodate the growing load of faux traffic;
- Your CFO must compensate customers who were victims of fraud via account takeovers and/or stolen payment information, as well as any data privacy regulatory fines and/or legal fees, depending on scale;
- Your CMO relies on analytics tools and affiliate services whose data is skewed by malicious bot activity, leading to biased decisions.
The Evolution of Bots
For those organizations that do focus on bots, the overwhelming majority (79%, according to Radware’s research) can’t definitively distinguish between good and bad bots, and sophisticated, large-scale attacks often go undetected by conventional mitigation systems and strategies.
To complicate matters, bots evolve rapidly. They are now in their fourth generation of sophistication, with evasion techniques so advanced they require equally sophisticated technology to combat.
- Generation 3 – These bots use full-fledged browsers and can simulate basic human-like patterns during interactions, like simple mouse movements and keystrokes. This behavior makes them difficult to detect; these bots normally bypass traditional security solutions, requiring a more sophisticated approach than blacklisting or fingerprinting.
- Generation 4 – These bots are the most sophisticated. They exhibit more advanced human-like interaction characteristics (so shallow interaction-based detection yields false positives) and are distributed across tens of thousands of IP addresses. They can carry out various violations from various sources at various (random) times, requiring a high level of intelligence, correlation and contextual analysis.
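To make the idea of behavioral analysis concrete, here is a minimal sketch (not Radware’s actual algorithm) of how a detector might combine weak signals — request-timing regularity, IP spread across a session, and the absence of client-side interaction telemetry — into a bot-likelihood score. The signal names, weights and thresholds are invented for illustration:

```python
import statistics

def bot_score(inter_request_secs, distinct_ips, has_mouse_events):
    """Toy behavioral score: higher means more bot-like.

    Illustrative signals only:
    - very regular request timing (low variance) suggests automation,
    - many source IPs within one session suggests a distributed botnet,
    - absence of mouse/keystroke events suggests a headless client.
    """
    score = 0.0
    if len(inter_request_secs) >= 3:
        mean = statistics.mean(inter_request_secs)
        stdev = statistics.stdev(inter_request_secs)
        # Coefficient of variation: humans are irregular, scripts metronomic.
        if mean > 0 and stdev / mean < 0.1:
            score += 0.5
    if distinct_ips > 10:       # one "session" hopping across many IPs
        score += 0.3
    if not has_mouse_events:    # no client-side interaction telemetry
        score += 0.2
    return score

# A metronomic, mouse-less session spread over 50 IPs scores as highly bot-like;
# an irregular single-IP session with mouse activity scores as human-like.
print(bot_score([1.0, 1.0, 1.01, 0.99], distinct_ips=50, has_mouse_events=False))
print(bot_score([0.5, 3.2, 1.1, 7.8], distinct_ips=1, has_mouse_events=True))
```

Real fourth-generation bots defeat exactly this kind of static thresholding, which is why production systems layer correlation across sessions and contextual analysis on top of per-request heuristics.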
It’s All About Intent
Organizations must make an accurate distinction between human and bot-based traffic, and even further, between “good” and “bad” bots. Why? Because sophisticated bots that mimic human behavior bypass CAPTCHA and other challenges, dynamic IP attacks render IP-based protection ineffective, and third- and fourth-generation bots demand behavioral analysis capabilities. The challenge is detecting them with high precision, so that genuine users aren’t affected.
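One concrete way to separate genuine “good” bots from impostors is the reverse-plus-forward DNS check that major crawlers such as Googlebot and Bingbot publicly document: reverse-resolve the source IP, confirm the hostname belongs to the crawler’s domain, then forward-resolve that hostname back to the same IP. The sketch below illustrates that check; it is not Radware’s method, and the resolver functions are injected placeholders so no live lookups are needed:

```python
# Domains that well-known crawlers publish for identity verification.
GOOD_BOT_DOMAINS = {
    "googlebot": (".googlebot.com", ".google.com"),
    "bingbot": (".search.msn.com",),
}

def is_verified_good_bot(user_agent, ip, reverse_dns, forward_dns):
    """Return True only if the claimed crawler identity checks out.

    reverse_dns(ip) -> hostname and forward_dns(hostname) -> ip are
    injected callables (illustrative), so the logic is testable offline.
    """
    for bot_name, domains in GOOD_BOT_DOMAINS.items():
        if bot_name in user_agent.lower():
            host = reverse_dns(ip)
            if host and host.endswith(domains) and forward_dns(host) == ip:
                return True
            return False  # claims to be a known bot but fails verification
    return False  # not a recognized good-bot user agent

# Example: an impostor claiming to be Googlebot from an unrelated IP fails.
fake_rdns = lambda ip: "host-1-2-3-4.example-isp.net"
fake_fdns = lambda host: "1.2.3.4"
print(is_verified_good_bot("Mozilla/5.0 (compatible; Googlebot/2.1)",
                           "1.2.3.4", fake_rdns, fake_fdns))
```

A check like this only covers self-identifying crawlers; bad bots that spoof ordinary browser user agents are exactly the case that requires the behavioral and intent analysis discussed above.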
To ensure precision in detecting and classifying bots, the solution must identify the intent of the attack. Yesterday, Radware announced its Bot Manager solution, the result of its January 2019 acquisition of ShieldSquare, which does just that. By leveraging patented Intent-based Deep Behavior Analysis, Radware Bot Manager detects the intent behind attacks and provides accurate classifications of genuine users, good bots and bad bots—including those pesky fourth generation bots. Learn more about it here.