
Bots Are Getting More Sophisticated, It's Time Your Cyber Defenses Do Too


February 17, 2016 02:00 PM

The year 2016 is set to host the "battle of the bots": bot-generated attacks targeting web application infrastructure are increasing in both volume and scope, according to a recent Radware survey profiling the cybersecurity threats expected to grow in the coming year.

One important fact to note is that not all bots are bad. Plenty of bots and computer-generated traffic programs are essential to the daily support and maintenance of web applications; prominent examples include search engine bots such as Baidu Spider and Bingbot. While these bots exist to support the infrastructure, IT managers do need to be aware of the bad bots out there, which are just as numerous and can pose a serious threat to web application performance.

These bad bots generate a variety of web attacks, most commonly SQL injection, cross-site request forgery (CSRF), web scraping, and, of course, the ever-looming threat of DDoS attacks.

Every web administrator knows the fear: application performance slows to a crawl, then the application crashes entirely, all because of a massive, unforeseen influx of traffic from a botnet that the application simply cannot absorb.

Since humans can be just as great a threat to web applications as bots, organizations must be able to distinguish between human and bot activity in order to mitigate threats properly. One common form of detection is the CAPTCHA challenge, a reverse Turing test that gauges whether a computer program can mimic human behavior. While this practice is an acceptable means of detecting simple, script-based bots, the rise of "advanced bots" has posed a new challenge to the IT industry.

These newer, more sophisticated bots are built on headless browser technology and significantly complicate detection. Advanced bots mimic human user behavior to a much higher degree than their script-based counterparts, using techniques such as executing JavaScript and following links graphically to trick detection protocols into classifying their activity as human. They can also pass CAPTCHA challenges and rotate through dynamic IP addresses, maintaining a low rate of activity per individual IP across the botnet and thereby evading IP-based detection parameters.
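To see why per-IP thresholds fall short against this behavior, consider the minimal sketch below; the threshold, addresses, and function names are illustrative assumptions, not any specific product's logic. A botnet spreading requests across many addresses keeps each individual IP under the limit, so a per-IP counter never fires even as aggregate traffic soars.

```typescript
// Hypothetical illustration: per-IP rate limiting that a distributed,
// low-rate botnet evades. Threshold and IPs are assumptions.

const PER_IP_LIMIT = 100; // requests per minute before an IP is flagged

interface RequestLog {
  ip: string;
  path: string;
}

function flaggedIps(requests: RequestLog[]): string[] {
  const counts = new Map<string, number>();
  for (const r of requests) {
    counts.set(r.ip, (counts.get(r.ip) ?? 0) + 1);
  }
  return [...counts.entries()]
    .filter(([, n]) => n > PER_IP_LIMIT)
    .map(([ip]) => ip);
}

// Simulate 1,000 bot IPs sending 10 requests each: 10,000 requests
// per minute in aggregate, yet no single IP crosses the threshold.
const botTraffic: RequestLog[] = [];
for (let ip = 0; ip < 1000; ip++) {
  for (let i = 0; i < 10; i++) {
    botTraffic.push({
      ip: `10.0.${Math.floor(ip / 256)}.${ip % 256}`,
      path: "/search",
    });
  }
}
console.log(flaggedIps(botTraffic).length); // 0 -- the attack goes undetected
```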

Defending Against the Bots

So how can organizations defend themselves against such sophisticated bots?

The first step is to ensure the use of IP-agnostic bot detection, as successful detection requires correlating activity across sessions. Without this correlation, detecting advanced bots that jump from IP to IP is highly challenging. Relying solely on IP-based detection is insufficient and can conceal larger threats. Creating this IP-agnostic system requires fingerprinting.
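As a rough illustration of that correlation, the sketch below keys activity on a device fingerprint rather than an IP address. The fingerprint values and session structure are hypothetical, but they show how a bot rotating through three addresses still accumulates a single, detectable total.

```typescript
// Minimal sketch of IP-agnostic correlation. The deviceFingerprint
// field is assumed to come from a fingerprinting step like the ones
// sketched later in this article.

interface SessionEvent {
  ip: string;
  deviceFingerprint: string; // stable across IP changes
  requests: number;
}

function totalsByFingerprint(events: SessionEvent[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of events) {
    totals.set(
      e.deviceFingerprint,
      (totals.get(e.deviceFingerprint) ?? 0) + e.requests,
    );
  }
  return totals;
}

// The same device seen from three different IPs is still one actor:
const events: SessionEvent[] = [
  { ip: "198.51.100.7", deviceFingerprint: "fp-a91c", requests: 40 },
  { ip: "203.0.113.22", deviceFingerprint: "fp-a91c", requests: 55 },
  { ip: "192.0.2.140", deviceFingerprint: "fp-a91c", requests: 35 },
];
console.log(totalsByFingerprint(events).get("fp-a91c")); // 130 in total
```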

The use of device fingerprinting gives IT managers the ability to identify browsers and automated web client tools through data collection. These tools gather information in various forms, including operating system specifications, TCP/IP configuration, underlying hardware attributes, and browser attributes. This data is commonly collected through JavaScript processing, although some of it, such as TCP/IP characteristics, can be collected passively without obvious querying.
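The client-side sketch below shows the kind of browser attributes such a JavaScript snippet can read. The attribute selection is illustrative only; real collectors gather far more, including the passive TCP/IP-level signals mentioned above, which never touch JavaScript.

```typescript
// Illustrative client-side attribute collection (runs in a browser).
// These are standard Web APIs; the set chosen here is an assumption,
// not a specific product's collection list.

interface BrowserAttributes {
  userAgent: string;
  language: string;
  screenResolution: string;
  timezone: string;
  cpuCores: number;
}

function collectAttributes(): BrowserAttributes {
  return {
    userAgent: navigator.userAgent,
    language: navigator.language,
    screenResolution: `${screen.width}x${screen.height}x${screen.colorDepth}`,
    timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
    cpuCores: navigator.hardwareConcurrency,
  };
}
```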

A wide range of client-side browser attributes can be collected to form a device fingerprint. While any single attribute may be common, it is the consolidation and combination of this information that yields sufficiently distinct device fingerprints.
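A minimal sketch of that consolidation step follows. The canonical serialization and the FNV-1a hash are assumptions chosen to keep the example self-contained, not a description of any particular product's fingerprint function; a production system would likely use a cryptographic hash.

```typescript
// Combine individual attributes into one fingerprint string. Any one
// attribute is common on its own; the combination distinguishes devices.

function fnv1a(input: string): string {
  // 32-bit FNV-1a, used here only to keep the sketch dependency-free.
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16).padStart(8, "0");
}

function deviceFingerprint(attrs: Record<string, string | number>): string {
  // Sort keys so the same attributes always serialize identically.
  const canonical = Object.keys(attrs)
    .sort()
    .map((k) => `${k}=${attrs[k]}`)
    .join("|");
  return fnv1a(canonical);
}
```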

As attacks by advanced bots become increasingly common, the maintenance of an IP-agnostic detection environment is becoming more critical, as is the ability to track bots jumping across IPs via a single, consistent fingerprint.

Finally, it’s important to gauge the threat to applications across multiple attack vectors. An application DDoS attack may target specific resources, whereas a data-focused scraping attack typically aims at specific web pages with the goal of extracting information. Apply device fingerprinting where it makes the most sense, whether at a single point of interest within an application or globally across domain resources.
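As a hypothetical example of that scoping decision, the configuration sketch below distinguishes a page-level policy for scrape-prone pages from a domain-wide policy for DDoS defense. The schema and path patterns are invented for illustration.

```typescript
// Invented policy schema: scope fingerprint checks to where the threat lives.
interface FingerprintPolicy {
  global: boolean; // true: fingerprint every request on the domain
  paths: string[]; // otherwise, only these points of interest
}

// Scraping defense for a few data-rich pages:
const scrapePolicy: FingerprintPolicy = {
  global: false,
  paths: ["/catalog", "/pricing"],
};

// Application-wide posture against DDoS:
const ddosPolicy: FingerprintPolicy = { global: true, paths: [] };
```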
