Recommendations for Managing a Bad Bot Problem
Organizations rely on robotic process automation (RPA), essentially the use of bots, to be more efficient and boost productivity. Good bots, like those used to crawl websites for web indexing, content aggregation and market intelligence, free human resources to focus on other responsibilities. Of concern are the bad bots deployed by bad actors to disrupt network services, steal data, perform fraudulent activities and even spread fake news.
So how should businesses manage bad bots, which account for nearly 25% of internet traffic, and mitigate their impact on the bottom line?
Assess the Real Impact of Bad Bots on Your Organization. Understand that there is a good chance bad bots are negatively impacting your business, whether by stealing sensitive data, compromising user accounts, degrading customer experience or misleading the marketing department with skewed analytics. Conventional security solutions, such as a firewall or a web application firewall (WAF), offer only limited protection against sophisticated bots. Bot management is complex and requires dedicated technology backed by experts with deep knowledge of good and bad bot behaviors.
Build Capabilities to Identify Automated Activity in Seemingly Legitimate User Behaviors. Sophisticated bots simulate mouse movements, perform random clicks and navigate pages in a human-like manner. Traditional solutions are limited to tracking spoofed cookies, user agents (UAs) and IP reputation. Detecting these more advanced attacks requires deep behavioral models, device and browser fingerprinting, and closed-loop feedback systems to ensure that genuine users are not blocked. Purpose-built bot mitigation solutions can detect sophisticated automated activity and help you take preemptive action.
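One behavioral signal such models typically use is timing: scripted clients tend to fire requests at suspiciously regular intervals, while human activity is bursty. The sketch below illustrates the idea with a single, deliberately simplified feature (the coefficient of variation of inter-request intervals); the function name and the 0-to-1 score are illustrative assumptions, not a production detector.

```python
from statistics import mean, pstdev

def interval_regularity_score(timestamps):
    """Score how machine-like a session's request timing looks.

    Human inter-action intervals vary widely; scripted clients tend to
    fire at near-constant intervals. Returns a value in [0, 1], where
    values near 1 suggest automation. Illustrative only: a real system
    would combine many such features with fingerprinting signals.
    """
    if len(timestamps) < 3:
        return 0.0  # not enough evidence to judge
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    if avg == 0:
        return 1.0  # instantaneous bursts are a strong automation signal
    # Coefficient of variation: low variation => suspiciously regular.
    cv = pstdev(intervals) / avg
    return max(0.0, 1.0 - cv)

# A scripted client hitting the site exactly every 2 seconds scores high:
bot_like = interval_regularity_score([0, 2, 4, 6, 8, 10])
# A human browsing with irregular pauses scores lower:
human_like = interval_regularity_score([0, 1.2, 7.9, 9.0, 21.5])
```

In practice this score would be one input among many to a behavioral model, combined with device fingerprints and feedback from confirmed false positives.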
Enforce Authentication via MFA and Challenge-Response Methods. Multifactor authentication (MFA) mechanisms, such as temporary access codes sent via SMS on top of login forms or other in-app authentication, raise the bar but remain vulnerable to attackers. There are multiple ways to bypass MFA protection, including transparent reverse proxies such as Muraena and NecroBrowser, and in September 2019 the U.S. Federal Bureau of Investigation (FBI) warned organizations that cybercriminals were circumventing multifactor authentication. CAPTCHA has proven relatively ineffective in blocking sophisticated bots that mimic human behavior and can be solved in bulk by outsourced CAPTCHA-solving teams. Presenting CAPTCHAs can also irritate users and adversely impact the user experience.
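As one illustration of a stronger second factor than SMS codes, app-based time-based one-time passwords (TOTP, RFC 6238) avoid SMS interception and can be verified server-side with the standard library alone. This is a minimal sketch of the standard algorithm, not a complete MFA flow (rate limiting, secret storage and enrollment are omitted):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes at an offset
    # derived from the last nibble of the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_totp(secret_b32, submitted, at=None, window=1):
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = int(at if at is not None else time.time())
    return any(
        hmac.compare_digest(totp(secret_b32, at=now + drift * 30), submitted)
        for drift in range(-window, window + 1)
    )
```

The values below are the published RFC 6238 test vectors (secret is the ASCII string "12345678901234567890" in Base32), which makes the sketch easy to check against the specification.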
Block Origins of Bad Bot Traffic. Public cloud services can serve as safe harbors for bad bots, so organizations can block traffic from suspect public cloud services and internet service providers (ISPs). However, blocking all traffic coming from data centers or ISPs without considering user behavior can cause false positives. For example, many visitors to digital publishing sites arrive from commercial organizations that route user-initiated traffic through secure web gateways (SWGs) located in data centers; blocking data center traffic wholesale, without considering domain-specific user behavior, would lock out those legitimate readers.
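The combination described above, origin reputation gated by observed behavior, can be sketched with the standard `ipaddress` module. The network ranges here are reserved documentation prefixes standing in for real cloud/data-center ranges, and the behavior score and threshold are illustrative assumptions:

```python
import ipaddress

# Stand-in list of data-center/cloud ranges to treat as suspect.
# These are reserved documentation prefixes, not real provider ranges.
SUSPECT_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def should_block(client_ip, behavior_score, allowlisted_gateways=frozenset()):
    """Block only when origin AND behavior both look bad.

    behavior_score: 0.0 (human-like) .. 1.0 (clearly automated),
    as produced by whatever behavioral model is in place.
    """
    if client_ip in allowlisted_gateways:
        return False  # e.g. a known corporate secure web gateway (SWG)
    ip = ipaddress.ip_address(client_ip)
    from_data_center = any(ip in net for net in SUSPECT_NETWORKS)
    # Data-center origin alone is not sufficient evidence:
    # require clearly automated behavior as well.
    return from_data_center and behavior_score > 0.8
```

Requiring both signals is what keeps the SWG readers in the digital-publishing example from being blocked: their traffic comes from data centers, but its behavior is human.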
Adopt Strict Authentication Mechanisms on APIs. APIs are the key channels that enable seamless intercommunication between websites, applications and smart devices, and they have become crucial in moving data from where it is stored to where it is needed. With the growing use of microservice architectures in organizations, poorly secured API gateways are vulnerable to malicious bot attacks: gateways typically verify only the authentication status, not whether the request is coming from a legitimate user. Attackers exploit this gap in various ways, including session hijacking and account aggregation, to imitate genuine API calls. Authenticate each API request itself to ensure that traffic is coming from a genuine source and not from a malicious bot.
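One common way to authenticate the request itself, rather than just the session, is an HMAC request signature with a timestamp, so a captured call cannot simply be replayed. This is a generic sketch of the pattern (the header layout and 300-second skew window are assumptions, not any particular gateway's scheme):

```python
import hashlib
import hmac
import time

def sign_request(secret, method, path, body, timestamp):
    """Client side: sign method, path, timestamp and body with a shared key."""
    msg = f"{method}\n{path}\n{timestamp}\n".encode() + body
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_request(secret, method, path, body, timestamp, signature, max_skew=300):
    """Gateway side: reject stale or forged requests.

    The timestamp check limits replay of captured calls; the constant-time
    signature comparison ties the request to a holder of the shared key.
    """
    if abs(time.time() - timestamp) > max_skew:
        return False  # stale: blocks straightforward replay
    expected = sign_request(secret, method, path, body, timestamp)
    return hmac.compare_digest(expected, signature)
```

Because the signature covers the body and a timestamp, a bot that hijacks a session cookie still cannot mint valid API calls without the signing key.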
Monitor Anomalous User Behavior and Key Performance Indicators (KPIs). Cyberattackers deploy bad bots to perform credential stuffing and credential cracking attacks on login pages. Because these attacks try many different combinations of user IDs and passwords, they drive up the number of failed login attempts. Bad bots that visit your website to perform scraping, account takeover or other automated activity also produce sharp spikes in traffic. Monitoring failed login attempts and traffic spikes helps webmasters and security teams take mitigation measures preemptively.
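The failed-login monitoring described above can be sketched as a sliding-window counter that alerts when failures in the last minute exceed a threshold. The window length and threshold here are placeholder values; in practice they would be derived from the site's own baseline:

```python
from collections import deque

class FailedLoginMonitor:
    """Flag credential-stuffing-like spikes in failed logins.

    Keeps a sliding window of failure timestamps and alerts when the
    count inside the window exceeds a threshold. Window and threshold
    are illustrative; tune them against your normal traffic baseline.
    """

    def __init__(self, window_seconds=60, threshold=50):
        self.window = window_seconds
        self.threshold = threshold
        self.failures = deque()

    def record_failure(self, timestamp):
        """Record one failed login; return True if an alert should fire."""
        self.failures.append(timestamp)
        # Drop events that have aged out of the sliding window.
        while self.failures and self.failures[0] <= timestamp - self.window:
            self.failures.popleft()
        return len(self.failures) > self.threshold
```

The same pattern applies to other KPIs the section mentions, such as overall request volume per page, with the alert feeding whatever preemptive mitigation (rate limiting, step-up challenges) the team has in place.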