The Security Concerns of SSL/TLS Encrypted Traffic
Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are used today to secure transactions on banking sites, health care portals, and enterprise network and application portals. The use of these secure sites on the Internet has been increasing in recent years, and the volume of this type of traffic continues to grow, feeding speculation that such sites could one day replace clear-text HTTP entirely.
However, this increased use has brought its own fair share of issues, among them OpenSSL vulnerabilities, protocol vulnerabilities, and cipher weaknesses, to name a few. Beyond these, there has also been a marked increase in malicious activity in the form of encrypted web page floods and malware proliferating from SSL/TLS encrypted sites, which can include public email portals like Gmail and Yahoo Mail.
Is Securing Internet Traffic a Lost Cause?
The challenges posed by attackers using SSL/TLS as cover currently fall broadly into two categories: malicious activity directed towards enterprise servers, and malicious activity directed towards enterprise workstations, mobile devices, tablets, etc. The former consists of attackers generating application-layer DDoS, alongside the application attacks catalogued in the Open Web Application Security Project (OWASP) Top Ten. The latter consists of malware that arrives from infected SSL/TLS servers on the Internet (such as music-swapping sites, adult sites, etc.) or via email malware/scamware that enters the enterprise through personal email use.
The challenge facing an enterprise is manifold: large enterprises may find host-based malware solutions prohibitively expensive on a per-seat basis, and host-based DDoS mitigation is not effective, as it still ends up taxing the server's resources.
Do Most of Us Want an Illusion of Security or Real Security?
While numerous existing solutions purport to secure HTTPS, the reality is that most fall short of being high-performance, comprehensive, and cost-effective. Here’s why:
WAFs Fall Short on HTTPS Page Floods
While Web Application Firewalls (WAFs) are very efficient at guarding against application exploits such as SQL injection, by their very nature they are ill-equipped to protect against HTTPS floods. Most WAFs claim comprehensive protection, but whether through fault or folly, reality falls short of those claims. Many deployments are watered down, with concessions made in the resulting configurations. The devices may also simply lack the raw capability, making them analogous to an IPS facing a network DDoS attack: ineffective, yet purported to protect against such attacks.
NGFWs Fall Short on Encrypted OWASP & DDoS
Current Next-Generation Firewalls (NGFWs) suffer a significant loss of performance when SSL/TLS inspection is enabled, according to a report by NSS.
While NGFWs tend to protect against one vector of an encrypted threat, such as intrusions or exploited vulnerabilities, NGFWs themselves are vulnerable to an SSL/TLS encrypted application DDoS attack. Commonly proposed mitigations include application-layer request rate limiting per source, for example via scripts deployed on an ADC or a reverse proxy.
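To make the per-source rate limiting mentioned above concrete, a minimal sliding-window limiter might look like the following sketch. All class names and thresholds here are hypothetical illustrations, not any vendor's implementation:

```python
import time
from collections import defaultdict, deque

class PerSourceRateLimiter:
    """Allow at most `max_requests` per `window` seconds from each source IP."""

    def __init__(self, max_requests=100, window=1.0):
        self.max_requests = max_requests
        self.window = window
        # source IP -> timestamps of that source's recent requests
        self.history = defaultdict(deque)

    def allow(self, source_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[source_ip]
        # Evict timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over the per-source budget: reject or challenge
        q.append(now)
        return True
```

Note the inherent weakness of this approach: it only sees one source at a time, so any attack distributed across enough sources sails under the per-source budget.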
The Real Demon is the BotNet!
The current environment is rife with a variety of botnets (available for hire on the black market, payable with stolen credit cards for a relative pittance) that may consist of hundreds of thousands, or even millions, of hijacked home and personal computers on broadband connections, each of which will respond on command by targeting a destination for attack. These machines are known in security parlance as zombies. The threatscape has changed with respect to floods: some basic protections deployed by ISPs are effective, such as detecting high traffic volume sustained over an extended period of time. But botnets can be commanded to burst randomly, or to individually generate only low volumes of traffic, so a large botnet can still direct a significant flood at its target. Most HTTPS floods do not need to generate a high volume of traffic towards a destination, just a high rate of SSL/TLS handshakes per second and a high number of requests per second. In most enterprises, applications can be brought down by a few hundred requests per second, let alone thousands, hundreds of thousands, or worse.
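Back-of-the-envelope arithmetic shows why such low-volume-per-bot floods are effective. All figures below are hypothetical, chosen only to illustrate the scale mismatch:

```python
# Hypothetical figures: each bot stays far below any per-source threshold,
# yet the aggregate easily exceeds a typical application's capacity.
bots = 100_000                # hijacked machines in the botnet
requests_per_bot = 0.1        # one HTTPS request every 10 seconds per bot
server_capacity_rps = 500     # requests/sec the application can sustain

aggregate_rps = bots * requests_per_bot
print(f"{aggregate_rps:.0f} req/s against a capacity of {server_capacity_rps} req/s")
# A 20x overload, while each individual bot looks like a casual browser.
```

With each zombie issuing one request every ten seconds, no per-source detector fires, yet the target is saturated twenty times over.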
Is Malware Infiltrating Your Workstations and Causing Data Exfiltration via SSL?
Now let’s look at the threats that can piggyback on content from external websites browsed by employees in an enterprise. Most SSL/TLS visibility tools rely on decryption of traffic, which is not an issue if the protected server belongs to the enterprise and its private keys are available. It is an issue, however, when the entity to be protected is not the enterprise server but its workstations: the servers are located outside the enterprise, and their private keys are not available (for example, the private keys of Gmail or Yahoo Mail). By the very nature of SSL/TLS, these keys cannot be made available; this, after all, is one of the cornerstones of SSL/TLS security.
How Do We Overcome These Challenges?
The challenges present in server security are diverse, and Radware is in a unique position in today's market as the sole vendor that can mitigate the threats targeting a data center, such as encrypted application floods via HTTPS (in addition to garden-variety HTTP floods and various DNS, NTP, and UDP floods and amplification attacks). These solutions can also provide visibility into SSL/TLS traffic destined for external websites. This means the majority of server-targeted attacks can be mitigated by a combination of OS patching, configuration, IPS signatures, and standard DDoS mitigation mechanisms. One step further is providing an integrated system comprised of modules such as Network Behavioral Analysis, IPS, and Packet Anomaly prevention: the Attack Mitigation System (AMS).
What Sets Radware Apart?
What sets Radware apart is the solution's ability not only to defend against such attacks while permitting legitimate traffic to pass through without degradation, latency, or throttling (unlike rate limiting), but also to defend against SSL/TLS application floods without impacting legitimate traffic, by leveraging the innovative Defense SSL component of the AMS solution. Defense SSL is protected by US Patent #8,832,831.
Malware Infiltration and Data Exfiltration via HTTPS
Radware's Client Side SSL/TLS Inspection allows the solution to function as a mediating SSL/TLS proxy for outbound content, providing a decrypted active stream of traffic that can be inspected, intercepted, and permitted or denied by security tools such as malware inspection tools, DLP appliances, and policy enforcement tools guarding content posted to social media sites per regulatory requirements. The solution is highly scalable and can intercept traffic from hundreds of thousands of enterprise workstations. Radware’s Client Side SSL Inspection is protected by US Patent #7,769,994.
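To make the content-inspection step concrete, here is a minimal sketch of the kind of policy check a DLP tool could apply once a mediating proxy hands it a decrypted outbound stream. The pattern, function names, and "allow"/"deny" verdicts are illustrative assumptions, not Radware's implementation:

```python
import re

# Runs of 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to filter out random digit runs."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def dlp_verdict(decrypted_body: str) -> str:
    """Deny outbound payloads that appear to leak a payment card number."""
    for match in CARD_RE.finditer(decrypted_body):
        if luhn_valid(match.group()):
            return "deny"
    return "allow"
```

Real DLP engines use far richer detectors and context; the point of the sketch is simply that none of this is possible on the ciphertext, so the decrypted stream provided by the proxy is what makes such policy enforcement feasible at all.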
In order to combat the variety of threats that proliferate today within the SSL/TLS sphere, Radware provides a comprehensive solution that addresses the mitigation of HTTPS DDoS attacks, and provides selective visibility into SSL/TLS encrypted traffic for content inspection, policy enforcement, and regulatory compliance requirements.
These unique solutions deliver high performance and low latency at a very competitive total cost of ownership (TCO). They can scale horizontally in incremental fashion without rip-and-replace, and capacity augmentation is transparent, with little to no impact on the service infrastructure.