Cloud Load Balancing vs. Application Delivery Controllers Revisited
About a month ago, I wrote a post on cloud load balancing versus application delivery controllers. In that post, I explored the core differences between cloud-managed load balancing and self-managed commercial load balancing using an application delivery controller (ADC) virtual appliance running over cloud infrastructure. In part two of this series, I take a closer look at some of the themes laid out in my earlier post, with an emphasis on the role application delivery controllers play in addressing the challenges of migrating legacy applications to a general-purpose cloud infrastructure.
There is no doubt that managed cloud load balancing services (such as Rackspace CLB and Amazon ELB) play a key role in scaling out application architectures and providing a high level of availability. This is especially true for applications that do not require client-side session state, such as RESTful applications. The reality, however, is that the majority of business-critical legacy applications built with web technology require the client (browser) to maintain session state and were not designed to operate in a cloud infrastructure. What's more, these legacy applications have typically evolved to depend on a plethora of L7 functionality implemented on application delivery controllers.
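To make the session-state distinction concrete, here is a minimal sketch of cookie-based session affinity, the kind of behavior a session-stateful legacy application needs from its load balancer. This is a hypothetical illustration (the backend names and the `SERVERID` cookie are assumptions for the example), not any vendor's implementation.

```python
import random

# Hypothetical backend pool; names are illustrative only.
BACKENDS = ["app-1", "app-2", "app-3"]

def route(request_cookies: dict) -> tuple[str, dict]:
    """Pick a backend, pinning the client to one server via a cookie.

    A stateless (e.g. RESTful) app can use any backend for any request;
    a session-stateful legacy app breaks unless every request from a
    given client reaches the same backend.
    """
    backend = request_cookies.get("SERVERID")
    if backend not in BACKENDS:
        # New client (or its pinned backend is gone): pick one and pin it.
        backend = random.choice(BACKENDS)
    response_cookies = {"SERVERID": backend}
    return backend, response_cookies
```

A returning client with a valid affinity cookie always lands on the same backend; a new client gets assigned one. A cloud load balancing service that lacks this behavior forces requests from one session onto different servers, which is exactly where session-stateful legacy applications fail.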
Most important, however, are the fundamental differences between the architecture of cloud load balancing services and that of virtual ADCs running on cloud infrastructure. For instance, normalized usage patterns across multiple cloud tenants drive cloud providers to allocate load balancer systems large enough to host as many load balancing VIPs as a given set of tenants requires. In most clouds, these load balancers run as resource pools, and a VIP can migrate from one physical container to another as needed. Typically, if one of the physical containers fails, restoring service requires re-configuring all the VIPs hosted on that device onto another device. Looking at the June 29th AWS outage, it was this very method that introduced further challenges and proved not efficient enough to keep application traffic flowing during a failure at that scale.
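The recovery path described above can be sketched as follows. This is a simplified, hypothetical model of a pooled load balancing tier (container and VIP names are invented for the example); it shows why service for every orphaned VIP is down until the re-configuration step completes, which is what hurts at large scale.

```python
# Hypothetical model of a pooled cloud load balancing tier: each
# physical container hosts many tenant VIPs.

def fail_over(vip_placement: dict, failed: str) -> dict:
    """Re-home every VIP from a failed container onto the survivors.

    Mirrors the recovery process described above: a VIP's service is
    restored only after it has been re-configured onto another device,
    so the re-configuration of every affected VIP is itself the
    bottleneck during a large-scale failure.
    """
    survivors = [c for c in vip_placement if c != failed]
    if not survivors:
        raise RuntimeError("no healthy containers left")
    new_placement = {c: list(v) for c, v in vip_placement.items() if c != failed}
    # Spread the orphaned VIPs round-robin across surviving containers.
    for i, vip in enumerate(vip_placement[failed]):
        target = survivors[i % len(survivors)]
        new_placement[target].append(vip)
    return new_placement
```

A self-managed virtual ADC pair avoids this shared-pool coupling: only its own tenant's VIPs are at stake, and an HA peer with synchronized state can take over without a bulk re-configuration step.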
But the question remains: how do the differences between cloud load balancing services and virtual ADCs impact applications? To a large extent, it depends on the specific design and desired SLA of each application. Certainly, for some applications a cloud-based load balancing service is a great fit. But for standard enterprise applications, built on the assumption that the infrastructure is reliable, self-managed load balancers offer a better set of capabilities for overcoming cloud uncertainty. Ultimately, when it comes to the cloud, predictability, not peak performance, is the biggest concern. Self-managed instances put virtual resources at an organization's disposal, allowing complete control over performance expectations. In addition, self-managed products offer enterprises capabilities and support options typically unavailable with open source tools.
It is certainly the case that the technical challenges associated with open source load balancers, such as the lack of UDP load balancing, the lack of override rules based on L7 data, and the absence of ADC high availability with session sync, may be addressed over time. For now, however, self-managed virtual application delivery controllers simply offer enterprises a realistic way to take their existing applications and make them work in a cloud environment.
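One of the gaps mentioned above, override rules based on L7 data, is easy to illustrate. The sketch below shows a content-based routing decision (here, by URL path prefix) that an L4-only balancer cannot make; the pool names and prefixes are hypothetical examples, not a vendor API.

```python
# Hypothetical L7 override rules: route a request to a backend pool
# based on its URL path, falling back to a default pool.

OVERRIDES = [
    ("/api/",    "api-pool"),
    ("/static/", "cache-pool"),
]
DEFAULT_POOL = "web-pool"

def select_pool(path: str) -> str:
    """Return the backend pool for a request path.

    This decision requires inspecting L7 data (the HTTP request),
    which is exactly the capability the override rules above assume.
    """
    for prefix, pool in OVERRIDES:
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL
```

On an ADC, rules like these let a legacy application offload API traffic, static content, or specific URL namespaces to dedicated pools without changing the application itself.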