The History of the Cloud, Part 3 – Automation and Orchestration

We designed and implemented server load balancing (SLB) and global server load balancing (GSLB) technologies to create an architecture that makes applications available anytime and anywhere. This was a key step toward the virtualization of the application infrastructure. As these technologies enabled optimal access to applications, the physical location of servers and datacenters became less important.

Change is good

The next step was to virtualize the application hosting infrastructure. With the addition of hypervisors and commercial off-the-shelf (COTS) hardware, we finally had the core technologies to enable an agile and elastic infrastructure.

Agility is the ability to add and remove services easily and efficiently. It was no longer necessary to purchase and physically install specialized servers, then configure applications on them, every time we made a change. Pre-configured copies of the application server were stored and could be loaded onto any available COTS hardware through the hypervisor, the server hardware management layer. This made it easy to change application capabilities and even to bring applications into and out of the network.
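The placement workflow above can be sketched as a simple routine; the host structure and field names below are hypothetical illustrations, not a real hypervisor API:

```python
# Agility sketch: load a stored, pre-configured server image onto any
# available COTS host (host structure and fields are hypothetical).
def deploy(image_name, hosts):
    """Place the image on the first host with free capacity; return its name."""
    for host in hosts:
        if host["free_slots"] > 0:
            host["free_slots"] -= 1
            host["running"].append(image_name)
            return host["name"]
    return None  # no capacity available anywhere
```

Because the image is pre-configured, the same routine works for adding a new service or restoring one that was taken out of the network.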

Elasticity means on-demand resourcing. It did not make sense to have application resources sitting idle when demand was low. When demand increased, instances of the application could quickly be created and added to the pool of available resources. This dynamic scaling of application resources is a key capability of cloud architectures.
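As a rough illustration of on-demand scaling, the sketch below computes how many application instances a given load requires; the capacity figure and bounds are hypothetical:

```python
import math

# Elasticity sketch: size the instance pool to demand (numbers are hypothetical).
def desired_instances(current_load, capacity_per_instance,
                      min_instances=2, max_instances=20):
    """Return how many instances are needed to serve the current load,
    bounded so the pool never empties and never grows without limit."""
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, min(needed, max_instances))
```

Run periodically, a calculation like this lets the pool grow when demand spikes and shrink again when demand falls, instead of leaving resources idle.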

The addition of these cloud functions increases the need to monitor and adjust application availability. People must detect changes and adjust the application resources within the different parts of the cloud to keep applications optimally available. Manually monitoring and changing the cloud environment is complex and requires constant attention. Location-dependent scenarios (major sporting events, natural disasters, etc.) and the interaction of multiple components within a website (front end, database, shopping cart, security, etc.) make the cloud even harder to manage efficiently.

A conductor for the orchestra

A better solution is needed. Management and orchestration (MANO) systems provide the analytics, heuristics, orchestration, and automation the cloud requires.

Analytics is the collection of information from the different applications and components in the cloud to give an operational perspective concerning the health of the application delivery infrastructure.  The analytical system needs to deliver the information in an easily visualized and consumable manner.

Heuristics is the ability to apply architecture and application specific intelligence to the analytical data.  The location of the datacenters, distribution of the applications, location of the end-users, and many other metrics can be taken into account.  By intelligently looking at the application delivery infrastructure based on application-specific parameters and expected usage of the application, the heuristics engine can deliver insights into the necessary changes needed to maintain the optimal delivery of the application.

Orchestration is the understanding needed to coordinate the different components of the application infrastructure.  When traffic to a website increases, it is not enough to just deploy more webservers to meet the increased demand.  There are firewalls, databases, and other components that need to be scaled in line with the increased webserver resources.  The location and network delivery path need to be understood and incorporated into the orchestration decisions.
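The coordinated-scaling idea can be illustrated with a toy plan; the tier names and ratios below are invented for illustration, not taken from any real deployment:

```python
import math

# Orchestration sketch: scale dependent tiers in line with the web tier
# (tier names and ratios are invented for illustration).
TIER_RATIOS = {"webserver": 1.0, "firewall": 0.25, "database": 0.1}

def scale_plan(target_webservers):
    """Compute an instance count for every tier relative to the web tier."""
    return {tier: max(1, math.ceil(target_webservers * ratio))
            for tier, ratio in TIER_RATIOS.items()}
```

The point is that a scaling decision is a plan across the whole application infrastructure, not a single number for one tier.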

Automation is the final component needed to bring all of this functionality together.  With automation, we are able to eliminate the need for manual intervention every time an adjustment needs to be applied to the application delivery infrastructure.

Self-healing ecosystem

All of these functions need to interact with multiple, disparate elements in the cloud architecture.  Hypervisors, DNS tables, load balancing pools, and routing paths all need to be incorporated into the MANO to create a fully self-adjusting and self-adapting network ecosystem that can exist and evolve in a closed-loop environment.
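A single step of such a closed loop might look like the following sketch; the start/stop callbacks stand in for whatever element interfaces (hypervisor, DNS, load balancer) the MANO actually drives:

```python
# Closed-loop sketch: observe the pool, compare to the desired state, and act.
# The start/stop callbacks stand in for hypervisor, DNS, or load-balancer calls.
def reconcile(observed_healthy, desired_count, start_instance, stop_instance):
    """Drive the number of healthy instances toward the desired count."""
    for _ in range(desired_count - observed_healthy):
        start_instance()   # under capacity: spin up replacements
    for _ in range(observed_healthy - desired_count):
        stop_instance()    # over capacity: retire surplus instances
```

Repeating this observe-compare-act cycle is what makes the ecosystem self-adjusting: a failed instance is simply an observed shortfall that the next cycle corrects.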

Standards, open and proprietary, are necessary to establish the connections and communications between the MANO and the elements within the orchestrated cloud environment. At some point, the MANO of the cloud becomes an intelligent, self-aware manager that analyzes and manipulates the environment to assure optimal application delivery. Isn’t this what we all wanted in the first place when we created the cloud?

Frank Yue

Frank Yue is Director of Solution Marketing, Application Delivery for Radware. In this role, he is responsible for evangelizing Radware technologies and products before they come to market. He also writes blogs, produces white papers, and speaks at conferences and events related to application networking technologies. Mr. Yue has over 20 years of experience building large-scale networks and working with high performance application technologies including deep packet inspection, network security, and application delivery. Prior to joining Radware, Mr. Yue was at F5 Networks, covering their global service provider messaging. He has a degree in Biology from the University of Pennsylvania.
