Application Virtualization – Seeing the Forest Instead of Trees
Virtualization of the application environment is on every business's mind. Terms like hypervisors, virtual machines, and software-defined [insert your own popular term here: networks|data centers|storage] are being thrown around the technology industry like hot potatoes. While IT organizations focus on virtualizing specific applications, they often fail to consider how each component fits into the broader trend of virtualizing the entire IT infrastructure.
One here, two there
Businesses tend to virtualize their applications individually or in small groups, repeating the process until the majority of the application infrastructure has been virtualized. While this approach makes the migration manageable, it has the side effect that each virtualization project proceeds somewhat independently of the others.
The work being done by mobile service providers is a perfect example. The service providers created the network functions virtualization (NFV) reference architecture to bring virtualization and cloud-like benefits to their networks, and they are now running proof of concept (POC) projects to see how NFV will benefit their environments. But they are looking at very specific services and instances where they can virtualize parts of the network.
There are projects for vCPE (virtual customer premises equipment), such as one's cable modem or DSL router. Other projects are designed to create a vEPC (virtual evolved packet core) or vIMS (virtual IP multimedia subsystem). All of these virtualized components interact with each other in everyday client-generated traffic, but because the POCs are designed and run independently of one another, there is little interaction across the different virtualized systems.
One ecosystem, not islands
Virtualization offers agility and elasticity for application deployment and availability, but these benefits are fully realized only when the entire application delivery infrastructure is virtualized on a common architecture. More often than not, the activity and performance of one application affects other applications within the network.
The best way to create this virtualized ecosystem is to develop a consistent application delivery infrastructure that is aware of the application and its interactions with other components in the network. This application delivery infrastructure must have the orchestration and analytics capabilities to: 1) collect application performance data, 2) analyze and interpret the data, and 3) enact changes on other components within the virtualized infrastructure.
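The three-step collect/analyze/act loop can be sketched in code. This is a minimal, hypothetical illustration, not any vendor's API: the metric names, thresholds, and the `scale_out` callback standing in for an orchestrator are all assumptions for the sake of the example.

```python
# Hypothetical sketch of the collect -> analyze -> act loop described above.
# Thresholds and metric names are illustrative assumptions, not a real product API.
from dataclasses import dataclass

@dataclass
class AppMetrics:
    name: str          # e.g. a virtualized network function like "vEPC"
    latency_ms: float  # collected performance data (step 1)
    error_rate: float

def analyze(metrics):
    """Step 2: flag any application whose latency or error rate crosses a threshold."""
    return [m.name for m in metrics if m.latency_ms > 200 or m.error_rate > 0.05]

def enact(unhealthy, scale_out):
    """Step 3: ask the orchestrator (here just a callback) to add capacity."""
    for name in unhealthy:
        scale_out(name)

# Usage with stubbed data: vEPC is slow, vIMS is healthy.
metrics = [AppMetrics("vEPC", 250.0, 0.01), AppMetrics("vIMS", 80.0, 0.002)]
actions = []
enact(analyze(metrics), actions.append)
print(actions)  # ['vEPC']
```

The point of the sketch is the separation of concerns: collection, analysis, and enactment are distinct stages, so each can be swapped out as the orchestration layer evolves.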
Bridging the gap
Application delivery controllers (ADCs) are in a key position in the network architecture to perform and initiate these tasks. To work within agile and elastic virtualized networks, the ADCs themselves must be virtualized, either as true software solutions or as true virtual instances within existing ADC hardware platforms.
Because ADCs act as reverse proxies for the applications they front, they inherently virtualize application availability. More importantly, because they manage all of an application's connections, the ADC has unique insight into the availability, performance, and health of the application, as well as into how the application interacts with other components of the network infrastructure.
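Why a reverse proxy gets this visibility can be shown with a toy model. The class below is not a real ADC; it is a sketch under the assumption that the backend is a plain callable, showing that a proxy sitting on every connection can derive a health signal as a byproduct of forwarding traffic.

```python
# Toy model of the visibility a reverse proxy gains by sitting on every
# connection. "ReverseProxy" and its attributes are illustrative names only.
class ReverseProxy:
    def __init__(self, backend):
        self.backend = backend   # callable standing in for the real application
        self.requests = 0
        self.failures = 0

    def handle(self, request):
        """Forward a request to the backend, recording success or failure."""
        self.requests += 1
        try:
            return self.backend(request)
        except Exception:
            self.failures += 1
            raise

    @property
    def health(self):
        """Fraction of successful requests -- the kind of signal an ADC
        can feed into the orchestration and analytics layer."""
        if self.requests == 0:
            return 1.0
        return 1 - self.failures / self.requests
```

Because every request flows through `handle`, the health metric requires no instrumentation of the application itself, which is the structural advantage the paragraph above describes.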
The ADC must maintain consistent management and integration across all versions and platforms, and it must integrate into the management and orchestration systems that govern the virtual network ecosystem. When ADC technology is pervasive and appropriately integrated into the network infrastructure, businesses can fully benefit from the virtualization of their different application environments.