Application Virtualization – Seeing the Forest Instead of Trees


Virtualization of the application environment is on every business's mind. Terms like hypervisors, virtual machines, and software-defined [insert your own popular term here: networks|data centers|storage] are being thrown around the technology industry like hot potatoes. But while IT organizations focus on virtualizing specific applications, they often fail to see how each component fits into the broader trend of virtualizing the entire IT infrastructure.

One here, two there

Businesses tend to virtualize their applications individually or in small groups, repeating the process until the majority of the application infrastructure has been virtualized. While this approach keeps the migration manageable, it has the side effect that each virtualization project remains somewhat independent of the others.

The work being done among mobile service providers is a perfect example. They created the network functions virtualization (NFV) reference architecture to bring virtualization and cloud-like benefits to service provider networks. They are starting proof of concept (POC) projects to see how NFV will benefit their environments, but they are looking at very specific services and instances where they can virtualize parts of the network.

There are projects for vCPE (virtual customer premises equipment) such as one's cable modem or DSL router. Other projects are designed to create a vEPC (virtual evolved packet core) or vIMS (virtual IP multimedia subsystem). All of these virtualized components interact with each other for everyday client-generated traffic, but because the POCs are designed and run independently of one another, there is little interaction across the different virtualized systems.

[You might also like: Automation – Virtualizing the Human Factor]

One ecosystem, not islands

Virtualization offers agility and elasticity for application deployment and availability. These benefits are fully realized when the entire application delivery infrastructure is virtualized using a common architecture. More often than not, the activity and performance of one application affects other applications within the network.

The best way to create this virtualized ecosystem is to develop a consistent application delivery infrastructure that is aware of the application and its interactions with other components in the network. This application delivery infrastructure must have the orchestration and analytics to be able to: 1) collect the application performance data, 2) analyze and interpret the data, and 3) enact changes to other components within the virtualized infrastructure.
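The three-step loop above can be sketched in a few lines of code. This is a hypothetical illustration only; the class names, metrics, and the 200 ms latency threshold are assumptions for the example, not a real orchestration API.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class AppMetrics:
    """Performance data for one virtualized application (illustrative)."""
    name: str
    latencies_ms: list = field(default_factory=list)

    def record(self, latency_ms: float) -> None:
        # 1) collect the application performance data
        self.latencies_ms.append(latency_ms)

class Orchestrator:
    # Assumed threshold for this sketch, not a vendor default
    LATENCY_THRESHOLD_MS = 200.0

    def analyze(self, metrics: AppMetrics) -> bool:
        # 2) analyze and interpret the data: flag the application
        # when its average latency breaches the threshold
        if not metrics.latencies_ms:
            return False
        return mean(metrics.latencies_ms) > self.LATENCY_THRESHOLD_MS

    def act(self, metrics: AppMetrics) -> str:
        # 3) enact changes to other components in the virtualized
        # infrastructure, e.g. request a scale-out of another instance
        if self.analyze(metrics):
            return f"scale-out:{metrics.name}"
        return f"no-op:{metrics.name}"
```

For example, feeding a `vIMS` instance latencies of 250, 310, and 180 ms averages to roughly 247 ms, so `act()` would return a scale-out decision for it.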

Bridging the gap

Application delivery controllers (ADCs) occupy a key position in the network architecture to perform and initiate these tasks. To work within agile and elastic virtualized networks, it is important to have ADCs that are virtualized, either as true software solutions or as true virtual instances within existing ADC hardware platforms.

Because ADCs act as reverse proxies for the applications behind them, they inherently virtualize application availability. More importantly, because they manage all of an application's connections, the ADC has unique insight into the availability, performance, and health of the application, as well as how it interacts with other components within the network infrastructure.
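A toy sketch shows why sitting in the request path gives this visibility almost for free: every client request passes through the proxy, so it can time each backend call and count failures without instrumenting the application itself. The names and the 50% failure threshold are illustrative assumptions, not a real ADC interface.

```python
import time

class ProxyStats:
    """Counters a proxy accumulates simply by forwarding traffic."""
    def __init__(self):
        self.requests = 0
        self.failures = 0
        self.total_ms = 0.0

class ReverseProxy:
    def __init__(self, backend):
        # `backend` stands in for the real application behind the proxy;
        # here it is any callable taking a request and returning a response.
        self.backend = backend
        self.stats = ProxyStats()

    def handle(self, request):
        start = time.perf_counter()
        self.stats.requests += 1
        try:
            # Forward the request to the application
            return self.backend(request)
        except Exception:
            # Health insight: the proxy observes every failed request
            self.stats.failures += 1
            raise
        finally:
            # Performance insight: per-request latency, measured in-path
            self.stats.total_ms += (time.perf_counter() - start) * 1000.0

    def healthy(self, max_failure_rate=0.5):
        # Availability insight derived purely from observed traffic
        if self.stats.requests == 0:
            return True
        return self.stats.failures / self.stats.requests <= max_failure_rate
```

A production ADC does far more (connection pooling, TLS offload, content switching), but the principle is the same: the proxy position is what makes the application data observable.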

The ADC must maintain consistency of management and integration across all versions and platforms. Also, it must integrate into the management and orchestration systems that govern the virtual network ecosystems. When the ADC technology is pervasive and appropriately integrated into the network infrastructure, businesses can fully benefit from the virtualization of their different application environments.


Read “Keep It Simple; Make It Scalable: 6 Characteristics of the Futureproof Load Balancer” to learn more.

Frank Yue

Frank Yue is Director of Solution Marketing, Application Delivery for Radware. In this role, he is responsible for evangelizing Radware technologies and products before they come to market. He also writes blogs, produces white papers, and speaks at conferences and events related to application networking technologies. Mr. Yue has over 20 years of experience building large-scale networks and working with high performance application technologies including deep packet inspection, network security, and application delivery. Prior to joining Radware, Mr. Yue was at F5 Networks, covering their global service provider messaging. He has a degree in Biology from the University of Pennsylvania.
