Virtualization Requires New Models for Old Technologies


Driving a car is like riding a bike, as the old expression goes. Even after years away from the steering wheel, it is fairly easy to recall how to do it. Of course, this old adage does not apply if the way cars are driven has changed. It can be disconcerting to go from an automatic to a manual transmission, or to drive on the right side of the road instead of the left.

In technology, we come across similar situations. While the base technologies we use are familiar, the environment and architecture change the way we utilize them. IT networks have evolved to the point where they look nothing like the structures we built 20 years ago, yet we are still using the same tools to build them.

Some things stay the same

Network protocols like Spanning Tree Protocol (STP), Ethernet, OSPF, and BGP are core components of almost all network designs. On top of these protocols, we add network services such as firewalls, application gateways, proxies, server load balancers (SLB), and other functions.

These protocols and functions have evolved and matured, but their core function and purpose have not changed much. OSPFv2 was defined in 1998 (RFC 2328), and OSPFv3, which added IPv6 support, was first specified in 1999 (RFC 2740) and updated in 2008 (RFC 5340). Anyone who understood the protocol 10 years ago would have no problem understanding it today.

The consistency and stability of these technologies are critical for network architectures to mature and evolve. IT architectures maintain their reliability and availability when the foundations are built from proven and tested components. These strong foundations have enabled architects to evolve the network designs of 20+ years ago into the ones we build today.

[You might also like: Application Virtualization – Seeing the Forest Instead of Trees]

Evolving means relearning

Virtualization through public and private clouds, software-defined anything (SDx) architectures, and software-defined data centers (SDDC) has presented a big challenge, one that brings another old adage to mind: you cannot teach an old dog new tricks. Virtualization is challenging the way networks are designed and how applications are delivered end-to-end.

Traditionally, most services, such as load balancing, security, content inspection, and compression, are applied at the server side of the connection. In virtualized architectures, the location and state of the application interface are often transient and ride across network infrastructure that the application provider does not own or manage. This lack of ownership and control makes it hard to insert these critical services into the path of the client-server connection.

Adjustments must be made to each service and how it is delivered so that the application delivery process stays reliable, secure, and optimized. Technology tools (hacks, in today's vernacular) like DNS redirects, generic routing encapsulation (GRE) tunnels, and global server load balancing (GSLB) help control and predict the path of client-server communications so that network functions can be applied to the content stream, as sketched below.
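To make the GSLB idea concrete, here is a minimal Python sketch of the steering decision such a system makes when answering a DNS query. The regions, virtual IPs (RFC 5737 documentation addresses), and TCP health-check policy are all hypothetical assumptions for illustration; a production GSLB runs behind an authoritative DNS server and uses richer health and proximity data.

```python
# A minimal sketch of GSLB-style steering logic. The regions, virtual IPs
# (RFC 5737 documentation addresses), and health-check policy below are
# hypothetical; a real GSLB sits behind an authoritative DNS server.
import socket

# Hypothetical per-data-center virtual IPs, each fronted by the full
# service chain (load balancer, firewall, inspection) at that site.
DATA_CENTERS = {
    "us-east": "192.0.2.10",
    "eu-west": "198.51.100.10",
}

def is_healthy(vip: str, port: int = 443, timeout: float = 1.0) -> bool:
    """Crude TCP health check against a data center's virtual IP."""
    try:
        with socket.create_connection((vip, port), timeout=timeout):
            return True
    except OSError:
        return False

def resolve(client_region: str) -> str:
    """Pick the answer for a client's DNS query: prefer the nearby site,
    then fail over to any healthy one so the client still lands behind
    a managed service chain."""
    preferred = "us-east" if client_region.startswith("us") else "eu-west"
    if is_healthy(DATA_CENTERS[preferred]):
        return DATA_CENTERS[preferred]
    for region, vip in DATA_CENTERS.items():
        if region != preferred and is_healthy(vip):
            return vip
    raise RuntimeError("no healthy data center available")

if __name__ == "__main__":
    # With the documentation addresses above, the health checks will fail;
    # substitute real VIPs to watch the steering and failover decisions.
    try:
        print(resolve("us-east"))
    except RuntimeError as err:
        print(err)
```

The point of the sketch is the decision, not the plumbing: by answering the DNS query with a virtual IP that sits behind a managed service chain, the operator regains a predictable point in the path where load balancing, security, and inspection can be applied.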

One step at a time

Network architectures are constantly changing. The evolution toward virtualized designs has dramatically changed how applications are delivered, especially in how the chain of application and network services is applied to the application and its content.

Comparing the traditional networks of the 1990s to the networks of today can be disconcerting. But the changes have not been tectonic, plate-shifting events; they are a series of smaller steps that add up to new models and continue to evolve network architectures. The cars we drive today are very different from the ones we drove decades ago, yet there is a familiarity and comfort in interacting with their traditional, though repurposed, functions.


Read “Keep It Simple; Make It Scalable: 6 Characteristics of the Futureproof Load Balancer” to learn more.


Frank Yue

Frank Yue is Director of Solution Marketing, Application Delivery for Radware. In this role, he is responsible for evangelizing Radware technologies and products before they come to market. He also writes blogs, produces white papers, and speaks at conferences and events related to application networking technologies. Mr. Yue has over 20 years of experience building large-scale networks and working with high performance application technologies including deep packet inspection, network security, and application delivery. Prior to joining Radware, Mr. Yue was at F5 Networks, covering their global service provider messaging. He has a degree in Biology from the University of Pennsylvania.
