
HPE, Intel & 6WIND DemoFriday Q&A: High Performance vADCs & OpenStack Virtual Infrastructure


July 21, 2016 03:00 PM

Frank Jimenez, Product Marketing Manager at 6WIND, answers your post-event questions. This DemoFriday, sponsored by the HPE & Intel CSP Partner Ecosystem and featuring Radware, took on the challenge of high network performance requirements in OpenStack environments.

Why not use bare metal and SR-IOV/PCI-passthrough to maximize performance?

6WIND: This may be a two-part question. The first part is about using bare metal with SR-IOV/PCI passthrough to maximize performance. With a bare metal approach there is no need for OpenStack or virtualization: the VNF is no longer a function on a virtual machine. The networking function is hosted on a physical machine and is in control of that machine. In this case, SR-IOV or PCI passthrough is no longer required, since the networking function has direct access to all interfaces (there is no hypervisor layer). However, throughput would still increase greatly if DPDK drivers are used.

The second part is about a virtual environment (hypervisor) and using SR-IOV/PCI passthrough to maximize performance. I’ve talked to a few customers who insist SR-IOV or PCI passthrough is the only way to increase VNF throughput performance. It is a valid alternative, but it comes at the price of complexity and shortsightedness. NFV promises to bring the advantages of virtualization to service provider networks, allowing service providers to turn services up or down quickly. One of the important characteristics of the NFV infrastructure is the abstraction of the underlying pools of storage, compute, and networking resources that make up the infrastructure. Because of this abstraction, the VNF is not hardware-bound or even hardware-aware. SR-IOV and PCI passthrough change this, since they bypass the hypervisor vSwitch and tie the VNF to a specific NIC or interface. This makes NFV advantages like VM migration, service chaining, and multi-tenancy more complex to execute, and can limit throughput due to PCIe bandwidth boundaries. A better solution is to increase the performance of the vSwitch itself, and that is where solutions like 6WIND Virtual Accelerator come in.

What advantage does this architecture have over other models?

6WIND: From the 6WIND Virtual Accelerator perspective, adding Virtual Accelerator increases hypervisor packet processing performance without changing the VMs, the management layer, or the current virtual switch deployment (Linux bridge or OVS). In fact, Virtual Accelerator leverages the OVS control plane to control forwarding from VNF to VNF and from VNF to NIC. Finally, Virtual Accelerator is scalable and makes more efficient use of hypervisor hardware resources.

Virtual Accelerator uses dedicated cores that are not available to the hypervisor. How does taking away resources allow the hypervisor to support more VMs, applications and services?

6WIND: Linux I/O drivers are not designed for high performance, and in an NFV environment VNF throughput is a major goal. In our tests we find that Linux scales throughput at about 4 Gbps per core (in a KVM environment this is non-linear, since resources are shared), while 6WIND Virtual Accelerator scales at about 20 Gbps per core. Let’s create a simple example where we deploy VNFs on an 8-core platform, where each VNF requires one core and 2.5 Gbps of throughput. In the virtual environment, 2.5 Gbps of VNF throughput translates to 5 Gbps of switch capacity, because each packet crosses the vSwitch twice: once from the NIC to the VNF and once from the VNF back out. With Linux doing the switching, the maximum VNF count is three, consuming three cores, delivering 7.5 Gbps of aggregate throughput, and requiring 15 Gbps of switch capacity. The remaining five cores dedicated to Linux provide 20 Gbps of switch capacity, enough to support the three VNFs. So with three cores for three VNFs and five cores for Linux, all eight cores of the platform are in use. This is not a very efficient use of platform resources.

Now let’s add 6WIND Virtual Accelerator to the hypervisor and configure it with two dedicated cores, for a total of 40 Gbps of switch capacity. Now the maximum VNF count is six, consuming six cores, delivering 15 Gbps of aggregate throughput, and requiring 30 Gbps of switch capacity, or 75% of the available switch capacity. By dedicating two cores to Virtual Accelerator, switching capacity, VNF density, and aggregate throughput are all doubled.
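To make the arithmetic easy to verify, here is a minimal Python sketch that reproduces both scenarios. It is purely illustrative, not 6WIND tooling: the per-core figures and the 2x switch-capacity factor are the ones quoted above, and the `report` helper is a hypothetical name.

```python
# Back-of-the-envelope check of the example above, using the figures
# quoted in this answer: Linux switches ~4 Gbps per core, Virtual
# Accelerator ~20 Gbps per core, each VNF needs one core and 2.5 Gbps,
# and each Gbps of VNF throughput consumes 2 Gbps of switch capacity
# (one pass into the vSwitch and one pass back out).

TOTAL_CORES = 8
VNF_GBPS = 2.5     # throughput required per VNF
SWITCH_FACTOR = 2  # switch Gbps consumed per Gbps of VNF throughput

def report(label: str, vnfs: int, switch_cores: int, gbps_per_core: float) -> None:
    """Check a core split against the budget and print the totals."""
    aggregate = vnfs * VNF_GBPS
    needed = aggregate * SWITCH_FACTOR
    available = switch_cores * gbps_per_core
    assert vnfs + switch_cores <= TOTAL_CORES, "core budget exceeded"
    assert needed <= available, "switch capacity exceeded"
    print(f"{label}: {vnfs} VNFs, {aggregate:.1f} Gbps aggregate, "
          f"{needed:.0f}/{available:.0f} Gbps switch capacity used")

# Plain Linux switching: 3 VNF cores + 5 switching cores = 8 cores.
report("Linux vSwitch", vnfs=3, switch_cores=5, gbps_per_core=4.0)
# Virtual Accelerator: 6 VNF cores + 2 dedicated cores = 8 cores.
report("Virtual Accelerator", vnfs=6, switch_cores=2, gbps_per_core=20.0)
```

Running it prints 15/20 Gbps of switch capacity used in the Linux case and 30/40 Gbps (75%) in the accelerated case, matching the numbers above.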

The ecosystem has a section ‘Third party VNF.’ How does a VNF get added to this section? Is the answer the 6WIND Speed Certification program?

6WIND: Please visit our 6WIND Speed Certification webpage for further information. You can then select the ‘Contact Us’ link or send me an email (frank.jimenez@6wind.com).

Is the accelerator an alternative to Open vSwitch?

6WIND: Think of 6WIND Virtual Accelerator as the ‘fast path’ forwarding section of the hypervisor and the vSwitch as the ‘slow path.’ They coexist, and 99% of the traffic flows through Virtual Accelerator. Exception traffic (for example, traffic destined to the hypervisor itself) is sent to the vSwitch.
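As a rough illustration of that split, here is a toy Python sketch of the generic fast-path/slow-path pattern. It is a teaching example under stated assumptions, not 6WIND’s implementation: the flow key, the `slow_path` policy, and the port names are all invented.

```python
# Conceptual sketch (not 6WIND's code) of a fast-path / slow-path
# split: flows already known to the fast path are forwarded directly;
# everything else is punted to the vSwitch slow path, which can then
# install a fast-path entry so later packets stay on the fast path.

from typing import Dict, Tuple

FlowKey = Tuple[str, str]  # (src, dst) - deliberately simplified key

fast_path_table: Dict[FlowKey, str] = {}  # flow -> output port

def slow_path(flow: FlowKey) -> str:
    """vSwitch handles the exception packet and returns an output port."""
    port = "vnf0" if flow[1].startswith("10.") else "nic0"  # toy policy
    fast_path_table[flow] = port  # install entry for subsequent packets
    return port

def forward(flow: FlowKey) -> str:
    # The common case (~99% of traffic) hits the fast-path table.
    if flow in fast_path_table:
        return fast_path_table[flow]
    # Exception traffic (first packet of a flow, traffic destined to
    # the hypervisor itself, ...) goes through the slow path.
    return slow_path(flow)

print(forward(("192.0.2.1", "10.0.0.5")))  # slow path, installs entry
print(forward(("192.0.2.1", "10.0.0.5")))  # fast-path hit
```

The design point is that the slow path only sees the first packet of a flow (and true exceptions); once an entry is installed, subsequent packets never leave the fast path.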

Last question from my side: Is there a list of other supported (FW, LB) vendors available?

6WIND: Any virtual appliance will work on a hypervisor with 6WIND Virtual Accelerator installed. Getting the performance boost may require help. You can visit our 6WIND Speed Certification webpage for a list of VNFs we have certified.

What about IPv6 in SDN and NFV environments?

6WIND: 6WIND Virtual Accelerator has native support for IPv6. Virtual Accelerator has no control plane of its own; it synchronizes its forwarding tables with the Linux control plane, so it works transparently. For SDN environments, that support must come from the Linux control plane.
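To make “synchronizes with the Linux control plane” concrete, the sketch below (an illustration, not 6WIND code) snapshots the kernel’s IPv6 routing table with the standard iproute2 tool. A real fast path would keep its own table in sync with this source of truth, typically by listening for netlink route notifications rather than polling.

```python
# Illustrative only: the fast path mirrors the kernel's forwarding
# information base (FIB). This sketch polls the Linux IPv6 routing
# table via the standard `ip` tool; an accelerated data plane would
# subscribe to rtnetlink route updates instead of polling like this.

import subprocess

def snapshot_ipv6_fib() -> list:
    """Return the kernel's current IPv6 routes, one per line."""
    out = subprocess.run(
        ["ip", "-6", "route", "show"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

for route in snapshot_ipv6_fib():
    print(route)  # e.g. "fe80::/64 dev eth0 proto kernel metric 256"
```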

Which hypervisors are supported for use with Virtual Accelerator?

6WIND: The general answer is that all Linux-based hypervisors are supported. We have tested with the following Linux distributions: Red Hat, Ubuntu, and CentOS. For more information, please see the Virtual Accelerator data sheet.
