How to optimize Virtual Machine performance for NFV load balancing

Description #

Network Function Virtualization, also known as NFV, is a paradigm that advocates moving networking functions from dedicated hardware devices to virtual environments in order to gain flexibility and simplify maintenance. However, every use case should be studied to determine whether a hardware or a virtual appliance is the best option according to the requirements, budget and available resources.

In this article we cover the performance differences between hardware and virtual network appliances, describe some VM tuning optimizations for networking and load balancing, and compare performance across hypervisor vendors.

Performance differences between hardware and virtual appliances #

When deploying a new load balancer, the main technical reason to choose a hardware appliance is to obtain the highest performance and lowest network latency possible, while a virtual appliance provides more flexibility and easier infrastructure management.

In an ideal scenario, deploying a virtual machine with all the resources of the host assigned to it, the VM can reach between 96% and 97% of the host's CPU performance, between 70% and 90% of its network performance, and between 40% and 70% of its storage I/O performance; the difference in each case is due to hypervisor overhead.
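
As a rough, back-of-the-envelope illustration of these overhead figures, the following Python sketch turns a host capacity into the range a VM may reach. The ranges are the ones quoted above; the 10 Gbps NIC example is hypothetical.

```python
# Rough estimate of VM capacity from host capacity, using the
# hypervisor overhead ranges quoted above (not measurements).
HYPERVISOR_EFFICIENCY = {
    "cpu": (0.96, 0.97),
    "network": (0.70, 0.90),
    "storage_io": (0.40, 0.70),
}

def vm_capacity(host_capacity: float, resource: str) -> tuple[float, float]:
    """Return the (pessimistic, optimistic) capacity the VM may reach."""
    low, high = HYPERVISOR_EFFICIENCY[resource]
    return host_capacity * low, host_capacity * high

# Example: a host NIC capable of 10 Gbps leaves the VM 7 to 9 Gbps.
print(vm_capacity(10.0, "network"))  # (7.0, 9.0)
```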

Benchmarking virtual machines is not an easy task, as the number of possible configurations makes accurate figures hard to obtain, and many factors can affect the performance of a VM, such as:

The hypervisor vendor and version used
The host optimizations
Allocated resources per VM
Number of VMs running per host
Network traffic, CPU or Disk I/O load in the hypervisor
Network drivers configured
Shared resources between VMs
Task performed (routing, content switching, SSL offload, etc.)
among others…

This article is dedicated to networking optimization for load balancing in virtual environments, so it focuses on CPU load and network I/O tuning to get the most from your load balancing VMs. Disk storage performance is less critical, as this kind of NFV application does not generate a high disk I/O load.

VM optimizations for networking and load balancing #

In order to boost NFV performance (and specifically load balancing) in your virtual infrastructure, we recommend following the instructions below.

1. Modern and updated hardware host. Recent hardware platforms already include several processor acceleration and software techniques at BIOS or firmware level that improve virtualization performance. Keeping firmware and BIOS up to date is good practice, as it enables new features and protects against known problems.
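
On a Linux host, one quick way to record the platform and BIOS version before checking the vendor's site for updates is to read the DMI entries exposed through sysfs. A minimal sketch; the /sys/class/dmi/id paths are standard on Linux, but individual files may be missing or access-restricted depending on the platform:

```python
# Print the host's vendor, model and BIOS/firmware version from sysfs.
from pathlib import Path

DMI = Path("/sys/class/dmi/id")

for attr in ("sys_vendor", "product_name", "bios_version", "bios_date"):
    f = DMI / attr
    value = f.read_text().strip() if f.exists() else "unknown"
    print(f"{attr}: {value}")
```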

2. Select your preferred hypervisor. The hypervisor running on the host is very important with regard to networking performance. Our benchmark study of the most widely used hypervisors is described in the next section and gives a broad overview of which virtual platforms are the most optimized for networking performance and load balancing. In addition, some vendors unlock several performance and scalability features in their non-free editions that should be enabled for NFV solutions.

3. Updated hypervisor. Keeping the hypervisor up to date lets the host benefit from every optimization feature and resource improvement applied to it, as well as from security fixes.

4. Enable Intel VT-x or AMD-V. Newer Intel and AMD processors generally include this hardware virtualization support, but it is not always enabled by default in the BIOS. Once you have ensured that the option is enabled in the BIOS, you may also need to expose it at VM level in the hypervisor settings.
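
On a Linux host you can verify that the CPU exposes these extensions by looking for the vmx (Intel) or svm (AMD) flags in /proc/cpuinfo. A minimal sketch; note that the flag only proves CPU support, since the feature may still be locked off in the BIOS:

```python
# Check for Intel VT-x ("vmx") or AMD-V ("svm") in the CPU feature
# flags (Linux). If the BIOS has disabled the feature, the flag may
# still appear but /dev/kvm will be unavailable.
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

if "vmx" in flags:
    print("Intel VT-x supported")
elif "svm" in flags:
    print("AMD-V supported")
else:
    print("No hardware virtualization flag found")
```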

5. Dedicated network for maintenance. During the network setup of a virtual machine it is important to create isolated networks: one for production services and another, internal and private to the hosts, for maintenance tasks such as live migrations (e.g. vMotion, moving workloads between hosts). This private network will be faster and more secure, and maintenance traffic will not affect your production services.

6. Select improved network drivers. Make sure you use the best-performing virtual network driver for your hypervisor and for your specific NIC (for example, paravirtualized drivers such as virtio-net on KVM or VMXNET3 on VMware ESXi). Keeping the most suitable and up-to-date network driver will reduce latency and perform better under high network traffic loads.
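
On a Linux guest, one way to confirm which driver is actually bound to each interface is to follow the driver symlink in sysfs (the same information `ethtool -i` reports). A minimal sketch:

```python
# List each network interface and the kernel driver bound to it.
# Interfaces without a backing device (e.g. "lo") are skipped.
from pathlib import Path

for iface in sorted(Path("/sys/class/net").iterdir()):
    driver_link = iface / "device" / "driver"
    if driver_link.exists():
        # The symlink target's name is the driver, e.g. "virtio_net".
        print(f"{iface.name}: {driver_link.resolve().name}")
```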

7. Dedicated vCPUs. From a performance point of view, it is better to assign a VM fewer vCPUs but dedicate them to it. Not sharing CPU resources reduces context switches and wait states on the host, and prevents the workload of one VM from affecting another.
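
Pinning is normally configured in the hypervisor itself (for example, libvirt's vcpupin or the scheduling affinity settings in ESXi). As a minimal process-level illustration of the same idea on a Linux host, Python's os.sched_setaffinity restricts a PID to a fixed set of cores; the core numbers below are an assumption for the example:

```python
# Pin a process (e.g. a VM's QEMU PID) to dedicated host cores so it
# never competes with other workloads for CPU time (Linux only).
import os

DEDICATED_CORES = {2, 3}  # assumption: cores reserved for this workload

pid = os.getpid()  # or the PID of the VM process to pin
os.sched_setaffinity(pid, DEDICATED_CORES)
print(f"PID {pid} now runs only on cores {sorted(os.sched_getaffinity(pid))}")
```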

8. Optimized, ready-to-deploy templates. It is important to maintain templates optimized for each specific hypervisor and version, including the appropriate tools, drivers and an operating system tuned for networking on the guest side. Having a template ready to deploy improves efficiency, simplifies management and saves time.

Performance between hypervisors #

According to the load balancing and high network load benchmarks run in our lab, newer versions of VMware ESXi perform better than XenServer, Hyper-V and other hypervisors on the market.

Defining the right resource allocation for RELIANOID virtual appliances #

Assuming the best-performing hypervisor on the market according to our lab tests, an optimally configured RELIANOID Load Balancer virtual environment shows a performance penalty of 7% to 20% compared with the same physical configuration.

Per dedicated vCPU we can estimate (a capacity sketch follows this list):

~18k HTTP requests per second with LSLB HTTP farm.
~220k HTTP requests per second with LSLB L4XNAT farm.
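
Taking the two figures above at face value, a simple capacity-planning sketch estimates how many dedicated vCPUs a target load requires; the 50k requests per second target is a hypothetical example:

```python
# Capacity planning with the per-vCPU figures quoted above.
import math

PER_VCPU_RPS = {
    "lslb_http": 18_000,     # ~18k HTTP requests/s per dedicated vCPU
    "lslb_l4xnat": 220_000,  # ~220k requests/s per dedicated vCPU
}

def vcpus_needed(target_rps: int, farm_type: str) -> int:
    """Return the number of dedicated vCPUs for a target request rate."""
    return math.ceil(target_rps / PER_VCPU_RPS[farm_type])

# Example: 50k req/s needs 3 vCPUs on an HTTP farm, 1 on an L4xNAT farm.
print(vcpus_needed(50_000, "lslb_http"))    # 3
print(vcpus_needed(50_000, "lslb_l4xnat"))  # 1
```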

If session persistence is enabled, we should take care of the memory resources of the VM (a sizing sketch follows this list):

512 MB of RAM per virtual service or farm instantiated in the VM.
Additional 512 MB of RAM per virtual service or farm with more than 10,000 users.
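
These rules translate into a simple sizing formula. In the sketch below, the 1 GB baseline for the appliance's own operating system is an assumption for illustration, not a RELIANOID specification; the per-farm figures are the ones listed above:

```python
# Memory sizing for a VM with session persistence enabled on all farms.
BASE_OS_MB = 1024          # assumed baseline for the appliance itself
PER_FARM_MB = 512          # per virtual service/farm (from the list above)
LARGE_FARM_EXTRA_MB = 512  # extra per farm with more than 10,000 users

def ram_needed_mb(n_farms: int, n_large_farms: int) -> int:
    """Return the estimated RAM in MB for the given farm counts."""
    return (BASE_OS_MB
            + n_farms * PER_FARM_MB
            + n_large_farms * LARGE_FARM_EXTRA_MB)

# Example: 4 farms, 1 of them above 10,000 users -> 3584 MB.
print(ram_needed_mb(4, 1))
```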

Regarding storage, RELIANOID virtual appliances allocate 8 GB of disk, which can be resized if needed but should be enough in most cases.
