
SR-IOV Network I/O enhancement in a virtualized environment in Windows Server 2012

Summary: An overview of Single Root I/O Virtualization (SR-IOV) in Windows Server 2012 Beta and how it reduces CPU utilization during network I/O in a virtualized environment.


Article Content


Symptoms

This post was originally written by Abhijit Khande and Vineeth V Acharya from the Dell Windows Engineering Team.

Comments are welcome! To suggest a topic or make other comments, contact
WinServerBlogs@dell.com.


With the Microsoft® Windows Server® 2012 Beta operating system, Microsoft has introduced support for a number of features in the networking space. One significant and interesting feature is Single Root I/O Virtualization (SR-IOV), which allows virtual machines to share a single PCIe device. This post provides a basic overview of SR-IOV in Windows Server 2012 Beta and shows how it can drastically reduce CPU utilization during network I/O in a virtualized environment.

The SR-IOV interface is an extension to the PCI Express (PCIe) specification. It allows a single PCIe device, such as a network adapter, to present multiple lightweight hardware surfaces on the PCI bus and to separate access to its resources among those instances. This is achieved through multiple virtual functions (VFs) in addition to the usual physical function (PF). Because the feature is hardware-dependent, both the PCIe device and the platform must support it.

Traditionally, a packet destined for a virtual machine is received by the physical network adapter (through the physical function) in the host OS. The packet is handled by the NDIS (Network Driver Interface Specification) driver module and then passed to the Hyper-V switch, which processes it (routing, VLAN filtering, and so on) and forwards it to the destination VM over the VMBus, as shown in Figure 1. The reverse path is followed when the VM sends a packet.

With SR-IOV, the VM can use a virtual function (VF) to send and receive packets directly through the physical network adapter, bypassing the traditional path completely, as shown in Figure 1. This not only boosts network I/O but also reduces the overhead on the host machine's CPU.
Figure 1: Data path with and without SR-IOV

Live Migration & SR-IOV

In Windows Server 2012 Beta, Live Migration can be performed on a VM that is using SR-IOV. If both the source and target systems support SR-IOV and the target has an available VF, the VM continues to use a virtual function; if not, the VM falls back to the traditional path (VMBus).
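The migration itself needs no SR-IOV-specific steps; the fallback is automatic. As a sketch (the destination host name here is an assumption):

```powershell
# Live-migrate VM-1; on arrival it uses a VF if one is free,
# otherwise it reverts to the synthetic (VMBus) path automatically.
Move-VM -Name "VM-1" -DestinationHost "TargetHost01"
```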

Each SR-IOV capable network adapter exposes a fixed number of Virtual Functions, which can be obtained by running the PowerShell command "Get-NetAdapterSriov".
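For example, the following listing shows the SR-IOV capability and VF count of each adapter on the host (exact property names can vary with the adapter driver):

```powershell
# List SR-IOV capable adapters with their support status and
# the number of virtual functions each one exposes.
Get-NetAdapterSriov | Format-List Name, SriovSupport, NumVFs
```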


Performance Analysis

We performed a few tests in our lab to compare the performance with and without SR-IOV. The test environment consists of one test server (on which the tests are executed) and one file server. The test server is a Dell PowerEdge™ R710 II with an Intel® X520 10Gb Ethernet adapter, running the Windows Server 2012 Beta OS. The file server hosts multiple SMB shares and is connected to the test server over a 10Gb network through a Dell PowerConnect™ 8024 switch.

We captured performance data in terms of the CPU consumed by the DPCs (deferred procedure calls) scheduled by the various driver modules taking part in the network data transfer. This data was captured in both the guest and host OS, as described in the following scenarios. The test server hosts 4 identical virtual machines with Windows Server 2012 Beta as the guest OS. The 4 VMs are connected to the 10Gb network via a virtual switch. This test configuration is shown in Figure 2.
Figure 2: Test configuration
Before we present the performance data, it is important to introduce a critical parameter used in this testing. The Microsoft Windows OS provides a mechanism called the Deferred Procedure Call (DPC), which allows a high-priority task (e.g. an interrupt handler) to defer required but lower-priority work for later execution. This permits device drivers and other low-level event consumers to perform the high-priority part of their processing quickly and to schedule non-critical additional processing for execution at a lower priority.


Scenario 1 – Performance Results in Virtual Machine (Guest OS):

We used four virtual machines (VM 1 through VM 4) for this scenario. We enabled SR-IOV on VM 1 and VM 2 (by checking the "Enable SR-IOV" option in the VM settings) and disabled it on VM 3 and VM 4 (by unchecking the same option). Therefore, VM 1 and VM 2 use the virtual function exposed by the Intel adapter, while VM 3 and VM 4 use the synthetic path (VMBus) for all network communication, as shown in Figure 2.

We started copying 20 GB of data from a single SMB share to the VMs and captured the system CPU usage logs in the guest OS.
Figure 3: DPC CPU usage in Virtual Machine

In Figure 3, SR-IOV refers to the average DPC CPU usage in VM 1 and VM 2, and Non-SR-IOV refers to the average DPC CPU usage in VM 3 and VM 4. The graph shows little difference in CPU usage between the two cases, but we have not yet accounted for the CPU usage on the host machine.


Scenario 2 – Performance Result in Host Machine (Host OS):

We used one virtual machine for this scenario. We started copying 20 GB of data from the SMB share to the VM and captured the DPC CPU usage of the two modules used in network I/O, NDIS and VMBus.
Figure 4: DPC CPU usage in Host Machine
The results are shown in Figure 4. As expected, there is a vast difference in CPU usage between the two cases (SR-IOV and Non-SR-IOV). The CPU usage is on the order of 10² in the case of SR-IOV, and on the order of 10³–10⁴ in the case of Non-SR-IOV. This is mainly because, with SR-IOV, the virtual machine communicates directly with the physical NIC through the virtual function, so the host's CPU cycles are not spent processing network packets. Without SR-IOV (as shown in Figure 1), the guest OS communicates with the host OS over the VMBus, and the host in turn processes the packets and sends them through the physical NIC, so modules such as VMBus and NDIS are used extensively.

Calculating the total CPU usage shows that CPU utilization during network I/O is far lower when SR-IOV is used. Hence, Windows Server 2012 Beta with SR-IOV enabled helps customers reduce CPU overhead during network I/O and thereby improve overall system performance.

 

Cause

Configuring SR-IOV using PowerShell

You can use the following PowerShell commands to create a new virtual switch with SR-IOV enabled and attach the virtual switch to the virtual network adapter of an existing virtual machine.

NOTE: Before running the following commands, the following options must be enabled in the BIOS:
  • Virtualization Technology
  • SR-IOV Global Enable
(These commands assume that a single network adapter (Intel X520) is connected.)

$NetAdap = Get-NetAdapter | Where-Object { $_.Status -eq "Up"}

(The -EnableIov parameter enables SR-IOV on the virtual switch.)

New-VMSwitch -Name "SRIOV Switch" -NetAdapterName $NetAdap.Name -AllowManagementOS $True -Notes "SRIOV Switch on X520" -EnableIov $True

$VMSw = Get-VMSwitch


Add a new network adapter to VM-1 and connect it to the virtual switch:

Add-VMNetworkAdapter -SwitchName $VMSw.Name -VMName VM-1 -Name "SRIOV Adapter"


NOTE: For this command to work, the VM must be turned off.
You can also use an existing VM network adapter with the following commands:

$VMNet = Get-VMNetworkAdapter -VMName VM-1

Connect-VMNetworkAdapter -VMName VM-1 -SwitchName $VMSw.Name -Name $VMNet.Name

Each VM network adapter has two properties, IovWeight and VmqWeight, which correspond to SR-IOV and VMQ respectively. Adjusting these weights enables or disables the features.
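You can inspect the current weights before changing them; for example (assuming the VM is named VM-1, as above):

```powershell
# Show the SR-IOV and VMQ weights for each network adapter on VM-1.
# IovWeight 0 means SR-IOV is disabled for that adapter.
Get-VMNetworkAdapter -VMName "VM-1" | Format-Table Name, IovWeight, VmqWeight
```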

Resolution

To enable SR-IOV, set IovWeight to 100. To disable SR-IOV, set IovWeight to 0 (the default).

Set-VMNetworkAdapter -VMName VM-1 -VMNetworkAdapterName $VMNetName -IovWeight 100

where $VMNetName is the name of the VM Network Adapter connected to the SRIOV Switch.

 

Article Properties

Affected Product: Servers
Last Published Date: 20 Sep 2021
Version: 4
Article Type: Solution