NIC Partitioning in VMware ESXi

This blog is written by Varun Joshi from the Dell Hypervisor Engineering Team.

What is NPAR?

NPAR stands for NIC partitioning, a feature that splits a single 10GbE port on the network adapter into four partitions with flexible bandwidth allocation. Each partition appears to the server and to the operating system as a separate PCI Express physical function, that is, as a separate physical NIC, and each partition can be configured as a NIC, an iSCSI adapter, or an FCoE adapter.
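
As a purely illustrative sketch (the class and values below are assumptions, not output from a real adapter), the relationship between one 10GbE port and its four partitions can be pictured like this:

    from dataclasses import dataclass

    @dataclass
    class Partition:
        """One NPAR partition: a separate PCI function with its own personality."""
        pci_function: int   # PCI function number exposed to the OS
        personality: str    # "NIC", "iSCSI" or "FCoE"

    # One 10GbE port split into four partitions. On a dual-port adapter the
    # first port typically owns the even function numbers and the second port
    # the odd ones (see the enumeration example later in this post).
    port0 = [
        Partition(pci_function=0, personality="NIC"),
        Partition(pci_function=2, personality="iSCSI"),
        Partition(pci_function=4, personality="FCoE"),
        Partition(pci_function=6, personality="NIC"),
    ]

    for p in port0:
        print(f"PCI function {p.pci_function}: seen by ESXi as a separate adapter ({p.personality})")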

Each partition can support networking features such as:

  • TCP checksum offload
  • Large send offload
  • Transparent Packet Aggregation (TPA)
  • Multiqueue receive-side scaling
  • Internet SCSI (iSCSI) HBA
  • Fibre Channel over Ethernet (FCoE) HBA

Recent generations of Dell PowerEdge servers come with a Network Daughter Card (NDC), which gives the flexibility to choose from a wide range of network adapters. The Dell PowerEdge server network adapter portfolio also includes NPAR-enabled NDCs such as the Broadcom BCM57810 NetXtreme II 10Gb Ethernet, the Broadcom BCM57712 NetXtreme II 10Gb Ethernet, the QLogic QME8262-k dual-port 10Gbps Ethernet-to-PCIe adapter, and the QLogic QLE8262 dual-port 10Gbps Ethernet-to-PCIe adapter.

Steps to configure NPAR on Dell PowerEdge servers:

  1. Go to Device Settings in the boot menu and select the 10Gb Broadcom 57810 card.

  2. Select NIC 1 and choose the network partitioning configuration.

  3. Select the Global Bandwidth Menu and ensure Flow Control is set to Auto.

  4. Set up each of the four partitions with a specific size and a specific weight (a sketch of how these settings relate follows the list).

  5. Configure each partition as iSCSI, FCoE, or Ethernet as required.
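
The per-partition size and weight in step 4 usually follow the Broadcom NPAR convention of a relative bandwidth weight (the partition's share of the port under contention) plus a maximum bandwidth cap. The helper below is an assumed illustration of how those two settings relate, not vendor tooling:

    # Hypothetical per-partition settings for one 10GbE port (step 4 above).
    # "weight" is the relative bandwidth weight (share of the port under
    # contention); "max_pct" is the maximum bandwidth cap as a percentage
    # of the 10Gb link speed.
    partitions = {
        1: {"weight": 40, "max_pct": 40},   # e.g. iSCSI
        2: {"weight": 40, "max_pct": 40},   # e.g. FCoE
        3: {"weight": 10, "max_pct": 20},   # e.g. vMotion/management
        4: {"weight": 10, "max_pct": 20},   # e.g. VM network
    }

    LINK_SPEED_GBPS = 10

    # Relative weights are normally expected to total either 0 or 100.
    total_weight = sum(p["weight"] for p in partitions.values())
    assert total_weight in (0, 100), f"weights total {total_weight}, expected 0 or 100"

    for idx, p in sorted(partitions.items()):
        share = LINK_SPEED_GBPS * p["weight"] / 100
        cap = LINK_SPEED_GBPS * p["max_pct"] / 100
        print(f"Partition {idx}: ~{share:.1f} Gbps share under contention, capped at {cap:.1f} Gbps")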

Use Cases of NPAR in VMware vSphere Environment

In a VMware vSphere environment, users can assign each individual NIC partition to a specific kind of traffic (e.g., management traffic, vMotion traffic, iSCSI traffic, FCoE traffic, VM network traffic) with a customized bandwidth allocation, optimizing the usage of the underlying physical adapters.

Example configuration of traffic allocation with NPAR in a vSphere environment:

  • Partition 1  = 4Gbps, running as an iSCSI HBA 
  • Partition 2  = 4Gbps, running as an FCoE HBA 
  • Partition 3  = 2Gbps, running as vMotion and management network
  • Partition 4  = 2Gbps, running as VM Network.
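
A small sketch of how that allocation could be laid out (the vmnic names below are assumptions matching the enumeration example in the next paragraph). Note that the 4+4+2+2 Gbps figures total 12 Gbps on a 10Gb port, which is permissible when they are per-partition maximum-bandwidth caps rather than guarantees:

    # Assumed mapping of the example allocation onto one 10GbE port whose
    # four partitions enumerate as vmnic6/8/10/12 (see the next paragraph).
    allocation = [
        {"partition": 1, "vmnic": "vmnic6",  "cap_gbps": 4, "traffic": "iSCSI HBA"},
        {"partition": 2, "vmnic": "vmnic8",  "cap_gbps": 4, "traffic": "FCoE HBA"},
        {"partition": 3, "vmnic": "vmnic10", "cap_gbps": 2, "traffic": "vMotion + management"},
        {"partition": 4, "vmnic": "vmnic12", "cap_gbps": 2, "traffic": "VM network"},
    ]

    LINK_SPEED_GBPS = 10

    for a in allocation:
        print(f"Partition {a['partition']} ({a['vmnic']}): {a['cap_gbps']} Gbps cap -> {a['traffic']}")

    # Caps are per-partition limits; the relative weights decide the split
    # when the physical port is saturated, so the caps may add up to more
    # than the link speed.
    total_cap = sum(a["cap_gbps"] for a in allocation)
    if total_cap > LINK_SPEED_GBPS:
        print(f"Caps total {total_cap} Gbps on a {LINK_SPEED_GBPS} Gbps port (oversubscription is allowed)")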

vSphere enumerates these NPAR partitions as vmnics according to their PCI function number.

For example, port 0 has device functions 0/2/4/6 (which enumerate as vmnic6/8/10/12) and port 1 has device functions 1/3/5/7 (which enumerate as vmnic7/9/11/13).
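
A minimal sketch of that enumeration pattern (the vmnic offset of 6 is taken from this example and will differ on hosts with other adapters installed):

    def npar_layout(vmnic_offset=6, ports=2, partitions_per_port=4):
        """Reproduce the enumeration above: even PCI functions on port 0,
        odd functions on port 1, vmnic names assigned in function order."""
        layout = []
        for port in range(ports):
            for part in range(partitions_per_port):
                function = part * ports + port   # port 0 -> 0/2/4/6, port 1 -> 1/3/5/7
                layout.append((port, part + 1, function, f"vmnic{vmnic_offset + function}"))
        return layout

    for port, partition, function, vmnic in npar_layout():
        print(f"port {port}, partition {partition}: PCI function {function} -> {vmnic}")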

 

References 

http://en.community.dell.com/techcenter/b/techcenter/archive/2012/08/21/some-thoughts-about-nic-partitioning-npar.aspx

http://www.dell.com/downloads/global/products/pedge/en/Dell-Broadcom-NPAR-White-Paper.pdf

http://www.dell.com/downloads/global/products/pwcnt/en/broadcom-57712-k-faq.pdf

Comments

  • If I have a Dell R720 and I enable NPAR on a dual 10Gb NIC, with the onboard quad NIC used for iSCSI, will I exceed the config maximums for VMware?

  • I am using 5.1, BTW.

  • I think the limitation of the config maximums is with respect to the number of physical ports present in the system. Using NPAR, you just enable partitions of a single physical port to split the traffic.

    However, could you please provide vmkernel.log from your system with NPAR enabled and with NPAR disabled? From this we can make out if you really exceed the config maximums when NPAR is enabled.

  • Krishnaprasad,

    Thanks for your response. According to a VMware support rep, it's not based on physical ports but on what VMware 'sees', so if you carve the NICs up you can easily exceed the maximums. Also be very careful if, like me, you mix the two. I'm awaiting confirmation.

  • Hello, could you please provide vmkernel.log from your system with NPAR enabled and with NPAR disabled? From this we can make out if you really exceed the config maximums when NPAR is enabled.

  • Krishnaprasad,

    Thanks so much again for your response.  I have sent you over the logs.

  • any news on this?

  • Regarding configuration maximums when NPAR is enabled? If yes, the partitions are also counted as separate 10Gig ports and hence are considered while counting configuration maximums. Darren took the SR-IOV route to avoid this at this time.