This blog is written by Varun Joshi from Dell Hypervisor Engineering Team.
What is NPAR?
NPAR stands for NIC Partitioning, a feature that lets users split a single 10GbE port on the network adapter into four static partitions. Each partition appears to the server and to the OS as a separate PCI Express physical function, i.e., as a separate physical NIC, and bandwidth and resources can be allocated to each partition as needed. Each partition can be configured as NIC (Ethernet), FCoE, or iSCSI.
Each partition supports its own set of networking features, depending on how it is configured.
Recent generations of Dell PowerEdge servers come with an NDC (Network Daughter Card), which gives the flexibility to choose from a wide range of network adapters. The Dell PowerEdge network adapter portfolio also includes NPAR-capable NDCs such as the Broadcom BCM57810 NetXtreme II 10GbE, the Broadcom BCM57712 NetXtreme II 10GbE, the QLogic QME8262-k dual-port 10Gbps Ethernet-to-PCIe adapter, and the QLogic QLE8262 dual-port 10Gbps Ethernet-to-PCIe adapter.
Steps to configure NPAR on Dell PowerEdge servers:
1. Select NIC 1 and choose the Network Partitioning Configuration menu.
2. Select the Global Bandwidth menu and ensure Flow Control is set to Auto.
3. Set up each of the four partitions with a specific size and a specific weight.
4. Configure each partition as required for iSCSI, FCoE, or Ethernet.
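The size/weight scheme from the steps above can be sketched as follows. This is only an illustrative model of how relative weights and per-partition caps might translate into bandwidth on a 10GbE port, not Dell's actual firmware logic; all names and figures are hypothetical.

```python
# Illustrative sketch (not Dell firmware logic): relative weights plus
# per-partition maximum sizes translated into bandwidth on a 10GbE port.

PORT_SPEED_GBPS = 10.0

def allocate_bandwidth(partitions):
    """partitions: list of dicts with 'weight' (relative share) and
    'max_gbps' (per-partition cap). Returns allocated Gbps per partition."""
    total_weight = sum(p["weight"] for p in partitions)
    return [
        min(PORT_SPEED_GBPS * p["weight"] / total_weight, p["max_gbps"])
        for p in partitions
    ]

# Four partitions of one 10GbE port, e.g. Management / vMotion / iSCSI / VM traffic
parts = [
    {"weight": 10, "max_gbps": 1.0},   # Management, capped at 1 Gbps
    {"weight": 30, "max_gbps": 10.0},  # vMotion
    {"weight": 40, "max_gbps": 10.0},  # iSCSI
    {"weight": 20, "max_gbps": 10.0},  # VM Network
]
print(allocate_bandwidth(parts))  # -> [1.0, 3.0, 4.0, 2.0]
```

The `min()` cap models the partition "size" while the weight models its guaranteed relative share of the port.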
Use Cases of NPAR in VMware vSphere Environment
In a VMware vSphere environment, users can assign each individual NIC partition (NPAR) to a specific kind of traffic (e.g., management traffic, vMotion traffic, iSCSI traffic, FCoE traffic, VM network traffic) with a customized bandwidth allocation, optimizing utilization of the underlying physical adapters.
Example configuration of traffic allocation on NPAR in a vSphere environment:
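Since the original example table is not reproduced here, the following is a hypothetical allocation for one 10GbE port split into four partitions, expressed as a small Python mapping. All vmnic names, minimum percentages, and caps are illustrative only.

```python
# Hypothetical NPAR traffic allocation for one 10GbE port split into four
# partitions (vmnic names and bandwidth figures are illustrative only).
npar_allocation = {
    "vmnic6":  {"traffic": "Management", "min_pct": 5,  "max_gbps": 1.0},
    "vmnic8":  {"traffic": "vMotion",    "min_pct": 25, "max_gbps": 10.0},
    "vmnic10": {"traffic": "iSCSI",      "min_pct": 40, "max_gbps": 10.0},
    "vmnic12": {"traffic": "VM Network", "min_pct": 30, "max_gbps": 10.0},
}

# Sanity check: guaranteed minimum shares of the port must not exceed 100%.
assert sum(p["min_pct"] for p in npar_allocation.values()) <= 100
```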
vSphere enumerates these NPAR partitions according to their PCI function numbers. For example, port 0 exposes device functions 0/2/4/6 (which appear as vmnics 6/8/10/12) and port 1 exposes device functions 1/3/5/7 (which appear as vmnics 7/9/11/13).
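The enumeration pattern above can be captured in a few lines: even PCI functions belong to port 0 and odd functions to port 1, and vSphere names the partitions vmnicN in function order. Note the base index (6 in this example) depends on how many other NICs the host already has; this sketch just reproduces the example given.

```python
# Sketch of the enumeration pattern described above for a dual-port NPAR
# adapter: even PCI functions -> port 0, odd functions -> port 1, and the
# partitions become vmnicN in function order (base index is host-specific).

def npar_partition(function, base_vmnic=6):
    """Map a PCI function number (0-7) to (port, vmnic name)."""
    port = function % 2  # even functions -> port 0, odd -> port 1
    return port, f"vmnic{base_vmnic + function}"

# Reproduce the example from the text:
port0 = [npar_partition(f) for f in (0, 2, 4, 6)]
port1 = [npar_partition(f) for f in (1, 3, 5, 7)]
print(port0)  # [(0, 'vmnic6'), (0, 'vmnic8'), (0, 'vmnic10'), (0, 'vmnic12')]
print(port1)  # [(1, 'vmnic7'), (1, 'vmnic9'), (1, 'vmnic11'), (1, 'vmnic13')]
```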
If I have a Dell R720 and I enable NPAR on a dual-port 10GbE NIC, with the onboard quad-port NIC used for iSCSI, will I exceed the configuration maximums for VMware?
I am using 5.1, BTW.
I think the configuration maximums are with respect to the number of physical ports present in the system. With NPAR, you are just enabling partitions of a single physical port to split the traffic.
However, could you please provide vmkernel.log from your system with NPAR enabled and with NPAR disabled? From this we can make out if you really exceed the config maximums when NPAR is enabled.
Thanks for your response. According to a VMware support rep, it's not based on physical ports but on what VMware 'sees'. So if you carve the NICs up, you can easily exceed the maximums. Also be very careful if, like me, you mix the two. I'm awaiting confirmation.
Hello, could you please provide vmkernel.log from your system with NPAR enabled and with NPAR disabled? From this we can make out if you really exceed the config maximums when NPAR is enabled.
Thanks so much again for your response. I have sent you over the logs.
Any news on this?
Regarding the configuration maximums when NPAR is enabled? Yes, the partitions are counted as separate 10GbE ports and are therefore included when counting toward the configuration maximums. Darren took the SR-IOV route to avoid this for the time being.