This is a common question our SC/SA group often hears from the end-user community. Over time, and after multiple customer deployments using EqualLogic and VMware, we have come to understand specific considerations and guidelines that help determine the right mix of VMs to volumes/datastores. Additionally, VMware has continued to make huge leaps in optimizing the relationship between storage I/O and ESX.

First, some ground rules. Although the theoretical upper maximum is 3011 VMs in a single VMFS datastore, in practice the number is much lower. In most cases, it is the underlying storage array(s) that dictates the performance of a VMFS datastore and the VMs that reside within it. This brings up the topic of ESX's storage queues. The default queue depth can be increased, but 128 should be sufficient in nearly all cases. Placing multiple VMs within a volume/datastore means those VMs share the device queue, whereas a single VM in a single datastore has complete and uninterrupted access to the queue. Although structuring your storage as a 1:1 volume->datastore->VM relationship may be good for that one VM, it can decrease overall performance while also creating a large number of VMFS datastores, a high iSCSI connection count, and management overhead. So the end result is: keep it simple and leave the defaults in place.
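The queue-sharing trade-off above can be sketched with a simple worst-case model. This is an illustration, not a VMware API; the queue depth of 128 matches the value discussed above, and the VM counts are hypothetical.

```python
# Illustrative model of how a shared device queue divides across VMs
# on one VMFS datastore when all of them are issuing I/O at once.

def per_vm_queue_share(device_queue_depth: int, vm_count: int) -> float:
    """Worst-case average number of queue slots available to each VM
    when every VM on the datastore is driving I/O concurrently."""
    if vm_count < 1:
        raise ValueError("vm_count must be at least 1")
    return device_queue_depth / vm_count

# One VM per datastore owns the entire queue...
print(per_vm_queue_share(128, 1))   # 128.0
# ...while 16 busy VMs sharing a datastore contend for 8 slots each.
print(per_vm_queue_share(128, 16))  # 8.0
```

In practice most VMs are not driving I/O simultaneously, which is why consolidating many VMs per datastore with the default queue depth works well for typical workloads.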

One important improvement in vSphere over previous releases is the ability to drive more throughput from the ESX host to a VMFS datastore. In VMware Infrastructure 3 (VI3), the maximum throughput from a given ESX host to an iSCSI target was limited to 1Gb due to the single-TCP-session limitation of the vmkernel iSCSI initiator. Today, vSphere users can run multiple threads and TCP sessions from ESX to the target and leverage native path-management policies such as Fixed, Most Recently Used, and Round Robin, and, in EqualLogic environments, a Path Selection Plugin (PSP) providing Windows-like MPIO connection load balancing. Furthermore, 10Gb Ethernet provides an abundance of options and, when properly combined with Storage I/O Control (SIOC), introduced in vSphere 4.1, can really enhance overall efficiency.
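The throughput ceiling described above can be expressed as a back-of-the-envelope calculation. The numbers here are assumptions for illustration (nominal link speeds, protocol overhead ignored), not vendor specifications.

```python
# Rough model: each active iSCSI TCP session can fill at most one
# link, so the theoretical ceiling is sessions * link speed.
# VI3's single-session initiator is the sessions=1 case.

def max_iscsi_throughput_gbps(link_gbps: float, active_sessions: int) -> float:
    """Theoretical aggregate throughput from one host to one target,
    ignoring TCP/iSCSI protocol overhead."""
    if active_sessions < 1:
        raise ValueError("active_sessions must be at least 1")
    return link_gbps * active_sessions

print(max_iscsi_throughput_gbps(1.0, 1))  # VI3-style single session -> 1.0
print(max_iscsi_throughput_gbps(1.0, 4))  # four round-robin sessions -> 4.0
```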

Additionally, beginning with vSphere 4.1, the scalability of VMFS datastores was further enhanced with the vStorage APIs for Array Integration (VAAI). Previous releases faced the real possibility of SCSI reservation/locking contention during certain operations (a VM snapshot, for example). In large-scale VM deployments, locking a resource, even for a minute, is unacceptable.
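A simplified model can show why this matters. The durations and operation counts below are hypothetical; the point is that a legacy SCSI reservation pauses the entire LUN for every metadata update, while VAAI's hardware-assisted locking affects only the region being changed.

```python
# Hypothetical illustration: total time a LUN is unavailable to other
# hosts during a burst of metadata operations (e.g. snapshot activity).
# With VAAI hardware-assisted locking, the LUN-wide stall is zero;
# with legacy SCSI reservations, every operation stalls the whole LUN.

def lun_stall_seconds(metadata_ops: int, reservation_ms: float,
                      vaai: bool) -> float:
    """Cumulative seconds the whole LUN is blocked for other hosts."""
    return 0.0 if vaai else metadata_ops * reservation_ms / 1000.0

# 200 metadata operations at a hypothetical ~20 ms reservation each:
print(lun_stall_seconds(200, 20.0, vaai=False))  # 4.0 seconds of stalls
print(lun_stall_seconds(200, 20.0, vaai=True))   # 0.0
```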

The result is this: you are not limited by any hard rule other than the configuration and type of storage. EqualLogic arrays running firmware version 5.x or above fully support VAAI and scale very easily to accommodate any capacity or disk I/O need. Additionally, EqualLogic provides in-depth SAN monitoring with SANHQ, allowing administrators to track and monitor how the virtual datacenter is performing.

Final Analysis:

  • Start with single extents and a single LUN
  • Do not use spanned VMFS just because you want one big VMFS volume
  • Do not exceed the single-volume/single-partition limit of 2TB minus 512 bytes
  • Monitor disk queues in SANHQ and vCenter performance reports
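As a rough sketch of the last point, a saturation check like the one below can flag when a datastore's queue is nearing its depth. The 80% threshold is an assumption chosen for illustration, not a VMware or EqualLogic recommendation.

```python
# Hypothetical helper: flag a datastore whose average outstanding
# I/O count (as reported by a monitoring tool) approaches the
# device queue depth, suggesting the datastore is overloaded.

def queue_saturated(avg_outstanding_io: float,
                    device_queue_depth: int = 128,
                    threshold: float = 0.8) -> bool:
    """True when average outstanding I/O exceeds the given fraction
    of the device queue depth."""
    return avg_outstanding_io >= threshold * device_queue_depth

print(queue_saturated(110))  # True  -- consider spreading VMs out
print(queue_saturated(40))   # False -- headroom remains
```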