The following is from Will Urban, and is an extension of this blog.

One of the sessions I gave was on best practices in a VMware and EqualLogic environment. I was joined by Andy Banta from VMware; we covered all sorts of best practices, and Andy went into detail on the new iSCSI initiator, how it has changed, and how to configure it. One section that got a lot of feedback was Data Drives in VMware. Many customers told us it helped them understand how to integrate all of the free EqualLogic tools into their virtual environment. There are three primary ways to connect data drives to your VMs in a VMware and EqualLogic environment, and each has benefits and disadvantages. This assumes your OS is running its C:\ or /root off a VMFS volume.

· VMDK on VMFS

· iSCSI in the guest (sometimes called Direct Connect)

· Raw Device Mapping (RDM)

VMDK on VMFS

A VMDK on a VMFS volume is probably the most commonly used approach for data drives: you carve out some additional space on a VMFS datastore and assign it to a virtual machine as an additional virtual disk.

Advantages

· Easy to configure - Add more storage to a VM from free space in existing Datastores or provision another Datastore

· Viewable in Virtual Center (clones, templates, storage vMotion, etc) - This keeps the administration overhead light as you always know where the data drive resides

· Doesn’t require Storage Team interaction - As long as the Datastore has free space you can add data drives to a VM without any storage interaction

· Allows for tiering of data drives to different VMFS volumes/pools based on workload - For example, you can have a Datastore on a 10K RAID 50 member while a database drive resides on a 15K RAID 10 volume

· Uses vSphere MPIO (new in 4.x) for high-bandwidth apps - Back in ESX 3.5 you couldn't get MPIO to an iSCSI volume, so you were usually limited to a single path; with vSphere MPIO those limitations are gone

Disadvantages

· No host integration with Auto Snapshot Manager/Microsoft Edition - Dell provides a free VSS application-consistent snapshot tool for NTFS volumes, SQL and Exchange. However, this tool cannot be leveraged on drives that are VMDKs on VMFS volumes

· No ASM/VE integration if separated from the Datastore the OS resides on - Dell provides a free VMware hypervisor-consistent snapshot tool for VMs. It takes snapshots of the VM, but if the OS drive and the data drive reside on different datastore volumes, Smart Copies aren't currently supported

· No real isolation of data for Data Protection/DR strategies - If you have a Tier 3 database F: drive on the same datastore volume as a Tier 1 app G: drive, you can't segment the protection, so you must protect the entire volume at the highest level of protection needed

VMDK on VMFS Best Practices

Configuring a VMDK on VMFS is pretty easy, but we have a couple of best practices for performance. Click the virtual machine you want to add the data drive to, then click Edit Virtual Machine Settings. Under Hardware choose Add -> Hard Disk -> Next, then Create a new virtual disk -> Next. Choose the size you want the VM to see and the disk provisioning options you want; this is also where you can keep it on the same VMFS Datastore as the VM or browse for another one. The next screen is an important step that is often overlooked: change the virtual SCSI adapter to a different one than the base OS drive is using. Set the Virtual Device Node to something like 1:0, 2:0, and so on for each additional device beyond the OS drive sitting on 0:0. Support has seen tremendous performance improvements from just these few changes.



This will also add a new SCSI controller to the virtual machine. For guest operating systems that support it (Windows 2008/2003 and RHEL 5, I believe), you can change the controller from LSI Logic to the Paravirtual SCSI adapter for better performance.
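If you prefer to script this instead of clicking through the Add Hardware wizard, here is a minimal pyVmomi sketch of the same change. The vCenter hostname, credentials, VM name and disk size are placeholders for your own environment, and the GUI steps above remain the procedure the article describes.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details and VM name -- replace with your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sql-vm-01")

# New paravirtual SCSI controller on bus 1; the OS disk stays on 0:0.
ctrl_spec = vim.vm.device.VirtualDeviceSpec()
ctrl_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
ctrl_spec.device = vim.vm.device.ParaVirtualSCSIController()
ctrl_spec.device.key = -101          # temporary key, resolved by the server
ctrl_spec.device.busNumber = 1
ctrl_spec.device.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

# New 100 GB thin-provisioned data disk at SCSI (1:0) on the VM's own datastore.
disk_spec = vim.vm.device.VirtualDeviceSpec()
disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
disk_spec.device = vim.vm.device.VirtualDisk()
disk_spec.device.controllerKey = -101            # attach to the controller above
disk_spec.device.unitNumber = 0                  # SCSI (1:0)
disk_spec.device.capacityInKB = 100 * 1024 * 1024
disk_spec.device.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
disk_spec.device.backing.diskMode = "persistent"
disk_spec.device.backing.thinProvisioned = True
disk_spec.device.backing.fileName = ""           # let vSphere place the VMDK with the VM

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[ctrl_spec, disk_spec]))
Disconnect(si)
```

The negative controller key is just a placeholder that lets the new disk reference the controller being created in the same reconfigure call; vSphere assigns the real keys when the task runs.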


Also, for operating systems that do not use a 64K alignment offset by default (Windows 2003, for example), we have seen performance improvements from aligning the partition when you format the drive.
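To make the alignment point concrete, here is a quick arithmetic check. The 32,256-byte figure is the classic Windows 2003 default starting offset (63 sectors x 512 bytes), which is why those partitions end up misaligned unless you create them with a 64K offset (diskpart can do this) before formatting; the helper name is just for illustration.

```python
# Quick arithmetic behind the 64K alignment rule. Windows 2003's default partition
# start is 63 sectors x 512 bytes = 32,256 bytes, which never lands on a 64 KB
# boundary; creating the partition with a 64K offset before formatting avoids the
# misalignment penalty.
SIXTY_FOUR_KB = 64 * 1024

def is_64k_aligned(starting_offset_bytes: int) -> bool:
    """True if the partition's starting offset falls on a 64 KB boundary."""
    return starting_offset_bytes % SIXTY_FOUR_KB == 0

print(is_64k_aligned(63 * 512))       # False -> the Windows 2003 default (32,256 bytes)
print(is_64k_aligned(SIXTY_FOUR_KB))  # True  -> a 64K-aligned data partition
```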

iSCSI in the Guest VM

Because iSCSI traffic is just standard network traffic, you can take advantage of iSCSI in the guest VM by using the guest VM’s iSCSI software initiator. This also allows vMotion and all of the other tools to work because to the VM, it’s just network traffic.

Advantages

· Overcome the 2TB limits of VMFS/RDM - If you use large file systems in your environment, this lets your VMs talk to volumes much larger than 2TB without resorting to extents

· Utilize host integration software such as Auto Snapshot Manager/Microsoft Edition and off-host backup - By attaching volumes with the iSCSI software initiator in the guest VM, you can take advantage of the data protection snapshot tools. This also lets you use transportable snapshot technologies with backup vendors to offload snapshots to a backup server, eliminating LAN traffic and backup window contention

· Isolates the data drives for Data Protection/DR strategies - This technique also allows you to isolate your data drives to a volume that may have different replication and snapshot strategies than the volume that houses the parent VM

· Can be mounted on physical or other virtual machines - If you have a VM crash, or just want to dismount and remount the data, you can easily do this through the iSCSI initiator

· Uses the same best practices as physical environments - This technique works exactly the same as in a physical environment. In fact, if you P2V a server that has iSCSI-attached volumes, you can continue to run the VM with very little change to the server

· Allows for tiering of data drives to different volumes/pools based on workload - Since the data drives reside on their own volumes, data that needs faster performance or more capacity can live on the SAN tier it requires, which may be different from the tier the VMs use

Disadvantages

· Isn’t visible to Virtual Center - Because the volume is managed from the VM itself, vCenter will not see it in the storage tab and you won’t see it connected to the VM when editing the properties. This can cause some additional management overhead

· Requires Storage Team intervention - Since you are creating a brand new volume, it needs to be presented to the VM rather than to the ESX environment. That means installing and configuring the iSCSI software initiator in the guest, connecting to the SAN with proper pathing, and configuring the volume's access so the VM can see it

· Needs to be considered in DR plans separately from the VMs - Because the volume isn't seen by vCenter or by VMware tools like SRM, you need separate considerations for protecting these volumes. If you forget to replicate the data drive and a DR event hits, all you have is the C: drive, and yeah, that's not cool

iSCSI in the Guest VM Best Practices

Configuring iSCSI in the guest is pretty straightforward: you treat it just like a physical server. Install the iSCSI initiator into the guest VM, connect to the SAN, and away you go. However, there are some considerations on the ESX side of the house. Normally the iSCSI network and the public LAN network are isolated, so you first have to make sure the VM can see the PS Series SAN. Then you want to make sure everything is configured properly so the guest can take advantage of MPIO if you need it.

In this example I'm going to show the changes to make on a standard vSwitch. There are probably a dozen ways to accomplish this, but this is one of them. The first step is to add additional Virtual Machine port groups to your vSwitch; in this example I'm calling them iSCSI Guest 1 and iSCSI Guest 2. Select the vSwitch that has iSCSI connectivity (you could also assign brand new NICs, as long as they are connected to the SAN) and select Properties. Click Add, select Virtual Machine, and then Next. (Note that because we are using iSCSI in the guest VM and not iSCSI from ESX, you don't want to select VMkernel.) Create two of these for the two iSCSI guest NICs. I'm using two here; you can create as many as you have iSCSI NICs.
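As an alternative to the vSwitch Properties dialog, the port groups can also be created programmatically. This is a minimal pyVmomi sketch that assumes si is the connection from the earlier sketch and that the host name, vSwitch name and VLAN ID are placeholders for your environment.

```python
from pyVmomi import vim

# "si" is the connection from the earlier sketch; host and vSwitch names are placeholders.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.example.com")
net_sys = host.configManager.networkSystem

# Two plain Virtual Machine port groups on the vSwitch that already carries iSCSI.
for pg_name in ("iSCSI Guest 1", "iSCSI Guest 2"):
    spec = vim.host.PortGroup.Specification()
    spec.name = pg_name
    spec.vlanId = 0                      # set your iSCSI VLAN ID if the SAN is tagged
    spec.vswitchName = "vSwitch1"        # the vSwitch with SAN connectivity
    spec.policy = vim.host.NetworkPolicy()
    net_sys.AddPortGroup(portgrp=spec)
```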


Once those are created, the next thing to do is guarantee traffic across the physical NICs. If you monitor esxtop, you'll see that even when you give a VM two virtual NICs, ESX doesn't guarantee that a single VM's traffic spreads across all the physical NICs, so we have to force it. You don't want to set it so explicitly that you lose failover, so we take advantage of vSwitch NIC Teaming instead. Inside the vSwitch, select the first iSCSI Guest port group and click Properties. Click the NIC Teaming tab and check Override vSwitch Failover Order. Then make one of the physical NICs Active and the other Standby (not Unused). Again, this is a different set of configuration changes than iSCSI in ESX, which is why I'm pointing out the differences. Do the same for the other iSCSI Guest port group, using the other adapter as Active.
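The same failover override can be scripted. The sketch below continues from the previous one (net_sys is the host's HostNetworkSystem) and assumes vmnic2 and vmnic3 are the two SAN-facing uplinks; only the NIC order is overridden, so everything else still inherits from the vSwitch.

```python
from pyVmomi import vim

# Continues from the previous sketch: "net_sys" is the host's HostNetworkSystem.
# vmnic2/vmnic3 stand in for the two SAN-facing uplinks on the vSwitch.
teaming = {
    "iSCSI Guest 1": ("vmnic2", "vmnic3"),   # (active, standby)
    "iSCSI Guest 2": ("vmnic3", "vmnic2"),
}

for pg_name, (active, standby) in teaming.items():
    spec = vim.host.PortGroup.Specification()
    spec.name = pg_name
    spec.vlanId = 0
    spec.vswitchName = "vSwitch1"
    spec.policy = vim.host.NetworkPolicy()
    spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy()
    spec.policy.nicTeaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy()
    spec.policy.nicTeaming.nicOrder.activeNic = [active]    # Active adapter
    spec.policy.nicTeaming.nicOrder.standbyNic = [standby]  # Standby, never Unused
    net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)
```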


The last thing to do is assign a new Ethernet adapter to the VM and choose one of the networks you just created as the Named network. Then use MPIO inside the VM, with either the Dell DSM for Microsoft or whatever the guest OS provides, and voila!
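Adding the guest NIC can likewise be scripted. This sketch wires a VMXNET3 adapter (the guest needs VMware Tools for its driver; an E1000 works too) to the iSCSI Guest 1 port group, with vm being the virtual machine object from the first sketch.

```python
from pyVmomi import vim

# "vm" is the virtual machine object from the first sketch.
nic_spec = vim.vm.device.VirtualDeviceSpec()
nic_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
nic_spec.device = vim.vm.device.VirtualVmxnet3()   # or VirtualE1000 for older guests
nic_spec.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
nic_spec.device.backing.deviceName = "iSCSI Guest 1"   # the port group created above
nic_spec.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()
nic_spec.device.connectable.startConnected = True

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic_spec]))
```

Repeat with iSCSI Guest 2 for the second adapter, then configure MPIO inside the guest just as you would on a physical server.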

Raw Device Maps

Raw Device Maps (RDMs) are used when you want to isolate the data onto its own volumes but still retain visibility inside vCenter.

Advantages

· Easy to configure – One volume = One RDM

· Viewable in Virtual Center Storage View - The volume will show up inside vCenter under storage and allows for vCenter integration with things like SRM

· Allows for tiering of data drives to different volumes/pools based on workload - You can isolate the data drive RDMs to another tier of storage for performance

· Uses vSphere MPIO (new in 4.x) for high-bandwidth apps - You still get high bandwidth with native MPIO for applications that need the performance

· Isolates data from the OS, and a lot of 3rd-party apps work with RDMs - Some 3rd-party backup, protection and tooling applications require some form of RDM

Disadvantages

· No guest integration with Auto Snapshot Manager/Microsoft Edition - ASM/ME cannot recognize RDM volumes for Smart Copy operations

· No VM integration with Auto Snapshot Manager/VMware Edition - ASM/VE doesn't recognize RDM volumes for Smart Copy operations for VM snapshots

· Each ESX server needs connectivity, so connection counts need to be evaluated - Since each ESX server needs to see the RDM volume, you will have multiple connections to the RDM even though only one server is using the volume at a time

When configuring RDMs, follow the same best practices as VMDK on VMFS: use a new virtual SCSI adapter, and the paravirtual adapter if it's supported.
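For completeness, here is a hedged pyVmomi sketch of attaching a raw LUN as an RDM on that second SCSI controller. It reuses vm and host from the earlier sketches, and the naa identifier is a placeholder for your EqualLogic volume as presented to every ESX host in the cluster.

```python
from pyVmomi import vim

# "vm" and "host" come from the earlier sketches; the naa identifier is a placeholder.
NAA = "naa.6090a028e0example"

# Locate the raw LUN on the host so the mapping can point at it and size the disk.
lun = next(l for l in host.configManager.storageSystem.storageDeviceInfo.scsiLun
           if isinstance(l, vim.host.ScsiDisk) and l.canonicalName == NAA)

# Reuse the second SCSI controller (bus 1) added for data drives earlier.
ctrl = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualSCSIController) and d.busNumber == 1)

rdm = vim.vm.device.VirtualDeviceSpec()
rdm.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
rdm.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
rdm.device = vim.vm.device.VirtualDisk()
rdm.device.controllerKey = ctrl.key
rdm.device.unitNumber = 1                            # SCSI (1:1); (1:0) holds the VMDK
rdm.device.capacityInKB = lun.capacity.block * lun.capacity.blockSize // 1024
rdm.device.backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
rdm.device.backing.deviceName = lun.deviceName       # /vmfs/devices/disks/naa...
rdm.device.backing.lunUuid = lun.uuid
rdm.device.backing.compatibilityMode = "virtualMode" # physicalMode excludes VM snapshots
rdm.device.backing.diskMode = "persistent"
rdm.device.backing.fileName = ""                     # mapping file stored with the VM

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[rdm]))
```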

Now, if I still have you reading, you must be wondering: which one of these do I use? Honestly, the answer is "it depends on your needs." People use all three for various reasons, and there isn't a rule that says you can only do it one way and exclude the others. Keep in mind your business goals, your protection strategies, and the integration with other Dell EqualLogic software. That will guide your decision, but also keep in mind the extra management tasks that come with each choice. When we talked to people at the User Conference, we got a good idea of how they planned to move forward with this information: for most of the NTFS/SQL/Exchange systems where they wanted ASM/ME, they chose iSCSI in the guest, while for virtual machines that just needed a 20GB F: drive for storage, a VMDK on VMFS met the requirement.

Until next time!