Storage DRS Followup

Over the last week, I’ve had some great feedback and conversations regarding my previous post on Storage DRS in vSphere 5. I thought I’d take a few minutes to expand upon the topic a bit more, and revisit a few of the points that I made, in response to some of this feedback.

To start, let’s look at the fundamental value prop of SDRS in a little more depth…

[Image: pod.png]

1) SDRS recommends the datastore on which a VM should be deployed, based on the available capacity and I/O utilization of the datastores within a cluster.

This new capability (some are dubbing it “initial placement”) allows admins to stop the guessing game around where to deploy a new VM by introducing historical data into the decision-making process. By recommending a datastore based on its I/O capabilities and available storage capacity, SDRS lets us deploy a VM without the unexpected migration of its storage, later on, to a different datastore better able to meet its needs.
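To make the idea concrete, here is a deliberately simplified sketch of that decision in Python. This is not VMware’s actual SDRS algorithm; the datastore fields, the latency threshold, and the “most free space wins” tie-breaker are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    capacity_gb: float
    used_gb: float
    avg_latency_ms: float   # observed I/O latency over the sampling window

    @property
    def free_gb(self) -> float:
        return self.capacity_gb - self.used_gb

def recommend_datastore(datastores, vm_size_gb, latency_threshold_ms=15.0):
    """Pick the datastore with the most free space among those that can hold
    the VM and whose observed I/O latency is under the threshold."""
    candidates = [ds for ds in datastores
                  if ds.free_gb >= vm_size_gb
                  and ds.avg_latency_ms <= latency_threshold_ms]
    if not candidates:
        raise RuntimeError("No datastore satisfies the space and I/O constraints")
    return max(candidates, key=lambda ds: ds.free_gb)

pod = [
    Datastore("ds01", 2048, 1500, 9.0),
    Datastore("ds02", 2048, 600, 22.0),   # roomy, but running hot on I/O
    Datastore("ds03", 2048, 900, 7.5),
]
print(recommend_datastore(pod, vm_size_gb=300).name)   # -> ds03
```

The real engine evaluates far richer metrics over time, but the rough shape of the decision, filter on I/O health and then place by capacity, is exactly the guessing game it takes off the admin’s plate.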

2) SDRS automates the migration of VMs if a Datastore is put into Maintenance Mode.

Any tool that reduces the time and steps involved in executing a maintenance plan is music to the ears of any VI administrator. Less time in the data center resolving issues is the goal we all strive for at the end of the day.
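For a sense of what SDRS is automating here, the sketch below does the manual version with pyVmomi: storage-vMotion every VM off a datastore you want to take down for maintenance. The vCenter host, credentials, and datastore names are placeholders, and the loop ignores templates and per-disk placement for brevity; treat it as an illustration, not production tooling.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)
content = si.RetrieveContent()

def find_datastore(name):
    """Look up a datastore by name using a container view."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    try:
        return next(ds for ds in view.view if ds.name == name)
    finally:
        view.DestroyView()

source = find_datastore("ds-old")   # datastore headed for maintenance
target = find_datastore("ds-new")   # where its VMs should land

# Storage vMotion every registered VM off the source datastore.
for vm in list(source.vm):
    spec = vim.vm.RelocateSpec(datastore=target)
    print(f"Relocating {vm.name} to {target.name} ...")
    WaitForTask(vm.RelocateVM_Task(spec=spec))

Disconnect(si)
```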

3) SDRS clusters enable VMDK affinity and anti-affinity rules. These rules keep the VMDKs of a VM either together on a single datastore or spread across separate datastores.

This feature is very valuable for a number of reasons. From the perspective of VMware and NetApp, VMDK affinity rules allow admins to ensure faster recovery times by keeping a VM’s disks together: SRM protection groups, or whatever protection software you use, might (in the future) be able to point at that single layer and grab all of the child objects underneath it.

In contrast, the anti-affinity policies make it possible for, say, an MS SQL VM to keep the VMDKs holding its logs and datafiles on separate datastores, and potentially on different disk groups underneath the datastores they reference. Ask any DBA: this is always cited as a best practice. That said, it doesn’t affect every array vendor the same way, and I want to emphasize this.

Note: Due to the way NetApp virtualizes I/O on its way to the spinning disks, keeping your VMDKs in the same datastore truly will not hurt performance. Datastores are merely logical, abstracted containers on top of the underlying disks. In fact, you will likely LOSE deduplication and snapshot efficiency by separating them. BUT! The ability to have a more intelligent mechanism ensuring the logical separation of data with SDRS is a promising concept that should not be overlooked or left unevaluated.
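A quick way to see what an anti-affinity rule is enforcing is to check where a VM’s disks actually live. The snippet below is a read-only helper that assumes a pyVmomi connection like the one in the earlier sketch; the function names are mine, not part of any VMware or NetApp tooling.

```python
from pyVmomi import vim

def vmdk_placement(vm):
    """Return {VMDK label: datastore name} for every virtual disk on the VM."""
    placement = {}
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            placement[device.deviceInfo.label] = device.backing.datastore.name
    return placement

def violates_anti_affinity(vm):
    """True if two or more of the VM's VMDKs share a datastore
    (e.g. SQL logs and datafiles sitting on the same volume)."""
    datastores = list(vmdk_placement(vm).values())
    return len(datastores) != len(set(datastores))
```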

[Image: vmdk-affinity.jpg]

As you can see, SDRS does advance the intelligence and integration between VMware and all storage arrays, including NetApp. However, there’s a big difference between JBOD storage and advanced storage arrays that I think warrants some further conversation. As such, I’d like to revisit some of the technical guidance I presented in my previous blog, and add some clarity and detail to a few of the points I made in order to help those planning to enable SDRS in the near future.

  • Datastore clusters are a new concept in vSphere 5, and they are comprised of a collection of traditional datastores. To restate, a datastore cluster should be comprised of the same type of storage media (SAS, SATA, SSD, etc.), residing on the same array, and configured with identical replication and protection settings (a trivial sketch of this grouping follows the last bullet below). By following this guidance, the storage platform can ensure consistent delivery of services (performance and protection) to this new abstracted and pooled layer. Array-based technologies like FlashCache dynamically accelerate the performance of the datastores that make up the datastore cluster, but they are invisible at the vSphere level. As such, their use does not factor into this equation, and they can often help more when left to work on their own, without interference from third-party mechanisms that lack the storage intelligence your array already has.
  • When using SDRS with thin-provisioned VMFS LUNs or NFS FlexVols, be aware that there may not be enough physical storage capacity underneath the destination datastore. VAAI is used to ensure there is enough capacity prior to initiating the migration, but if the amount of physical capacity changes during the migration, the migration will fail (a simple pre-flight check along these lines is sketched after the note below).

Note – this issue is a concern for every storage vendor offering thin provisioning. It is also an area where NFS datastores excel, as they can be provisioned with policies that automatically grow the datastore to accommodate unexpected capacity growth. If you haven’t checked out NFS, maybe now is a good time.

(Disclaimer:  I am a huge 10GbE NFS fanboy.)
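Here is what that pre-flight check might look like in the same pyVmomi style as the earlier sketches. Note the caveat from the bullet above: vSphere only sees the logical free space of a datastore, so on a thin-provisioned LUN or FlexVol the physical capacity underneath can still shrink mid-migration. The helper names and the 10% headroom figure are arbitrary choices for the example.

```python
# Assumes the pyVmomi connection and find_datastore() helper from the earlier sketch.
def has_room_for(vm, datastore, headroom=1.10):
    """Return True if the datastore's reported free space covers the VM's
    committed storage plus a 10% cushion (an arbitrary safety margin).
    Caveat: freeSpace is the logical free space vSphere sees, not the
    physical capacity left under a thin-provisioned LUN or FlexVol."""
    needed = vm.summary.storage.committed * headroom
    return datastore.summary.freeSpace >= needed

# Example usage (names are placeholders):
#   vm = find_vm("sql01")
#   target = find_datastore("ds-thin-01")
#   if has_room_for(vm, target):
#       WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target)))
```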

  • While migrating data between datastores will cause you to lose the deduplication savings for that VM, the savings will return upon the next scheduled dedupe update. Dedupe updates are usually scheduled to run either nightly or just before snapshot backups and replication updates. This is a best practice that hopefully everyone has adopted by now; if not, reach out to me and I can point you in the right direction.
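Going back to the first bullet above, the homogeneity guidance is easy to express as a grouping rule. The sketch below is purely illustrative; the DatastoreInfo fields are invented for the example, and in practice the media, array, and protection metadata comes from your array and vCenter inventory rather than from vSphere alone.

```python
from collections import defaultdict
from typing import NamedTuple

class DatastoreInfo(NamedTuple):
    name: str
    media: str        # "SAS", "SATA", "SSD", ...
    array: str        # which array backs it
    protection: str   # replication / protection policy label

def group_for_clusters(datastores):
    """Group datastores so each candidate cluster shares media type,
    backing array, and protection settings."""
    groups = defaultdict(list)
    for ds in datastores:
        groups[(ds.media, ds.array, ds.protection)].append(ds.name)
    return dict(groups)

inventory = [
    DatastoreInfo("ds01", "SAS", "netapp-a", "snapmirror-4h"),
    DatastoreInfo("ds02", "SAS", "netapp-a", "snapmirror-4h"),
    DatastoreInfo("ds03", "SATA", "netapp-a", "none"),
]
for key, members in group_for_clusters(inventory).items():
    print(key, "->", members)   # ds01/ds02 can form a cluster; ds03 cannot join it
```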

After re-reading my last post several times, I do feel that while the content and data points are accurate, it carries a bit of a negative overtone. It was never intended that way, and it should be read in its intended context: protecting our customers’ best interests. If it came across otherwise, I hope you will take this follow-up into account as an additional resource. The two posts belong together, and both contain valid, factual points.

I hope to both protect our customers in their virtualization journey, as well as continue to promote VMware as the datacenter platform of choice, as I always have.

-Nick
