This is the first post in a series on the latest storage innovation from VMware known as Virtual Volumes, or VVols.
The biggest change in virtual storage since VMFS has landed, so I wanted to start at the foundation and review what’s different about datastores in the world of VMware Virtual Volumes.
For years people have described VMFS and NFS datastores as logical: abstracted entities leveraged by the hypervisor that remove the need for systems and virtualization administrators to care about storage. This had unintended consequences, as admins ultimately had to care about storage more than anything else in the virtual datacenter.
In my opinion, traditional datastores are a lot like aliases or symlinks exposed to the hypervisor, and they ultimately led to all kinds of necessary innovations to enable VM admins to manage storage. And boy, did they have to manage storage. It’s no secret to anyone who’s managed a virtual datacenter that storage management can be overwhelming.
VMFS. VAAI. VADP. SIOC. VAIO. The list goes on …
We witnessed the birth of the generalist. They became mainstream, as virtualization admins had to know more and more about storage in order to efficiently run their virtual infrastructure. Storage vendors and other third parties developed entire business units around virtualization and created plugin after plugin.
All for VMware. All for YOU. And we loved every minute of it. All of a sudden, we ruled your datacenter, and storage became priority one to ensure you were able to run and protect your virtual workloads.
Sometime in 2010, VMware gathered several of the biggest storage vendors together and told them about a new idea they had called “Virtual Volumes.” When I joined NetApp in 2011, I distinctly remember hearing about it for the first time, as VMware had approached NetApp to build and develop the NFS portion of the spec with them. I remember being completely confused and borderline standoffish, because at the time I was such a hardcore VMware fanboy that I couldn’t even fathom things being any better with storage than they already were.
So, a dev tiger team went off and started working with our core Data ONTAP team, because certain things would need to be built into the storage OS as well. This was bigger than just a plugin.
The last two years have been a bit of a waiting game, but that time let us really hone our approach.
VMware Storage in a VVol world
I would like to posit today that Virtual Volumes are the true logical abstraction of storage to the ESXi hypervisor. Gone are the days of a datastore being tied to a particular protocol, and certain features only being supported by one protocol or the other. VVols allows all I/O to flow through a new construct called the Protocol Endpoint (PE). For NetApp specifically, we line it up like this:
One “PE” per host per protocol per container.
Here is an early concept diagram that I’m working on to help you understand a bit better…
That might take a minute to sink in, but VVols introduces all kinds of new layers and boundaries you need to be aware of. For more details, you can check out our recent podcast interview with Rawlinson Rivera where we do a deep-dive on the architecture and why it matters.
- Our VASA Provider runs as a virtual appliance, not a webservice running on-box local to the storage array. This affords us some nice flexibility around deployment options, which we’ll discuss even further in another post.
- While we maintain the “container” spec on the storage controller, we expose that inside the vSphere client as what we call a “VVOL Datastore,” which maps 1:1 to a storage container on the array. As of this writing, we are the only storage company supporting more than one storage container. This has huge benefits for advancing policy-based management even further, and it allows us to reach an unmatched scale, all with the simplicity of selecting a policy and picking a compatible storage source to house your VM. Neat, huh?
- NetApp’s approach to protocol endpoint deployment is also truly unique (remember: one PE per protocol per host per container!). All I/O is now virtualized across the PEs, and not tied to the limitations of a particular protocol or the storage controller it’s mounted from. This is agility at scale, and we have embraced it wholly.
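To make the scaling rule above concrete, here’s a rough back-of-the-envelope sketch of how PE counts grow under “one PE per host per protocol per container.” This is purely illustrative — the function name and numbers are my own assumptions, not a real VMware or NetApp API.

```python
# Hypothetical sketch of the PE scaling rule described above:
# one Protocol Endpoint per host, per protocol, per storage container.
# The function and example figures are illustrative assumptions only.

def pe_count(hosts: int, protocols: int, containers: int) -> int:
    """Number of PEs implied by the one-per-host-per-protocol-per-container rule."""
    return hosts * protocols * containers

# Example: an 8-host cluster speaking both NFS and iSCSI against two
# storage containers (each exposed 1:1 as a "VVOL Datastore").
print(pe_count(hosts=8, protocols=2, containers=2))  # 32 protocol endpoints
```

The takeaway is that PE count is multiplicative, which is exactly why virtualizing I/O across the PEs, rather than tying it to a single mount point, matters at scale.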
What you should take away from this post is that things as we know them are about to change. VVols is not a simpler infrastructure by any stretch of the imagination. There is definitely a learning curve, so don’t be too frustrated if you don’t “get it” right away. One of the goals of this series is to shed some light on the architecture, how it interplays with the storage array, and specifically, what we’re doing at NetApp to take advantage of this new way of life in virtual storage.
Datastores are finally TRULY logical in nature: a conduit that lets VM admins stop caring about storage beyond what’s in their policies, all while exposing proprietary storage array value-add directly to them.