Storage: Anatomy of a Vserver (Part 3)

By Nick on 04/23/2012.

(This post is the third in a series on NetApp Data ONTAP Cluster-Mode.  You can read the first two posts here and here.)

In the last post, we took a high-level look at what Cluster-Mode brings to the table.  To go much further, we’re going to have to get our hands dirty and talk techie for a bit.  Fear not, though this post will be a little more technical than the others.

Let’s start with my own definition of a Vserver.  This is not necessarily NetApp’s definition, but this is how I like to describe it to customers.

Datacenter Dude definition:

A Vserver is a memory-bound, cluster-wide entity that fully abstracts access to backend storage from the physical array.  Vservers act as a “front-end” to the array and virtualize access by way of a per-protocol, per-node interface known as a LIF, or logical interface.  Correlations can be drawn between Vservers and traditional virtual machines in the way that they take physical hardware and resources and abstract access to them, allowing more efficient use to be made of the physical investment.
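
To put that definition in slightly more concrete terms, here is a rough sketch of carving out a new Vserver from the cluster shell.  The names (vs1, vs1_root, aggr1) are placeholders I’ve made up, and the exact required options vary a bit by release, so treat this as illustrative rather than copy-and-paste:

    cluster1::> vserver create -vserver vs1 -rootvolume vs1_root -aggregate aggr1 -ns-switch file -rootvolume-security-style unix
    cluster1::> vserver show -vserver vs1

Notice that nothing in that command ties vs1 to any one node.  The Vserver exists cluster-wide; only its root volume lands on a particular aggregate.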

So let’s break down a Vserver.  To understand Vservers, we need to understand some key things:

  • ifgrps
  • LIFs
  • Junction Paths (next post)
  • Export Policies (next post)

If you’re a NetApp customer, or familiar with the architecture, you’ll know that we have traditionally done physical port aggregation into VIFs.  These have been “rebranded” as ifgrps, similar to how they are managed in Linux.  I guess someone along the way thought VIF and LIF would be a little confusing.  Probably right.

Anyway, an ifgrp is the consolidation of one or more physical ports into a logical interface on the storage controller/node.  In its simplest form, it would look something like this:

 

Here we have a single port on the node, e0a, mapped to a LIF on the Vserver that is used for management.  It is important to note that LIFs on Vservers can be designated for Data, Mgmt, or Both.  I would never recommend using “Both,” but certain controllers may have a limited number of ports, and you could always get away with VLANing instead of dedicating a separate interface to management.
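
As a rough sketch (the node name, LIF name, and addresses are invented for illustration), creating a Vserver management LIF on that single port would look something like this:

    cluster1::> network interface create -vserver vs1 -lif vs1_mgmt -role data -data-protocol none -home-node node01 -home-port e0a -address 192.168.1.50 -netmask 255.255.255.0 -firewall-policy mgmt

The -data-protocol none and -firewall-policy mgmt combination is what marks this LIF as management-only for the Vserver; check the docs for your ONTAP release for the exact options.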

If we were to expand on what this would look like in a more realistic scenario, it would look something like this:

These could be two 10GbE ports, e2a & e3a, each on a separate card (to protect against card failure), combined into ifgrp a0a.  From that ifgrp, you can further VLAN into a0a-1166 and a0a-527, and those VLANs would attach to specific LIFs in the Vserver.
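
Built up from the CLI, that configuration would look roughly like the following.  The node name, ifgrp mode, and distribution function are illustrative (and the switch side needs a matching port-channel and VLAN configuration), but the port and VLAN names match the example above:

    cluster1::> network port ifgrp create -node node01 -ifgrp a0a -distr-func ip -mode multimode_lacp
    cluster1::> network port ifgrp add-port -node node01 -ifgrp a0a -port e2a
    cluster1::> network port ifgrp add-port -node node01 -ifgrp a0a -port e3a
    cluster1::> network port vlan create -node node01 -vlan-name a0a-1166
    cluster1::> network port vlan create -node node01 -vlan-name a0a-527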

LIFs.  Let’s talk about these a little more.  This is one of the hardest concepts for most people to grasp, but the reality is that it’s just another logically abstracted layer of management and segregation.  As I mentioned earlier in my definition, a LIF is a per-node, per-protocol logical interface.  It’s just software.  Traffic cops, if you will.  Their sole purpose in life is to connect clients to their data, regardless of where that data currently resides in the cluster.
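
For example (names and address invented), an NFS data LIF living on one of the VLAN ports from the earlier example might be created like so:

    cluster1::> network interface create -vserver vs1 -lif vs1_nfs_01 -role data -data-protocol nfs -home-node node01 -home-port a0a-1166 -address 10.11.66.50 -netmask 255.255.255.0

The LIF belongs to the Vserver, not to the node; node01 and a0a-1166 are simply its current home.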

If VMware’s VMs spanned entire clusters of ESXi hosts, we could draw an even closer correlation between the two, but essentially, we’re doing for storage what VMware has done for servers [and more].  We could equate the mobility of volumes and LUNs to a Storage vMotion, but we don’t actually move the Vservers themselves; we simply re-home the LIFs that provide the data paths.
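
To sketch what that mobility looks like in practice (again, the LIF, node, volume, and aggregate names are placeholders), you can re-home a LIF to a port on another node, send it back home, or move a volume to a different aggregate, all while clients stay connected:

    cluster1::> network interface migrate -vserver vs1 -lif vs1_nfs_01 -destination-node node02 -destination-port a0a-1166
    cluster1::> network interface revert -vserver vs1 -lif vs1_nfs_01
    cluster1::> volume move start -vserver vs1 -volume vol1 -destination-aggregate aggr2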

In the next post, we’ll continue the conversation and discuss Junction Paths and Export Policies.  Export Policies?!  Curious, isn’t it?  :)

As always, comments are welcome, and they give me things to add to follow-up posts as well!  Thanks for reading!  Let me know what you’re excited about, and if there’s anything that isn’t clear enough, or that you need further detail on, please don’t hesitate to speak up!

-Nick

Source: Images were designed by my teammate, Peter Learmonth of NetApp, and are used with his permission.  Images may not be re-used without express written consent.
