Storage: Anatomy of a Vserver (Part 3)

(This post is the third in a series of posts around NetApp DataONTAP Cluster-Mode.  You can read the first two posts here and here.)

In the last post, we took a high-level look at what Cluster-Mode brings to the table.  To go much further, we’re going to have to get our hands dirty and talk techie for a bit.  Fear not; this one will only be a little more technical than the others.

Let’s start with my own definition of a Vserver.  This is not necessarily NetApp’s definition, but this is how I like to describe it to customers.

Datacenter Dude definition:

A Vserver is a memory-bound, cluster-wide, entity that fully abstracts all aspects of connecting backend storage from the physical array.  Vservers act as a “front-end” to the array, and virtualize access by way of a per-protocol, per-node interface known as a LIF, or logical interface.  Correlations can be drawn between Vservers and traditional virtual machines in the way that they take physical hardware and resources and abstract access to it, allowing for more efficient uses to be made of the physical investments.
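To make that a bit more concrete, here is a rough sketch of what carving out a Vserver looks like at the clustered ONTAP CLI.  The names vs1, vs1_root, and aggr1 are hypothetical, and exact options vary by release:

```
::> vserver create -vserver vs1 -rootvolume vs1_root -aggregate aggr1 -rootvolume-security-style unix
```

From there, everything a client sees (volumes, LIFs, exports) hangs off that Vserver rather than off any one physical node.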

So let’s break down a Vserver.  To understand Vservers, we need to understand some key things:

  • ifgrps
  • LIFs
  • Junction Paths (next post)
  • Export Policies (next post)

If you’re a NetApp customer, or familiar with the architecture, you’ll know that we have traditionally done physical port aggregation into VIFs.  These have been rebranded as ifgrps, similar to how they are managed in Linux.  I guess someone along the way thought VIF and LIF would be a little confusing.  Probably right.

Anyway, an ifgrp is the consolidation of one or more physical ports into a logical interface on the storage controller/node.  In its simplest form, it would look something like this:


A single port on the node, e0a, mapped to a LIF on the Vserver being used for management.  It is important to note that LIFs on Vservers can be designated for Data, Mgmt, or Both.  I would never recommend using “Both,” but certain controllers may have a limited number of ports, and you can always get away with VLAN’ing instead of segregating management onto its own interface.
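As a sketch, a management LIF on a bare port like e0a might be created along these lines (the node name and address are made up, and option names vary slightly between ONTAP releases):

```
::> network interface create -vserver vs1 -lif vs1_mgmt -role data -data-protocol none -home-node node01 -home-port e0a -address 192.0.2.10 -netmask 255.255.255.0
```

Here, -data-protocol none marks the LIF as management-only; allowing data protocols on the same LIF would be the “Both” case I just warned you about.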

If we were to expand on what this would look like in a more realistic scenario, it would look something like this:

These could be two 10GbE ports, e2a & e3a, both on separate cards (to cover card failure) and combined into ifgrp a0a.  From that ifgrp, you can further VLAN to a0a-1166 and a0a-527, and those VLANs would attach to specific LIFs in the Vserver.
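If you’re curious what that build-out looks like on the command line, it is roughly this (the node name node01 is hypothetical; the VLAN tags match the example above, and multimode_lacp assumes the switch side is configured for LACP):

```
::> network port ifgrp create -node node01 -ifgrp a0a -distr-func ip -mode multimode_lacp
::> network port ifgrp add-port -node node01 -ifgrp a0a -port e2a
::> network port ifgrp add-port -node node01 -ifgrp a0a -port e3a
::> network port vlan create -node node01 -vlan-name a0a-1166
::> network port vlan create -node node01 -vlan-name a0a-527
```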

LIFs.  Let’s talk about these LIFs a little more.  This is one of the hardest concepts for most people to grasp.  But the reality is, it’s just another logically abstracted layer of management and segregation.  As I mentioned earlier in my definition, a LIF is a per-node, per-protocol logical interface.  It’s just software.  Traffic cops, if you will.  Their sole purpose in life is to connect clients to their data, regardless of where it is currently residing on the cluster.
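As a sketch of how a LIF ties a protocol to a home port, an NFS data LIF living on one of those VLAN ports might be created like this (all names and addresses are hypothetical):

```
::> network interface create -vserver vs1 -lif vs1_nfs_01 -role data -data-protocol nfs -home-node node01 -home-port a0a-1166 -address 192.0.2.20 -netmask 255.255.255.0
```

The client mounts the LIF’s address; it never needs to know (or care) which physical node is currently serving it.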

If VMware’s VMs could span entire clusters of ESXi hosts, we could draw some closer correlations between the two, but essentially, we’re doing for storage what VMware has done for servers [and more].  We could equate the mobility of volumes and LUNs to an svMotion, but we don’t actually move the Vservers; we just re-home the LIFs that carry the data paths.
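For a NAS LIF, that re-homing amounts to a couple of commands, along these lines (names are hypothetical):

```
::> network interface migrate -vserver vs1 -lif vs1_nfs_01 -dest-node node02 -dest-port a0a-1166
::> network interface revert -vserver vs1 -lif vs1_nfs_01
```

migrate moves the LIF to a port on another node, and revert sends it back to its home port; the Vserver itself never goes anywhere.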

In the next post, we’ll continue the conversation to discuss Junction Paths and Export Policies.  Export Policies?!  Curious, isn’t it?  :)

As always, comments are welcome; they give me things to add into followup posts as well!  Thanks for reading!  Let me know what you’re excited about, and if there’s anything that isn’t clear enough, or that you need further detail on, please don’t hesitate to speak up!


Source: Images designed by my teammate, Peter Learmonth of NetApp, and were used with his permission.  Images may not be re-used without express written consent.

Comments:

Julian Wood (04/24/2012):
Keep going, these are excellent.  Having a hard time having to wait for the next instalment!

that1guynick (04/24/2012, in reply to Julian Wood):
Thanks for the feedback, Julian!  I’m trying to space them out a bit rather than just brain-dump everything all at once.  It’s also giving me more time to work on the complementary buildout/whiteboard videos!

(05/14/2012, in reply to that1guynick):
Didn’t you forget about cluster LIFs?  A LIF is either data, mgmt, or cluster?

Jason (10/16/2012):
If you have an existing NetApp system running 7-Mode, you can’t just convert it to Cluster-Mode, correct?  You would actually have to migrate the data off of the 7-Mode system to a system running Cluster-Mode, because the metadata in a 7-Mode volume is different from the metadata in a Cluster-Mode volume, correct?

(11/15/2012, in reply to Jason):
You can currently have NetApp PS help you with the move to a new clustered ONTAP system (only they have access to the “Volume Transition Wizard”).  As it sits, there is no “in place” upgrade from 7-Mode to clustered ONTAP.  svMotion, SME, SMSQL, etc. are going to be your friends when it comes to migrating data from 7-Mode to clustered ONTAP.

(05/01/2013):
Hi Nick, did you write about Junction Paths and Export Policies?

