NetApp Clustered ONTAP Design Deep Dive

Posted on 12/11/2012.

I’ve been waiting a long time to get to this post.  I knew it needed to be talked about, I wanted to make sure we did it right, and I finally found the right person to speak as an SME on the content.  This is, to me, the culmination of understanding one of the most important parts of Clustered ONTAP…

Vservers.

I’d like to introduce you all to Karl Rautenstrauch.  He is a Solutions Architect with NetApp, focusing on Clustered ONTAP.  I first met Karl last year, and the first thing you notice about this man is his outgoing passion.  We got into a conversation about his session at Insight while in Dublin, and it turns out he was presenting on one of the most popular questions of late:

How do I design my Vserver layout?

Karl is supporting our largest customers as they move to Clustered ONTAP.  He eats, sleeps, and breathes Clustered ONTAP, and has done so for the last two years.  That’s earned him the name “Klustered Karl” amongst his peers.

With that, I’m going to hand you over to Karl…  Enjoy!

 

First of all, please allow me to air some dirty laundry.

I hate the name Vserver.

Ok, hate is not a strong enough word.  How about – loathe?  Detest?  Despise?

Why does one simple word rile me up so much?  It does an injustice to what we have delivered to our customers with Clustered ONTAP.  What we have built and delivered is a virtual storage array that abstracts data access (networking – SAN/NAS) and data layout (volumes/LUNs) from the client.

Let’s start with a brief review of Clustered ONTAP.  A cluster is anywhere from one to twelve highly available storage controller pairs connected via a high-speed, low-latency private fabric (aka the Cluster Interconnect).  The Vserver (or Virtual Array, pass it on) is the storage array as the client sees it, and it resides on top of all the available hardware.  It maps to the IP addresses, host name, and/or Fibre Channel addresses clients attach to.  The addresses they see are mapped to virtual adapters that can share physical interfaces or be placed on dedicated interfaces.  We call these virtual adapters LIFs (short for Logical Interfaces).  Mapping a LUN is done through the Virtual Array.  A file share or export is mapped or mounted via the Virtual Array.  Those LUNs and NAS volumes?  They can reside on any piece of physical hardware in the cluster.  The Virtual Array abstracts where they physically reside from the NAS or SAN client thanks to the federated namespace each Vserver offers.  This allows us to provide powerful and transparent data mobility capabilities no matter what protocol you use to access your data.

[Diagram 1: Clustered ONTAP virtual storage array]
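To make that picture concrete, here is a minimal sketch of the client-facing view from the cluster shell.  The vserver name is hypothetical, and exact field names can vary by ONTAP release:

    # The client only ever sees the Virtual Array's LIF addresses and namespace paths...
    cluster1::> network interface show -vserver vs1 -fields address,curr-node,curr-port

    # ...while the volumes behind those paths can live on any node and aggregate in the cluster.
    cluster1::> volume show -vserver vs1 -fields aggregate,junction-path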

 

Oh, and as the next diagram illustrates, you can have one or more Virtual Arrays sharing some, all, or none of that backend clustered hardware.  Think of the power and flexibility you can provide to your end users by creating secure, logically separated storage arrays versus deploying separate physical arrays.  Isolated islands versus a dynamic pool that can make the best use of your available pooled capacity and performance headroom.  Is it better to distribute the workload evenly, or to isolate portions of it to limit impact and/or guarantee a level of performance unaffected by the rest of the apps in the cluster?  That’s the power an Agile Data Infrastructure delivers: the power to choose the best solution without having to purchase separate products to deliver it.

 

[Diagram 2: Clustered ONTAP SAN and NAS client connections]

So, how can you best reap the benefits of what clustered ONTAP, and more specifically the Virtual Array, offers?  When it is time to help a customer architect their clustered ONTAP solution, I focus on a few important areas of Virtual Arrays:

  • The number of Virtual Arrays to create
  • Whether or not to physically isolate some components
  • Data Access patterns
  • Naming conventions – boring, I know, but I am a former admin.  Old habits die hard!

 

The Number of Virtual Arrays

The answer is four.  Wait, the question was not how many Super Bowls in a row my beloved Buffalo Bills lost?  I’m kidding, of course!  The real answer is one I always hate to give any customer, but here it rings true: “It depends.”  Do you host infrastructure for customers of your own?  Do you need to provide secure separation between lines of business or divisions in your corporation?  Do you separate management of SAN and file resources between different teams?  Or are you consolidating multiple storage systems into one?  All are excellent reasons to have more than one Virtual Array.

Each Virtual Array can join a separate Active Directory or LDAP domain.  Each can have different admin accounts defined.  The volumes, LIFs, and LUNs on one Virtual Array cannot be seen or accessed from another.  Access to storage pools can be limited to a specific Virtual Array (or arrays).  The Virtual Arrays in clustered ONTAP are truly separate from one another even though they share the same hardware.
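As a rough illustration of carving out two tenants (hypothetical names, some required options omitted for brevity, and syntax that differs a bit between ONTAP releases):

    # Create a Virtual Array per tenant or line of business
    cluster1::> vserver create -vserver vs_finance -rootvolume vs_finance_root -aggregate aggr_node1_sas01 -rootvolume-security-style ntfs
    cluster1::> vserver create -vserver vs_eng -rootvolume vs_eng_root -aggregate aggr_node2_sas01 -rootvolume-security-style unix

    # Each Virtual Array can join its own Active Directory domain...
    cluster1::> vserver cifs create -vserver vs_finance -cifs-server FINCIFS -domain finance.example.com
    cluster1::> vserver cifs create -vserver vs_eng -cifs-server ENGCIFS -domain eng.example.com

    # ...and each can be restricted to its own set of storage pools (aggregates)
    cluster1::> vserver modify -vserver vs_finance -aggr-list aggr_node1_sas01,aggr_node1_sas02
    cluster1::> vserver modify -vserver vs_eng -aggr-list aggr_node2_sas01,aggr_node2_sas02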

 

Physical Segmentation

A Virtual Array can access all storage pools and interfaces (LAN and SAN) that are available in a cluster.  But what if you have SLAs for an application or consumer that require guaranteed available capacity or performance?  Dedicating storage pools, interfaces, and even storage controllers to that application will allow you to meet those requirements.  I’m sure you’re asking, “But wait a minute.  Why would I not just return to the old days of dedicating an entire array to that application?”  At some point, you could run out of capacity in that dedicated array and be forced to add a second.  That would present a new array to manage and new connections from the client to the array.  Clustered ONTAP allows you to expand the existing Virtual Array and offer the application more of what it needs from the same array.  Goodbye, sheet-metal boundaries.  That’s the beauty of the Virtual Array!
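As a sketch (hypothetical names, not a verified recipe), pinning a high-SLA application’s Virtual Array to dedicated aggregates and a dedicated 10GbE port might look like this:

    # Restrict the Virtual Array to aggregates owned by its dedicated HA pair
    cluster1::> vserver modify -vserver vs_oracle -aggr-list aggr_node3_sas01,aggr_node4_sas01

    # Place its data LIF on a dedicated 10GbE port on one of those nodes
    cluster1::> network interface create -vserver vs_oracle -lif vs_oracle_nfs01 -role data -data-protocol nfs -home-node cluster1-03 -home-port e4a -address 10.10.20.50 -netmask 255.255.255.0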

 

Data Access Patterns

Let’s stay on the above example for one more minute.  What if the application I was discussing changed over time?  It went from a gusher of reads and writes to a trickle.  My handy OnCommand suite will help me see that, and I can take advantage of the resources in the cluster as a whole to move the data set to a lower-cost / lower-performance storage pool, on a lower-cost / lower-performance node, with a lower-cost / lower-performance interface – or any combination of the three!
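The move itself is a single non-disruptive command.  A minimal sketch with hypothetical volume and aggregate names:

    # Relocate the now-quiet data set to a lower-cost SATA aggregate on another node
    cluster1::> volume move start -vserver vs_app -volume app_data01 -destination-aggregate aggr_node4_sata01

    # Clients keep reading and writing through the same LIFs while the move completes
    cluster1::> volume move show -vserver vs_app -volume app_data01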

NAS-specific workloads offer another design decision.  Will I use all the network interfaces at my disposal, or access my share or export in a direct manner?  Remember, a Virtual Array is a cluster-wide entity and CAN use resources from anywhere in the cluster.  It also abstracts the volume location from the client.  So in a cluster, I can access a volume from any node where I have created a LIF in that Vserver.  In diagram two, LIFs 1, 2, 3, or 4 can all provide read/write access to the volumes belonging to V1 on the backend.  The Cluster Interconnect (the backend fabric) and namespace abstraction allow me to get from “A to Z” no matter where I start out.  The cluster knows where the data resides and how to get you there.
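In practice, that just means giving the Virtual Array a data LIF on each node you want to serve clients from.  A sketch with hypothetical names:

    # One data LIF per node; any of them can serve any volume in the Vserver's namespace
    cluster1::> network interface create -vserver vs1 -lif vs1_lif_n1 -role data -data-protocol nfs,cifs -home-node cluster1-01 -home-port e0c -address 10.0.0.11 -netmask 255.255.255.0
    cluster1::> network interface create -vserver vs1 -lif vs1_lif_n2 -role data -data-protocol nfs,cifs -home-node cluster1-02 -home-port e0c -address 10.0.0.12 -netmask 255.255.255.0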

“But Karl, wouldn’t that add extra latency?”  Not really.  Take a look at our SPECsfs results (http://www.spec.org/sfs2008/results/res2011q4/sfs2008-20111003-00198.html).  In that benchmark, 23 out of every 24 read and write requests traversed the interconnect.  High-speed and low-latency.  Nothing in life is free, though.  There’s always a price to pay, right?  In this case, we prioritize data access requests over system-level activities.  Shame on us for wanting to service reads and writes as fast as possible!  As a result, heavy use of the interconnect means less available bandwidth for volume mobility: it will take longer to move a volume between nodes in the cluster.  So, I advise customers to map high-bandwidth workloads to direct data paths (LIF and volume on the same node) and use the interconnect for high-client-count but low-bandwidth workloads like home directories and departmental shares.  Again, the beauty of the Agile Data Infrastructure shines.  Two different workload requirements serviced by one shared infrastructure.  Workload separation and distributed workloads in one platform!
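If you want to confirm, or restore, a direct data path for one of those high-bandwidth workloads, comparing where the volume lives with where the LIF lives is a quick check.  Again, a sketch with hypothetical names:

    # Where does the volume physically live, and where is the LIF currently hosted?
    cluster1::> volume show -vserver vs1 -volume app_data01 -fields aggregate,node
    cluster1::> network interface show -vserver vs1 -lif vs1_lif_n1 -fields home-node,curr-node,curr-port

    # If they differ, either move the volume next to the LIF...
    cluster1::> volume move start -vserver vs1 -volume app_data01 -destination-aggregate aggr_node1_sas01

    # ...or re-home the LIF onto the volume's node and revert it there
    cluster1::> network interface modify -vserver vs1 -lif vs1_lif_n1 -home-node cluster1-01 -home-port e0c
    cluster1::> network interface revert -vserver vs1 -lif vs1_lif_n1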

SAN workloads leverage the magic of ALUA to find a LUN and do not factor into Vserver component design.  We can focus on SAN best practices at another time.

 

Naming Conventions

Traditional storage systems have one thing in common: when you create something, it stays put.  So why not put the location in the name?  Controller1_Aggr1_Vol1 makes sense, right?  I can produce an easy-to-read report that way!

LUNs, volumes, and even LIFs can be moved in clustered ONTAP, so why put a physical location in their name?  They do belong to a Virtual Array, so feel free to use that name in their title, and you can reap the rewards of Virtual Array immortality during a hardware refresh.  You replaced a Year 2012 storage controller pair with a Year 2015 model, but the Virtual Array stayed the same.  No one knows you did what you did, and the naming convention still applies.

Now some resources are still attached to a physical location, so naming accordingly makes sense to me.  It makes the provisioning process easier.  Node1_Aggr1_FlashPool tells me EXACTLY where I am going to place a new or existing LUN.  Node1_LACP_10Gb lets me know my new LIF will offer smokin’ performance for an application if I place it on that channel group.
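Putting both halves of that convention together, a provisioning step might look like this (hypothetical names, purely to show the pattern):

    # Physical resources keep the location in their name; logical, movable resources carry the Virtual Array's name
    cluster1::> volume create -vserver vs_eng -volume vs_eng_home01 -aggregate node1_aggr1_flashpool -size 500g -junction-path /home01
    cluster1::> network interface create -vserver vs_eng -lif vs_eng_nfs01 -role data -data-protocol nfs -home-node cluster1-01 -home-port a0a -address 10.0.0.21 -netmask 255.255.255.0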

 

Goodbye for now

Thanks for taking the time to read this post and thanks to Nick for granting me some virtual real estate on his blog.  Look for more posts going forward on clustered ONTAP best practices.  I’m happy to share what I have learned with you.

 

Klustered Karl, eh?  I think with continued content like this, it is certainly going to stick.

Many thanks to Karl for this kind of content.  We need it more and more these days in a world filled with Buzzword Bingo.  I’m just as guilty of it as anyone, so it makes me very happy when I can return to more technical depths such as this.

If you have any Clustered ONTAP questions for Karl, I’m sure he’ll be monitoring the comments section!
