…and boy are we at NetApp ever on the front lines!
It seems fitting that a post regarding a “revolution” would be posted around Independence Day. Rich Castagna, Editorial Director over at searchstorage.techtarget.com, posted a nice writeup a few days ago, and as a NetApp customer turned employee, I’d like to respond to some of his points.
“Something’s afoot with data storage, and it looks like some big changes may be looming on the not-too-distant horizon.”
Big changes have been on the horizon at NetApp for some time, going all the way back to 2003 with the acquisition of Spinnaker. While I cannot reveal too many details at this point, if you’ve been paying attention since ONTAP 8 was released a couple of years ago, as well as some of our most recent Cloud-based press releases, you can begin to connect the dots.
“The signs of a real shakeup are emerging, with some core storage technologies and disciplines finally being scrutinized and questioned. Even bedrock storage principles seem a little iffy these days. Are file systems relevant anymore? Is RAID really the best way to protect data?”
Like I said above, the signs are there. Connect the dots. Rich is spot-on with his assessment here. Personally, I believe that file systems will always be relevant; they are typically application- or OS-driven constructs and necessities. But at the same time, look at something like iOS, where an app is just an app: you have no idea where the data is or how it’s getting to your device, you just know it’s getting there. Things like this, to me, are what will make file systems irrelevant to the end user. The focus will begin to shift to PaaS, such as what VMware released recently with Cloud Foundry, or Apple’s iCloud. As Howard Hughes said in The Aviator, it’s “The way of the future…”
With specific regard to RAID, I think Rich brings up a touchy subject that a lot of other, more traditional storage vendors are still struggling with. They still argue the need for many different optional RAID levels, while at NetApp we settled on one long ago: RAID-DP, our own proprietary closest cousin to RAID 6, providing double-disk parity. But the secret sauce is in the software layer and the way we stripe across the underlying RAID groups. To answer Rich directly, I believe RAID will always have its place as a hardware layer of fault tolerance. Even 100 years from now, when spinning disks are relics of a former era, you’ll still be building in some level of fault tolerance at the physical media layer, even if that’s redundant chips of flash, or whatever our grandkids will be provisioning then.
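For readers who want to see the core idea in miniature: parity protection at its simplest is just XOR across a stripe, and RAID-DP layers a second, diagonal parity on top of that so two simultaneous disk failures survive. The Python toy below is my own illustration of the XOR half of that story (it is not NetApp’s actual algorithm): build a row-parity block for a stripe, then rebuild any one lost block from the survivors.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

def make_stripe(data_blocks):
    """Return the data blocks plus their row-parity block (single parity).
    RAID-DP would add a second, diagonal parity block alongside this one."""
    return data_blocks + [xor_blocks(data_blocks)]

def recover(stripe, lost_index):
    """Rebuild one missing block by XOR-ing all the surviving blocks."""
    survivors = [b for i, b in enumerate(stripe) if i != lost_index]
    return xor_blocks(survivors)

# Three toy "disks" worth of data; the fourth block is parity.
stripe = make_stripe([b"\x01\x02", b"\x0f\x00", b"\xaa\x55"])
assert recover(stripe, 1) == b"\x0f\x00"  # lost disk 1, rebuilt from the rest
```

With only row parity, a second failure in the same stripe is unrecoverable; that is exactly the gap the diagonal parity in double-parity schemes closes.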
“For storage, the hardware part is nearly as basic; the components that make up the storage environment have pretty much remained the same since the idea of networked storage surfaced nearly 20 years ago. Everything is bigger, faster and safer, but the model has essentially remained the same. Some of the most profound changes that have come to storage, like IP-based networks and solid-state storage devices, seem more evolutionary than revolutionary.”
Look, the data is always going to have to live somewhere. And whether it’s spinning disk or that 1PB flash drive we’ll all have eventually, the concept is the same. This will always be storage in its simplest form: data at rest. I would disagree that the model has remained the same, at least with regard to NetApp. While most other vendors settled on a siloed, purpose-built model, developing/purchasing/acquiring individual platforms for each layer of connectivity/workload, NetApp decided long ago that connectivity, access, and throughput should all be ubiquitous. You wanna work over NFS? Cool. Are you one of those cats that still thinks Fibre Channel is better? Cool. How about iSCSI? Yup. OK. We’re going to give you all of that in a single operating system that functions the exact same way on every platform in our line, from the smallest VSA all the way up to our Enterprise-class 6200 series. A universal operating system (Data ONTAP) that treats metal only as a means to scale. And with the exception of FCoE, the latest addition to that protocol lineup, it’s been that way at NetApp for an extremely long time.
While I agree that both IP connectivity and solid-state media are more evolutionary than revolutionary, I think we need to look at the bigger picture. These are smaller tools in a much bigger shed, and the way they’re being aggregated and used in larger enterprise platforms adds up to far more than the individual components. THAT, to me, is what is revolutionary.
“…thin provisioning, automated tiering and storage virtualization are tangible steps in the right direction. As data storage software plays a bigger role, storage hardware becomes more of a commodity, which is fine for storage managers but not a very comfortable situation for storage vendors.”
I would ask Rich, were I able, what his definition of “storage virtualization” is? If I were to guess, I would reckon he would be referring to basically what I like to describe as, “doing for storage what VMware did for servers.” If this is the case, then please take a look at ONTAP 8. With the introduction of cluster-mode, and the addition of DataMotion (I like to call it “nMotion”), we can now dynamically move volumes from one set of disks/controllers to another, non-disruptively. This can cover many scenarios, such as failures, hardware refreshes, or planned DR. I can tell you that this is a big deal over at NetApp and you’re going to continue to see innovation from us in this area in a big way.
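To make the concept concrete, here is a toy model of what any non-disruptive volume move has to accomplish conceptually: copy a baseline while the source keeps serving I/O, converge on the writes that land mid-copy, then cut over. Every name in this sketch is hypothetical; it illustrates the general pattern, not DataMotion’s actual mechanics.

```python
def live_migrate(source, incoming_writes):
    """Toy model of a non-disruptive volume move (illustrative only):
    1. take a baseline copy while the source still serves I/O,
    2. converge by replaying writes that arrived during the copy,
    3. cut over once the delta queue drains."""
    dest = dict(source)                      # 1. baseline copy
    while incoming_writes:                   # 2. converge on deltas
        key, value = incoming_writes.pop(0)
        source[key] = value                  # write lands on the live source...
        dest[key] = value                    # ...and is mirrored to the target
    return dest                              # 3. cutover: dest is now live

vol = {"blk0": b"data", "blk1": b"more"}
writes = [("blk1", b"updated"), ("blk2", b"new")]  # arrive during the move
new_vol = live_migrate(vol, writes)
```

The point of the pattern is that clients never see an outage: the source stays writable the whole time, and the cutover happens only once source and destination are identical.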
Thin Provisioning? Done. Long ago. And I would argue our model far outclasses the others. Most vendors are just getting around to this, or have had to go out and purchase entire companies to accomplish it. Again, more WAFL/ONTAP secret sauce winning here.
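If you’ve never thought about what thin provisioning actually means, a toy model helps: promise a large logical size up front, but consume physical space only when a block is first written. This sketch is purely my own illustration of the concept, not WAFL’s implementation.

```python
class ThinVolume:
    """Toy thin-provisioned volume (illustrative only): blocks consume
    physical space only on first write, so the logical size promised
    to the host can far exceed what is actually allocated."""

    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks  # size promised to the host
        self.blocks = {}                      # only written blocks exist here

    def write(self, index, data):
        if not 0 <= index < self.logical_blocks:
            raise IndexError("write beyond provisioned size")
        self.blocks[index] = data             # allocate on first write

    def physical_used(self):
        return len(self.blocks)               # blocks actually consumed

vol = ThinVolume(logical_blocks=1_000_000)    # promise a million blocks
vol.write(0, b"hello")
vol.write(42, b"world")
print(vol.physical_used())                    # only 2 blocks consumed
```

The flip side, of course, is that the admin has to watch aggregate free space, since the sum of promised logical sizes can exceed the physical pool.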
Automated Tiering? How about not having to tier at all? Doesn’t that sound much easier? I prefer layers of cache myself, from the hardware all the way up to the app itself. We coined the phrase Virtual Storage Tiering to describe the way we rely on levels of cache throughout the stack; with SATA + FlashCache we’re able to achieve performance comparable to expensive FC disks, all without having to move data between different disk types (i.e., what is commonly referred to as “tiering”). The argument over tiering is done in my book. Next topic please.
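The cache-instead-of-tiering argument is easy to model: put a small, fast cache in front of a big, slow pool and let the hot blocks gravitate into the cache on their own, with no background data movement between disk types. The LRU sketch below is my own toy illustration of that general idea, not how FlashCache is implemented.

```python
from collections import OrderedDict

class CachedStore:
    """Toy fast-cache-over-slow-pool model (illustrative only): hot reads
    are served from a small cache; misses fall through to the slow backing
    store and populate the cache, with least-recently-used eviction."""

    def __init__(self, backing, cache_size=2):
        self.backing = backing           # the big, slow pool (e.g. SATA)
        self.cache = OrderedDict()       # the small, fast layer
        self.cache_size = cache_size
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)  # refresh LRU position
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]        # slow path: go to the pool
        self.cache[key] = value          # hot data populates the cache
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)  # evict least-recently-used
        return value

store = CachedStore({"a": 1, "b": 2, "c": 3})
store.read("a"); store.read("a"); store.read("b")
print(store.hits, store.misses)          # 1 2
```

Note what is absent: nothing ever migrates data between pools. The working set self-selects into the cache, which is the essence of the caching-versus-tiering distinction.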
With regards to hardware being commoditized, and that being a little unsettling for storage vendors, I will politely and wholeheartedly disagree. Every major storage vendor likely sells as much software as hardware, if not more. And with all of the technologies we’re putting in place to save you from having to buy so much hardware, and to be as efficient and flexible as possible, having the right toolset to manage all of that is almost a prerequisite. With our new 3200 and 6200 lines of systems, we’ve included the ONTAP Essentials ToolKit (this is ProtectionMgr, ProvisioningMgr, and OpsMgr, for those familiar), providing the software needed to analyze, manage, and provision your entire environment and get the best use out of your gear. Does this present any concern to us? We just reported a very large increase in market share, and revenue was up $1B this past fiscal year. I would argue: No.
But this was my favorite part of Rich’s post…
“What they don’t want to acknowledge is that we all know that most of that Web 2.0, social networking and big-data stuff is useless junk. And if we continue to collect and protect all those billions of bits of digital detritus that come our way, we’ll just get buried in it or distracted with the process of making believe we’re actually managing it all.”
I’d love to hear what Mark Zuckerberg has to say about this one. Pretty bold claim there, Rich. All of the stuff you listed as useless detritus has completely changed (read: revolutionized), for better or worse, society as a whole. And will only continue to do so.
And I can guarantee you, we’ll be right there on the front lines of the revolution with them.