Many of you know Glenn from the podcast, but I’d like to introduce him to you all as the newest author and contributor here at DatacenterDude.com! When he showed me this op-ed style piece, I begged him to let me be the one to publish it, and he was gracious enough to say yes. Enjoy! -Nick-
My name is Glenn Sizemore, and I’ve had a saying for the past 15+ years of my career…
When it comes to technology, I have no religion. I will use ANYTHING that accomplishes the stated goal.
Prior to joining NetApp, I was responsible for a rather large datacenter belonging to a particular U.S. government agency. My job required me to constantly evaluate new technology and duct tape disparate systems together. It was this need to advance that brought me from Networking to Windows… from Windows to Virtualization… from Virtualization to Storage… and finally from Storage to Cloud. This has been a long journey. Along the way, I’ve forgotten more than I care to admit, but it taught me something:
There is no such thing as “The Way” to do ANYTHING.
This is a problem that has always plagued our entire industry. You might say I’m writing this in an effort to make NetApp look good, and to a certain extent, that would be true. NetApp has always seemed to take the path less traveled, and has largely developed all of its technology from scratch. Thus, our systems — in particular Data ONTAP — function very differently than the rest of the industry. Therefore, the vast majority of common knowledge simply doesn’t apply to us. Similarly, we’re sensitive to issues others never think about. This is an artifact of being an “engineering-first” organization. Over the years, we’ve been first to market with a very long list of capabilities. I use the word capabilities because, over time, our competitors have added similar capabilities to their kit. This leads to an annoying arms race, doesn’t it?
Let’s play this out…
First, it is declared that no one needs capability XYZ, so we fight an uphill battle to get it out there. Once we do successfully get it out there, everyone else seems to show up with a similar capability. We will then point to inventing capability XYZ in the first place, sometimes even alongside patents, while the competition chooses to highlight how their specific implementation is superior in some meaningless way.
For instance, remember all the FUD about block in Data ONTAP not being “real block”?
How about storage tiers?
Inline vs post process deduplication?
So why bring it up in the first place?
Customers, this is where you need to pay attention…
Vendors use these stupid little implementation details to convince you to buy their product. It’s a sales tactic, one that I will admit I have caught myself using in the past. It’s not typically a very effective tactic, but unfortunately it remains a popular one. I actually put most of this on the customers themselves. As a representative of NetApp I am rarely asked how we would solve a problem. I am instead almost always asked a loaded question like:
“Why is product X better than product Y?!”
“Do you have <insert-proprietary-patented-technology-that-a-vendor-told-the-customer-they-must-have>?!”
Answering such questions was always difficult until I realized something: product names are just words; what’s important is the capabilities the product provides.
However, when taking this more productive road, I sometimes run into what I can only compare to “religious fervor”. Lately, it has simply gotten out of control. Normally, I would ignore this… well, crap… and focus on doing my job to the best of my ability, but I’m really starting to get sick of all the debates that distract us from the real goal. And you should be too, Mr. Customer. Advancing our industry and finding new ways to solve problems is our core charter. Disruption is supposed to be our job, right?!
That’s what I keep hearing from the “experts”…
I don’t want you to just take my word for it, so let’s look at a couple of concrete examples. Over the last year, NetApp has made a game-changing string of announcements, but each of them has been met with some amount of this aforementioned religious fervor.
1) All-Flash FAS (AFF)

The irony here is that we had supported all-flash aggregates for some time, but their use was limited by cost and the hardware we had at the time. That all changed with the 8000 series, when all of a sudden Data ONTAP was running on the latest generation of hardware. Lo and behold, the performance was nothing short of breathtaking. I will share that prior to seeing the data, we in the TME community were all somewhat skeptical, and it was a common water-cooler conversation internally to debate the merits of Data ONTAP, SANtricity, and Mars OS. But when faced with the facts, it was impossible to argue. To quote myself when I first saw an 8060 AFF in action: “AFF is BOSS as F%&*!!!” Excited about the performance and enhancements, we joyously released our baby into the world, where it was met with endless FUDdy-duddy about WAFL. No data. No facts. Just uninformed assumptions and speculation. This is what I call religion in technology. Declaring that Data ONTAP simply cannot be used for an all-flash workload because it’s not “built from the ground up” is quite possibly the dumbest thing I have ever heard. Bottom line: how a solution is executed only matters when you’re trying to explain how it works.
The how should be the very last thing discussed. We as an industry desperately need to move the conversation back to the WHY. Why are you looking at all-flash in the first place? You didn’t drink the “all-flash datacenter kool-aid” from the vendors that ONLY sell all-flash, did you? That, my friends, is an equally silly notion. There is no such thing as bad technology, and every idea has its use. Everything has something that it’s great at. However, there is no such thing as a panacea in this industry. There is no one technology that will replace everything. Starting a conversation like that is a fool’s errand, and it breaks my heart every time a customer does so.
So what should the conversation revolve around? How about…
Why would a business need such a capability?
What would it be used for?
What problem would it solve?
How would it make your operations more efficient?
When a vendor starts the conversation with “technology X is good and technology Y is bad!”, it is divorced from the real world. It highlights a fundamental lack of experience designing, building, and implementing real systems. It is an attribute of academia, and a luxury we can no longer afford — and one that I can no longer tolerate.
Let’s move past the flash debacle that haunts our current advancement as a species and on to something less controversial. How about…
2) Hyper-Converged Infrastructure (HCI)
What is that, you ask? Well, I’d love to tell you! But so would every other vendor! They’ve all filtered the definition to the point where it is almost meaningless. The generic HCI definition has been tailored by each vendor to highlight something that is specific to their solution. It’s like Cloud in 2004…
Me, personally? I consider the business driver behind HCI to be simple economics. People are the most expensive component in any architecture. HCI is about not having a dedicated management admin assigned to manage the infrastructure for a project. It has absolutely nothing to do with the physical. It has nothing to do with which technologies are used. Why isn’t the conversation cast in this light? Well, if you’re not removing the people, all you’re doing is paying more for the kit itself, and that is something that every business understands. Actually spelling out the benefit of something would limit its addressable market. By leaving it vague it can be sold to almost anyone, at least once.
HCI rant aside, this is relevant in the larger discussion, so let’s continue down this rabbit hole for a moment…
We hadn’t even finished distributing the press release before the pundits pounced, calling it “…not really HCI” because it used a physical FAS. This, ladies and gentlemen, is religion: our implementation doesn’t match your self-serving, filtered definition and is therefore wrong. This line of thinking is poisonous to the industry. It’s not productive. You want to challenge our ability to match the TCO/ROI using a physical FAS? Absolutely; we would be happy to answer that question!
Given my economics-based premise on the problem HCI solves, I would answer that by simply deploying Data ONTAP, we’re enabling the infrastructure efficiency HCI demands. Remember, in the EVO:RAIL case we’re talking about VMware vSphere, and we’ve had the ability for a virtualization admin to completely manage his or her storage from within vSphere for YEARS. The missing element was initial setup: you still needed a storage guy to do the initial deployment. NIERS removes this requirement, meaning a smaller site with limited staff can deploy without that specialized expertise. Speaking of business benefits, this approach also allows even greater operational efficiency in data management. That is, after all, what Data ONTAP is great at! It also allows NetApp customers to deploy these administrator-light HCI configurations while the storage team back at corporate still gains all of our industry-leading space efficiencies, like deduplication and compression, and continues to utilize features such as SnapMirror, SnapVault, FlexClone, and more. These are but a few of the very long list of business benefits that led to the development of NIERS in the first place.
The question we asked ourselves was, “How can we build on the concept of EVO:RAIL and still empower our customers as we always have?”
Unfortunately, that’s not the conversation we’re having on the inter-tubes, because such a conversation would require people to challenge their own premise. Instead, some of the louder pundits have simply dismissed it altogether, claiming it’s not HCI and effectively removing it from any and all conversation in the space. To be fair, they are a minority, and if PEX proved anything this year, it’s that we can’t ship NIERS fast enough. However, as with any other belief-based debate, facts mean nothing to this sort.
Finally, I arrive at the catalyst for writing this…
Last week, we announced the NetApp NFS Connector for Hadoop, a simple client-side plug-in that was originally prototyped by a summer intern. This was not a play to duct-tape Data ONTAP onto Hadoop. No product manager looked at a market segment and said, “How do we get Data ONTAP into that market?!” It was the intuition of an intern, who realized that a business could mine valuable information from its existing datasets if it could directly access the mountains upon mountains of data that already reside on NFS-connected storage. Like almost every technology at NetApp, it was born from the realization that we could help our customers develop a competitive advantage. We do, of course, have a world-class business team as well as some of the best engineering in the industry, and those teams realized we could potentially lower the upfront cost of Hadoop and introduce new provisioning models that could improve its ROI and performance. Thus the project was green-lit and moved from the hands of an intern to a team of craftsmen, and last week we announced the realization of that idea, which was then met by — you guessed it — the religious fervor I’ve grown to despise. It was viewed by some as a blatant attempt to stay relevant, and other such nonsense, in a series of short-sighted failures to see the forest for the trees. Instead of considering the capabilities it could provide, it was attacked for not conforming to what everyone claims is “the right way to do it”.
- Never mind the fact it can be used in conjunction with traditional JBOD HDFS.
- Never mind the implications of combining NetApp Private Storage with AWS, Azure, or SoftLayer for affordable temporary Hadoop farms.
- Never mind the fact it’s going to be an open source project that actually works with any NFS target!
It didn’t conform to the preconceived conventions and thus must be shunned and dismissed. This type of binary thought is only appropriate when considering the influence of a higher entity and one’s own soul. That’s why we refer to such arguments as religious debates. They simply cannot be won because belief and faith are so intimate that it is impossible to separate logic from emotion. Couching a discussion in this light is a shortcut to anger — the most powerful of emotions — and one that is most likely to lead to action.
Think about it. How much stupid crap do you read on the internet every day? How many times do you actually comment on something? Are YOU, personally, more likely to leave a comment on a post you love or one you hate? Vendors know this; they depend on the hardwiring that makes us human. They turn us into puppets fighting proxy wars over things that fundamentally just don’t matter — not to us, and not to the companies we work for. I include myself in this generalization. However, I have grown to realize it is fundamentally not productive. It feels great! It is a lot of fun! But it leads to no real progress…
Real progress is only achievable with open and honest debate.
I work for NetApp because I truly believe our products are superior. However, you shouldn’t buy anything from NetApp because of my beliefs. You should instead demand an understanding of the business impact of our products and solutions, not just a list of tech specs and XYZ capabilities. Challenge us to describe how we can help. Dare us to prove to you how we can change your business. It’s entirely possible that the Account Rep or reseller misunderstood your requirements based on those lists of tech specs. Maybe you don’t need a MetroCluster after all … then again maybe you really do!
I don’t expect any of this to change as it’s just part of the game. Always has been. Always will be. However, I would ask anyone who made it this far to really think about the core premise…
If All-Flash FAS can hit the required performance at the requested price point, and those extra features make a meaningful impact on daily operations, in what world is that not the right fit? If it’s prohibitively expensive, or if said features will never be utilized, it’s a poor fit and shouldn’t be considered. However, those assessments should be made based on their impact on daily operations and the fundamentals of the business, not on the religious beliefs of the implementer or some [arguably marginalized] pundit on the Internet. Technology is not the place for religious debates. It is simply math; technology takes no sides. Organizations that treat it as such are the most likely to find the breakthrough that leads to real progress. In the meantime, the rest of us will be over here…
…sitting on the anchor of humanity, being shitty to each other on the internet.