This post…where do we start. I've intentionally held off on posting about NetApp's recent SPC-1 results for the past couple of weeks. I needed some time to fact-check, do some research, and eat some popcorn while reading through all of the FUD going around. To understand the controversy, you need to do some quick homework, what I'd consider "required reading"…
Here's a link to the Full Disclosure report from June 18, 2012, directly from storageperformance.org, and here's a link to a blog by Dimitris Krekoukias of NetApp outlining the details of the results. Chris Mellor from The Register also has a pretty nice writeup comparing latency and $/IOPS, as well as some all-flash arrays.
It can essentially be boiled down to the following:
- HP thinks their 3PAR results are superior, but no one cares about a million IOPS if you're delivering them at an arguably unusable latency (i.e., greater than 10 ms).
- EMC "LOL's" at our results, even though they have never posted results themselves. Perhaps they're too chicken to show their real list prices? Or perhaps they're too scared to participate in a test that truly levels the playing field, one where they can't control the output? It's easy to throw popcorn from the bleachers.
- Opponents are obsessed with the idea that we used only 40% of capacity, when the number of IOPS per spindle at 84% used capacity versus 40% used capacity was PROVEN to be almost identical.
There’s plenty more, and we’ll cover as much as possible, but the reason I am writing this post is to make one thing CRYSTAL CLEAR…
NetApp IS a tier-1 Enterprise Storage Array. PERIOD.
(…oh and we also do SAN/NAS in the same box, not to mention the industry’s only true scale-out platform with Cluster-Mode)
I’m really tired of hearing the following…
- NetApp is good for tier 3 and backup (as if to passively imply we aren't good at anything else)
- NetApp is good for SMB and mid-range NAS (as if to passively imply we’re not good enough for tier 1)
- NetApp is not a true SAN, it’s just a NAS with SAN “bolted-on.”
Bolted-on? More like Baked-In, which is more than can be said for the “Celeriion” Celerra-Clariion DART-FLARE relationship. (Don’t get me started…I’ve resisted posting about this for the longest time, but maybe that scab needs to be scratched off.)
Usable Capacity versus Performance Degradation
There were a lot of claims upon release of the results that we “padded” the number of disks used, because everyone knows that the more spindles you have spinning, the more IOPS/disk you can drive out of the system. Or can you?
Here’s the 2010 SPC-1 report, showing an older version of DataONTAP, smaller mid-range hardware, at 84% capacity utilization, and reflecting 567 IOPS/disk.
In the latest report from June 2012, we demonstrated the latest DataONTAP 8.1.1, running our flagship FAS6240 in a 6-node Cluster-Mode deployment, at 40% utilization, and reflecting 579 IOPS/disk.
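The math behind that comparison is simple normalization: divide a result's total SPC-1 IOPS by the number of spindles driving it. Here's a minimal sketch of that calculation. The per-disk figures (567 and 579) are the ones quoted above; the total-IOPS and disk-count inputs below are illustrative assumptions chosen to reproduce them, not figures pulled from the Full Disclosure reports.

```python
# Sketch: normalizing SPC-1 results to IOPS per spindle.
# NOTE: the (total_iops, disk_count) pairs below are assumed, illustrative
# inputs; only the resulting ~567 and ~579 IOPS/disk figures come from the post.

def iops_per_disk(total_iops: float, disk_count: int) -> float:
    """Normalize a benchmark result to per-spindle throughput."""
    return total_iops / disk_count

# (assumed total SPC-1 IOPS, assumed number of data disks)
result_2010 = iops_per_disk(68_034, 120)   # ~567 IOPS/disk at 84% utilization
result_2012 = iops_per_disk(250_039, 432)  # ~579 IOPS/disk at 40% utilization

# Despite the big difference in utilization, per-spindle efficiency barely moves.
delta_pct = (result_2012 - result_2010) / result_2010 * 100
print(f"2010: {result_2010:.0f} IOPS/disk, "
      f"2012: {result_2012:.0f} IOPS/disk, delta: {delta_pct:+.1f}%")
```

Under these assumptions the gap works out to roughly 2%, which is the point: capacity utilization doubled and per-spindle performance stayed essentially flat.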
Just because your disks don't run as well or as efficiently in your RAID 6/10 pools at full utilization doesn't mean ours behave the same way.
All this chatter about "you only used 40% capacity" somehow magically discrediting the results is getting old, especially when no one pays attention to us saying, AND ACTUALLY SHOWING, that there was arguably no difference between 40% used and 84% used. Speculative nit-picking and unsubstantiated FUD don't change the data.
Go run your own tests. Set it up the same way ours was if you want to, compare apples to apples, and come back and we'll have a good chat.
Until then, thank you for your recognition of our success, whether direct or indirect. :)