This is a common thread in the messaging from a number of storage vendors in recent weeks. IBM points at SAN Volume Controller (SVC) as evidence of their long-term leadership in software defined storage (for more, see the Info-Tech blog from IBM Edge, “Addition by Subtraction? IBM and Software Defined Storage Commoditization”). HP points to StoreVirtual as part of a “six year lead in software defined storage.”
And this week NetApp was talking at their Industry Analyst Summit about their established leadership in software defined storage, centered on Clustered ONTAP, the latest version of their core storage operating system, Data ONTAP. Through node clustering, ONTAP is going beyond individual NetApp arrays to manage storage as a service across multiple storage devices, including other vendors’ arrays and even commodity server hardware (with a product called ONTAP Edge).
NetApp CEO Tom Georgens told the assembled industry analysts that NetApp isn’t going after software defined storage but rather, with Clustered ONTAP, “the software defined storage story is coming to us.”
Software defined is virtually new
They are all correct, each in their own way, when they claim years of leadership in software defined storage. We just didn’t call it software defined storage years ago; we called it storage virtualization. What is new is that developments from cloud to flash to big data are forcing storage vendors out of their comfortable (and high-margin) boxes.
One reason “software defined” is the new buzzword is that “virtualization” became too closely associated with server virtualization and got murky when applied to networks and storage. There are several different ways that storage can be, and is, virtualized. So, instead of the virtual data center (VDC), we’re hearing about the software defined data center (SDDC).
Over the past decade we’ve seen server virtualization turn processing into something that can be managed as a service on open-standard (x86) hardware. But the storage side of the house has fought against such commoditization, insisting on hardwiring its unique intellectual property into storage arrays. I would argue that storage vendors have resisted the “software defined” trend for more than a decade.
Storage virtualization has been around this whole time, but I’ve typically seen it as a tactical device to ease customers’ transition from the other guys’ arrays to your own. I wrote about this seven years ago, comparing storage virtualization to a competitive neutron bomb: kill your enemies but leave their assets for your use (see the link at the end of this blog).
It always was the software, stupid
As a term, “software defined” carries its own fuzziness. I mean, storage (at least beyond the most basic definition of a spinning disk, tape, or flash) has always been software defined. Disk arrays have something called a controller, which is nothing more than a processor running an operating system and software that manages and serves up the storage capacity on the disks. Controllers are often based on the same x86 processors found in other computers and servers (and can be virtualized just as easily as any x86 server).
But at the end of the day, the storage business has been about selling and supporting those boxes of spinning disk. The differentiating software gets hardwired into the box to justify why “enterprise” storage has to be so expensive.
I’m reminded a little of how Apple made waves a few years ago when they started to base the Mac on x86 processors. For years, Apple fans had been convinced that Apple hardware was special, one of the reasons why it was more expensive than commodity PCs. So how could the Mac be based on the same commodity hardware and still be special?
It was never about the hardware. The uniqueness of the Mac was its user interface and operating system. The Mac is software; Mac OS could run on any old x86 box. The fact that you can still only run it on an Apple device is enforced exclusivity.
Forced out of the box
NetApp storage is defined by Data ONTAP. The value of ONTAP is in how it inserts an abstraction layer between the port and the spinning disk (or solid-state flash). Essentially, ONTAP virtualizes the storage inside the NetApp array, enabling all kinds of nifty features from deduplication to snapshots and replication.
But NetApp recognizes that to survive and thrive it has to take its version of software defined storage beyond the box with the stylized blue N on it. Forces compelling this move include the success of server virtualization and cloud computing. Customers are comfortable with the idea of compute being abstracted and consumed as a service from a grid of servers in the data center or in the cloud. Why can’t storage in the data center also be abstracted as a service from aggregate resources?
Clustered ONTAP 8.2 includes a raft of new features aimed specifically at managing storage as a service across those aggregate resources. These include storage virtual machines (SVMs) for matching specific storage pools with specific workloads, and quality-of-service (QoS) features for setting SLAs on those workloads.
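To make that a little more concrete, a storage service with an SLA in Clustered ONTAP is expressed roughly along the lines of the sketch below: create an SVM against an aggregate, define a QoS policy group that caps throughput, and bind a volume to both. The names here (svm_sales, aggr_sas_01, pg_gold, vol_crm) are invented for illustration, and exact flags vary by ONTAP release, so treat this as a hedged sketch rather than a recipe:

    vserver create -vserver svm_sales -rootvolume svm_sales_root -aggregate aggr_sas_01 -rootvolume-security-style unix
    qos policy-group create -policy-group pg_gold -vserver svm_sales -max-throughput 5000iops
    volume create -vserver svm_sales -volume vol_crm -aggregate aggr_sas_01 -size 500g -qos-policy-group pg_gold

The point is less the syntax than the model: the SVM is the unit of multi-tenancy, and the policy group is where the SLA lives, independent of which physical disks end up holding the data.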
This is the right way to go, but it is going to pose challenges for NetApp. They are going to have to manage the shift in focus from a hardware company to a software and services company. They haven’t had to worry as much about valuing the software and hardware as separate layers when it was all in one box.
You can see this struggle right now with the ONTAP Edge product (basically ONTAP running on a virtual machine on any VMware host server). NetApp is limiting the size of the Edge storage appliance and targeting it mainly at the small branch-office use case. They still want you to buy that box with the N on it and are not quite ready to set ONTAP free.
Note: Hard to believe the Neutron Bomb article was seven years ago. Have a look and see how much things have (or have not) changed. Note that IBM SVC is mentioned, as is LeftHand SAN/IQ, which is now HP StoreVirtual. See “Why Virtualization is Like a Neutron Bomb” at InfoStor.