At last year’s IBM Edge in Orlando, I said that IBM was looking to increase investment in storage while simultaneously aligning and focusing its then-complex storage portfolio.
Was I wrong?
Well, I was at least half right. Instead of simplifying, IBM went out and bought another storage vendor, Texas Memory Systems (TMS), adding another product to its already complex portfolio. In addition to the initial purchase, IBM has committed to investing one billion dollars – one sixth of IBM’s annual R&D budget – in flash research and development in systems and software. So it has definitely increased investment in storage.
“We’ve tripled the size of the FlashSystem development team since the acquisition of TMS,” said Jan Janick, Vice President of Flash Systems and Technology at IBM. Moreover, IBM has committed two billion dollars thus far to its PureSystems research and development. The addition of TMS RamSan flash arrays, rebranded by IBM as FlashSystem, is the answer to what many believed was a missing component in IBM’s portfolio.
In my opinion, the biggest differentiator for IBM is its ability to move data off the array. In a recent Info-Tech solution set on how to Evaluate the Role of Solid State in the Next Storage Purchase, I point out the importance of understanding your organization’s requirements for moving data off the all-flash array. If you’re just trying to provide consistently ultra-high-performance storage for a specific application, an all-flash array may be fine. But if you’re looking for a broader deployment with unpredictable workloads, or data that degrades in value over time, eventual movement of data off the array is critical to keeping down total cost of ownership.
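The TCO argument here can be made concrete with some back-of-the-envelope arithmetic. A quick sketch (the per-GB prices below are my own illustrative assumptions, not IBM figures):

```python
# Illustrative TCO sketch: blended cost per GB when cold data can be
# tiered off an all-flash array. Prices are hypothetical assumptions.
FLASH_COST_PER_GB = 10.00   # assumed $/GB for enterprise flash
DISK_COST_PER_GB = 1.00     # assumed $/GB for nearline spinning disk

def blended_cost_per_gb(hot_fraction: float) -> float:
    """Cost per GB when only the 'hot' fraction of data stays on flash."""
    cold_fraction = 1.0 - hot_fraction
    return hot_fraction * FLASH_COST_PER_GB + cold_fraction * DISK_COST_PER_GB

# Keeping everything on flash vs. keeping only the hottest 20% on flash:
all_flash = blended_cost_per_gb(1.0)   # 10.0 $/GB
tiered = blended_cost_per_gb(0.2)      # 0.2*10 + 0.8*1 = 2.8 $/GB
print(all_flash, tiered)
```

Under these assumed prices, tiering cold data off the array cuts the blended cost per GB by roughly two thirds, which is the whole economic case for off-array data movement.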
IBM accomplishes movement off the array by putting FlashSystem behind the IBM System Storage SAN Volume Controller (SVC), enabling data to be moved from FlashSystem to slower, more cost-effective storage. IBM calls this the FlashSystem Solution. The FlashSystem Solution also enables storage services, such as snapshots and replication, while adding only about 100 microseconds of latency compared to FlashSystem without SVC. Real-time Compression (RtC) can also be enabled on SVC, maximizing usable FlashSystem capacity (although the additional impact of RtC on latency has not yet been published). This improves the overall value of flash within the context of the larger system; it’s all about the economics of flash.
So, what does this mean for the future of IBM? As I said, it has added complexity to its portfolio. Compound this with future plans for adding unified capabilities to XIV and the many updates to its other storage products (such as V7000 Unified, SONAS, and N series), and its storage solutions all start to overlap considerably in features, functionality, and management. In the long run, however, IBM has set itself up to simplify by taking the storage media out of the equation; or rather, it will move much of the value and margins from hardware to software (of course, the 1s and 0s still have to be stored somewhere).
IBM argues that, right now, it is ahead of the competition in Software Defined Storage (read: storage virtualization) with SVC. While many vendors, including IBM, have developed the capability to abstract and virtualize underlying storage and add storage services, the key to what IBM calls Software Defined Storage 2.0 is industry-led openness. IBM has invested heavily in OpenStack: it is the number-two contributor (with 250 employees contributing), behind Red Hat, and it has also made significant contributions to OpenDaylight for Software Defined Networking, which has led to significant community-sourced innovation in this space.
Nonetheless, most people’s reaction to this push for openness is: “Why are you supporting the commoditization of storage? You’re a storage vendor!” The answer is that the value proposition is in the software and data services. By supporting open initiatives, IBM (in a sense) extends its Storwize platform to OpenStack, so that others can deploy applications directly on IBM’s platform. Thus, through the OpenStack GUI, organizations can now leverage IBM’s storage services (snapshots, replication, data movement, Real-time Compression).
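To make that integration point concrete: in practice, exposing a Storwize/SVC-backed pool to OpenStack is a matter of pointing the Cinder block-storage service at the SVC cluster. A sketch of what that might look like, using option names from the Cinder Storwize/SVC driver of this era (all values are placeholders; check the current OpenStack documentation before relying on any of them):

```ini
# Hypothetical excerpt from /etc/cinder/cinder.conf: backing OpenStack
# block storage with an SVC-virtualized pool. Values are illustrative.
[DEFAULT]
volume_driver = cinder.volume.drivers.storwize_svc.StorwizeSVCDriver
san_ip = 192.0.2.10                    # SVC cluster management IP (example)
san_login = openstack                  # management user (placeholder)
san_password = secret                  # placeholder credential
storwize_svc_volpool_name = flashpool  # assumed pool backed by FlashSystem
```

Once configured, volumes provisioned through the OpenStack dashboard land on the SVC-managed pool and inherit its services (snapshots, replication, compression) without the application ever knowing which hardware sits underneath.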
The next step for IBM, and where it really plans to deliver value, is all about the data. With what it calls Software Defined Storage 3.0, data storage will be simplified: a single protected pool of data, with all the automated management occurring on the back end, observing patterns in the data to create policy on the fly and match data with the right storage medium (solid state, spinning disk, or tape). Further, because moving large volumes of data across the network is inefficient, it will intelligently move applications to the data. By making it easier for others to use its platform, IBM positions itself to capitalize on its strong support services capabilities and strong portfolio of analytics software, from which it can then derive its margins.
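The pattern-to-policy idea can be illustrated with a toy example. This is my own sketch of what policy-driven tier matching might look like, not IBM’s actual engine; the thresholds and access metrics are invented for illustration:

```python
# Toy sketch of policy-driven tier placement: map observed access
# patterns to a storage medium. Thresholds are illustrative assumptions,
# not IBM's actual policy logic.
from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    reads_per_day: int
    days_since_last_access: int

def choose_tier(d: DataSet) -> str:
    """Pick a storage medium from an access pattern (illustrative policy)."""
    if d.reads_per_day > 1000:
        return "solid state"       # hot: consistently high read rate
    if d.days_since_last_access > 180:
        return "tape"              # cold: untouched for six months
    return "spinning disk"         # warm: everything in between

datasets = [
    DataSet("trading-db", 50_000, 0),
    DataSet("q1-archive", 2, 400),
    DataSet("file-share", 300, 3),
]
placement = {d.name: choose_tier(d) for d in datasets}
print(placement)
```

The real system would derive these policies dynamically from observed patterns rather than hard-coding thresholds, but the core idea is the same: data value decays, and placement should follow it automatically.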
While storage virtualization…ahem, Software Defined Storage, really isn’t new, IBM has pushed the boundaries in terms of where it is headed. By abstracting storage services away from the hardware and opening them up, IBM will simplify customers’ decisions about which storage hardware to buy from IBM. We are still pretty far from this, however, and it will initially be of most benefit to large organizations leveraging OpenStack and to partners looking to develop new services through integration. Nonetheless, it will change the way we purchase and utilize storage in the future. The only question, I suppose, is whether the software portfolio, where the value resides, will be equally complex to navigate.