Hitachi Vantara recently announced a new midrange storage solution that offers latency figures lower than their new enterprise platform. This metric was a surprise and caused me to reflect on whether “enterprise-class” systems will become niche as midrange becomes the dominant data centre storage platform.
I’m sure many folks believe that midrange has been the dominant storage platform for many years. Only high-end enterprises can afford the luxury of storage platforms priced at $1 million and above. While this may be true, enterprise organisations have focused on high-end systems like Dell EMC PowerMax (formerly VMAX/Symmetrix) and Hitachi VSP because they offered much higher reliability than midrange products. Enterprise storage platforms were and still are engineered for reliability and uptime. Of course, this level of availability came at a cost, because most systems had bespoke engineering and hardware components.
Two developments in the industry have changed this perception. First, hardware has become much more reliable over time, as components have been specifically designed and built for server and data centre requirements. The hardware we have today is light years away from that of ten years ago. Processor clock speeds haven’t increased significantly, but we do have multi-core and multi-socket systems capable of supporting hundreds of logical cores and terabytes of DRAM. Bus speeds have increased exponentially. PCI Express 4.0 delivers almost 2GB/s of throughput per lane (around 32GB/s for a x16 slot), with PCIe 5.0 not far away and expected to double that again. NVMe has reduced storage I/O latency and increased throughput to levels not possible with SAS/SATA, and is on course to become the dominant storage interface.
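Those PCIe figures are easy to sanity-check. A quick back-of-envelope calculation, using the published raw transfer rates (16 GT/s for PCIe 4.0, 32 GT/s for PCIe 5.0) and the 128b/130b line encoding used since PCIe 3.0:

```python
# Usable PCIe bandwidth per lane: raw transfer rate adjusted for
# 128b/130b line encoding, converted from gigabits to gigabytes.
def pcie_lane_gbps(raw_gt_per_s: float) -> float:
    return raw_gt_per_s * (128 / 130) / 8  # bits -> bytes

for name, rate in [("PCIe 4.0", 16.0), ("PCIe 5.0", 32.0)]:
    per_lane = pcie_lane_gbps(rate)
    print(f"{name}: {per_lane:.2f} GB/s per lane, "
          f"{per_lane * 16:.1f} GB/s for a x16 slot")
# PCIe 4.0: 1.97 GB/s per lane, 31.5 GB/s for a x16 slot
# PCIe 5.0: 3.94 GB/s per lane, 63.0 GB/s for a x16 slot
```

The encoding overhead is small (about 1.5%), so the headline “2GB/s per lane” figure for PCIe 4.0 is a fair approximation.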
The second change has been the migration of features to software. The availability of improved processing power has been accompanied by the integration of new instruction sets to offload or at least accelerate common complex tasks like encryption. Storage platforms are moving to container-based designs where individual functions are managed through code packaged in containers. This process provides much greater modularity in code development and better internal scaling.
The net result of these technology developments has been a reduction in bespoke hardware design and greater use of commodity components. Of course, the entire market hasn’t moved in this direction. We still see strategic use of custom hardware where necessary. In general, though, the transition means that the high-end enterprise solutions of the last 20 years are starting to look more and more like today’s midrange appliances.
You could look at the change in two ways:
- Midrange platforms have evolved upwards, with greater reliability, resiliency and performance
- Enterprise platforms have evolved downwards, making use of more commodity hardware
Although these two product categories are becoming more alike, there are still significant differences. Enterprise solutions typically have greater scale-out capabilities and smaller failure domains and, of course, support mainframe connectivity.
The next question, then, is whether midrange products are good enough, even for the modern enterprise. As more functionality gets pushed up into the hypervisor layer, the remaining work for shared storage becomes focused around resiliency, reliability and predictable performance. Some vendors, such as HPE Nimble, already quote six nines of availability. On average, this translates to around 32 seconds of downtime per year. Remember, though, that this is an average. One rogue system could account for 99% of outages (with the rest remaining at 100%), which wouldn’t be great for that one customer. Such is the challenge with using averages.
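The downtime figure falls straight out of the availability percentage. A minimal sketch of the arithmetic, assuming a 365.25-day year:

```python
# Downtime per year implied by an availability figure of N nines
# (e.g. six nines = 99.9999% availability).
def downtime_seconds_per_year(nines: int) -> float:
    unavailability = 10 ** -nines
    return unavailability * 365.25 * 24 * 3600

for n in (3, 4, 5, 6):
    print(f"{n} nines: {downtime_seconds_per_year(n):,.1f} seconds/year")
# 6 nines works out to roughly 31.6 seconds of downtime per year
```

Each extra nine cuts the permitted downtime by a factor of ten, which is why the jump from five nines (around five minutes a year) to six nines is such a strong claim.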
But what if midrange is becoming good enough to use across the enterprise? In this instance, high-end storage starts to get replaced, as described in The Innovator’s Dilemma. This process is arguably already happening, with SDS products and public cloud chipping away at the lower end of the market.
I think the transition to midrange has been in place for some time. FlashArray from Pure Storage is essentially a dual controller architecture. SolidFire from NetApp uses commodity 1U servers and storage. The value is in the software. There are lots of examples where server-based appliances are the standard design.
As with any move to commodity, this puts a greater emphasis on software. Storage features have been moving to software for years, as those bespoke components were gradually removed. Today we see a minor renaissance in the use of some custom hardware (computational storage, FPGA offload). However, even these components are still commodity and can be purchased off-the-shelf.
If the focus is to look at software, what do current platforms offer? Hitachi Vantara recently announced the VSP E990, a midrange NVMe array. This solution delivers up to 5.8 million IOPS with latency as low as 64µs. These hero numbers are at a level we could only dream about in high-end enterprise storage even five years ago.
The VSP E990 uses the SVOS operating system, the software that runs across the entire portfolio of Hitachi (block-based) storage platforms. Customers can expect the same look and feel (and features) on the midrange platforms as those on the enterprise arrays. This strategy may seem risky (as customers could choose to migrate down), but as discussed earlier, there’s less of an air gap today between midrange and enterprise, depending on the vendor and customer requirements.
Dell EMC recently announced PowerStore, a new midrange platform that will eventually replace the existing portfolio products including XtremIO, SC Series and legacy Unity/VNX. This new solution is allegedly built from scratch, with a “clean sheet”. However, for a new platform, the implementation looks surprisingly short on features. The RAID offering, for example, is RAID-5, 4+1 or 8+1. Dell EMC already has other portfolio solutions (like XtremIO), with much more mature and advanced RAID. If all the value is expected to be in software, the new platform seems to fall a little short in areas.
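For context on what those RAID-5 layouts imply, here is a simple sketch of the capacity trade-off between the two stripe widths (this is generic RAID-5 arithmetic, not anything specific to PowerStore’s implementation):

```python
# Parity overhead for RAID-5 layouts expressed as data+parity (e.g. 4+1):
# one drive's worth of capacity per stripe is consumed by parity.
def raid5_parity_overhead(data_drives: int) -> float:
    return 1 / (data_drives + 1)

for d in (4, 8):
    print(f"RAID-5 {d}+1: {raid5_parity_overhead(d):.0%} "
          f"of raw capacity used for parity")
# 4+1 gives up 20% of raw capacity to parity; 8+1 gives up about 11%
```

The wider 8+1 stripe is more capacity-efficient but exposes more drives to a rebuild after a failure, which is exactly the kind of trade-off that more mature RAID schemes elsewhere in the portfolio handle with greater flexibility.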
The Architect’s View
What can we conclude about the direction of midrange storage? The progression towards high-end enterprise arrays becoming a niche market is well under way. Like the mainframe, there will be a place for mission-critical systems, but that requirement will continue to shrink. I see midrange taking the place of most high-end requirements. The next challenger is SDS. We’ve seen significant inroads being made by VMware vSAN and other solutions. As enterprises move to cloud-like models, this really could be the decade of software-defined storage.
Copyright (c) 2007-2020 Brookend Limited. No reproduction without permission in part or whole. Post #92a4.