This week, VAST Data announced Gemini, a new way to buy storage that disaggregates the hardware purchasing cycle from the software licensing. The company is touting this model as offering more value for customers and helping to eliminate forklift upgrades. But is that really the reason, or is this about positioning for the public cloud and making the company balance sheet look more attractive for an IPO?
As everyone knows, Gemini derives from the Latin for twin. In the VAST context, the term presumably refers to the twin components of software and hardware, where hardware is now undoubtedly the evil twin to be kept in the attic. With Gemini, VAST customers can purchase their hardware directly and pay separately for a software licence. The aim of this disaggregation is to provide transparency on hardware costs and break the forklift upgrade cycle.
We’ll come back to the idea of disaggregation in a moment, but let’s first cover the forklift discussion. In this post from three years ago, we covered what a forklift storage replacement looks like. That post also references a guest post for Kaminario from 2015, which is no longer available. Kaminario has since morphed into Silk, which will prove relevant to the discussion as we continue.
Unless you’re on legacy technology, repeated forklift upgrades have already been mitigated as an issue. Server virtualisation has enabled transparent VM migration. Most modern object stores are scale-out in design and allow hardware resources to be added and removed dynamically in asymmetric configurations. Pure Storage, as an example, has eliminated most forklift upgrades by enabling online replacement of every storage component except the chassis. Even that remaining problem can be overcome through metro replication and active-active LUNs.
- Managing Data Migration Challenges
- Revisiting Scale-up vs Scale-out Architectures
- What is a Forklift Upgrade?
In the area of file servers, the challenges of forklift migrations are more apparent, but they can be mitigated with abstracted global namespaces such as Microsoft DFS.
Financial Forklift Factors
The technical aspects of avoiding the forklift upgrade are not what they were. However, I don’t think this is really what VAST Data is alluding to in its product announcement. Legacy vendors like EMC were notorious for creating a treadmill of hardware upgrades, offering inclusive 3-year hardware, software and maintenance, then charging excessively from year four onwards in order to make the refresh process look more financially attractive.
To be fair, in the days of hard drive dominance, the rate of HDD capacity improvements meant that significant TCO savings could be achieved through a refresh. Space, power, cooling (and weight) all factor into a TCO model where any opportunity to put a cap on physical hardware growth (compared to logical data growth) would be welcomed. Rapidly increasing HDD capacities provided plenty of opportunity for hardware consolidation.
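To make the consolidation argument concrete, here is a minimal sketch of that kind of TCO comparison. All the figures (drive wattage, capacities, rack-unit and power costs) are hypothetical assumptions chosen only to illustrate why a refresh onto higher-capacity HDDs used to pay for itself:

```python
# Hypothetical TCO comparison: keep an ageing HDD array vs refresh onto
# higher-capacity drives. Every figure below is an illustrative assumption,
# not real vendor pricing.

def annual_running_cost(drive_count, watts_per_drive, rack_units,
                        power_cost_per_kwh=0.15, cost_per_rack_unit=500):
    """Rough annual running cost: power consumption plus data-centre space."""
    power_kwh = drive_count * watts_per_drive * 24 * 365 / 1000
    return power_kwh * power_cost_per_kwh + rack_units * cost_per_rack_unit

capacity_tb = 1000  # logical capacity the array must serve

# Old array on 4 TB drives vs a refresh onto 16 TB drives (4x consolidation)
old = annual_running_cost(drive_count=capacity_tb // 4,
                          watts_per_drive=9, rack_units=24)
new = annual_running_cost(drive_count=capacity_tb // 16,
                          watts_per_drive=8, rack_units=8)

print(f"old array: ${old:,.0f}/yr, refreshed array: ${new:,.0f}/yr")
print(f"annual saving: ${old - new:,.0f}")
```

With 4x the capacity per spindle, the drive count, power draw and rack space all shrink together, which is exactly the consolidation opportunity that rapidly growing HDD capacities used to provide. With SSD capacities stabilising, that gap between generations narrows and the refresh case weakens.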
As we move into an all-flash world, the economics of the TCO look somewhat different (for now, at least). The capacity sweet spot for NAND SSDs is around the 8-16TB mark, unless you’re using custom designs like IBM and Pure Storage. Improvements in SSDs are focused on reducing cost rather than pushing capacity gains. This isn’t a surprise, because a single high-capacity NVMe SSD is expensive and concentrates a lot of investment risk in a single device.
So, if drive capacity is stabilising, the long-term benefits come from cost savings on new hardware acquisitions rather than consolidation. This means all-flash systems already on the floor (sunk costs) will stay in service longer without needing replacement.
Simply put, forklift upgrades are going to be harder for vendors to justify in the future. There will simply be less margin to earn from the replacement of hardware.
Let’s talk more about the idea of disaggregation. Hardware and software have historically been intrinsically linked in storage, as most features were initially implemented in microcode or highly customised operating systems. The gradual commoditisation of storage systems, software-defined storage and the improving reliability of components have made disaggregation possible.
I think end users became aware of price gouging because many vendors started using commodity components that were readily available in the market. It became very easy to make cost comparisons and see where vendors were taking advantage.
There are some justifications for vendors adding margin to sales – for example, holding inventory, performing product testing and covering warranty costs. However, much of the mark-up exceeded what many customers would accept as reasonable.
Let’s also not forget about the impact of the public cloud. Cloud pricing provides another angle of comparison, but more importantly, offers a platform onto which storage (and other) services can be layered. Nutanix made the move away from hardware a few years ago, then recently released Nutanix Clusters, running on top of cloud instances.
NetApp has been in transition for several years, offering both on-premises and cloud-native storage. Nasuni moved away from appliances to offer virtual NAS filers, or the hardware at cost. Kaminario recently pivoted to Silk and stopped selling hardware, with a much greater focus on the public cloud. Pure Storage released Cloud Block Store in October 2019, with licensing transferable between on-premises and cloud, leaving the customer to pick up the cost of the “hardware” in the form of virtual instances.
Eye on the Prize
Perhaps the long-term strategy for VAST is twofold. The first part is to position its products for the public cloud, which, as we’ve mentioned, others have already done. If licensing can be made equitable between on-premises and cloud, then prospective customers could choose either route, and VAST remains agnostic and wins either way.
The second part is the IPO. Software margins look much better than hardware margins, especially when appliance sales drag down overall revenue and margin. Hardware ties up capital in inventory and exposes the vendor to inventory risk when the next hardware model comes along. Physical product has to be stored, shipped and supported for break/fix, a process that generally requires a big support organisation or partners (or both).
The Architect’s View™
Where does that leave us in the analysis of Gemini? In theory, the disaggregation of hardware and software does make sense. However, for the customer, what’s been gained if there are now “two throats to choke” and the end cost isn’t any different? In addition, there currently appears to be only one hardware supplier (Avnet), which is acting more like an outsourcer than an independent supplier. In other markets (data protection, for example) vendors have offered hardware from multiple sources. It also seems that the VAST hardware design wouldn’t be practical for re-use elsewhere in the enterprise, for example on test/dev systems, because it is essentially just shelves of disk.
I’d like to see a typical use-case explained in more detail, especially with an understanding of what happens in the equivalent of the “year four plus” period. Today, vendors entice a refresh with TCO, incentives, buybacks and discounts. But none of that would seem to apply to a disaggregated model.
The challenge of understanding exactly what’s on offer is similar to that of the “storage as a service” market. There are so many questions about “who pays for what”, “who does what” and “who’s responsible for what” that, without more detailed explanation, vendors like VAST will spend a lot of time explaining the detail to prospective customers. Surely the idea is to make storage purchases easier, not more complicated? At the moment, I feel that I have too many unanswered questions.
Copyright (c) 2007-2021 Brookend Ltd. No reproduction in part or whole without permission. Post #bced.