As the original HCI pioneer sells off its assets to Quantum, Pivot3 exits a market in which Nutanix and Scale Computing are now the only two original hyper-converged vendors left standing. Was HCI ever actually a market segment, or just another way to deploy technology in the broader ecosystem?
HCI, or hyper-converged infrastructure, is an architectural choice that collapses storage, networking and virtualisation into a series of scale-out nodes or servers. HCI users don’t have to understand storage or deploy dedicated arrays. Instead, the original designs incorporated the storage functionality into a self-managed abstraction layer. This storage layer is implemented either in the hypervisor (in the vSAN and Scale Computing models) or as virtual instances running across all nodes (as in Nutanix, SimpliVity and others).
Both NetApp and HPE have dabbled with “disaggregated HCI” solutions that separate the scaling process of compute and storage, effectively breaking the model that HCI was initially meant to represent.
Ultimately, HCI aims to offer simplification by removing the need to manage dedicated resources such as storage. In reality, HCI is just about simplifying storage, as the networking and virtualisation components exist whether a solution is HCI or not.
When Nutanix first proposed the “no SAN” model, the market was clearly intrigued by the idea of an architectural choice where capacity growth could be managed by simply adding more servers with their own internal storage. There was no longer any need to pay the excessive storage costs that array vendors had been charging for many years.
Unfortunately, there’s no free lunch in technology. The dedicated hardware saved by moving to HCI is offset by additional CPU, memory and network consumption across the HCI nodes. In some instances, these resource requirements are significant. The scaling model also introduces issues where, for example, storage needs to grow but compute doesn’t. This can mean purchasing hardware for only one purpose, creating inefficiencies.
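The scaling inefficiency can be sketched with some simple arithmetic. The figures below are purely illustrative (hypothetical node sizes and workload requirements, not drawn from any vendor specification): when a cluster must grow to meet storage capacity alone, the compute that arrives with each node sits idle.

```python
import math

# Illustrative model: how many HCI nodes does a workload need, and how
# much compute capacity is "stranded" when storage drives the node count?
def nodes_required(storage_needed_tb, compute_needed_cores,
                   storage_per_node_tb, cores_per_node):
    """Return (node count, idle cores) for a uniform HCI cluster."""
    nodes_for_storage = math.ceil(storage_needed_tb / storage_per_node_tb)
    nodes_for_compute = math.ceil(compute_needed_cores / cores_per_node)
    # Every node ships with both resources, so the larger requirement wins.
    nodes = max(nodes_for_storage, nodes_for_compute)
    stranded_cores = nodes * cores_per_node - compute_needed_cores
    return nodes, stranded_cores

# Hypothetical workload: 200 TB of data but only 64 cores of compute,
# on nodes that each provide 20 TB and 32 cores.
nodes, stranded = nodes_required(200, 64, 20, 32)
print(nodes, stranded)  # 10 nodes, 256 cores idle
```

With these assumed numbers, storage demand forces ten nodes while compute only justifies two, so 256 of the 320 purchased cores are unused, which is exactly the inefficiency the disaggregated designs mentioned above set out to avoid.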
The positive aspect of HCI is that it offers customers choice. Done right, the entry point for computing with virtualisation can be remarkably cheap, as seen with the Scale Computing HE150 offering: a minimum of three nodes for around $5,000. In edge solutions, this model offers the capability to roll out highly resilient infrastructure and manage the operational aspects much more efficiently than deploying multiple servers and dedicated storage.
Is HCI actually a market segment or just an architectural choice for storage? Vendors like Dell Technologies, for example, have used the HCI model as a packaging exercise (VxRail) and an alternative route to market, in the same way that converged infrastructure did a decade ago. However, today, Dell PowerStore with AppsON could arguably be called HCI, as it combines storage and virtual compute into a scale-out architecture.
Similarly, we could look at AWS Outposts as HCI or, indeed, at any public cloud vendor that runs compute and storage on the same hardware platform. There’s no true definition of HCI, despite some vendors’ attitudes towards the dHCI and NetApp models we discussed in this post a few years ago. We’re also seeing a range of hardware solutions such as Nebulon, which creates an HCI-like solution by offloading storage to a dedicated SmartNIC. Do we call this HCI or not? If we do, then there are at least three sub-categories of HCI: storage in the hypervisor, storage in a VM and storage in a SmartNIC. Oh, and let’s not forget containerised HCI in the form of container-attached storage.
The Architect’s View™
In the original appliance-based model of HCI introduced a decade ago, there was arguably a benefit in highlighting solutions that could combine compute and storage into a single platform. Over the last ten years, storage has become much more software-focused, with implementations (as we just discussed) in multiple locations. At the same time, the original HCI vendors have moved away from focusing on the implementation specifics. Instead, they now focus on ecosystems that include the core components of application deployment. This transition has, in part, occurred due to the influence of the public cloud.
HCI is an architectural choice or an implementation model for storage that can exist across the entire IT infrastructure. As such, I don’t see it as a market segment, because ultimately, it offers no unique capabilities that couldn’t be achieved using today’s standard building blocks.
Post #81a4. Copyright (c) 2021 Brookend Ltd. No reproduction in whole or part without permission.