Evolving Enterprise Storage Buying Models

Chris Evans | Cloud Storage, Storage Management

One of the most tedious tasks I’ve had to undertake over the years has been the planning and deployment of new storage technology.  By that I don’t mean the physical installation, but rather the process of capacity planning, sourcing and agreeing terms on new storage capacity.  Increasingly, internal businesses consume storage on-demand as a service, whereas vendors mostly continue to sell hardware as a capital expenditure.

With the continued influence of public cloud and focus on “as a service” purchasing models, are we seeing this methodology filter down to the enterprise?

Capex vs Opex

Most storage acquisitions are capital purchases.  The customer decides what needs to be purchased, perhaps runs an RFP, picks a vendor and buys the hardware.  Although the actual outlay could be spread through a leasing-type agreement, generally the hardware sits on the company’s asset register and is depreciated over time.

What makes this task time-consuming is the process of designing a platform and comparing vendor solutions to ensure they offer “like for like” – not just in capacity, but in performance, expandability and features.  To be fair, in some of the larger organisations I worked in, we designed reference architectures with each of the major vendors that supplied the company, creating building blocks that could be deployed and expanded over time.


However, with traditional storage, there were a number of issues that meant capacity always increased but never decreased.  Partly this was because the demand was there, but also, with wide-striped architectures, taking small sets of disks or a shelf out of a configuration would have been hard or impossible, even ignoring the risks of such a procedure.  Bear in mind also that when you own the equipment, upgrades (such as new controllers) become more complex, involving buyback schemes rather than simple hardware replacement.

Opex Solutions

Some vendors did offer Opex-type solutions.  EMC, for example, had OpenScale, where capacity was added to a deployment and charged per GB.  However, OpenScale charging never went down – each increase set a new high watermark that became the new charging level.

To see why this “on-demand” charging was such an issue, we need to look at how older storage was managed.  If an array was deployed with a minimum amount of capacity, say 100TB, it could take time to consume that resource.  During this period, the effective $/GB cost is higher than it should be, because the cost is already sunk, even though the hardware isn’t fully utilised.  Similarly, at the end of life for a storage array, data is moved elsewhere and utilisation decreases again until the equipment is decommissioned.  The result is that the effective $/GB cost over the array’s lifetime is much higher than anticipated.
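A simple model shows how lifecycle utilisation inflates the effective $/GB.  All the figures below (price, capacity, fill profile) are hypothetical, chosen only to illustrate the shape of the problem:

```python
# Illustrative model of how lifecycle utilisation inflates the effective
# $/GB of a capital purchase. All figures are hypothetical.

CAPEX = 300_000.0          # purchase price ($), assumed
CAPACITY_GB = 100 * 1024   # 100TB installed
LIFETIME_MONTHS = 48       # depreciation period, assumed

def utilisation(month: int) -> float:
    """Hypothetical fill profile: ramp-up, steady state, then drain."""
    if month < 12:                               # first year: filling up
        return 0.2 + 0.6 * month / 12
    if month < 42:                               # steady state
        return 0.8
    return 0.8 * (LIFETIME_MONTHS - month) / 6   # migrating off before EOL

nominal = CAPEX / CAPACITY_GB    # $/GB if the array were always full
avg_util = sum(utilisation(m) for m in range(LIFETIME_MONTHS)) / LIFETIME_MONTHS
effective = CAPEX / (avg_util * CAPACITY_GB)     # $/GB actually achieved

print(f"nominal:   ${nominal:.2f}/GB")
print(f"effective: ${effective:.2f}/GB at {avg_util:.0%} average utilisation")
```

With this (invented) profile, average utilisation over the array’s life lands well below the steady-state figure, so the effective per-gigabyte cost comes out roughly 50% above the nominal one.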

With solutions like OpenScale, some of the initial deployment cost is mitigated, as storage can be added over time.  However, the high watermark on charging means that quickly migrating away from an array to be decommissioned becomes really important.  Of course, this kind of “deploy in chunks” scenario is one of the benefits of hyper-converged solutions.  Each node brings more storage to a configuration that is then rebalanced automatically.  You can then also decommission older nodes by simply evacuating the data and removing the hardware.


As businesses migrate to flash storage, does anything change?  The most obvious difference of all-flash compared to disk-based or hybrid systems is the removal of any dependence on spindle counts.  With so few IOPS per HDD, the number of spindles in a configuration becomes a critical factor.  Deploy too few disks per tier, and performance will really suffer.  Flash effectively eliminates that, so once a minimum number of drives have been deployed for resiliency, capacity and performance can be increased linearly by adding more SSDs.  If the hardware allows for media evacuation in place, then drives could be removed too.
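The spindle-count arithmetic can be sketched with rough, commonly quoted ballpark figures (not vendor specifications):

```python
# Back-of-envelope spindle-count arithmetic. Per-device IOPS figures are
# rough ballpark numbers, not vendor specifications.

import math

HDD_IOPS = 180       # roughly what a 10K RPM hard drive sustains
SSD_IOPS = 50_000    # a modest enterprise SSD

target_iops = 20_000  # hypothetical workload requirement

# With HDDs, the drive count is dictated by performance, not capacity.
hdds_needed = math.ceil(target_iops / HDD_IOPS)

# With flash, a single drive covers the IOPS target, so drive count
# is dictated by capacity and resiliency instead.
ssds_needed = math.ceil(target_iops / SSD_IOPS)

print(hdds_needed)   # 112
print(ssds_needed)   # 1
```

The point isn’t the exact numbers, but that the HDD configuration is over-provisioned on capacity purely to hit a performance target – a constraint flash removes.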

Operational or Opex

Of course, much of the discussion so far has really been about operational efficiencies, irrespective of how the hardware is acquired.  Purchasing storage on an Opex basis (e.g. paying a monthly charge per GB) is a different proposition for both the customer and the vendor.

First, the customer will not own the equipment and has to continue paying for use.  I’ve seen plenty of scenarios where older equipment was needed to run one or two legacy applications.  This storage was even taken off maintenance and serviced “on-demand”.  This isn’t practical with the Opex model – unless you want to seriously overpay.

Second, the vendor now owns the asset and has to seed customer locations.  This is easy if the customer has a single data centre, but much more challenging for larger customers with multiple locations.  There’s a risk of deploying storage that remains only partially used for some time, depending on where the customer sees data growth.  Locking up capital in unused assets isn’t great for the vendor’s bottom line, as the asset typically can’t be recognised as revenue until it’s used and charged for (unless a third-party leasing arrangement is in place).


So although there are challenges, some vendors are offering storage on demand.

Pure Storage introduced ES2 (Evergreen Storage Service) in May 2018.  The service is run by Pure and charged on space utilised, not provisioned.  This distinction is subtle but important.  In public cloud, for example, a provisioned 50GB LUN is charged at the full 50GB, even if only 10GB is used.  The cloud provider benefits from thin provisioning savings that aren’t passed back to the customer.  Pure doesn’t do this, charging only for what’s used, measured daily.  The cost also includes controller upgrades, which can be done in-place.
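The difference between the two charging approaches, using the 50GB/10GB example above and an invented per-GB rate, looks like this:

```python
# Charging on provisioned vs utilised capacity, using the 50GB LUN example.
# The per-GB/day rate is invented for illustration.

RATE_PER_GB_DAY = 0.003   # $/GB/day (hypothetical)

provisioned_gb = 50       # size of the LUN as created
used_gb = 10              # data actually written
days = 30                 # billing period

# Public-cloud style: the full provisioned size is billable.
provisioned_charge = provisioned_gb * RATE_PER_GB_DAY * days

# Utilisation-based style: only data actually stored is billable.
utilised_charge = used_gb * RATE_PER_GB_DAY * days

print(f"billed on provisioned: ${provisioned_charge:.2f}")
print(f"billed on utilised:    ${utilised_charge:.2f}")
```

At 20% utilisation the provisioned-capacity bill is five times the utilised-capacity one – the thin provisioning saving the text describes.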

Zadara Storage offers opex-based solutions that can run on-premises, on public cloud (through co-location) or in a co-location data centre.  VPSA (Virtual Private Storage Array) technology implements multi-tenancy in the public cloud and also allows on-premises deployments to scale while dividing up storage administratively.

There are also other vendors implementing sovereign cloud storage (e.g. NetApp, INFINIDAT), but we’ll cover that in a separate post.


So surely it’s just a case of deciding that a few hundred gigabytes are needed on-premises and paying for that level of capacity?  Unfortunately, things aren’t that simple.  Vendors typically have minimum commit levels in both capacity and time, in order to justify the deployment of hardware.  Remember also that the bundled price will include costs that would otherwise have been covered elsewhere, such as capacity planning, maintenance, upgrades and so on.  The result is that the final $/GB cost could be more expensive than expected.  But if your business has dynamic storage requirements, the overall storage bill at the end of the year could be lower than acquiring the hardware.
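As a rough sketch of that trade-off (every rate and the demand profile here are invented for illustration), a minimum commit changes the arithmetic like this:

```python
# Comparing an Opex bill (with a minimum commit) against the amortised cost
# of owning peak capacity outright. All rates and usage are hypothetical.

OPEX_RATE = 0.02         # $/GB/month bundled service rate (assumed)
MIN_COMMIT_GB = 30_000   # vendor's minimum monthly commitment (assumed)
CAPEX_PER_GB = 2.50      # purchase price per GB (assumed)
LIFETIME_MONTHS = 48     # depreciation period (assumed)

# Hypothetical monthly usage in GB over one year, with seasonal peaks.
usage = [20_000, 22_000, 35_000, 60_000, 58_000, 30_000,
         25_000, 24_000, 40_000, 70_000, 45_000, 28_000]

# Opex: pay per GB used, but never less than the committed minimum.
opex_bill = sum(OPEX_RATE * max(gb, MIN_COMMIT_GB) for gb in usage)

# Capex: own enough capacity for the peak month, amortised over the lifetime.
capex_bill = max(usage) * CAPEX_PER_GB / LIFETIME_MONTHS * 12

print(f"Opex bill for the year:  ${opex_bill:,.0f}")
print(f"Capex cost for the year: ${capex_bill:,.0f}")
```

With a spiky profile like this one, the Capex route pays for peak capacity all year, while the Opex bill tracks demand (floored by the commit) – which is exactly when the on-demand model wins, despite a higher unit rate.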

The Architect’s View

I’m really keen on the idea of Opex charging for storage.  Even though the $/GB price could be higher than buying a storage platform, when you consider how much capacity is wasted in populating and decommissioning an array, paying only for used capacity could work out much cheaper.  As always though, this kind of model needs management, even if the costs are offset to the business.

Like public cloud, it’s easy to leave storage provisioned when it’s no longer needed.  A finite limit on available storage forces administrators to clean up old resources and reclaim capacity that’s no longer required.  That task may fall by the wayside when the charges are simply pushed back to the business.  At that point, someone in the line-of-business IT team needs to take responsibility, and that could be a steep learning curve.

Harmonising charging models is one aspect of building out a hybrid cloud effectively.  With the advent of all-flash data centres, we may see this purchasing model become more common over time.


Copyright (c) 2007-2018 – Post #23e4 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.