StorONE introduces drive-based pricing – is this a better deal for customers?

Chris Evans – Data Practice: Data Storage, Enterprise, Storage, Storage Hardware, StorONE

StorONE recently announced a move towards drive-based pricing.  This model isn’t new (Pavilion tried it) but clearly reflects demand from customers – and possibly a technical angle too.  So, across the enterprise, how should we charge for storage resources?

Background

Traditional storage pricing has always been focused on price per TB (which we’ll refer to as $/TB in this article).  Back when we had small-capacity drives, this logic made sense.  Systems needed to be constructed from dozens of HDDs (or spindles) because the capacity per drive was relatively small.  Every terabyte of capacity deployed in a system would be used by host applications (except for system overhead, of course).

As drives have continued to increase in capacity, the IOPS/TB ratio has declined massively.  This represents a problem for maintaining performance, as 15K RPM drives have gradually been replaced by SSDs, leaving the HDD market to focus on capacity.  The initial approach to this problem was to short-stroke drives, using only the outer portion of the platters, where data passes under the read/write head fastest in linear terms (rotational speed is constant).  HDDs are too complex these days for that practice to be effective.  New recording techniques also throw a spanner in the works, as features like SMR demand more host-based awareness of the I/O profile to get the best out of the media.

Note: I could have taken these comments out of a presentation I gave in 2013.  We were concerned about I/O density with HDDs ten years ago; the problem is exponentially worse today. 
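
To put rough numbers on the density problem, here’s a minimal sketch in Python.  The figure of roughly 200 random IOPS per 7,200 RPM HDD and the capacity points are my own approximations, not vendor specifications.

```python
# Rough illustration of declining I/O density as HDD capacities grow.
# Assumes ~200 random IOPS per 7,200 RPM HDD, which stays roughly flat
# across generations while capacity increases.

HDD_RANDOM_IOPS = 200  # approximate, per spindle

for capacity_tb in (1, 4, 10, 22):
    iops_per_tb = HDD_RANDOM_IOPS / capacity_tb
    print(f"{capacity_tb:>3} TB drive: ~{iops_per_tb:.0f} IOPS/TB")

# Approximate output:
#   1 TB drive: ~200 IOPS/TB
#   4 TB drive: ~50 IOPS/TB
#  10 TB drive: ~20 IOPS/TB
#  22 TB drive: ~9 IOPS/TB
```

The per-spindle random performance barely moves from one generation to the next, so every capacity increase dilutes the IOPS available per terabyte stored.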

Flash Drives

Tiering has always been used by storage vendors to differentiate price and performance.  The mass introduction of SSDs into storage arrays in the early 2010s provided a way to fix the performance problem by keeping hot data on flash and colder data on HDD.  The drawback to this design is that the data placement algorithms are always reactive.  By the time data is promoted, the I/O that needed the extra performance has already completed, so the opportunity to accelerate it has been missed.  Dynamic tiering can offer some benefit, but in general, it’s always behind the curve.

StorONE

The correct solution to the tiering and performance problem is either to build systems out of one tier only (the Pure Storage approach) or rewrite the storage I/O stack to cater for the challenges experienced with using tiered media.  The all-flash route certainly works for some customers, but the economics becomes a significant factor, with flash still way more expensive than hard drive media on a simple $/TB basis. 

Note: we can and have debated the pros and cons of all-flash systems.  This isn’t the time to go into that level of philosophical discussion.

Rewriting the I/O stack is the route followed by StorONE, using persistent memory, flash, and HDDs in combination.  In the simplest terms, the StorONE engine receives all write I/O into the fastest media and then de-stages it to the most cost-effective media over time.  This process is still a compromise compared to all-flash but vastly superior to reactive dynamic tiering.  Read I/O can be prefetched or simply left in the upper tier until the data starts to go cold.

There’s also a secondary benefit from the StorONE design.  Customers can use the entire capacity of the latest HDDs, which currently top out at 22TB.  No more short stroking.
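
As a purely illustrative sketch of the general write-down idea (a toy model in Python, not StorONE’s actual engine; the two-tier structure and the five-minute cold threshold are my assumptions), the approach looks something like this:

```python
import time

# Illustrative two-tier write-down policy: all writes land on the fast
# tier; data that hasn't been touched for a while is demoted to the
# capacity tier. Toy model only, not StorONE's implementation.

COLD_AFTER_SECONDS = 300  # assumption: demote after 5 minutes idle

fast_tier = {}      # key -> (data, last_access_time)
capacity_tier = {}  # key -> data

def write(key, data):
    fast_tier[key] = (data, time.time())  # writes always hit fast media

def read(key):
    if key in fast_tier:
        data, _ = fast_tier[key]
        fast_tier[key] = (data, time.time())  # refresh access time
        return data
    return capacity_tier.get(key)  # cold read served from the HDD tier

def demote_cold_data():
    # Run periodically: move anything idle past the threshold downwards.
    now = time.time()
    for key in list(fast_tier):
        data, last_access = fast_tier[key]
        if now - last_access > COLD_AFTER_SECONDS:
            capacity_tier[key] = data
            del fast_tier[key]
```

The key difference from reactive tiering is that new writes never touch the slow media in the hot path; demotion happens later, when the data no longer needs the performance.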

Pay for What?

So surely, with the StorONE engine and its efficient use of capacity, it’s a win-win for everyone, right?  Well, yes and no.  StorONE, in common with all resilient storage systems, has a minimum deployment configuration for each tier of media.  With 22TB drives, a 6-drive system would have around 100TB available after resilience overhead, using only six spindles.  As we’ve mentioned before, HDDs are much slower than SSDs at random I/O, so a customer may choose to start with 12 HDDs, providing 200TB and doubling the performance.  But what if the customer only has a requirement for 100TB today?

The customer has two choices.  First, to overbuy on capacity and grow into the spare space.  This is, of course, an expensive option and doesn’t offer value for money to the customer (but does benefit the vendor).  The second option is to deploy drives with smaller capacities.  This benefits the customer, but the vendor sees less initial revenue.  In our example above, the use of 10TB drives might be more appropriate, giving greater I/O density for a similar $/TB.
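
To make the trade-off concrete, here’s a small sketch comparing the three configurations.  The usable-capacity ratio is derived from the 6 x 22TB = ~100TB example above, and the IOPS-per-spindle figure is my own approximation:

```python
# Rough comparison of drive configurations for a ~100TB requirement.
# The usable ratio (~0.76) is implied by the example above
# (6 x 22TB -> ~100TB available); IOPS per HDD is an approximation.

USABLE_RATIO = 100 / (6 * 22)   # ~0.76, after resilience overhead
IOPS_PER_HDD = 200              # assumption: random IOPS per spindle

configs = [
    ("6 x 22TB", 6, 22),
    ("12 x 22TB", 12, 22),
    ("12 x 10TB", 12, 10),
]

for name, drives, capacity_tb in configs:
    usable = drives * capacity_tb * USABLE_RATIO
    iops = drives * IOPS_PER_HDD
    print(f"{name}: ~{usable:.0f}TB usable, ~{iops} random IOPS, "
          f"~{iops / usable:.1f} IOPS/TB")
```

On these assumptions, the 12 x 10TB option lands close to the 100TB requirement while delivering more than double the I/O density per usable terabyte of the 12 x 22TB system.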

Upgrade

What happens when capacity is exhausted, and the customer must upgrade?  In a typical $/TB model, the extra capacity is now a chargeable feature.  In the StorONE “Scale-for-Free” model, each drive can be replaced with one of higher capacity.  This increases the overall system capacity, with no additional licence charge from StorONE.  Who benefits from this approach?

With Scale-for-Free, StorONE front-loads the cost on the customer by creating a per-drive charge.  Capacity expansion down the line is free.  In a traditional model, a vendor (Dell, for example) front-loads the capacity and charges for it; later, the customer has to pay again to expand.  In both models, if the $/TB cost of HDDs were constant, there would be no overall difference, only a timing difference as to when the charges are applied.

Except, of course, HDD prices aren’t constant, and neither are SSD prices.  Storage media $/TB costs are in constant decline, with a typical HDD street price of around $600 when first introduced to the market, irrespective of the new capacity.  It’s only when the fundamental internals of a drive change (like introducing a new read/write head or improved actuators) that the unit cost of an HDD changes.  SSDs are following a similar path, as vendors increase the layer count with each generation.

So, for a vendor, charging upfront for capacity is a better choice than charging for expansion later.  For a customer, paying per drive slot could be cheaper than paying per terabyte, depending on economies of scale.
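
As a rough illustration of the timing effect, the sketch below models three-year spend under the two approaches for the 12-drive example, doubling capacity with a drive swap at year three.  Every figure (licence charges, drive price, decline rate) is an assumption of mine, not StorONE or Dell pricing, and the result flips depending on the licence figures chosen, which is exactly why a personal TCO matters.

```python
# Toy comparison of per-drive vs per-TB software charging when media
# prices decline. All figures are illustrative assumptions, not real
# StorONE or Dell pricing. Hardware costs are modelled identically in
# both cases to isolate the licensing difference.

DRIVE_PRICE = 600                # street price of a new high-capacity HDD
ANNUAL_PRICE_DECLINE = 0.15      # assumed fall in drive pricing per year
PER_TB_LICENCE = 30              # assumed charge per usable TB (traditional)
PER_DRIVE_LICENCE = 500          # assumed charge per drive slot (Scale-for-Free)

drives = 12
usable_day1 = 200                # TB, as in the 12 x 22TB example
usable_year3 = 400               # TB, after swapping in higher-capacity drives

day1_hardware = drives * DRIVE_PRICE
year3_hardware = drives * DRIVE_PRICE * (1 - ANNUAL_PRICE_DECLINE) ** 3

# Per-drive model: licence paid once per slot; the year-3 upgrade only
# costs the (cheaper) replacement drives.
per_drive_total = day1_hardware + drives * PER_DRIVE_LICENCE + year3_hardware

# Per-TB model: licence paid on day-1 capacity, then again on the expansion.
per_tb_total = (day1_hardware + usable_day1 * PER_TB_LICENCE
                + year3_hardware + (usable_year3 - usable_day1) * PER_TB_LICENCE)

print(f"Per-drive model, 3-year spend: ${per_drive_total:,.0f}")
print(f"Per-TB model, 3-year spend:    ${per_tb_total:,.0f}")
```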

Right or Wrong?

At the top of this article, we asked what the right way to charge for storage should be in the enterprise.  As with everything, the answer is: it depends.  In large enterprises, where new capacity is deployed frequently, the effect of price declines is likely to be less of an issue.  In smaller companies, where capacity growth and costs are more sensitive, a drive-based charging model might be more effective.  It really does come down to building a personal TCO model to see how the numbers stack up.

However, many vendors don’t want businesses to buy using a capex-based model.  “As a service” is the new mantra, with consumption-based pricing.  How does that work with the design of solutions like StorONE?  The answer is: probably quite well.  StorONE can replace drives in place over time, increasing capacity without increasing the system footprint.  Of course, we must consider the performance of the storage controllers in any growth plans, but for storage capacity, the winners in consumption-based models will be systems that can deliver incremental capacity growth in place and without invasive data migrations.

The Architect’s View®

StorONE is a relatively small player in the storage industry.  As such, the company knows that persuading customers to move from traditional vendors like Dell and HPE will be a challenge.  The answer is to offer flexibility in the pricing model that will hopefully be attractive to a broader base of potential customers.  StorONE isn’t about to compete head-on with consumption-based pricing, because the capital outlay required is significant.  However, if the company can make potential customers stop and consider a more efficient capex option, then that could result in increased sales.

Storage as a service is not automatically cheaper than other options, but vendors sense an opportunity to charge a recurring monthly fee for storage that, over time, has a declining capital cost.  If a customer commits to a 3-year fixed-price $/TB scheme, a smart vendor can add capacity just in time and benefit from declining component pricing.
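
To see why vendors like that arrangement, here’s a quick sketch of the vendor’s per-terabyte margin over a fixed-price term.  The prices and the 15% annual cost decline are my own assumptions, purely for illustration:

```python
# Illustrative vendor-side view of a 3-year fixed-price $/TB commitment.
# The customer's price is fixed, but the vendor's cost to add a terabyte
# falls each year. All figures are assumptions, not real pricing.

customer_price_per_tb = 100          # fixed for the 3-year term
vendor_cost_per_tb = 60              # assumed cost at the start of the term
annual_cost_decline = 0.15           # assumed 15% fall in media cost per year

for year in (1, 2, 3):
    margin = customer_price_per_tb - vendor_cost_per_tb
    print(f"Year {year}: customer pays ${customer_price_per_tb}/TB, "
          f"vendor cost ~${vendor_cost_per_tb:.0f}/TB, margin ~${margin:.0f}/TB")
    vendor_cost_per_tb *= (1 - annual_cost_decline)
```

Capacity added just in time in the later years of the contract carries a noticeably wider margin than capacity deployed on day one.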

What this tells us is that every large enterprise or small business should be aware of its internal demand/consumption profile and build a purchasing model around it.  For vendors, the route forward is efficient and dynamic hardware architectures that can meet the needs of the customer, whether buying capex, opex or as a service. 


Copyright (c) 2007-2022 – Post #5aaa – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.