Changing Consumption Models in Enterprise Storage

Chris Evans | Storage

Anyone who is involved in purchasing enterprise or midrange storage will recognise the story being painted in a recent blog post on the Pure Storage website.  The post introduces the concept of “Forever Flash” by first discussing the way in which storage is typically purchased.

It’s true that vendors have manipulated the storage refresh cycle to their own advantage, but the story isn’t entirely that simple.  Disk technology changes rapidly, and three years’ evolution of disk arrays means much larger capacity drives and faster controllers.  Refreshing every three years can seem attractive when other factors come into play, such as high growth requirements, data centre space constraints or high costs for power and cooling.

However, setting that aside for discussion on another day, one thing the Pure announcement does highlight is the changing consumption model for enterprise storage.  Pure Storage are looking to break that relentless cycle of deploy & migrate by aligning acquisition costs more closely to a consumption model.  This is a trend we’ve seen starting to emerge, mainly with cloud storage providers, but with mainstream vendors too.

  • VMAX Cloud Edition from EMC provides a consumption-model approach to buying storage, although it doesn’t necessarily address the migration issue.
  • Project Nile, also from EMC, promises to package storage in a consumption model that can scale up or down with customer requirements.
  • NetApp, according to this article by Chris Mellor, are also rumoured to be looking at amending their pricing structure.

I’m sure there are other examples and I welcome any comments from vendors I’ve missed off (obviously there are the cloud vendors like StorSimple and Nasuni out there, but they have a slightly different model).

The Architect’s View

Old purchasing cycles can’t survive forever, and this way of working may be another change flash storage brings to the enterprise.  The problem for array vendors is a tricky one, though.  Scaling up capacity is easy – ship product and charge the customer more; but how do you scale these solutions down?  In reality most customers will probably never scale down unless they are moving to another vendor, but even offering a scale-down service would be a differentiator.  Platforms that can manage the migration problem have a distinct advantage (for example Peer Persistence on HP 3PAR, UVM on Hitachi, Live Volume on Dell Compellent), and there is also storage virtualisation (EMC VPLEX, Hitachi UVM again, IBM SVC).

Of course, these solutions could all be an outdated approach to the problem.  Scale-out systems promise to allow storage to be added and removed at node-level granularity, which means the array/appliance simply manages the data migration as part of the architecture.  This is the SolidFire approach and the promise we see from technologies like ScaleIO and Ceph.  Wherever we end up, a move away from relentless three-year big-bang migrations is surely a good thing.
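
To make the node-level granularity point a little more concrete, here is a minimal Python sketch of consistent hashing, the kind of placement scheme scale-out systems commonly rely on so that adding or removing a node only relocates a fraction of the stored objects.  This is an illustration only: the HashRing class, node names and virtual-node count are hypothetical and are not taken from SolidFire, ScaleIO or Ceph (Ceph, for example, uses its own CRUSH algorithm rather than a simple ring).

    import bisect
    import hashlib

    def _hash(key: str) -> int:
        """Map a string to a point on the hash ring."""
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class HashRing:
        """Toy consistent-hash ring: adding or removing a node only
        remaps the keys adjacent to that node, not the whole data set."""

        def __init__(self, nodes=(), vnodes=64):
            self.vnodes = vnodes
            self._ring = []  # sorted list of (hash, node) pairs
            for n in nodes:
                self.add_node(n)

        def add_node(self, node: str):
            # Each node claims several virtual points to balance load.
            for i in range(self.vnodes):
                bisect.insort(self._ring, (_hash(f"{node}#{i}"), node))

        def remove_node(self, node: str):
            self._ring = [(h, n) for h, n in self._ring if n != node]

        def locate(self, key: str) -> str:
            """Return the node responsible for a given object key."""
            h = _hash(key)
            idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
            return self._ring[idx][1]

    # Compare placements before and after adding a node: only a fraction
    # of objects move, which is the migration work the array absorbs.
    ring = HashRing(["node-a", "node-b", "node-c"])
    keys = [f"object-{i}" for i in range(10000)]
    before = {k: ring.locate(k) for k in keys}
    ring.add_node("node-d")
    moved = sum(1 for k in keys if ring.locate(k) != before[k])
    print(f"{moved / len(keys):.1%} of objects remapped after adding node-d")

Running this with 10,000 object keys shows roughly a quarter of the objects remapping when a fourth node joins a three-node ring, which is the bounded, architecture-managed migration described above, rather than a big-bang move of everything.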

Related Links

Comments are always welcome; please read our Comments Policy.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2007-2020 – Post #DFE7 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.