A Ten Year Flash Array? No Thanks!

Chris Evans | All-Flash Storage, Opinion

George Crump makes some interesting comments in his recent article on whether today’s breed of all-flash storage arrays could survive for 10 years in the data centre.  Whilst the logic of his comments makes sense, I think there’s little desire to be using today’s all-flash devices in 10 years’ time.

There’s no reason to think that the all-flash storage array market won’t imitate what we’ve seen for the last 10-15 years with spinning disk devices.  There are reasons why end users have chosen to upgrade on regular refresh cycles:

  • Capacity Improvement – disk drives have broadly followed a Moore’s Law type curve, with drive capacities doubling every couple of years.  There’s no reason to think SSDs won’t do the same; all the flash vendors are releasing larger capacity products on regular cycles.  Samsung recently announced a 1TB flash drive in the mSATA format, weighing in at just 8.5g.  This device may not be enterprise capable, but it shows what can be done.  A ten-year period represents 5 improvement iteration cycles, giving us potential capacities of 32TB (2^5 = 32); a rough sketch of this doubling arithmetic follows after this list.
  • Performance Improvement – this doesn’t mean performance improvements of the flash itself, but of the array and surrounding components.  Again going back to Moore’s Law, we see processor, memory and bus speeds follow the same curve of improvement.  This means we can expect processors to be 32x faster (which may not be achieved by a single monolithic chip) and the interconnect between array and server to see similar growth.  Current projections for Fibre Channel show we could be looking at 256Gb/s by 2022 and FCoE reaching 400Gb/s.  These seem like fantastical speeds, but looking backwards, the performance we see today would have looked equally ambitious ten years ago.  So even though today’s all-flash arrays look massively more powerful than we need, we will see this performance used to capacity within a few years.
  • Compatibility – going back to the Fibre Channel discussion, ten years ago we were looking at 2Gb/s speeds, where today we have 16Gb/s.  The most recent switches on the market today still support 2Gb/s, but connecting a device at this speed represents a huge waste of the capability of the switch.  There’s no guarantee that in ten years’ time, switching will support the device speeds in use today.  Remember that although 16Gb/s is available, most all-flash devices are yet to support it and 8Gb/s is most common.  So, retaining access to your all-flash array could be an issue.  Consider also driver and firmware testing.  Anyone who has a ten-year-old array today will know the pain of dealing with complex compatibility matrices.
  • Financial Viability – most purchase and maintenance cycles for storage run on a 3-5 year cycle.  Vendors want customers to upgrade and spend more money, and make it financially attractive to do so by increasing maintenance costs over time.  From the vendor support perspective, maintaining an inventory of replacement parts gets harder each year as drives are superseded and new models are released.  Products therefore have a natural “end of life”, usually well short of the ten-year period.  In my experience, storage arrays that are more than seven years old have been retained because there are other issues associated with moving off the technology, including compatibility with servers and applications.  Few, if any, customers want to have arrays this old, but if they do, it’s usually for reasons other than the cost of replacement.
  • Environmental – as data volumes continue to grow, the increase in drive density has helped to offset the additional floor space, power and cooling needed to cater for that growth.  Even so, companies like Facebook have massive challenges in dealing with cold storage.  Today’s flash arrays look puny in capacity compared to spinning disk arrays, but take a look back at the early EMC Symmetrix devices and you’ll see a similar scenario, where the early products had tiny capacity but huge footprints.  Environmental savings will continue to be a factor in refresh decisions.
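
As a back-of-envelope illustration of the doubling arithmetic behind the capacity and performance points above, here’s a minimal Python sketch.  The two-year doubling period, 1TB starting capacity and ten-year horizon are assumptions drawn from the figures quoted above, not from any vendor roadmap.

    # Rough projection of the Moore's Law style doubling discussed above:
    # five two-year doubling cycles fit into a ten-year refresh horizon.
    # The 1TB starting capacity is illustrative (e.g. a current 1TB SSD).

    DOUBLING_PERIOD_YEARS = 2

    def doublings(years, period=DOUBLING_PERIOD_YEARS):
        """Number of complete doubling cycles within the given horizon."""
        return years // period

    def project(start_value, years):
        """Project a value forward, doubling once per cycle."""
        return start_value * (2 ** doublings(years))

    horizon = 10
    print(f"Doubling cycles in {horizon} years: {doublings(horizon)}")        # 5
    print(f"1TB drive projected capacity: {project(1, horizon)}TB")           # 32TB
    print(f"Relative compute/interconnect speed-up: {project(1, horizon)}x")  # 32x

Applying the same 32x factor to today’s 16Gb/s Fibre Channel would give roughly 512Gb/s, a little above the 256Gb/s roadmap figure quoted above, so treat these projections as indicative rather than precise.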

The Architect’s View®

The reasons for refresh apply to all-flash arrays as much as they did to spinning disk devices.  But perhaps we should look outside of the array to the bigger picture.  Storage and compute have gone through a number of evolutions, where the pendulum swings between centralised and distributed data models.  The mainframe era was centralised; client-server was distributed; SANs brought the storage back together.  In the next ten years we’ll see a move to distributed storage again, as data needs to move closer to compute to reduce latency.  Some of this exists today with hyper-converged solutions (think Nutanix), distributed data models (Hadoop) and NVDIMMs.  With technologies like Intel’s Rack Scale Architecture, perhaps in ten years we will have moved away from active centralised storage altogether, retaining it only for managing cold or inactive data, and this discussion will become as irrelevant as the devices themselves.

Copyright (c) 2009-2022 – Post #0D2A – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.