Seagate Ups The Ante with 60TB SSD

Chris Evans | Storage

Hot on the heels of Samsung’s 15TB SSD, Seagate has announced its own hyper-capacity drive, an as-yet-unnamed 60TB SAS solid-state drive.  The new product was announced at the recent Flash Memory Summit in Santa Clara, California, USA.  So does 60TB set a new standard?  Is it a practical device for everyday use, or just the equivalent of the automotive industry’s concept cars that never see the light of day?

Specs

The new drive is based on a 3.5″ form factor, which isn’t surprising, bearing in mind the number of NAND chips the device will contain.  The external interface is standard 12Gb/s dual-port SAS; no environmental metrics (such as power consumption) are quoted.  In terms of performance, the drive offers up to 150,000 random read IOPS at a queue depth of 32 (no write figure is quoted), with throughput of 1,500MB/s (read) and 1,000MB/s (write), both at a 128KB block size.  The performance figures are unremarkable compared to other enterprise SSDs, and throughput will clearly be limited by the speed of the SAS connections.
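As a rough illustration of where the interface ceiling sits, here is a back-of-envelope calculation.  It assumes the usual 8b/10b encoding overhead for 12Gb/s SAS and that both ports can carry data concurrently; these are my assumptions for illustration, not figures from Seagate.

```python
# Back-of-envelope check: quoted throughput vs. 12Gb/s SAS bandwidth.
# Encoding overhead and dual-port usage are assumptions for illustration.

SAS_LINE_RATE_GBPS = 12        # raw line rate per lane, in gigabits/s
ENCODING_EFFICIENCY = 8 / 10   # 12Gb/s SAS uses 8b/10b encoding

def lane_bandwidth_mb_s(line_rate_gbps, efficiency):
    """Usable bandwidth of a single SAS lane in MB/s."""
    return line_rate_gbps * 1000 / 8 * efficiency

single_lane = lane_bandwidth_mb_s(SAS_LINE_RATE_GBPS, ENCODING_EFFICIENCY)
dual_port = 2 * single_lane    # assumes both ports active for data

print(f"Single 12Gb/s SAS lane: ~{single_lane:.0f} MB/s")  # ~1200 MB/s
print(f"Dual-port maximum:      ~{dual_port:.0f} MB/s")    # ~2400 MB/s
```

On those assumptions a single lane tops out around 1,200MB/s, so the quoted 1,500MB/s read figure already relies on both ports being active.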

Outside of the specifications, no real details have been provided on how the drive has been built.  Chris Mellor speculates that it contains more than a thousand NAND chips, which would make sense – delivering 60TB from 512Gbit chips would require over 1,000 of them in a single device.  It will be fascinating to see what the power draw and heat dissipation figures look like.
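A quick back-of-envelope calculation shows why a four-figure chip count is plausible.  The die capacity and over-provisioning values below are assumptions for illustration rather than published details of the drive.

```python
# Rough estimate of the NAND die count needed for a 60TB drive.
# Die capacity and spare area are assumptions, not Seagate figures.

USABLE_CAPACITY_TB = 60
DIE_CAPACITY_GBIT = 512     # assumed 512Gbit (64GB) NAND dies
OVER_PROVISIONING = 0.07    # assumed 7% spare area for wear management

die_capacity_gb = DIE_CAPACITY_GBIT / 8                     # 64GB per die
raw_capacity_gb = USABLE_CAPACITY_TB * 1000 * (1 + OVER_PROVISIONING)
die_count = raw_capacity_gb / die_capacity_gb

print(f"Estimated NAND dies required: ~{die_count:.0f}")    # ~1000 dies
```

Even with dies stacked several to a package, that is a lot of silicon to power, cool and manage in one 3.5″ enclosure.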

Practicalities

From my perspective, the major issue is the practicality of having so much capacity in a single device.  Problems include:

  • Parallel access – how many parallel I/O queues can be managed to effectively make use of the drive’s capacity?  As hard drive capacities have increased, HDDs have moved towards being archive devices with lots of inactive data, because the interface becomes one of the bottlenecks to storing and retrieving data.
  • Reliability – how will the drive cope with component failure?  With more NAND chips to manage, there are more components that can fail (see the sketch below this list).  Can the drive effectively turn off failing chips and reduce capacity?  At what point does a drive get replaced because capacity has dropped below what is desired?  For example, if the drive lost 10% of its capacity, would that be a good time to replace it (assuming the capacity wasn’t being used)?
  • Performance – how will internal performance be managed at scale?  How will garbage collection and other internal processes be handled with such a large amount of capacity, and does this improve or reduce device performance?
  • Reusability – have these drives been designed with reuse/recycling in mind?  If only a few NAND chips fail, can the drive be recycled by the manufacturer?  If the controller fails, can it be replaced or repaired?  Encryption may be an issue here – customers may be unwilling to return unencrypted drives, which could make them prohibitively expensive to use.
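
To illustrate the component-count concern from the reliability bullet above, the short sketch below calculates the chance of at least one NAND die failing in a year as the die count grows.  The per-die failure rate is an arbitrary, hypothetical figure; the point is the trend, not the absolute numbers.

```python
# Illustration: probability of at least one NAND die failing per year
# as die count grows.  The per-die failure rate is hypothetical.

ASSUMED_DIE_AFR = 0.0005   # assumed 0.05% annual failure rate per die

def p_any_die_failure(die_count, annual_die_failure_rate=ASSUMED_DIE_AFR):
    """P(at least one of `die_count` dies fails within a year)."""
    return 1 - (1 - annual_die_failure_rate) ** die_count

for dies in (32, 256, 1000):
    print(f"{dies:>5} dies -> {p_any_die_failure(dies):.1%} chance of "
          "at least one die failure per year")
```

On those (made-up) numbers, a thousand-die drive should expect die failures as routine events, which is why the ability to retire chips and shrink capacity gracefully matters so much.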

Revisiting the Past

As traditional hard drives increased in capacity, the I/O density profile of those drives became a problem for tier 1 applications.  Drive capacities increased exponentially over time whereas drive performance increased linearly.  In addition, drives were rarely limited by the speed of the external interface.  With multi-terabyte SSDs, we could be moving back into the same territory of having to balance device performance against capacity, with the added problem that 12Gb/s SAS may not provide enough bandwidth.
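The same point can be made in numbers for the 60TB drive by looking at I/O density (IOPS per TB).  The sketch below uses the announced read IOPS figure and, for contrast, an assumed smaller enterprise SSD with similar per-drive performance; the smaller drive’s figures are illustrative assumptions, not a specific product.

```python
# I/O density (read IOPS per TB).  The 60TB figures come from the
# announcement; the smaller drive is an assumed comparison point.

drives = {
    "60TB SAS SSD (announced)": {"capacity_tb": 60.0, "read_iops": 150_000},
    "3.84TB SAS SSD (assumed)": {"capacity_tb": 3.84, "read_iops": 150_000},
}

for name, spec in drives.items():
    density = spec["read_iops"] / spec["capacity_tb"]
    print(f"{name}: {density:,.0f} read IOPS per TB")
```

On those assumptions the 60TB drive delivers roughly 2,500 read IOPS per terabyte – more than an order of magnitude less I/O density than the smaller drive – which is exactly the trade-off that pushed high-capacity hard drives towards archive workloads.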

The Architect’s View

At this stage, Seagate’s 60TB drive could well be a concept.  However, as with most concept cars, the concept doesn’t come to market directly; instead, the technology goes into more “market appropriate” vehicles.  I expect Seagate may be planning the same approach.  A 60TB drive demonstrates capability, but actual products may well be much lower specified – perhaps up to 20TB in size.  This isn’t necessarily a bad approach, as the market almost certainly can’t bear the cost of 60TB in a single unit.  Samsung’s 15TB drives will be expensive enough, so it will take time to reach the 60TB mark – but we will get there.

Comments are always welcome; please read our Comments Policy first.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2007-2020 – Post #C863 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.