Storage Innovation – Diablo Technologies

Although they don’t appear to be well taught these days, the fundamentals of computer architecture are key to understanding how our industry will evolve over time, and to making sense of some of the innovations we are seeing come to market.  I hear people talk about a lack of innovation in the storage industry; that couldn’t be further from the truth, and one of the latest companies to emerge from “stealth mode” is Diablo Technologies.

The Working Set

First of all, let’s touch on a subject discussed in a recent post: the working set.  Given infinite resources at zero cost, all data would be stored in memory directly on the processor bus, as this is the fastest place to access it.  This data would be non-volatile, so we could drop and re-establish power at any time without data loss.  In Intel’s traditional architecture, this RAM was connected to the Northbridge chipset; modern processors integrate the memory controller on the CPU die itself.

However, desire isn’t always reality: directly connected memory is expensive and volatile.  So, in the early days of computing, when memory was even more expensive than it is today, data that became inactive was destaged, or swapped out, to disk to make room for active data.  Each “user” has an “address space” representing their memory map; as far as the user is concerned, all of their data is in memory, but in reality it is swapped back and forth between physical memory and disk based on activity.
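The swap mechanism described above can be sketched in a few lines.  This is a toy model, not how any real operating system implements paging; the class name, frame count and page labels are all invented for illustration, and the eviction policy (least recently used) is just one common choice.

```python
from collections import OrderedDict

class AddressSpace:
    """Toy model of demand paging: a small pool of 'physical' frames
    backed by a larger 'disk'.  Pages are fetched on access and the
    least-recently-used page is evicted when the pool is full."""

    def __init__(self, frames):
        self.frames = frames                 # number of physical frames
        self.ram = OrderedDict()             # page -> data, in LRU order
        self.disk = {}                       # swapped-out pages
        self.faults = 0

    def access(self, page):
        if page in self.ram:
            self.ram.move_to_end(page)       # mark as most recently used
            return self.ram[page]
        self.faults += 1                     # page fault: fetch from disk
        data = self.disk.pop(page, f"page-{page}")
        if len(self.ram) >= self.frames:
            victim, vdata = self.ram.popitem(last=False)  # evict LRU page
            self.disk[victim] = vdata        # swap it out to disk
        self.ram[page] = data
        return data

aspace = AddressSpace(frames=2)
for p in [1, 2, 1, 3, 2]:                    # 3 distinct pages, 2 frames
    aspace.access(p)
print(aspace.faults)  # 4: three first-touch faults, plus page 2 re-faulting after eviction
```

As far as the program is concerned every `access` succeeds; the swapping in and out is invisible, which is exactly the illusion a real virtual memory system provides.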

Now there are a couple of things to bear in mind here.  Firstly, disk is incredibly slow compared to memory, and a read/write operation involves many stages, including traversing storage protocols and communication layers such as Fibre Channel and SCSI.  So you only want to read from or write to disk when absolutely necessary, because the penalty is so high.  However, data must be written to disk periodically, as memory is volatile and its contents could be lost at any moment.  There is therefore a limit to the benefit of keeping all data in memory: the limiting factor is how much data must be replicated to a permanent storage medium, and how frequently.
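To put some rough numbers on that penalty: the figures below are order-of-magnitude illustrations only (the sub-5μs flash figure comes from Diablo's announcement later in this post; the others are typical ballpark latencies, not measurements).

```python
# Illustrative access latencies in nanoseconds.  Order-of-magnitude
# figures for comparison; only the sub-5us Memory Channel Storage
# number comes from the announcement discussed in this post.
latency_ns = {
    "DRAM":       100,          # ~100 ns
    "MCS flash":  5_000,        # < 5 us per the announcement
    "PCIe SSD":   50_000,       # tens of microseconds
    "Disk":       5_000_000,    # ~5 ms of seek and rotational delay
}

# Express each tier as a multiple of a DRAM access.
for tier, ns in latency_ns.items():
    print(f"{tier:>10}: {ns / latency_ns['DRAM']:>8.0f}x DRAM")
```

On these figures a single disk access costs tens of thousands of DRAM accesses, which is why avoiding unnecessary disk I/O dominates so much of systems design.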

One final thing to bear in mind: over the years, operating systems and applications have been developed to cater for a hierarchy of storage options, from RAM through flash and solid-state storage to hard drives and tape.  Each has benefits and disadvantages, and applications such as databases have evolved logging and journalling techniques that keep as much data as possible in memory while writing updates (for instance, when tables or rows change) to permanent media for ongoing integrity.
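The journalling idea is worth a concrete sketch.  The class below is a minimal, hypothetical write-ahead log, not any real database's implementation: every update is forced to durable media before the in-memory table changes, so a restart can replay the log and lose nothing that was acknowledged.

```python
import json, os, tempfile

class TinyStore:
    """Minimal write-ahead-log sketch: append each update to a log on
    durable media *before* changing the in-memory table, so a crash
    loses nothing that was acknowledged."""

    def __init__(self, log_path):
        self.log_path = log_path
        self.table = {}
        if os.path.exists(log_path):         # replay the journal on restart
            with open(log_path) as f:
                for line in f:
                    key, value = json.loads(line)
                    self.table[key] = value

    def put(self, key, value):
        with open(self.log_path, "a") as f:
            f.write(json.dumps([key, value]) + "\n")
            f.flush()
            os.fsync(f.fileno())             # force the record onto media
        self.table[key] = value              # only then update memory

log = os.path.join(tempfile.mkdtemp(), "wal.log")
TinyStore(log).put("row42", "hello")
recovered = TinyStore(log)                   # simulate a restart after power loss
print(recovered.table["row42"])  # hello
```

The cost of that durability is the `fsync` on every update, which is exactly the kind of penalty that shrinks dramatically if the "permanent media" sits on the memory bus rather than behind a storage stack.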

Memory Channel Storage

With their recent announcement, Diablo have brought Memory Channel Storage and the TERADIMM to market: a device that looks like a typical server DIMM but is in fact non-volatile flash.  It behaves like standard DDR3 memory, with the subtle difference that its contents aren’t lost when the power is switched off.  Each DIMM has 400GB of capacity, far more than traditional memory in the same form factor, and a response time of less than 5μs, significantly faster than PCIe SSDs.  Expect this capacity to increase, with 1TB DIMMs on the horizon.

Now, I’m sure some people will be saying that we already have equivalent products such as PCIe SSDs.  However, TERADIMMs are faster and don’t take up precious PCIe slots, so they could be much better suited to highly dense blade configurations where space is at a premium.

There are issues, of course.  DIMM slots are expected to be filled with volatile memory, and the BIOS and operating system recognise them as such.  This means BIOS modifications are required to recognise and support TERADIMMs, and Diablo claims to be in conversation with server manufacturers to implement this.  We also have to remember that these devices would definitely not be hot-swappable, and that they have the same isolation issues as PCIe SSDs.

The Architect’s View

So where and how would this technology be useful?  Well, beyond the obvious ability to provide fast local cache, I think it offers us an opportunity to rethink how we implement applications and databases.  Today, in-memory databases are only a niche platform because they can’t meet the durability requirement of retaining data when power is lost.  TERADIMM and other NVDIMMs (non-volatile DIMMs) offer the opportunity to fix that problem.  There are also options for scale-out and distributed storage; the characteristics of EMC’s ScaleIO could work well with NVDIMMs.

Memory Channel Storage continues the industry’s evolution towards moving data storage closer to compute and reducing latency.  Storage continues to be non-boring and, as usual, a major driver for the rest of the industry.


Comments are always welcome; please indicate if you work for a vendor as it’s only fair.  If you have any related links of interest, please feel free to add them as a comment for consideration.


Copyright (c) 2013 – Brookend Ltd, first published on http://architecting.it, do not reproduce without permission.



  • John

    A logical early choice I’m seeing for this is vSAN. I’m getting my lab up this week, I’m already hearing interest in vSAN from even some large shops who are looking for a cost effective way to stuff a LOT of flash into a server.

    • Chris M Evans (http://thestoragearchitect.com/)

      John, agreed. I haven’t had a chance to look at vSAN in detail/eval mode yet. Would be interested to hear your results.


