The Practicality of In-Situ Processing

Chris Evans – IoT, Storage Hardware

On last week’s Storage Unpacked podcast, we talked to Chris Mellor about IoT.  He raised the subject of “in-situ processing” – essentially, taking compute to storage.  On the face of it, this seems like a good idea.  In fact, last year I talked to Enrico Signoretti, Head of Product Strategy at OpenIO, about the company’s Nano nodes.  These are add-on cards that piggy-back on the SAS/SATA interface, effectively creating an intelligent hard drive.

In-Situ Processing

The article Chris published on The Register this week goes into more detail on the two companies mentioned in the podcast – ScaleFlux and Next Generation Data Systems (NGDS).  Both are focused on bringing compute to storage using custom hardware based on flash and FPGAs.  I recommend reading Chris’ article, but essentially the idea is that performing compute directly on the storage device significantly reduces latency.  Now, these kinds of products aren’t general compute for the data centre.  Let’s face it, with NVDIMM and in-memory compute, plus hardware platforms like Pure Storage FlashBlade, centralised analytics is a largely solved problem, at least from the hardware perspective.

Instead, these solutions provide distributed processing for requirements like analytics or AI.  Just to be clear, I’m not saying they can’t be used in the data centre, but the additional programming effort has to be weighed against any benefit.  And therein lies the challenge with this type of technology – distributing compute algorithms/code to make best use of the data on the devices.  This, of course, is only one aspect; another is security.  IT organisations are already pretty lax at protecting data that sits within well-defined boundaries, like private data centres and public cloud.  Imagine how much easier it will be to steal data held on portable devices.
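To make that idea more concrete, here’s a minimal Python sketch contrasting the two models.  The StorageNode class and its run() method are purely hypothetical – real computational storage devices expose vendor-specific interfaces – but the principle is the same: ship a small function to each device and return only the results, rather than shipping every record back to a central host.

```python
# Hypothetical sketch of "move compute to data". StorageNode and its
# run() method are illustrative only, not any vendor's real API.

from typing import Callable, Iterable, List


class StorageNode:
    """Models an intelligent drive holding a shard of records."""

    def __init__(self, records: List[dict]):
        self._records = records

    def read_all(self) -> List[dict]:
        # Traditional model: every record crosses the bus/network.
        return list(self._records)

    def run(self, func: Callable[[Iterable[dict]], list]) -> list:
        # In-situ model: the function travels to the drive and only
        # the (typically much smaller) result travels back.
        return func(self._records)


def hot_sensors(records: Iterable[dict]) -> list:
    """Example filter pushed to each node: readings above 90C."""
    return [r["sensor_id"] for r in records if r["temp_c"] > 90]


nodes = [
    StorageNode([{"sensor_id": 1, "temp_c": 95},
                 {"sensor_id": 2, "temp_c": 20}]),
    StorageNode([{"sensor_id": 3, "temp_c": 91}]),
]

# Centralised: pull everything back, then filter (maximum data movement).
central = hot_sensors(r for n in nodes for r in n.read_all())

# In-situ: each node filters locally; only matches are returned.
in_situ = [s for n in nodes for s in n.run(hot_sensors)]

assert central == in_situ == [1, 3]
print(in_situ)
```

Both models produce the same answer; the difference is that the in-situ version moves only the matching records off each device, which is where the latency and bandwidth savings come from.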

The Architect’s View

As already discussed, the technology seems really neat, but the real challenge, as always, comes in making the software deliver on the promise of the hardware.  Then there are the real application use cases.  How many companies have truly distributed processing requirements that will generate new business or give them a competitive advantage?  I have a feeling that these types of products will be like the current market for object storage – a small number of companies will buy thousands of them.  Instead of being mass-market, they will be niche, addressing a very specific set of use cases.

I’m off to the Flash Memory Summit in August this year, so perhaps we’ll see an update on the technology.  By then we can review Chris’ predictions from the podcast and see whether this was the year of NVMe-oF and/or in-situ processing.  What do you think?

Comments are always welcome; please read our Comments Policy.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2007-2020 – Post #B40D – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.