Are DataCore’s SPC Benchmarks Unfair?

Chris Evans | Storage Performance

DataCore has been active over recent months with benchmarks based on their new SANsymphony Parallel Server offering.  The most recent of these claims 5.1 million SPC-1 IOPS at $0.08 per SPC-1 IOPS, with a response time of 0.32 milliseconds.  Other vendors are crying foul on these results, claiming they don’t represent a true test because all of the data is held in memory.  So, is it fair to put all of your data in DRAM, or is this simply gaming the test?
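For context, the SPC-1 price-performance metric is simply the total tested system price divided by the SPC-1 IOPS achieved, so the published figures imply a total system price of roughly $408,000.  A quick back-of-envelope check:

```python
# Back-of-envelope check on the published SPC-1 figures quoted above.
# SPC-1 price-performance = total tested system price / SPC-1 IOPS,
# so the implied total price is IOPS * ($/IOPS).
spc1_iops = 5_100_000      # 5.1 million SPC-1 IOPS
price_per_iops = 0.08      # $0.08 per SPC-1 IOPS

total_price = spc1_iops * price_per_iops
print(f"Implied total tested system price: ${total_price:,.0f}")
# → Implied total tested system price: $408,000
```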

In this discussion I can see a number of clear issues:

  • Putting all of your data in cache isn’t cheating.  In fact, if cost/benefit analysis can justify it, we should be caching as much data as possible.  In-memory databases and products like PernixData FVP and Infinio Accelerator specifically aim to keep as much data as possible in the cache (as DRAM or flash) rather than write to external storage.
  • Cache miss is an issue.  What we have to look at is what happens to I/O response time for data not in the cache or when the cache becomes fully loaded.  If we never reach this point though, then who cares if all the data is in memory?  This would be a good testing point for the DataCore solution.
  • Caching isn’t persistent storage.  In general, caching I/O isn’t the same as serving it off persistent storage.  Cache is volatile and needs warmup time as well as additional protection.  If data isn’t in cache and has to be retrieved from the backing store, then that I/O could suffer.  If I/O response time has to be 100% guaranteed, then data should sit on flash.
  • With benchmarks, caveat emptor.  All benchmarks can be gamed in one way or another.  Benchmark workload profiles rarely match real world applications and there’s no replacement for running proofs of concept to validate vendor claims (check out my posts on storage performance).
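The cache-miss point above is easy to quantify with a simple weighted-average model.  The latencies below are illustrative assumptions only (roughly 0.3 ms for a DRAM cache hit, 5 ms for a read from the backing disk), not figures from any vendor test:

```python
# Sketch of how average I/O response time degrades as the cache hit
# rate falls.  Latencies are illustrative assumptions, not measurements.

def avg_response_time_ms(hit_rate, cache_ms=0.3, backing_ms=5.0):
    """Weighted-average latency for a given cache hit rate (0.0-1.0)."""
    return hit_rate * cache_ms + (1 - hit_rate) * backing_ms

for hit_rate in (1.0, 0.99, 0.95, 0.80):
    print(f"hit rate {hit_rate:.0%}: {avg_response_time_ms(hit_rate):.2f} ms")
```

At a 100% hit rate the benchmark sees pure cache latency; drop the hit rate even a few percentage points and the slow backing store quickly dominates the average, which is exactly why an all-in-memory result deserves scrutiny.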

In an ideal world, all of our data would sit on the fastest media possible.  However, compromises have to be made: servers will only hold a certain amount of DRAM; DRAM is volatile; DRAM is (relatively) expensive; we like persistence in our data; we have mobility requirements for our data.  For all of these reasons, keeping everything in DRAM and nowhere else isn’t practical.  However, if we can serve the vast majority of I/O requests from cache, then we’re in a good place.  This is what storage arrays have been doing since EMC introduced the ICDA (Integrated Cached Disk Array, i.e. the Symmetrix) in the early 1990s.

The Architect’s View

Naturally, DataCore is presenting their product in the best light possible.  Every vendor bar none does this, highlighting the benefits of their offerings without discussing the shortcomings.  Benchmarks, including SPC-1, are far from perfect; for example, systems that have always-on data optimisation features aren’t supported for testing.  However, it also wouldn’t be practical to continually update the benchmark specification.  Testing is expensive, and vendors can’t afford to be running benchmarks regularly, which they’d have to do if the specification was continually changing.  Otherwise, there’d be no way to do realistic vendor-to-vendor comparisons.

Just remember, there’s no substitute for doing your own testing, preferably with your own workload.  Use the benchmarks in the way they were intended – as a guideline rather than a definitive statement of capability.

Further Reading

You can find more details on the SPC results from the links below, as well as details from DataCore on their results.  I’ve also included some links to recent posts on performance testing.

Comments are always welcome; please read our Comments Policy first.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2007-2021 – Post #e790 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.