FlashArray//X Gets Optane Acceleration with DirectMemory

Chris Evans | All-Flash Storage, Pure Accelerate 2019, Pure Storage, Storage Performance

At Accelerate 2019, Pure Storage announced DirectMemory, a read-only cache module for FlashArray//X. DirectMemory provides acceleration for workloads with a read bias, but is it worth the cost?

Cache Acceleration

The use of fast storage or DRAM is a common tool for accelerating the performance of storage systems (and also within servers). In the days of spinning disk, caching delivered improved read and write throughput and lower latency for a small increase in cost.

The premise of caching is based on the assumption that only a portion of an application’s data is active at any one time, the so-called “working set”. Data is retained in cache in case it is re-read, either shortly after being written or repeatedly with read-intensive workloads.

As a cache is smaller than the full data set, it can’t hold everything. Some method (or algorithm) is therefore needed to decide which data to evict when new cache space is required, making the most efficient use of cache capacity.

There are many cache algorithms available to determine how data should be retained or discarded over time. Least Recently Used (LRU) is arguably the simplest of these: the cache pages with the greatest time since last reference are released first.
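As an illustration of the LRU policy (a generic sketch, not Pure’s implementation), the behaviour can be expressed in a few lines of Python using an ordered dictionary to track recency of reference:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._pages = OrderedDict()  # keys ordered oldest -> newest reference

    def get(self, key):
        if key not in self._pages:
            return None  # cache miss: caller fetches from persistent storage
        self._pages.move_to_end(key)  # mark as most recently used
        return self._pages[key]

    def put(self, key, value):
        if key in self._pages:
            self._pages.move_to_end(key)
        elif len(self._pages) >= self.capacity:
            self._pages.popitem(last=False)  # evict least recently used
        self._pages[key] = value

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")        # "a" is now most recently used
cache.put("c", 3)     # cache full, so "b" (least recently used) is evicted
print(cache.get("b"))  # None - evicted
print(cache.get("a"))  # 1 - retained
```

The appeal of LRU is exactly this simplicity; its weakness is that a single large sequential scan can flush the genuinely hot working set out of the cache, which is why production systems often refine it.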

Cache Issues

The use of caching technology is always a trade-off. Some application I/O will be accelerated, but the results will not be consistent: occasionally data will need to come from persistent storage, with a correspondingly higher response time. The question to ask when deciding to use a cache is whether the performance improvement, with that added variability, is worth the cost.

DirectMemory

DirectMemory Drive

DirectMemory is a cache acceleration solution for high-end FlashArray//X70R2 and //X90R2 models. The product is offered with two options – 3TB using four 750GB Optane drives or 6TB using eight drives. In both options, each drive occupies a single drive bay. Pure claims DirectMemory will deliver read I/O responses as low as 150µs (from a typical 250µs).
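A rough way to reason about those figures is the standard blended-latency model: average read latency is the hit rate times the cache latency plus the miss rate times the backend latency. The sketch below uses Pure’s quoted 150µs (cache hit) and 250µs (typical flash read) numbers; the hit rates themselves are illustrative assumptions, since real rates depend entirely on the working set:

```python
def avg_read_latency_us(hit_rate, cache_hit_us=150.0, flash_us=250.0):
    """Blended read latency for a read-only cache (simple weighted average)."""
    return hit_rate * cache_hit_us + (1.0 - hit_rate) * flash_us

# Hit rates here are illustrative only -- actual rates depend on the workload.
for hit_rate in (0.5, 0.8, 0.95):
    print(f"{hit_rate:.0%} hit rate -> {avg_read_latency_us(hit_rate):.0f}us average read")
```

Even this crude model shows why the working set matters: at an 80% hit rate the blended figure is 170µs, a 32% improvement, while a 50% hit rate yields only 200µs.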

Caching Effects

There are a few points to note when using DirectMemory.

  • Data isn’t compressed in the DirectMemory modules, so this should save some latency overhead. I expect this has been done because (a) the data isn’t permanent and (b) Optane latency is very low and would be impacted if data needed to be decompressed on every read.
  • There is no failure protection across drives. If a drive fails, the available cache is simply reduced. Again this design choice improves performance as no parity calculations are required.
  • Write I/O goes directly to NVRAM so DirectMemory only improves the performance of read requests.
  • DirectMemory uses a simple LRU algorithm, which is likely to be improved over time, as field usage data becomes available.

Performance Improvement

It’s difficult to estimate the improvements in I/O performance that can be gained by adding caching. Fortunately, Pure Storage has data available from Pure1 Meta that aims to quantify the potential improvements customers can expect.

Pure1 Performance Improvement Estimates

Pure estimates that 80% of arrays already in the field could see 20% lower latency with DirectMemory, while 40% of arrays could see a 30-50% improvement. This uptick in performance could be worth the cost of adopting DirectMemory – depending of course on the price (which hasn’t been announced). Pure also showed some performance calculations for customers using SAP HANA.

The Architect’s View

My initial thought on seeing Optane used as a read acceleration layer was whether this was a good use of high-endurance storage. After all, similar results could have been achieved with technology like Z-NAND or XL-FLASH.

Optane has raw latency figures of around 10µs when deployed in an NVMe form factor. Could better use be made of the technology by putting these drives into the host rather than the storage? The answer is probably yes – unless this implementation of SCM is being used as a testing ground for a future all-SCM product.

All the pieces are in place for Pure Storage to deliver an SCM-based array. FlashArray now has NVMe-oF connectivity and Pure will be collecting some good field data from the performance of Optane as customers adopt DirectMemory. It’s possible that the first implementation could be volumes pinned to SCM as a tier, which could be super fast. Of course, this could also all be total speculation.

If you want to learn more about DirectMemory, check out the Tech Field Day video from Accelerate, which goes into more detail. You can also find more Pure Storage-related posts on our microsite.


Copyright (c) 2007-2019 Brookend Ltd, no reproduction without permission, in part or whole. Post #9b21.