Pure Storage has announced the DirectCompress Accelerator Card, an add-in card that offloads compression of data written to new FlashArray//XL systems onto a custom FPGA. Why has the company chosen this point in the lifetime of FlashArray to make this change?
The FlashArray platform has always compressed data using the controller CPUs as it is written to the system. In the early days of all-flash systems, data optimisation was essential to achieve a reasonable TCO and justify replacing hybrid arrays with all-flash systems.
FlashArray performs two levels of compression. The first is inline and executed on data in NVRAM before it is written to persistent storage. The second “opportunistic” compression is performed during CPU idle periods and achieves a deeper level of compression and space-saving.
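The two-tier approach can be illustrated with a small sketch. This is not Pure Storage's implementation (which is proprietary); it simply uses zlib at its fastest setting to stand in for the low-latency inline pass, and its deepest setting for the opportunistic pass performed later during idle time:

```python
import zlib

def inline_compress(block: bytes) -> bytes:
    """First pass: fast, low-latency compression applied as data
    lands in NVRAM, before it reaches persistent storage."""
    return zlib.compress(block, level=1)

def opportunistic_recompress(stored: bytes) -> bytes:
    """Second pass: during idle periods, decompress and recompress
    at a deeper level to reclaim additional space."""
    original = zlib.decompress(stored)
    return zlib.compress(original, level=9)

data = b"storage array block " * 512
first = inline_compress(data)
second = opportunistic_recompress(first)
assert zlib.decompress(second) == data
# For compressible data, the deeper second pass typically
# produces a smaller stored block than the inline pass.
```

The trade-off mirrors the one described above: the inline pass keeps write latency low, while the deeper pass trades idle CPU cycles for better space savings.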
With the new DirectCompress Accelerator Card, the initial compression stage is offloaded to an FPGA-based add-in card (AIC), an example of which is shown in figure 1 (Pure declined to say which vendor's FPGA card is being used). This enables the first-pass compression to be more efficient than the CPU-based approach without compromising performance or latency. In systems with the new DirectCompress card, all compression will be offloaded to the FPGA, freeing the CPU to perform other tasks.
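The general offload pattern is easy to sketch. The snippet below is a toy model only: a background worker stands in for the hypothetical accelerator, and the point is simply that the host CPU returns to other work while compression completes elsewhere:

```python
from concurrent.futures import Future, ThreadPoolExecutor
import zlib

# Hypothetical stand-in for an FPGA offload engine: work submitted
# here runs off the host's critical path.
accelerator = ThreadPoolExecutor(max_workers=1)

def write_block(block: bytes) -> Future:
    """Submit compression to the 'accelerator' and return immediately,
    mimicking how an AIC removes compression from the host CPU."""
    return accelerator.submit(zlib.compress, block, 1)

payload = b"incoming write " * 256
future = write_block(payload)
# ... host CPU is free to service other I/O here ...
compressed = future.result()  # completion the card would signal via interrupt
```

In a real array, submission and completion would go over PCIe with DMA rather than a thread pool, but the resource argument is the same: compression cycles move off the controller CPUs.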
Why has Pure Storage chosen this point in the lifecycle of FlashArray to deploy an AIC? First, this move highlights a trend occurring elsewhere in the industry. As we discuss in our latest report looking at SmartNICs, DPUs and computational storage, offloading data-intensive tasks to a dedicated hardware component results in improved performance, potentially better use of resources (both CPU and host memory), and, in certain circumstances, can optimise licence costs.
Intelligent Data Devices 2023 Edition – A Pathfinder Report
This Architecting IT report looks at the developing market of SmartNICs, DPUs and computational storage devices, as data centres disaggregate data management processes, security and networking. Premium download – $295.00 (BRKWP0303-2023)
The idea of compression offload is not a new one. IBM embeds compression into its FlashCore Modules (see these two podcasts, where the implementation is explained in more detail). NetApp included a Pensando DSC in the all-flash AFF A400 system for compression offload. The original SimpliVity (HCI) implementation included a dedicated card for data optimisation, while the HPE 3PAR/Primera/Alletra family has used a custom ASIC for over 20 years.
New solutions like ScaleFlux, Pliops and GRAID Technology all use custom acceleration functionality. So, we shouldn’t be surprised that Pure Storage has followed this trend and the evolution of the storage appliance market.
The Architect’s View®
The most obvious question, of course, is why now? Recent announcements have demonstrated that Pure Storage intends to scale systems to much higher capacities; see our recent articles on the announcements of FlashBlade//S, FlashBlade//E, FlashArray//C and FlashArray//XL. As systems grow in capacity, the compression overhead on the CPU will be much higher. That overhead is now better served by dedicated hardware.
There’s also the question of how Pure Storage will deliver 300TB DFMs in the future. Part of the design must include higher-density DFMs using more NAND chips. In the FlashBlade design, EX chassis DFMs won’t have the same amount of general-purpose processing available. So perhaps the move to use FPGAs now is a precursor to using them more extensively (e.g. for compression) on DFMs in the future, effectively turning them into computational storage devices.
Whatever the plans are, we’re going to see more use of SmartNICs and associated devices to offload processing. Check out our eBook for more details.
Copyright (c) 2007-2023 – Post #bbc3 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission. Pure Storage is a Tracked Vendor by Architecting IT in storage systems and software-defined storage.