Pure Storage Cloud Block Store Goes GA

Chris Evans

Update: 18 November 2019: Pure Storage has announced that Cloud Block Store running on Microsoft Azure is now in Technical Preview, including the ability to (asynchronously) replicate data between CBS in AWS and CBS in Azure. More details here – link.

Pure Storage has announced the general availability of Cloud Block Store (CBS), a cloud-native solution for implementing independent block storage in the public cloud.  Why use a separate product rather than the built-in block storage solutions already available in the cloud, and how “cloud-native” can a third-party solution really be?

Block Storage

Cloud block storage, such as AWS EBS (Elastic Block Store), is the foundation of compute and many other products in the public cloud.  Block volumes act as boot drives for virtual instances and as data volumes delivering low latency and high performance to applications.  They do, however, come with some drawbacks:

  • Internal only – most (if not all) implementations are internal to the public cloud and not exposed to the outside world.  This makes it tricky to move data directly to/from instances.
  • Instance-based – Block volumes are only consumable by virtual instances.  This in itself isn’t a problem, but typically, volumes can’t be shared between instances and are mapped in a 1:1 configuration.  This makes them unusable for clustered or other shared requirements. 
  • Resiliency & Availability variability – the levels of resiliency and availability vary significantly between vendors.  Azure quotes a 0% annualised failure rate (AFR), whereas AWS claims a 0.1%–0.2% failure rate.  It’s interesting to note that, by comparison, a standard unprotected hard drive has an AFR of around 0.35% (0.55% for SSD).
  • Performance can be tied to instance size – In GCP, for example, the IOPS performance of a volume is dependent on the number of vCPUs assigned to the virtual instance.  The alternative is to look at storage-intensive instances, which have a different cost profile.
  • SSD performance is traded for persistence – some instances offer high performance locally connected SSD, but that data is lost if the instance is deleted or fails.
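To put the AFR figures above into perspective, here is a quick back-of-envelope calculation of expected annual volume failures across a hypothetical fleet.  The fleet size is an assumption for illustration, and real-world failure rates will vary from the vendor-quoted numbers:

```python
# Back-of-envelope: expected annual failures for a fleet of volumes,
# using the AFR figures quoted above (illustrative only).
def expected_failures(volume_count: int, afr_percent: float) -> float:
    """Expected number of volume failures per year for a given AFR."""
    return volume_count * afr_percent / 100.0

fleet = 1000  # hypothetical fleet of 1,000 volumes
for label, afr in [("Azure (0%)", 0.0),
                   ("AWS EBS (0.2%)", 0.2),
                   ("Unprotected HDD (0.35%)", 0.35)]:
    print(f"{label}: ~{expected_failures(fleet, afr):.1f} failures/year")
```

Even at these small percentages, a large fleet sees a handful of failures a year, which is why the variability between providers matters.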

With so much variability, it would be good to have consistency in how block storage is delivered.  In hybrid environments, there’s simply no way to move block storage on/off-premises without going through an intermediary such as snapshots on object storage.  This process isn’t particularly dynamic in nature. 

Block vs File

With so many drawbacks, why use block at all?  Obviously, we need it for virtual instances.  In general, block is still faster than file and definitely faster than any object store.  Databases (and other similar applications) see performance directly affected by the latency of individual block I/O.  So, block-based storage in the public cloud is here to stay, at least for use with traditional applications and virtual instances.

Third-Party Block Solutions

There are multiple ways in which block storage could be implemented as a 3rd party solution in the public cloud.  The most obvious is simply to run a software-defined storage solution within a virtual instance.  While this would undoubtedly work, this kind of implementation runs into problems with resiliency and availability.  At best, the solution would be as reliable as a single virtual instance.  Resiliency could be gained by running a second instance and replicating data between the two.  However, as we’ve seen, instances can’t share block storage, so it would be hard to implement a configuration where each instance was a storage controller and they shared back-end storage.  Instead, the configuration would look more like two storage appliances replicating between each other.

Management

The next issue to look at in this kind of configuration is how the solution would be managed.  In a simple SDS implementation, adding new storage means mapping LUNs to a virtual instance and then incorporating those into the solution as usable storage.  This is perfectly possible to achieve but would also need integration into the cloud provider APIs to add and remove devices in a timely fashion.
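The shape of that automation can be sketched as a simple capacity-management loop: watch pool utilisation and request a new cloud device when a threshold is crossed.  The threshold policy and increment size below are assumptions, and the cloud API call itself is stubbed out – in practice it would be an EBS CreateVolume/AttachVolume call (or the GCP/Azure equivalent) followed by adding the device to the storage pool:

```python
# Sketch of the capacity-management loop an SDS product in the cloud
# would need.  The actual cloud provider API call is stubbed out.
EXPAND_THRESHOLD = 0.8  # expand when the pool is 80% full (assumed policy)

def needs_expansion(used_gb: float, total_gb: float,
                    threshold: float = EXPAND_THRESHOLD) -> bool:
    """True when pool utilisation crosses the expansion threshold."""
    return total_gb > 0 and (used_gb / total_gb) >= threshold

def expand_pool(total_gb: float, increment_gb: float = 500) -> float:
    """Stub: provision and attach a new device, return the new pool size."""
    # cloud_api.create_volume(size=increment_gb, ...)  # stubbed out
    return total_gb + increment_gb

pool_total, pool_used = 1000.0, 850.0
if needs_expansion(pool_used, pool_total):
    pool_total = expand_pool(pool_total)
print(f"pool size now {pool_total:.0f} GB")
```

The hard part isn’t the loop itself but doing this quickly and reliably against the provider’s APIs, which is exactly the integration work a third-party solution has to take on.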

Cloud Block Store

CBS Architecture (Image courtesy of Pure Storage)

What exactly has Pure Storage implemented with Cloud Block Store?  To answer that question, it’s worth looking back at the original FlashArray platform.  In that architecture, each controller was stateless.  SSDs mounted in disk shelves were supplemented with NVRAM to ensure write I/O was safely recorded on persistent media.  With FlashArray//X the architecture is the same; however, the NVRAM modules are now part of the main controller chassis. 

This architecture is replicated in the design of Cloud Block Store (CBS).  Multiple EC2 instances represent virtual drives, each providing storage and a portion of NVRAM.  These instances use EBS io1 volumes (for NVRAM) and local instance storage for data.  All of this storage is additionally protected through a copy stored in Amazon S3.

Should a single instance fail, then just like a failing SSD, the instance can be replaced, and the lost data recreated.  Another two EC2 instances act as virtual controllers and expose iSCSI LUNs to AWS applications.
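The recovery path described above can be illustrated with a toy model: data segments live on virtual-drive instances, with an additional copy in S3; when a drive instance fails, a replacement is spun up and its segments recreated from the S3 copy.  This is purely illustrative – it is not Pure’s actual data layout or rebuild algorithm:

```python
# Toy model of the CBS recovery path: segments striped across virtual
# drive instances, with a full additional copy held in S3.
DRIVES = 7  # the default CBS configuration uses seven virtual drives

def place_segments(num_segments: int, drives: int = DRIVES) -> dict:
    """Round-robin data segments across virtual drive instances."""
    layout = {d: [] for d in range(drives)}
    for seg in range(num_segments):
        layout[seg % drives].append(seg)
    return layout

layout = place_segments(70)
s3_copy = {seg for segs in layout.values() for seg in segs}  # copy in S3

lost = layout.pop(3)                                 # drive instance 3 fails
layout[3] = [seg for seg in lost if seg in s3_copy]  # replacement rebuilt from S3
print(sum(len(v) for v in layout.values()), "segments intact")
```

The key point is that, as with a failed SSD in a physical FlashArray, the loss of one virtual drive is routine and recoverable rather than fatal.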

Scalability & Availability

The design of CBS delivers a storage solution that is more resilient than native EBS, with the capability to scale out through the addition of extra virtual drive instances (or by expanding the size of each virtual drive).  It’s clear that CBS is built with enterprise-class deployments in mind.  A default configuration uses two AWS c5n.9xlarge instances and seven i3.2xlarge instances with seven 18TB EBS io1 volumes. 

Pricing

The estimated charge for this configuration is just over $7,900 per month.  This cost does not include software licence charges, which start at $2,000/month for 10 TB of capacity.  Full details can be found in the AWS Marketplace.

Customers can also choose the Evergreen (ES2) option and use some licence capability on-premises and some for a CBS deployment.  In this scenario, the customer is still paying for the virtual hardware in AWS, so I’m not sure how popular this option will be. 
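Putting the two figures above together gives a rough monthly cost model.  This assumes the licence scales linearly in 10 TB increments from the $2,000/month starting price, which may not match Pure’s actual tiering:

```python
import math

# Rough CBS monthly cost model using the figures quoted above:
# ~$7,900/month for the default infrastructure, plus licensing starting
# at $2,000/month per 10 TB (linear scaling is an assumption).
INFRA_MONTHLY = 7900       # default AWS infrastructure estimate ($/month)
LICENCE_PER_10TB = 2000    # starting licence price ($/month per 10 TB)

def cbs_monthly_cost(capacity_tb: float) -> float:
    """Estimated total monthly cost for a given licensed capacity."""
    licence_units = math.ceil(capacity_tb / 10)
    return INFRA_MONTHLY + licence_units * LICENCE_PER_10TB

for tb in (10, 50, 100):
    print(f"{tb} TB: ${cbs_monthly_cost(tb):,.0f}/month")
```

Even at the entry point, the fixed infrastructure cost dominates, which reinforces that CBS is priced for enterprise-scale deployments rather than casual use.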

Use Cases

At first, it might seem counter-intuitive to implement an additional layer of storage into the design of public cloud applications.  However, there are some advantages to using CBS and these pretty much all revolve around hybrid scenarios.  Pure suggests the following configuration use cases:

  • Using CBS as a disaster recovery target, with replication from on-premises FlashArray
  • Lift & Shift migration of applications to public cloud – using CBS as a data mover
  • Using CBS as the basis of test/dev environments with on-premises to CBS replication used to seed these environments.
  • High availability implementations across multiple availability zones (increasing resiliency above EBS capabilities). 

The benefit of using FlashArray on-premises and replicating to CBS in the public cloud is the ability to use the same standard API/CLI commands in both locations.  CBS can also be monitored by Pure1.

VMware

CBS can be used to support VMware migrations, where VMFS-based LUNs are defined as vVols.  Using CBS with VMware Cloud on AWS (VMC) is a little more complex, as VMC doesn’t offer native support for non-vSAN storage.  Therefore, CBS volumes have to be presented to guest operating systems using iSCSI across the guest network (which is probably a less-than-ideal solution and only good for migration). 

Competition

Pure Storage isn’t the first company to migrate its storage solution into the public cloud.  We have also seen vendors implementing co-location solutions close to the public cloud vendors that provide physical storage from dedicated or shared platforms.  However, Pure has arguably created a more resilient solution than those offered by other cloud-based vendors (at the cost of multiple instances to run it).

Having said that, how CBS is implemented is probably less interesting than what it offers.  Firstly, the resiliency, deduplication and compression gains could make an entire solution built in AWS cheaper than using local EBS volumes.  I’m working on how to demonstrate that premise.  In the meantime, you could look at the CBS TCO Calculator on the Pure Storage website for examples.
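The premise rests on comparing cost per logical (post-reduction) terabyte.  The following sketch makes the arithmetic explicit; the EBS price and the 4:1 data-reduction ratio are assumptions for the sake of the example, and whether CBS actually works out cheaper depends on scale and the reduction ratio achieved in practice:

```python
# Illustration of the data-reduction premise: compare cost per logical
# TB-month.  The EBS price and reduction ratio below are assumptions.
EBS_PRICE_PER_GB_MONTH = 0.10  # assumed general-purpose EBS price ($/GB-month)

def effective_cost_per_tb(monthly_cost: float, physical_tb: float,
                          reduction_ratio: float) -> float:
    """Monthly cost per logical TB after data reduction."""
    return monthly_cost / (physical_tb * reduction_ratio)

# Hypothetical CBS entry configuration: $9,900/month, 10 TB at 4:1 reduction
cbs = effective_cost_per_tb(9900, 10, 4.0)
# The same logical capacity held raw on EBS (no data reduction)
ebs = EBS_PRICE_PER_GB_MONTH * 1024  # $/TB-month on raw EBS
print(f"CBS: ${cbs:,.2f}/TB-month vs EBS: ${ebs:,.2f}/TB-month")
```

With these particular numbers the entry configuration doesn’t win on raw $/TB, which suggests the economics only tilt towards CBS at larger capacities (where the fixed infrastructure cost amortises) or with higher reduction ratios.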

The Architect’s View

We also have to consider the operational efficiencies the solution introduces.  CBS provides a much easier way to get data into and out of the public cloud, which means being able to gain from the benefits of hybrid (on/off-premises) configurations.  If and when CBS is ported to other clouds, this could mean multi-cloud too.

I expect that CBS will be of most use to larger enterprises that are looking to build out containerised solutions which can take advantage of persistent block storage and want data portability between regions and on/off-premises.  That’s not to discount the other use cases, but CBS needs to solve problems that can’t be easily achieved through other methods.  The buy-in is significant and only for the most committed of cloud-focused enterprises. 


Copyright (c) 2019 Brookend Ltd. No reproduction in part or whole without permission. Post #64fe.

Disclaimer: Pure Storage paid travel and accommodation for Chris to attend Pure Accelerate 2019. There is no requirement to blog or produce content from the event. Pure has no editorial direction over any content produced.