StorPool Introduces NVMe/TCP and Public Cloud Support with Release 20

Chris Evans

StorPool recently announced Release 20 of the company’s flagship distributed storage platform.  The new software includes support for NVMe/TCP, deployment on AWS, NFS storage and many additional performance and operability improvements.  We look at the big-ticket items to see why these new features are essential in a hybrid storage world.

Background

StorPool is a scale-out, software-defined storage solution for block protocols.  Customers deploy the software either as a dedicated cluster of storage servers or as an HCI configuration.  In the former case, external hosts connect to the cluster using the StorPool client.  We first discussed StorPool in this post from 2019, following a presentation at Storage Field Day 18.

Rather than go over the details of the implementation and technology in this blog post, we suggest you read our series of posts reviewing StorPool as listed here.  In this series, we took the solution for a test drive, ran a performance review, and looked at common implementation strategies.

These blog posts also contain links to video content that demonstrates the StorPool features.

Cloud

StorPool was born during the emergence of private and public clouds.  MSPs and enterprises alike can use StorPool to build out a dedicated computing cloud with solutions such as OpenNebula and CloudStack.  It’s also possible to use StorPool with traditional platforms such as VMware vSphere, and the software supports Kubernetes (as highlighted in our series above).  One of the strengths of the platform is the ability to automate the creation of storage resources and take the human out of the process.  At scale, the StorPool software delivers a highly available distributed storage layer that can implement multi-tenancy through quality-of-service (QoS) features.
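
For readers who want a feel for what taking the human out of the process looks like, here’s a minimal Python sketch that provisions a volume with a QoS cap through a management API.  To be clear, the endpoint, payload fields and function below are hypothetical placeholders for illustration, not StorPool’s actual API.

```python
import requests  # pip install requests

# Hypothetical management endpoint and token; StorPool's real API differs.
API_URL = "https://storage-mgmt.example.com/api/v1/volumes"
API_TOKEN = "REPLACE_ME"

def create_volume(name: str, size_gb: int, iops_limit: int) -> dict:
    """Provision a volume with a QoS (IOPS) cap via a REST call.

    All field names below are illustrative only.
    """
    payload = {
        "name": name,
        "sizeGB": size_gb,
        "qos": {"maxIOPS": iops_limit},  # per-tenant QoS cap
    }
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    vol = create_volume("tenant-a-db01", size_gb=100, iops_limit=5000)
    print(vol)
```

The point is less the specific calls and more that a cloud management platform can drive the whole lifecycle this way, with QoS limits enforcing multi-tenancy per volume.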

New Features – NVMe/TCP

Anyone building a storage solution must consider the addressable market from a technology perspective.  In connectivity terms, Linux clients connect to the StorPool infrastructure through the StorPool client.  Other platforms, such as Windows and VMware vSphere, must use iSCSI.

The release of vSphere 7 Update 3 introduced support for NVMe/TCP.  This protocol delivers the performance improvements of NVMe without needing specialised networking hardware (as Fibre Channel does, for example).  We first discussed NVMe/TCP in a podcast recorded at DTW in 2019 with the folks over at Lightbits Labs.  You can read more on our thoughts about NVMe over Fabrics in this post.

NVMe/TCP brings the performance and low-latency benefits of NVMe over Fabrics (NVMe-oF) without the requirement for dedicated HBAs.  This makes the technology particularly interesting for cloud deployments, where Fibre Channel has little or no presence.
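
For the curious, here’s a minimal sketch of what attaching an NVMe/TCP namespace looks like from a Linux host, using the standard nvme-cli tool over plain Ethernet.  The target address, port and NQN are placeholders; substitute the values advertised by your storage target.

```python
import subprocess

# Placeholder target details; substitute your storage target's values.
TARGET_ADDR = "192.0.2.10"   # storage node IP (documentation range)
TARGET_PORT = "4420"         # default NVMe/TCP service port
TARGET_NQN = "nqn.2014-08.org.example:storage:vol1"  # placeholder NQN

def nvme_tcp_connect() -> None:
    """Discover and connect to an NVMe/TCP target with nvme-cli.

    Requires the nvme-cli package and the nvme_tcp kernel module;
    no dedicated HBA is needed, just an ordinary NIC.
    """
    # List subsystems advertised by the target's discovery controller.
    subprocess.run(
        ["nvme", "discover", "-t", "tcp",
         "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )
    # Attach the namespace; it appears as /dev/nvmeXnY on success.
    subprocess.run(
        ["nvme", "connect", "-t", "tcp",
         "-a", TARGET_ADDR, "-s", TARGET_PORT, "-n", TARGET_NQN],
        check=True,
    )

if __name__ == "__main__":
    nvme_tcp_connect()
```
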

For StorPool, NVMe/TCP support increases the addressable market for non-native StorPool clients, such as VMware vSphere.  Remember that the StorPool solution can be used across heterogeneous application platforms, so having a single networking layer that carries iSCSI, NVMe/TCP and StorPool client traffic is a strong efficiency play to accompany the performance improvements NVMe/TCP introduces.

Hybrid Cloud

There’s no doubt that the hybrid cloud model is here, and here to stay.  At Cloud Field Day 14, NetApp presented survey results showing that 77% of customers intended to operate a hybrid cloud model for the foreseeable future, while 93% of those surveyed were already using multiple clouds in conjunction with on-premises infrastructure.

If you follow the adage of “own the data, rent the cloud”, then building a consistent data experience for developers and end users, whether on-premises or in the cloud, becomes critically important.  We’ve already seen many vendors port software to run on public cloud infrastructure.  This is much easier today than ever before because all the major cloud service providers offer NVMe-enabled virtual instances or bare-metal hardware. 

For example, AWS i3en instances (introduced in 2019) deliver per-drive performance of 650MB/s and 85,000 IOPS from a single 2.5TB NVMe drive.  The successor i4i instances with Nitro NVMe SSDs deliver 700MB/s throughput from a single 937GB drive.  Both instance families offer at least 10Gb/s of network bandwidth, rising to 25Gb/s and above on larger sizes, making them suitable for building storage clusters.
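
Some back-of-the-envelope arithmetic, using the per-drive figures quoted above, shows how these instances might aggregate into a cluster.  The node count, drives per node and replica count below are assumptions for illustration, not a StorPool reference configuration.

```python
# Rough sizing sketch using the per-drive i3en figures quoted above.
# The cluster shape (nodes, drives per node, replicas) is hypothetical.
DRIVE_MBPS = 650        # per-drive throughput (MB/s)
DRIVE_IOPS = 85_000     # per-drive IOPS
NODES = 3               # assumed cluster size
DRIVES_PER_NODE = 4     # assumed drives per node
REPLICAS = 3            # assumed copies of each write

raw_mbps = NODES * DRIVES_PER_NODE * DRIVE_MBPS
raw_iops = NODES * DRIVES_PER_NODE * DRIVE_IOPS

# Every client write is multiplied by the replica count, so usable
# write bandwidth is roughly the raw figure divided by REPLICAS.
usable_write_mbps = raw_mbps / REPLICAS
usable_write_iops = raw_iops / REPLICAS

print(f"raw:    {raw_mbps:,} MB/s, {raw_iops:,} IOPS")
print(f"writes: {usable_write_mbps:,.0f} MB/s, {usable_write_iops:,.0f} IOPS")
```
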

Note: we have some research publishing soon that will look at the relative performance of the different AWS storage options, including EBS and NVMe SSD disks.

So, it’s perfectly possible to build a storage or HCI cluster within AWS and other public clouds.  But why do it?  As we’ve already said, having consistent infrastructure between clouds means a consistent experience, but this can also be achieved (with some effort) through scripting and APIs.  As we discussed over four years ago, the greatest benefit of a consistent data layer is data mobility.

The mobility aspect is becoming even more critical, considering the results from the NetApp survey discussed earlier.  Block storage connected to public cloud virtual instances generally isn’t exposed to the outside world, so moving data between clouds isn’t easy.  Cloud vendors charge egress fees, so making the cloud-to-cloud replication process efficient is a must. 
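
A quick sketch shows why replication efficiency matters on the egress bill.  The per-GB rate, daily change rate and reduction ratio below are illustrative assumptions; actual pricing varies by provider, region and tier.

```python
# Illustrative egress-cost arithmetic; the rate below is a placeholder,
# not any provider's actual price.
EGRESS_USD_PER_GB = 0.09   # assumed internet egress rate
DAILY_CHANGE_GB = 500      # assumed daily changed data to replicate
REDUCTION_RATIO = 0.4      # assumed savings from thin/dedupe/compress

naive_monthly = DAILY_CHANGE_GB * 30 * EGRESS_USD_PER_GB
efficient_monthly = naive_monthly * (1 - REDUCTION_RATIO)

print(f"naive replication:     ${naive_monthly:,.2f}/month")
print(f"efficient replication: ${efficient_monthly:,.2f}/month")
```
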

It’s a logical evolution for StorPool to support the public cloud, but don’t underestimate the effort required to make this work.  StorPool will continue to support public cloud clusters in the same way on-premises clusters are supported today.  For the version-one implementation, we see this solution being used by established cloud customers that need long-term storage, rather than those performing frequent, dynamic builds and teardowns.

NFS

The support of NFS is an interesting addition to the StorPool platform.  From what we understand, the implementation is an NFS gateway: a virtual machine deployed on each StorPool host runs an NFS server under KVM, backed by StorPool LUNs.  The initial implementation is suited to throughput-intensive workloads (video streaming, rendering) rather than IOPS-focused applications.
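
As a rough analogue of how a block-backed NFS gateway works (not StorPool’s actual implementation, which we haven’t examined), the sketch below formats a block device and exports it over NFS on a generic Linux host.  The device path and client subnet are placeholders.

```python
import os
import subprocess

# Placeholder device and export path; on a real system the backing
# block device would be the LUN presented by the storage layer.
BLOCK_DEVICE = "/dev/sdb"
MOUNT_POINT = "/export/media"
EXPORT_CLIENTS = "192.0.2.0/24"  # assumed client subnet

def build_nfs_gateway() -> None:
    """Turn a block device into an NFS export (generic Linux sketch).

    Requires root and a running NFS server (nfs-kernel-server).
    """
    # Filesystem suited to large sequential files (streaming, rendering).
    subprocess.run(["mkfs.xfs", "-f", BLOCK_DEVICE], check=True)
    os.makedirs(MOUNT_POINT, exist_ok=True)
    subprocess.run(["mount", BLOCK_DEVICE, MOUNT_POINT], check=True)
    # Append the export and publish it to NFS clients.
    with open("/etc/exports", "a") as exports:
        exports.write(f"{MOUNT_POINT} {EXPORT_CLIENTS}(rw,sync,no_subtree_check)\n")
    subprocess.run(["exportfs", "-ra"], check=True)

if __name__ == "__main__":
    build_nfs_gateway()
```
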

Many existing StorPool customers will probably have an NFS solution in place.  However, the option to use the StorPool platform for a subset of NFS data (particularly streamed content or even backups) will prove useful.  This is another example of expanding the platform’s total addressable market (TAM).

The Architect’s View®

It’s good to see some valuable enhancements to the StorPool solution, particularly with the capability to span on-premises and the public cloud.  There are very few solutions today that replicate block storage to and from public cloud instances, but this requirement will become increasingly important.

We would like to see some improvement in data efficiency in StorPool deployments, perhaps with the implementation of some form of RAID-like or erasure-coding protection.  Additionally, if StorPool intends to make a greater push into the public cloud, self-service build automation and broader cloud support will be needed (including cross-cloud replication).

As a relatively small company, StorPool continues to grow in capability.  The enterprise storage market is a tough one in which keeping up without constant innovation is a challenge.  Innovation is the key, and a continuous stream of new features and functionality is essential.

Copyright (c) 2007-2022 – Post #5195 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission. StorPool is a Tracked Vendor by Architecting IT in software-defined storage. StorPool has been a customer of Brookend Limited prior to 2022.