On-premises infrastructure vendors are in a race to convert their product offerings into services. Across the industry, “as-a-service” is seen as a way to remain relevant in a world that is quickly migrating to the public cloud. However, there’s more to offerings like Storage-as-a-Service (StaaS) than wrapping existing products with a new API. We recently recorded a video with Prakash Darji, GM of the Digital Experience business unit at Pure Storage, in which we talk about how product development and architecture need to change to meet the requirements of the “as-a-service” model.
The service-based consumption model for on-premises technology has been an evolving story for many years. We wrote about the issue over eight years ago, in February 2014, with one of the first discussions about Pure Storage and Forever Flash (the precursor to today’s Evergreen programme). We also discussed the transition from capex to opex in this post from 2018.
Hardware vendors want to transition to service-based models because they offer greater revenue predictability. The customer pays monthly and rarely shrinks capacity. With regular in-place technology refreshes, the forklift upgrade cycle is broken, and customers are less inclined to swap vendors. Investors love recurring revenue for the same reason: predictability.
For the customer, if the implementation is done right, the overhead of design, capacity planning, maintenance, management, upgrades, patching, fault diagnosis, and much more can now all be offloaded to the vendor.
The service model should be a win-win for everyone, but there are some caveats, which we discuss in this post from 2019.
Simply wrapping existing products with a new API or marketing campaign is not a solution for efficiently delivering infrastructure in a service model. In our video with Prakash Darji, we use the pizza-as-a-service analogy shown in figure 1. Our choices for pizza consumption range from entirely “do it yourself” at home through to dining out at a pizza restaurant. Between the two, we can choose to buy a pizza to take home and cook, or to have pizza delivered to our door.
At each stage in the continuum of pizza consumption, there are variations in choice and expertise. For example, at home, we can pick the most outlandish toppings possible and let creativity run riot, whereas restaurants tend to keep the options more controlled. At home, we may not have a pizza oven and must compromise with a conventional cooking process. Even if we do have dedicated equipment, do we cook pizza often enough not to over- or undercook it? In comparison, pizza restaurants cook hundreds every day and have the process highly refined.
Look at figure 2, and you’ll see how we applied the pizza analogy to storage-as-a-service. We’ve extended the choices slightly, encompassing everything from self-managed infrastructure to the public cloud. Through each transition, the vendor takes more control (with some optional components), with the public cloud representing the peak of delivery, where the customer simply asks for storage that meets a set of abstracted service levels.
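To make the idea of abstracted service levels concrete, here is a minimal sketch, with an invented tier catalogue and request model (none of these names come from a real Pure Storage API): the customer specifies only capacity and a service tier, and the provider maps that request to the concrete targets it commits to.

```python
from dataclasses import dataclass

# Hypothetical service-tier catalogue: the customer never sees hardware,
# only abstract service levels the provider translates into commitments.
TIERS = {
    "capacity":    {"max_latency_ms": 20.0, "availability": 99.9},
    "performance": {"max_latency_ms": 1.0,  "availability": 99.99},
}

@dataclass
class StorageRequest:
    capacity_tib: int
    tier: str

def provision(request: StorageRequest) -> dict:
    """Translate an abstract request into the service levels committed to."""
    if request.tier not in TIERS:
        raise ValueError(f"unknown tier: {request.tier}")
    return {"capacity_tib": request.capacity_tib, **TIERS[request.tier]}

# The customer asks only for "10 TiB of performance-tier storage";
# which arrays, controllers or media deliver it is the vendor's problem.
commitment = provision(StorageRequest(capacity_tib=10, tier="performance"))
```

The point of the sketch is the asymmetry: everything below the `provision` boundary is invisible to the consumer, which is precisely what the right-hand end of the continuum promises.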
Picking the right place on this continuum is a matter of choice, constraints, and requirements. For example, some businesses want or are required to own the infrastructure. Some are happy to maintain management control but have a more flexible consumption process. Others want to divest all management tasks entirely.
As the vendor takes responsibility for the boxes in green, customers must understand what that delegation entails.
At the foundation of all discussions is architectural design. Without the right architecture, no hardware solution can be efficiently offered in an “as a service” model. The reasons for this are self-evident. The vendor is wholly responsible for the impact of poor design, where inefficient resource utilisation, unnecessary data migrations and a lack of visibility will drive up costs that either reduce the vendor’s margin or are pushed onto the customer. In some cases, these impacts are likely to be unacceptable to the customer if they introduce additional risk or result in business interruption.
Design encompasses standardisation, modular replacement, and built-in telemetry. Scaling up and down should be transparent and enable rebalancing across larger deployments.
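As one illustration of what transparent rebalancing involves, the following sketch evens out used capacity across a fleet of arrays. It is purely illustrative, with invented names: a real service would migrate data incrementally and respect locality, performance and fault-domain constraints.

```python
def rebalance(arrays: dict[str, float]) -> dict[str, float]:
    """Evenly redistribute used capacity (in TiB) across a fleet of arrays.

    Illustrative only: computes the per-array target that a background
    migration process would then work towards, invisibly to the customer.
    """
    total = sum(arrays.values())
    target = total / len(arrays)
    return {name: target for name in arrays}

# A new, empty array joins the fleet; the service levels the load so no
# single array becomes a hotspot as the deployment scales.
fleet = {"array-a": 90.0, "array-b": 60.0, "array-c": 0.0}
balanced = rebalance(fleet)  # each array targets 50.0 TiB
```

The vendor, not the customer, owns this kind of housekeeping in a service model, which is why the underlying architecture must make it cheap to do.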
Key to the success of service models is the collection of telemetry data, including infrastructure status at the hardware level and utilisation information. Of course, we’ve had telemetry data collection for decades, but modern systems have to work at a much more granular level and in as near real-time as possible. The most successful telemetry models are SaaS-based, with online portals and the ability to learn from the wisdom of the crowd.
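A fine-grained telemetry sample for such a model might look something like the following sketch. The field names are invented for illustration; real platforms define their own schemas and transport.

```python
import json
import time

def collect_sample(array_id: str, volume_id: str, used_bytes: int,
                   read_iops: int, write_iops: int, latency_us: int) -> str:
    """Package one per-volume utilisation sample as a JSON record,
    ready to ship to a SaaS telemetry portal."""
    sample = {
        "timestamp": time.time(),   # near real-time: sampled every few seconds
        "array_id": array_id,
        "volume_id": volume_id,
        "used_bytes": used_bytes,
        "read_iops": read_iops,
        "write_iops": write_iops,
        "latency_us": latency_us,
    }
    return json.dumps(sample)

# One sample per volume, per interval, across the whole installed base is
# what enables fleet-wide "wisdom of the crowd" analytics in the portal.
record = collect_sample("array-a", "vol-01", used_bytes=2**40,
                        read_iops=12000, write_iops=3000, latency_us=450)
```

Granularity is the point: per-volume, per-interval samples aggregated centrally are what let the vendor spot capacity trends and anomalies before the customer does.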
As we show in figure 3, the three aspects of hardware, software and financial models come together to deliver storage as a service in full. Without flexible hardware, the financial models don’t work. Without embedded software features and telemetry, offloading hardware management doesn’t work. The three are intrinsically linked.
The Architect’s View®
We’ve been watching Pure Storage for years, pre-dating even our 2014 blog post. We believe the company leads the storage-as-a-service market, with the broadest combination of hardware innovation, software features and financial models. The key to success is in the details, so we recommend listening to Prakash to gain much more insight into what transitioning to a service model really means.
Copyright (c) 2007-2022 Brookend Limited. No reproduction without permission in part or whole. Post #98d8. The video linked in this blog was sponsored by Pure Storage, however this blog post is not sponsored and has not been reviewed by Pure Storage prior to publication.