In previous posts, I’ve outlined a framework for the transition to hybrid cloud. In this post, we’ll look at NetApp Keystone and apply the same framework to show how an evolving purchasing model is essential to delivering a consistent hybrid cloud experience.
NetApp announced the Keystone purchasing model at Insight in 2019. Customers can now choose to buy NetApp storage products or pay for them on a consumption basis. This model spans both public clouds and private data centres.
- NetApp Fabric Orchestrator and Hybrid Cloud
- NetApp NKS and Hybrid Cloud
- Building a Model of Hybrid Cloud
Flexibility in consumption aims to remove some of the challenges of deploying infrastructure into private data centres. IT organisations must translate internal operational costs and consumption into capital outlays that predict demand and organic growth.
This process is no easy task, although it is one that the major public cloud service providers have been doing since their inception. Those organisations benefit from vast economies of scale and the ability to absorb any initial losses in developing their processes.
NetApp is following a path that many other companies have already started. Public cloud has demonstrated that a service-based approach to resource consumption can be highly effective for businesses that see unpredictable demand or have no desire to manage significant capital outlays.
For many infrastructure vendors, the public cloud is a platform against which they have to compete. The easiest response is to offer purchasing models that offload some of that operational-to-capital translation. The vendor accepts some of the risk in technology deployments; the customer accepts that some cost increase may be necessary as a trade-off for greater flexibility.
NetApp Keystone follows a simple three-step process. Step 1 – choose a performance tier. Step 2 – pick the service type (essentially the protocol). Step 3 – select a management option, either self-managed or NetApp managed. The possibilities, alongside the traditional “build your own” choice, are shown in figure 1.
Under Cloud Consumption Services, the first option is to store and manage data in the public cloud. This choice means using solutions such as Azure NetApp Files, which is fully integrated into the Azure platform. The next two provide the customer with a choice of who manages the infrastructure, either the customer (or a partner) or NetApp. We’ll come back to what “manage” means in this context later. Finally, there’s the option simply to follow the traditional capital acquisition process, buy the hardware and self-manage the deployment.
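The three-step choice can be sketched as a small data model. This is purely an illustration: the tier, service-type and management names below are invented placeholders, not NetApp’s actual catalogue entries.

```python
# Hypothetical sketch of the three Keystone choices as a simple data model.
# Tier, service and management names are illustrative, not NetApp's real SKUs.
from dataclasses import dataclass

PERFORMANCE_TIERS = {"extreme", "premium", "standard"}      # step 1 (assumed names)
SERVICE_TYPES = {"file", "block", "object"}                 # step 2 (the protocol)
MANAGEMENT_OPTIONS = {"self-managed", "netapp-managed"}     # step 3

@dataclass(frozen=True)
class KeystoneSubscription:
    tier: str
    service: str
    management: str

    def __post_init__(self):
        # Validate each of the three choices against the catalogue.
        if self.tier not in PERFORMANCE_TIERS:
            raise ValueError(f"unknown tier: {self.tier}")
        if self.service not in SERVICE_TYPES:
            raise ValueError(f"unknown service type: {self.service}")
        if self.management not in MANAGEMENT_OPTIONS:
            raise ValueError(f"unknown management option: {self.management}")

sub = KeystoneSubscription(tier="premium", service="file", management="netapp-managed")
```

The point of the sketch is that the purchasing decision collapses to three validated selections, rather than a bespoke hardware configuration exercise.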
How does Keystone fit into our five-step process?
The concept of abstraction may be viewed in many ways. From the perspective of Keystone, the first level of abstraction is in step 1 – choosing a performance tier. Enterprise organisations spend a considerable amount of time building internal service catalogues that map expected response times, throughput and availability to storage tiers. Keystone simplifies this process by offering the customer a choice of tiers, then putting NetApp in the position of delivering those service levels.
We already expect the same level of interaction in the public cloud, where we have no idea how vendors are delivering their service offerings. In adopting solutions like Keystone, customers will need to take a leap of faith that the vendors will choose the right technology and keep it upgraded over time.
This step continues the discussion around service catalogues but extends it from private to public cloud. Standardisation needs to follow traditional storage metrics, on the assumption that (at this time) internal enterprise customers have different expectations of on-premises and cloud services. Standardisation means:
- Aligning service catalogues and service metrics.
- Aligning costs – moving to consumption billing on/off-premises.
- Aligning management reporting – a consistent view on data growth, availability and other service management aspects.
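One way to picture the first of those points is a machine-readable service catalogue shared by on- and off-premises delivery. The tiers and metric targets below are invented for illustration, not NetApp published figures; the alignment check is the interesting part.

```python
# Illustrative service catalogue: tier names and metric targets are made up
# for the example, not NetApp published service levels.
on_prem_catalogue = {
    "premium":  {"latency_ms": 1, "throughput_mbps_per_tb": 128, "availability_pct": 99.99},
    "standard": {"latency_ms": 5, "throughput_mbps_per_tb": 64,  "availability_pct": 99.9},
}

def aligned(on_prem: dict, cloud: dict) -> bool:
    """Standardisation check: both delivery models must expose the same
    tiers, measured with the same metric definitions."""
    return on_prem.keys() == cloud.keys() and all(
        on_prem[t].keys() == cloud[t].keys() for t in on_prem
    )

# A consistent hybrid offering presents an identical catalogue off-premises.
cloud_catalogue = on_prem_catalogue
print(aligned(on_prem_catalogue, cloud_catalogue))
```

The targets themselves may differ per location; what standardisation demands is that the tiers and the metrics used to describe them are the same everywhere.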
ActiveIQ is one NetApp technology that starts to bring management reporting onto a consistent footing. Businesses need management-focused dashboards that show the status of consumed resources and costs. IT teams need dashboards that show availability, uptime and spare capacity. These metrics apply to both public and private cloud delivery models.
Automation is the ability to self-provision resources. This capability is undoubtedly a strong requirement for hybrid cloud. However, the process extends further than this. Automation also includes management of resources and the granularity at which they are provisioned and billed. These features are characteristics of the Keystone offering.
Optimisation is a challenging aspect for enterprises to deliver. We’ve already discussed the translation of operational to capital expenditure as one aspect of this transition. For businesses to optimise more effectively, transparency in resource costs and efficiency is required. At a basic level, this can simply mean pricing per tier ($/GB) but becomes increasingly complicated when data mobility comes into play.
Now we have to consider data transfer charges (ingress/egress), networking charges in implementing hybrid architectures, plus the “cost” of latency. The last point is particularly challenging. A cloud service provider may look cheap on paper, but the cost of moving applications closer to the data could exceed that of a slightly costlier storage option in the location where the applications already sit.
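A back-of-envelope calculation shows how this inversion happens. All prices here are hypothetical round numbers for illustration; real cloud and Keystone pricing should be taken from the providers’ rate cards.

```python
# Hypothetical prices for illustration only; real rates will differ.
def monthly_cost(capacity_gb: float, price_per_gb: float,
                 egress_gb: float = 0.0, egress_price_per_gb: float = 0.0) -> float:
    """Simple monthly storage bill: capacity charge plus data-egress charge."""
    return capacity_gb * price_per_gb + egress_gb * egress_price_per_gb

# 10 TB on a "cheap" remote tier, moving 5 TB out to applications each month...
remote = monthly_cost(10_000, 0.02, egress_gb=5_000, egress_price_per_gb=0.09)
# ...versus the same 10 TB on a dearer local tier with no egress charges.
local = monthly_cost(10_000, 0.05)

print(f"remote: ${remote:.2f}, local: ${local:.2f}")
# The lower $/GB option ends up more expensive once data movement is priced in.
```

And this still ignores the latency “cost”, which never appears on an invoice at all.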
How does this apply to Keystone? NetApp has started to provide some transparency in platforms such as Fabric Orchestrator, which attempts to deliver a consistent view over all NetApp storage resources. Fabric Orchestrator also puts in place the foundation to move towards step 5, further innovation.
At this high level, we can see that Keystone offers a more attractive consumption model for the enterprise. However, we need to exercise some caution and look at how enterprise storage management processes operate today. The following table shows the management responsibilities divided between the customer/partner and NetApp. Assume for a moment that NetApp manages the infrastructure and the customer sits on the consumer side. We can list some of the responsibilities as follows:
| Responsibilities | |
|---|---|
| Provisioning/Decommissioning | Capacity Management (& resolution) |
| Data Protection/Backup | Problem Management (& resolution) |
| Maintenance (code upgrades, patching) | Security Credentials Management |
Within each set of responsibilities, where will the dividing line sit? Many of these tasks have complex internal workflows. For example, change and problem management will integrate with existing ticketing systems and change management processes. Shared tasks (the last three) like capacity upgrades will still need some degree of internal sign-off to ensure budgets are in place.
Operational & Contractual
For any vendor (not just NetApp), getting the alignment of operational processes and contractual agreements right will be a critical factor in the success of these solutions. Buying from the public cloud is easy – you take what the vendor offers, or you don’t. Enterprises buying from traditional infrastructure vendors expect more of a bespoke service. So, should these companies offer more “no-frills” services as an incentive to move away from costlier bespoke solutions?
The Architect’s View
Keystone is a good step forward in evolving NetApp’s business and adapting to changing practices in IT organisations. The ultimate goal for NetApp is to be able to offer customers their products and services, irrespective of the environment into which they are deployed. The ultimate goal for customers is to reduce the friction involved in consuming those choices. This is an ongoing story that will continue to play out over the coming decade.
Copyright (c) 2007-2020 Brookend Limited. No reproduction without permission in part or whole. Post #b49b. This content was sponsored by NetApp and has been produced without any editorial restrictions.