NetApp Introduces Keystone Purchasing Model

Chris Evans | Cloud, Cloud Storage, Data Management, Enterprise, HCI, NetApp, Storage, Storage Hardware, Storage Management

The cloud model of consumption is affecting the whole of IT, changing how we buy and consume IT resources.  Traditional infrastructure vendors, including NetApp, are having to be creative in their approach to customer requirements.  One of the latest trends in the industry is to offer infrastructure as a service, based on a pay-per-use consumption model.  Can these services really work in practice?

Square Peg

The whole idea of infrastructure vendors offering their products as services seems like a contradiction in terms.  Many of the benefits of public cloud derive from the inherently multi-tenant nature of sharing resources across multiple customers.  So how on earth can that be achieved with equipment that sits in a single customer's data centre, for that customer's use alone?

Compromise

The answer to the question is pretty simple – compromise.  NetApp Keystone, in common with many similar vendor solutions, expects minimum capacity and term commitments from the customer.  NetApp puts hardware on the floor for, say, a one-year term with a minimum 100TB deployment.  In storage at least, capacities typically never reduce but grow at a steady rate, so a conservatively sized commitment is soon consumed.  This helps the customer avoid paying for committed capacity that never gets used.
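
A quick worked example shows how this plays out.  The sketch below is my own, with invented figures rather than NetApp's published pricing; minimum-commit billing typically charges for the greater of committed and consumed capacity:

    # Illustrative minimum-commit billing; the rate and figures are
    # hypothetical, not NetApp's published Keystone pricing.
    def monthly_charge(used_tb: float, committed_tb: float,
                       rate_per_tb: float) -> float:
        """Bill for the greater of consumed and committed capacity."""
        return max(used_tb, committed_tb) * rate_per_tb

    # A 100TB commitment at a notional $25/TB/month:
    print(monthly_charge(80, 100, 25.0))   # 2500.0 - the commit floor applies
    print(monthly_charge(120, 100, 25.0))  # 3000.0 - growth is billed as used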

Keystone

NetApp announced Keystone at Insight 2019.  The service is simple to understand.  Choose the required level of performance and agree to a minimum commitment (time and capacity).  NetApp (or a partner) installs and supports the equipment.  The customer can then take a simple “infrastructure as a service” offering and do the ongoing administration of the system themselves.  Alternatively, a managed service offering provides the capability to provision and operate the service through a portal, using tools like Fabric Orchestrator.
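
Conceptually, an order like this reduces to a handful of parameters.  The sketch below is purely illustrative; the field names are my own invention, not Keystone's actual ordering interface:

    # Hypothetical sketch of a Keystone-style subscription; field names
    # are invented for illustration, not NetApp's actual interface.
    from dataclasses import dataclass

    @dataclass
    class StorageSubscription:
        performance_tier: str  # assumed tier names, e.g. "standard", "premium"
        committed_tb: int      # minimum capacity commitment
        term_months: int       # minimum term commitment
        managed: bool          # False = IaaS only, True = managed service

    order = StorageSubscription(performance_tier="premium", committed_tb=100,
                                term_months=12, managed=True)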

Transformation

From NetApp’s perspective, the company’s journey to a more cloud-like consumption experience has already laid the foundation for delivering Keystone.  Azure NetApp Files (ANF) and Cloud Volumes are delivered today using NetApp ONTAP infrastructure that is abstracted behind the GUIs and APIs of the cloud service providers (AWS, Azure & GCP).

On-premises systems can arguably be operated using a similar management process.  Most customers will have multiple internal consumers that require multi-tenancy, with growth of capacity and performance managed per tenant.  This is something NetApp has already delivered through Active IQ.

True Cloud Experience

Of course, the ongoing challenge for NetApp, and for any vendor offering similar services, is getting to a genuinely cloud-like experience.  In reality, most services today are pseudo-cloud, because of the requirement to put hardware into the customer’s data centre.  Delivering “true cloud” requires some change from the vendors in terms of product development and operation.

  • Internal cloud development/deployment model.  Software components (e.g. the ONTAP O/S) need to be capable of deployment and upgrade with little or no outage.  Vendors will need to implement these solutions remotely, as ongoing site visits are impractical.
  • Product design needs incremental capacity capabilities.  Vendors need to provide features that modularise the addition of storage capacity and/or performance.  This also means removing capacity/performance if a customer scales down (see the sketch after this list).  Some vendors will have real challenges here, because their products aren’t granular enough in design.
  • Move to product commoditisation.  By this I mean moving further towards using generic servers or a more modular infrastructure to deliver services.
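
To make the second point concrete, here is a minimal sketch (my own illustration, not any vendor's implementation) of capacity that grows and shrinks only in whole node-sized modules:

    # Minimal sketch of modular capacity scaling; the 25TB module size
    # is an assumption for illustration, not a real product figure.
    import math

    NODE_TB = 25  # capacity added or removed per hardware module

    def modules_required(demand_tb: float) -> int:
        """Smallest number of whole modules that covers demand."""
        return max(1, math.ceil(demand_tb / NODE_TB))

    print(modules_required(100))  # 4 modules (100TB installed)
    print(modules_required(130))  # 6 modules (150TB installed)
    print(modules_required(60))   # 3 modules (75TB installed)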

I think the last point deserves a little more detailed discussion.

CI/HCI/Composable

Looking back over the last decade, we’ve seen the evolution of converged infrastructure (CI), hyper-converged (HCI) and composable solutions.  CI delivered more efficient packaging and little else.  HCI provided the ability to create more generic infrastructure with servers as the building block and to implement incremental growth capabilities.  Composable potentially offers the ability to build and tear down infrastructure to meet requirements in a more flexible way than either CI or HCI.

In each scenario, the building blocks are always the same – compute (with memory), networking & storage.  If we can rack and stack hardware that contains these components, the physical aspect of deployment is vastly simplified.  After that, we just need to configure the infrastructure to the customer’s requirements.

NetApp HCI

NetApp already has these building blocks in place.  The NetApp HCI platform provides compute and storage, with SolidFire delivering the storage layer and services such as Cloud Volumes running on the compute nodes and consuming SolidFire resources.  There’s no logical reason why storage couldn’t simply be delivered by a version of ONTAP Select – as long as performance could be guaranteed.

The Architect’s View

So, NetApp has the pieces of the puzzle to deliver even more flexible on-demand services than those initially offered with Keystone.  Once a customer is comfortable with a guaranteed level of performance and capacity, the hardware specifics no longer matter.  The delivery model could be dedicated FAS hardware or a software-based solution using HCI.  The same approach could also be used to deliver StorageGRID and any other infrastructure services NetApp cares to offer.

The interesting question is how far down this road NetApp should go.  Today the company is evolving from physical storage infrastructure towards being a data management company.  More content-focused solutions need to be delivered alongside the current transition to storage-as-a-service.  Perhaps this is the next stage of evolution for the company, and one we should expect to see develop from 2020 onwards.


Copyright (c) 2007-2019 Brookend Ltd, no reproduction without permission, in part or whole.  Post #e2d2.