HPE Introduces Alletra and the Data Services Cloud Console

Chris Evans | Cloud Storage, Cloud-Native, HPE

Today, HPE announced a new management platform called the Data Services Cloud Console (DSCC), with a revamp of their existing storage line-up, bringing Nimble and Primera closer together as a single product “family” under the brand name Alletra.  HPE is focused on highlighting the radical step forward these solutions represent.  So, what should we make of these announcements in light of the trend towards storage-as-a-service and previous attempts to standardise storage management?

Trends

The success of the Public Cloud has undoubtedly changed on-premises IT infrastructure forever.  Businesses and their IT organisations have firmly embraced the Cloud, adapting to operational expense models that reduce risk and offer greater flexibility.  Anyone with a credit card can spin up simple virtual instances for minutes, hours, or days.  In turn, these building blocks become platforms for more complex applications and services.  

This transformation represents a real problem for on-premises infrastructure vendors.  While cloud platforms offer granular billing with no commitment, building and operating a data centre is a multi-million-dollar investment.  Pick the wrong technology, and you’re stuck with it for three to four years while the assets depreciate and get written off. 

Storage-as-a-Service

On-premises vendors have started to adapt their sales models to encompass consumption-based purchasing.  HPE has committed to delivering its entire product portfolio as services by 2022.  We’ve spoken to Pure Storage and Hitachi Vantara about their StaaS strategies, and we’ve talked about the business challenges before.  However, vendors’ products need to be designed or adapted to work with service-based models.  These challenges include the following aspects.

Capital Outlay.  Storage vendors want to charge for consumption, but assets need to reside physically in the customer’s data centre.  This introduces a tricky balancing act for the vendor: pre-seeding a site (or multiple sites) without overcommitting capital to resources that might never be used.

Management.  The deployment and integration of hardware resources can be challenging.  Great strides have been made in the last two decades, and most storage platforms are now 19” rack-mountable systems that are relatively easy to install and pre-configure.  Most vendors offer systems that are easy to bootstrap and don’t need days or weeks of pre-planning and design.

Support.  Almost all vendors have introduced some form of cloud-based analytics platform to extract and analyse systems data.  These solutions (like HPE InfoSight) provide information to the customer, but more importantly, provide the vendor with tools that improve availability and aid with ongoing systems management.  At scale, no vendor can hope to deliver storage appliances without centralised management.

Upgrade/Refresh.  Storage consumption models are perpetual in the sense that the customer continues to pay a monthly charge for terabytes of capacity consumed.  At some point, hardware has to be refreshed.  This responsibility falls to the vendor, who has to deliver the refresh within the financial constraints of the monthly charge.  The requirement is both an operational challenge and a financial one: costs must fit within the ongoing charging model.  If the customer needs capacity increases (or even capacity redistribution across multiple sites), then the technology has to be capable of granular upscaling and downscaling.

Multi-tenancy/QoS.  This requirement also covers a degree of abstraction from the underlying hardware.  Without some mechanism for implementing consistent application response times, hardware refreshes or upgrades could negatively impact applications.  Quality of Service features allow a vendor to be more precise when adding additional capacity, choosing between increasing capacity or increasing system performance. 
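
To make the QoS point concrete, here’s a minimal sketch of one mechanism an array might use: a per-tenant token bucket that caps IOPS.  The Python below is purely illustrative (the names are invented, and no vendor’s actual implementation is implied), but it shows how each tenant can be held to a predictable I/O ceiling independently of what the underlying hardware could burst to.

    import time

    class TenantQoS:
        """Illustrative token-bucket limiter capping a tenant's IOPS.

        Tokens accrue at iops_limit per second; each I/O consumes one token.
        (Hypothetical sketch only - not any vendor's implementation.)
        """

        def __init__(self, iops_limit: int):
            self.iops_limit = iops_limit
            self.tokens = float(iops_limit)
            self.last_refill = time.monotonic()

        def allow_io(self) -> bool:
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at one second's budget.
            elapsed = now - self.last_refill
            self.tokens = min(self.iops_limit, self.tokens + elapsed * self.iops_limit)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # Throttle: the tenant has exhausted its I/O budget.

    # Two tenants sharing one array, each isolated behind its own ceiling.
    gold = TenantQoS(iops_limit=50_000)
    bronze = TenantQoS(iops_limit=5_000)

With limits like these in place, a hardware refresh or capacity upgrade changes what the array can deliver in aggregate, not what any individual application experiences.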

This is not an exhaustive list of the challenges for vendors.  The references at the end of this post provide more information and context.

HPE Data Services Cloud Console

What has HPE announced?  The new HPE Data Services Cloud Console (DSCC) provides a SaaS-based, centralised management platform, initially for HPE storage solutions.  The platform aims to provide one solution for all storage management functions – a concept HPE calls Unified DataOps.  This is achieved using “northbound” APIs for customer management, with “southbound” APIs that talk to storage appliances.  HPE logic in the middle handles the decision-making that determines how and from where storage resources should be allocated to an application.
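
As a thought experiment, the shape of that architecture might look like the Python sketch below.  None of this is the actual DSCC API – the class names and the placement policy are invented for illustration – but it shows the split the announcement describes: the northbound side captures application intent, the southbound drivers translate it into platform-specific calls, and the logic in between decides placement.

    from dataclasses import dataclass

    @dataclass
    class VolumeIntent:
        """Northbound request: what the application needs,
        not which array should provide it.  (Hypothetical sketch.)"""
        app_name: str
        capacity_gib: int
        tier: str  # e.g. "mission-critical" or "business-critical"

    class ArrayDriver:
        """Southbound adapter: translates an intent into one platform's native API."""

        def __init__(self, name: str, tier: str, free_gib: int):
            self.name = name
            self.tier = tier
            self.free_gib = free_gib

        def provision(self, intent: VolumeIntent) -> str:
            self.free_gib -= intent.capacity_gib
            return f"{intent.app_name}-vol on {self.name}"

    def place_volume(intent: VolumeIntent, fleet: list) -> str:
        """The logic in the middle: pick a suitable array, then call its driver."""
        candidates = [a for a in fleet
                      if a.tier == intent.tier and a.free_gib >= intent.capacity_gib]
        if not candidates:
            raise RuntimeError("no array can satisfy this intent")
        # Simplest possible policy: the array with the most free space wins.
        best = max(candidates, key=lambda a: a.free_gib)
        return best.provision(intent)

    fleet = [ArrayDriver("array-a", "mission-critical", 80_000),
             ArrayDriver("array-b", "business-critical", 120_000)]
    print(place_volume(VolumeIntent("erp-db", 2_048, "mission-critical"), fleet))

The value of the model is that the policy in the middle can become arbitrarily sophisticated (cost, performance, placement across sites) without the application or the arrays needing to change.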

Predecessors

This kind of centralised storage management solution is, of course, not new. 

  • iWave Software developed a centralised management solution in the early 2010s before being acquired by EMC and used as the basis for ViPR.  iWave demonstrated their platform to me back in 2012.  At the time, only EMC systems were supported (a challenge we’ll discuss later). 
  • Creekpath Systems attempted to develop a solution for standardised storage management back in the mid-2000s.  I used the platform at Lehman Brothers in 2004 (and wasn’t impressed).  Opsware eventually acquired Creekpath in 2006.
  • HPE previously tried the “one platform to rule them all” approach with their acquisition of AppIQ in 2005, which became Storage Essentials.  I had a demonstration of AppIQ in the early 2000s while at JPMorgan Chase.  The technology looked good but lacked multi-user capabilities to ensure sites with many storage administrators would not continually step on each other’s toes.
  • EMC attempted centralised management with ECC and StorageScope, which I deployed and used in the early 2000s.  Both were unwieldy and consumed significant additional server resources to implement. 
  • The storage industry has tried to standardise storage management, first with SMI-S and most recently with Swordfish.  Neither solution has hit the mark, with vendors only paying lip service to SMI-S and Swordfish seeming to have no support at all.

From 2008 to 2010, I worked with Storage Fusion, a vendor looking to build a centralised storage reporting tool.  While we were successful in creating a SaaS platform for reporting, the solution wasn’t aimed at management because that process was, at the time, too complicated to achieve.  So why has centralised storage management never worked?

  • There’s no incentive for vendors to share their internal APIs.  When management was performed through a GUI or CLI, storage system vendors had nothing to gain from sharing their internal APIs with the competition. 
  • SMI-S and similar standards produced implementations whose functionality addressed only the “lowest common denominator” – for example, LUN/volume provisioning and snapshots (see the sketch after this list).  In reality, every storage platform (by necessity) implements its solution in subtly different ways, whether that’s the layout or distribution of resources or the naming and mapping of logical storage resources.
  • Centralised management results in the risk of the “tail wagging the dog”, where the central storage solution has to support any storage software release.  If there’s a lag between a vendor shipping the platform software update and the support within the central solution, then the customer can’t upgrade or has to take the system out of centralised management.  This scenario can be a critical issue if urgent patches or upgrades are needed. 
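
The lowest-common-denominator problem is easy to see in code.  In this hypothetical Python sketch (all names invented), a management tool written against the standard interface can never reach the features that actually differentiate one platform from another.

    from abc import ABC, abstractmethod

    class CommonStorageAPI(ABC):
        """The 'lowest common denominator': only what every array can do."""

        @abstractmethod
        def create_volume(self, name: str, size_gib: int) -> None: ...

        @abstractmethod
        def create_snapshot(self, volume: str) -> None: ...

    class VendorArray(CommonStorageAPI):
        """One vendor's platform.  Its differentiators exist, but nothing
        coded against CommonStorageAPI can assume they do."""

        def create_volume(self, name: str, size_gib: int) -> None:
            print(f"created {name} ({size_gib} GiB)")

        def create_snapshot(self, volume: str) -> None:
            print(f"snapshot of {volume}")

        def enable_dedupe(self, volume: str) -> None:
            # Vendor-specific capability, invisible through the common interface.
            print(f"dedupe enabled on {volume}")

    def provision(array: CommonStorageAPI) -> None:
        array.create_volume("app-01", 512)
        array.create_snapshot("app-01")
        # array.enable_dedupe("app-01")  # not in the standard; a type checker rejects it

    provision(VendorArray())

Every additional vendor or feature widens the gap between what the arrays can do and what the common interface exposes – which is exactly where SMI-S and Swordfish struggled.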

In large IT organisations, storage management tools can become a beast that demands to be fed and watered, with significant amounts of physical resources and skilled administrators used to maintain the platforms. 

Why Could HPE DSCC be Different?

What chance does HPE have with DSCC when all others before have failed?  There are several technical factors that could work in HPE’s favour.

  • Modern storage systems are much easier to manage than their predecessors.  All-flash systems don’t have the same design requirements that were needed with spinning media.  Most modern storage platforms offer native APIs that can be integrated into centralised tools.
  • The public cloud has enabled SaaS management to be practical and accepted.  SaaS tools don’t need onsite maintenance and upgrades.  The SaaS provider assumes these responsibilities as part of the offering.  From a storage management perspective, the only consideration is to validate whether the SaaS vendor can provide service-level guarantees on new storage software releases. 
  • All-flash has changed the consumption model.  In all-flash systems, the unit of granularity for new capacity is much lower.  Many vendors offer upgrades as small as a single drive.  This type of design makes it much easier to match on-premises capacity with demand and reduce over-configuration. 
  • InfoSight and similar solutions provide a wealth of information, including pre-emptive fault diagnosis.  When this is combined with remote management, large-scale fleet management from a central location is a practical reality.

In short, systems have become more flexible and, over the past two decades, increasingly reliable.  This has allowed vendors such as HPE to offer six or seven ‘nines’ of availability or 100% uptime guarantees.

Why is HPE Making this Change?

So why now for HPE?  As we highlighted right at the top of this post, HPE is committed to offering all infrastructure solutions as services by 2022 (within the next nine months).  DSCC provides the platform to extend support and management for on-premises HPE storage hardware via a cloud model. 

In tandem with the release of DSCC, the Nimble and Primera platforms have been rebranded under the HPE Alletra name.  Both systems have been upgraded with end-to-end NVMe support.  Primera becomes the HPE Alletra 9000 series, focusing on mission-critical workloads, while Nimble becomes the HPE Alletra 6000 for business-critical workloads.

Why brand the platforms under a single name?  If the long-term aim for HPE is to deliver infrastructure as a service, customers shouldn’t be concerned about the underlying technology.  Service-based delivery has to focus on uptime, resiliency, performance and scalability rather than the specifics of the hardware.  The branding of Alletra moves HPE into a delivery model where the focus is the solution, not the nuts and bolts of the configuration.  HPE now holds that responsibility.

Everything as a Service

This post is already way too long to go into specifics about the Alletra hardware.  However, we can highlight that the new 6000 and 9000 series systems have been upgraded to make the installation and management process even easier than it is today.  This step is critical in meeting the fleet management goal HPE is aiming to achieve. 

Futures

Of course, this announcement isn’t simply about building a better storage management mousetrap.  It’s a positioning move that ensures HPE can deliver on a transformation to on-premises infrastructure as a service.  Hardware has to become more autonomous.  Data and control planes have to be separated.  Then what happens?  This is where HPE has an opportunity to build in the value-add.  One aspect will be to leverage the benefits of InfoSight, offering automated platform upgrades, performance improvements, and other infrastructure benefits.  If the process can be applied across all infrastructure, then the approach used for storage could provide an appealing model for servers, networking and applications.  Storage services could become data services, at least in the initial form of improved data management.

The Architect’s View™

The on-premises infrastructure market is having to fight back against the onslaught of the public cloud.  This attack is happening on two fronts – the first is the migration of applications off on-premises infrastructure into the public cloud.  The second is the insertion of public cloud hardware into on-premises data centres.  If HPE and the remaining infrastructure vendors want to be relevant in the future market, then an evolution to solutions like DSCC and Alletra has to occur.  The next question is, how will the other infrastructure vendors respond?

One final aspect to consider is HPE’s approach to storage portfolio management. If DSCC is successful and the features can be extended to the server portfolio, why not just offer storage on-demand on generalised HPE infrastructure? This could apply to any offering in the HPE portfolio and deliver an on-premises storage and application marketplace. The battle for the on-premises market isn’t over yet.

Copyright (c) 2007-2021 – Post #1445 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.