The terms Hybrid Cloud and Multi-Cloud have become part of the lexicon of modern information technology professionals. Some vendors even use the combined term hybrid multi-cloud. But what do these terms mean, and how should we use them with respect to applications and services across IT?
It’s clear from recent conversations that there’s some confusion and disagreement about our definitions of hybrid/multi in a cloud context and where the industry is heading. Let’s start by addressing the use of these terms.
As highlighted in this post from two years ago, we define hybrid as a solution built from two disparate technology types. Initially, we may have thought of hybrid as combining on-premises and public cloud infrastructure. In the early days of cloud (around 2010), we talked a lot about cloud-bursting – moving excess demand to the cloud dynamically. The degree to which this capability was both possible and exploited is debatable.
The hybrid model provides something we couldn’t do with just one of the individual components. For example, a model that keeps most data on-premises and runs compute tasks in the public cloud provides data sovereignty while benefiting from the constant improvements in public cloud compute infrastructure (like access to GPUs). Hybrid could also be used for dynamic workload management, as we discussed in this post on Datrium Automatrix.
We can extend the hybrid model further as we see more SaaS applications built to process data from traditional sources. Where SaaS might once have been seen as a platform for business and operational tasks (think Salesforce and Office 365), we increasingly see more complex solutions emerging, like Snowflake, FaunaDB and the new Portworx Data Services. These platforms are SaaS but integrate directly with either on-premises or public cloud platforms, and they meet the definition of hybrid exactly – combining multiple components to create a new solution that isn’t possible with either of the original parts alone.
Multi-cloud is a little easier to define and generally refers to the use of multiple services of a similar type. For example, using multiple public cloud providers or multiple SaaS platforms. The most apparent use-case here is to provide resiliency in case one cloud has an issue; another scenario is using multiple clouds because one offers differentiated or better services than another. This differentiation could be technical, operational, or based on cost.
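The resiliency use-case can be sketched as a simple failover across provider-specific operations. This is an illustrative sketch only – the provider names and operations here are hypothetical placeholders, not real SDK calls:

```python
def first_available(providers):
    """Try each (name, operation) pair in turn and return the first
    successful result - a crude sketch of multi-cloud failover."""
    errors = []
    for name, operation in providers:
        try:
            return name, operation()
        except Exception as exc:  # a real system would catch narrower errors
            errors.append((name, repr(exc)))
    raise RuntimeError(f"all providers failed: {errors}")
```

In practice, each operation would wrap a provider SDK call, and failure modes (timeouts, throttling, regional outages) would each need their own handling.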
Implementing a multi-cloud strategy doesn’t have to mean combining those services or interweaving solutions across them. We can simply pick and choose our requirements and use cloud services as required.
In today’s market, the implementation specifics of each cloud provider make it challenging to combine clouds in a meaningful way. Some of these challenges are technical, some are operational (like security and data protection), and some are by design (like egress charges). Vendors have little to gain by allowing the free movement of applications and data to their competitors. Data inertia also introduces issues, as does data consistency.
Over time, we will see some of the integration challenges being addressed. VMware’s Project Ensemble is one example; Rancher is another. NetApp’s Data Fabric strategy also aims to harmonise some of these disparities.
Let’s clear up one final definition. Cloud-native application design is already well-defined. This Wikipedia page says it all. We generally think cloud-native apps are built for resiliency on top of unreliable cloud infrastructure, although this concept seems outdated with the current levels of cloud maturity. Perhaps we should be thinking more of scalable and dynamic as the watchwords of cloud-native apps.
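That original resiliency emphasis typically shows up in patterns like retries with exponential backoff, which assume any individual call to cloud infrastructure may fail transiently. A minimal sketch, where the tuning values are illustrative assumptions rather than a recommendation:

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.5):
    """Call a potentially unreliable operation, retrying on failure
    with exponential backoff plus a little random jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Backoff doubles each attempt: 0.5s, 1s, 2s, ... plus jitter
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

Scalable and dynamic designs build on the same foundation: if every dependency is treated as transient, instances can be added, removed or relocated without special handling.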
What about cloud-native services? Traditional IT vendors have developed platforms that run on top of public cloud infrastructure. These solutions use features like virtual instances, Kubernetes clusters and serverless to deliver cloud-based products. Some of these solutions (like Druva and Clumio) are cloud-native in the sense that they use cloud-based services that aren’t available on-premises or use a very specific cloud feature that would require re-coding to use elsewhere.
However, these applications are not strictly cloud-native because they’re not integrated within the cloud ecosystem with respect to the cloud provider. These services, for example, aren’t accessible by cloud vendor APIs, don’t directly integrate into cloud security models and aren’t billed by the cloud provider.
We would define cloud-native services as those that are directly integrated into the security, network, storage, API and billing model of the cloud vendor, with the implementation specifics hidden from the customer. From the customer’s perspective, the services appear to be delivered directly by the cloud provider. NetApp’s FSx solution in AWS is a good example.
This definition is probably going to be contentious but worthy of discussion.
Now that we’re clear on definitions, let’s address the question of whether hybrid cloud is here to stay, or not.
History always offers good insight into what to expect in the future. If we were writing this post in the 1960s or 1970s, we would have probably been unable to see past the domination of the mainframe. Remember “Nobody ever got fired for buying IBM” as an industry aphorism? Today, the IBM mainframe is mostly derided as old-fashioned technology – except for many businesses that depend on its reliability.
Those mainframe-using businesses don’t use one platform exclusively. In fact, the mainframe probably runs only the core mission-critical applications. Those same businesses will also have adopted virtual machines, containers and serverless, with a mix of all those services in play in one way or another.
The larger, more diverse, and older a company is, the more technology platforms and solutions will be in use. The cost and complexity of moving entirely from one platform to another usually outweigh the benefits. As a result, businesses retain a technology until it is financially and practically viable to replace it.
The same cycle of adoption will occur in the use of public cloud. Businesses will adopt the technology and, for many reasons, will adjust their consumption models:
- Cost – we’ve seen data repatriation occur because of cost. Cost models in the cloud are highly complex, and changes in business logic can directly impact charges.
- Competitive Advantage – cloud providers are constantly bringing out new solutions, so businesses must be prepared to use multiple clouds to gain the best advantage. Naturally, there are limits to practicality in this scenario. This aspect is also tied to innovation failure if a cloud provider is seen to fall behind in bringing new solutions to market.
- Reliability – Although much less of an issue than in the early days of the cloud, reliability is still a contributing factor to service adoption.
- Regulation – Current FCA guidance in the UK, for example, mandates that financial organisations must know how they would transform applications from one cloud provider to another. Regulation also affects data placement, especially for businesses with a global presence.
- Ethical or Personality Clashes – strange as it may seem, multi-million-dollar decisions are made based on personality clashes at the corporate level. Businesses may also choose to move to cloud providers that offer better green or environmentally friendly credentials.
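On the cost point above, even a back-of-the-envelope egress calculation shows how a change in business logic (say, moving analytics that reads cloud-resident data back on-premises) can swing monthly charges. The per-GB rate and free-tier figures below are hypothetical – real pricing is tiered and varies by provider, region and destination:

```python
def monthly_egress_cost(gb_transferred, rate_per_gb, free_tier_gb=0.0):
    """Estimate a month's egress bill: billable GB (after any free-tier
    allowance) multiplied by a flat per-GB rate."""
    billable = max(gb_transferred - free_tier_gb, 0.0)
    return billable * rate_per_gb

# Hypothetical figures: 10 TB out per month at $0.09/GB with 100 GB free.
print(f"${monthly_egress_cost(10_000, 0.09, free_tier_gb=100):,.2f}")
```

At that scale the charge runs to hundreds of dollars per month for data transfer alone, before any compute or storage costs – which is why repatriation decisions often start with the egress line on the bill.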
Finally, we shouldn’t discount corporate strategy. Many businesses mandated a “cloud-first” approach due to the perceived cost savings and simplicity of the public cloud. However, those decisions may change in the future based on many factors, not least of which is the decision (or beliefs) of senior leaders or how their organisations are perceived.
The Architect’s View™
Nobody can predict the specifics of the future, but we can be sure of the high-level trends. The hybrid and multi-cloud “genies” are out of their respective bottles, and there will be no going back. The ebb and flow of technology adoption and historical precedent tell us that on-premises deployments are not going away. The only question up for debate is the degree to which businesses will choose the hybrid, multi or hybrid/multi models of consumption.
Copyright (c) 2007-2021 – Post #fb5e – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.