Optimising Public Cloud Service Density

Chris Evans | Cloud, Composable Infrastructure, Opinion

During this week’s Storage Unpacked podcast, we discussed how the cloud hyper-scalers extend the value of their solutions by moving from pure virtual instances to platform services.  I wondered whether these services were increasing the “VM value”, and it seems they are.  Does this upscaling represent a future direction, or is it just business as usual?


Most of the Infrastructure-as-a-Service market is based on virtual machines, also known as virtual instances.  Customers choose an instance type configured by vCPU, memory, storage and networking.  AWS launched EC2 (Elastic Compute Cloud) in August 2006, adding persistent storage (EBS) two years later. 

The virtual instance is a building block for applications.  The configuration options vary widely, with choices of on-demand, long-term commitment, spot and preemptible instances, to name just a few.  AWS, for example, now has close to 400 instance options.  Customers can choose from a catalogue of pre-configured virtual machine images or simply build an entire VM from scratch.

The ongoing revenue model for virtual instances is much the same across all the hyper-scalers.  Prices vary by instance specification.  Hyper-scalers have increased their potential service flexibility with spot instances and additional hardware components like GPUs or flash storage.

Essentially though, the revenue from a single virtual instance is directly related to the underlying hardware.  A physical server can only support a limited number of virtual machines based on CPU and memory constraints.

Managed Services

AWS introduced the Relational Database Service (RDS) in 2009 with MySQL support, which has since been expanded to include Oracle, SQL Server, PostgreSQL and MariaDB.  Aurora, Amazon’s fork of existing open-source databases, was introduced in 2014.  All of these offerings run on standard instance types.  The difference is in the price.  Look at the following table:

Standard Instance   Instance Cost   DB Instance Cost (RDS MySQL)   DB Instance Cost (Aurora MySQL)
t3.small            $0.0236         $0.038 (61%)                   $0.047 (99%)
t3.medium           $0.0472         $0.076 (61%)                   $0.093 (97%)
r6g.large           $0.1184         n/a                            $0.304 (157%)
r6g.xlarge          $0.2368         n/a                            $0.609 (157%)
r5.large            $0.148          $0.28 (89%)                    $0.34 (130%)
r5.xlarge           $0.296          $0.56 (89%)                    $0.68 (130%)
Hourly pricing based on Europe (London) region, as of the date of writing, on-demand instances only, Linux operating system, single AZ.  Excludes additional charges.  Cost increase shown in brackets.

This data is simplistic, but it compares AWS EC2 instance charges with the equivalent pricing for RDS and Aurora – the markup per VM is significant.
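The bracketed percentages are simply the managed-service price expressed as an increase over the raw instance price.  A minimal sketch reproducing them from the on-demand rates quoted in the table above:

```python
# Reproduce the markup percentages from the pricing table.
# Prices are the hourly on-demand rates (Europe/London) quoted above.
prices = {
    # instance: (EC2 cost, RDS MySQL cost, Aurora MySQL cost)
    "t3.small":  (0.0236, 0.038, 0.047),
    "t3.medium": (0.0472, 0.076, 0.093),
    "r5.large":  (0.148,  0.28,  0.34),
}

def markup(base, managed):
    """Percentage increase of the managed service over the raw instance."""
    return round((managed / base - 1) * 100)

for name, (ec2, rds, aurora) in prices.items():
    print(f"{name}: RDS +{markup(ec2, rds)}%, Aurora +{markup(ec2, aurora)}%")
```

Running this yields the same 61%/99%, 61%/97% and 89%/130% uplifts shown in the table.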

Of course, I’m not suggesting the price difference isn’t justified.  AWS is offering a managed service here, so we expect to pay more.  In this instance, that cost translates into service management, not software, as MySQL isn’t a commercial database.  What we’re highlighting is the additional value (and money) AWS is extracting from the underlying hardware – increasing the revenue per VM in the same way a traditional retailer would maximise revenue from physical shelf space.

New Applications

We can apply the same logic to containerised applications.  AWS pricing for EKS (Elastic Kubernetes Service) is based on the underlying instance costs plus $0.10 per hour per cluster.  Fargate is priced on vCPU and memory, which, when broken down, cost more than a virtual instance with the same resources.  The pricing structure is broadly the same across all the hyper-scalers in one way or another.
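The Fargate premium is easy to see by pricing a task against an instance of the same shape.  A minimal sketch, with assumed example per-vCPU and per-GB rates (check current regional pricing; these figures are illustrative, not quoted from AWS):

```python
# Illustrative comparison of a Fargate task vs. an equivalent EC2 instance.
# The Fargate rates below are assumed example values for illustration only.
FARGATE_VCPU_HOUR = 0.04048   # assumed $/vCPU-hour
FARGATE_GB_HOUR = 0.004445    # assumed $/GB-hour
T3_MEDIUM_HOUR = 0.0472       # EC2 t3.medium (2 vCPU, 4 GiB) from the table

def fargate_hourly(vcpu, gb):
    """Hourly Fargate cost for a task of the given size."""
    return vcpu * FARGATE_VCPU_HOUR + gb * FARGATE_GB_HOUR

fargate = fargate_hourly(2, 4)
premium = (fargate / T3_MEDIUM_HOUR - 1) * 100
print(f"Fargate 2 vCPU/4 GB: ${fargate:.4f}/hr "
      f"vs t3.medium ${T3_MEDIUM_HOUR}/hr (+{premium:.0f}%)")
```

Under these assumed rates, the same 2 vCPU/4 GB of resources costs roughly double as a Fargate task – the same pattern as the RDS and Aurora markups above.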

Long-term View

Hardware optimisation was a perennial IT requirement long before the public cloud emerged.  Mainframes, with their multi-tasking and virtualisation capabilities, made efficient use of CPU resources.  Server virtualisation, as pioneered by VMware, is now almost table stakes for building on-premises infrastructure.  Containerisation and serverless push the envelope even further.

The big question is where this long-term trend goes.  IaaS looks to be the least cost-efficient solution from the vendor’s perspective.  Would it make sense for the hyper-scalers to disincentivise the use of basic virtual instances?  There is little or no public information about which services generate the most revenue within AWS or any other hyper-scaler.  However, it is clear that abstracting customers away from hardware-specific configuration allows AWS and others to optimise further, using features like Arm processors and PCIe fabrics.

The Architect’s View™

I doubt any cloud service provider will stop selling virtual instances any time soon.  However, as a general business strategy, the move to containerisation, serverless and PaaS is clear.  These services generate more revenue per dollar of hardware installed than native virtual instances alone.  The observation may seem, well, obvious, but in the battle for supremacy, every opportunity to squeeze out costs and charge the customer more is a win.

The bigger question is what this means for on-premises infrastructure vendors.  There’s currently a big push (certainly in storage) to sell hardware as a service.  Can the traditional infrastructure vendors offer services in the same way that’s happening in the public cloud?

Technologies like SmartNICs and composable infrastructure will make on-premises solutions more flexible, but the idea of infrastructure vendors adding application-based services does seem a step removed from their core business.  However, if these companies don’t make a move, the justification for keeping infrastructure on-premises becomes harder and harder to make.

Copyright (c) 2007-2021 – Post #c689 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.