AWS Outposts – A Cuckoo in the Enterprise Nest?

Chris Evans | Cloud, Data Mobility, Enterprise, Opinion

There is arguably general acceptance that we’re moving to a hybrid cloud world in the enterprise.  Whether this means simply using multiple solutions to deliver IT, or building a fully integrated mesh of associated applications, most enterprises will use services from more than one vendor.  As Amazon Web Services releases more details on Outposts, how welcome will this fledgling be in the on-premises data centre?

Outposts Primer

AWS announced Outposts at AWS re:Invent 2018, with an expected delivery date in the second half of 2019.  The solution is a hardware platform that can run either native AWS services such as EC2 and EBS, or VMware Cloud on AWS.  In both cases, the hardware sits in the customer’s data centre, which benefits applications that need low latency to other on-premises systems or infrastructure, or that can’t tolerate network outages.  A typical example is a manufacturing plant that relies on applications running on local servers.

Outposts Hardware

Anthony Liguori released a video in June 2019 that shows how the physical components of Outposts will be delivered.  Customers receive pre-built racks of servers and networking that mirror the technology deployed in AWS data centres.  This means custom hardware with a dedicated busbar power supply, 1U servers and 100Gb Ethernet networking. 

AWS seems to be treating the server as the failure domain, indicating that failed servers are simply swapped out and returned to AWS.  There’s no indication of any user-serviceable parts inside a server itself.  Multiple racks can be chained together and connected to the customer’s network through top-of-rack switches.

Possibly the most striking thing here is the apparent lack of any specialisation in the servers.  AWS isn’t building HCI or dedicated storage; everything is generic and available for customer use.

Nitro

How can the management overhead of this hardware be so lightweight?  The answer is Nitro, a hardware and software architecture that moves much of the networking and storage I/O functionality into hardware.  AWS started developing the Nitro ecosystem in 2013, purchasing Annapurna Labs in 2015 to develop custom ASICs for the project.

Today, there are four Nitro adaptor cards and a Nitro security chip:

  • Nitro card for VPC, which implements the Elastic Network Adaptor.  This card delivers functions that include network packet encapsulation, security, and routing. 
  • Nitro card for EBS, which presents NVMe block storage devices, handles encryption and connects to remote storage using NVMe-over-Fabrics.
  • Nitro card for Instance storage, which implements NVMe devices for locally connected storage. 
  • Nitro Controller, a management card that coordinates the Nitro I/O cards, monitors and manages hardware resources, and exposes a standard management API.  The Nitro Controller also includes a security chip to lock down access and act as a “single point of truth” for other microprocessors in the server.

Nitro Hypervisor

With I/O functionality offloaded to custom hardware, AWS was able to implement a very lightweight hypervisor based on KVM, which is already part of the Linux kernel.  This is how Outposts can ensure customers get to use almost all the resources of the installed hardware for application instances.  Just run lspci on an AWS instance and you can see the offloaded devices in action.
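
To make that concrete, here’s a short Python sketch that shells out to lspci and filters for the devices the Nitro cards present to the guest.  It’s illustrative only: it assumes a Linux instance with the pciutils package installed, and the exact device names vary by instance type.

    import subprocess

    # Run lspci and pick out the PCI devices presented by the Nitro cards.
    # On Nitro-based instances, the ENA network adaptor and the NVMe
    # storage controllers appear as devices from "Amazon.com, Inc.";
    # exact names vary by instance type, so treat the filter as a guide.
    lspci = subprocess.run(["lspci"], capture_output=True, text=True, check=True)

    for line in lspci.stdout.splitlines():
        if "Amazon.com" in line:
            print(line)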

If you want more background on Nitro, this great session from re:Invent 2018 provides a lot more detail.

VMware on AWS

One interesting aspect of having a hardware-based implementation is the ability to run hypervisors other than the Nitro hypervisor.  This is how VMware on AWS is implemented.  Thinking through the process, it makes sense: VMware modifies vSphere to support the ENA and local NVMe devices, and AWS automates deployment in the same way that the Nitro hypervisor is installed on new hardware.

Of course, there is a lot more involved in making VMware on AWS work.  I’m simplifying the description for brevity, and mainly because I have no insight into the actual implementation.

Remote Management

As Nitro exposes management APIs on the network, Outposts can be managed remotely from the nearest AWS region.  This appears to mean that no local management takes place: Outposts is local only in physical location, with the control plane remaining in the region.  This has two interesting outcomes.

First, there’s very little to go wrong onsite.  If the hardware fails, it can simply be replaced (and in any event is redundantly configured).  If the configuration gets screwed up then, assuming data is secure, a deployment can simply be wiped and re-configured.  There are few or no diagnostics to perform.

Second, Outposts can’t act as a standalone platform.  It needs AWS connectivity to work (or at least to be reconfigured).  This is very different from (for example) Azure Stack, which is an entire self-contained instantiation of the management software and the customer’s applications.
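
As a minimal sketch of what region-anchored management looks like, the Python snippet below uses the boto3 SDK to list the Outposts homed to a region.  This assumes an “outposts” client in boto3; the client and operation names are illustrative of the pattern rather than anything AWS has detailed for the Outposts launch, and the region is a placeholder.  The point is that the call goes to the regional endpoint, not to anything running on the rack.

    import boto3

    # An Outpost is managed through its parent AWS region, not locally.
    # The call below goes to the regional endpoint, illustrating that
    # the control plane lives in the region, not on the rack itself.
    outposts = boto3.client("outposts", region_name="eu-west-1")

    for outpost in outposts.list_outposts()["Outposts"]:
        print(outpost["OutpostId"], outpost["AvailabilityZone"])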

Cuckoo in the Nest

The title of this post talks about a cuckoo in the nest, which refers to an unwelcome intruder in a place or situation.  In this context, Outposts will be welcomed by customers, but unpopular with other hardware vendors in the market that are looking to deliver equivalent solutions. 

The custom nature of the AWS hardware means the infrastructure can’t be used for any other purpose, and it will be about as easy to integrate with the rest of the data centre as traditional on-premises infrastructure is with standard AWS (hint: not that easy).

The interesting angle here is VMware on AWS, and how far integration with on-premises implementations of vSphere will be allowed.  It could become very easy for AWS to offer to replace existing VMware infrastructure with Outposts, achieving migration using standard vMotion and Storage vMotion.  This doesn’t force the customer to use any native AWS services.

The Architect’s View

AWS has created an elegant implementation of on-premises infrastructure with Outposts.  If the platform is a success, AWS will need to support thousands of data centre locations, and doing that with conventional, hands-on infrastructure would be hard to impossible.  Hence the use of custom hardware with remote management.

Remember, though, that the customer will still be using the same APIs, GUIs and function calls for on-premises deployments as for those in the public cloud.  This is a blessing and a curse; it offers consistency, but also lock-in.  Outside of reduced latency and a degree of local data control, Outposts is effectively a cuckoo in the enterprise data centre.
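
To illustrate that consistency, launching an instance onto an Outpost should look exactly like launching into the region: the same EC2 call, just pointed at a subnet that lives on the Outpost.  The sketch below reflects the general EC2 API pattern rather than anything AWS has detailed for Outposts, and the AMI and subnet IDs are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # The run_instances call is the same for regional and Outpost
    # capacity; targeting the Outpost is just a matter of supplying a
    # subnet created on it.  IDs below are placeholders.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # placeholder AMI
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",  # subnet on the Outpost
    )

    print(response["Instances"][0]["InstanceId"])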

Who should worry?  As we’ve already discussed, enterprise infrastructure vendors should be very worried.  It’s an easy step for AWS to offer Outposts with VMware on AWS as a stepping stone, moving workloads to AWS native services and then migrating them back into an AWS region.  While this is no different from what could be done today, there’s an initial psychological factor: customers will think they are in control of their infrastructure simply because of where it is located.  It’s just like the mother bird who thinks she is raising her own chick – until the cuckoo evicts her young.


Copyright (c) 2007-2019 Brookend Ltd, no reproduction without permission, in part or whole. Post #7923.