Is AWS Moving to Private Cloud with Snowball Edge and EC2?

Chris Evans | Cloud Storage, Data Mobility, Storage

At a recent AWS Summit in New York, Amazon Web Services announced the availability of EC2 on Snowball Edge devices.  Does this move represent a challenge for private cloud and could it be the start of a hybrid strategy for the company?

Snowball Edge

Snowball in the rain at AWS re:Invent, Las Vegas, 2017

In case you’ve not seen it before, the Snowball is a suitcase-sized appliance that has been ruggedised for shipping to and from customers’ premises.  The device itself stores up to 100TB of unstructured data and is intended to provide a faster upload (or download) path than a wide area network or the Internet.

Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway — Andrew S. Tanenbaum

Common deployments are bandwidth-poor locations, such as remote offices, ships and oil rigs, or sites where large volumes of data need to be collected.

In November 2016, AWS announced the support of Lambda functions on Snowball, renaming this offering as Snowball Edge.  Rather than being a “dumb” storage device, Snowball Edge can pre-process data before it is shipped and uploaded into S3.  This could be useful, for example, to clean new data, or do basic initial processing on content to highlight anomalies.  We talked about this as a concept on a recent Storage Unpacked podcast with Scott Shadley (see episode #57, around 5 minutes in).
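As a rough illustration of the idea, here’s a minimal sketch of the kind of Python handler that could run on the device in response to a PUT into the local bucket.  It assumes the standard S3 notification event shape, and the bucket names, size threshold and “cleaning” logic are entirely illustrative rather than anything taken from AWS documentation.

```python
# Hypothetical sketch of a Python Lambda handler running on Snowball Edge.
# Assumes the local S3 adapter delivers standard S3 PUT event records;
# bucket/key names and the size check are illustrative only.
import json


def handler(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"].get("size", 0)

        # Basic "cleaning" step: flag empty or suspiciously large objects
        # so they can be reviewed before the Snowball is shipped back.
        status = "ok"
        if size == 0:
            status = "empty"
        elif size > 5 * 1024**3:  # > 5GB, arbitrary threshold
            status = "oversized"

        results.append({"bucket": bucket, "key": key, "status": status})

    # Return a summary; a real function might instead write a manifest
    # object back into the local bucket for later inspection.
    return {"processed": len(results), "results": json.dumps(results)}
```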

Snowball Edge & EC2

With the July 2018 announcement, Snowball Edge can now support a restricted number of EC2 instances running on the appliance.  Initially, the environment is relatively limited, with a single Intel Xeon D processor (1.8GHz), 32GB of memory and up to 24 vCPUs.  Instances have to be based on specific AWS AMIs, which currently means Ubuntu Server 14.04, 16.04 or CentOS 7.
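To give a flavour of how this might be driven, here’s a minimal sketch using boto3 pointed at the device’s local endpoint.  Everything below — the IP address and port, the local credentials, the image ID, the region name and the instance type — is an assumption for illustration rather than something lifted from the announcement.

```python
# Illustrative only: launching an EC2-compatible instance on a Snowball Edge
# by pointing boto3 at the device's local endpoint. The endpoint address,
# credentials, AMI ID, region and instance type are placeholders.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="http://192.168.1.100:8008",  # local device endpoint (assumed)
    aws_access_key_id="SNOWBALL_LOCAL_ACCESS_KEY",
    aws_secret_access_key="SNOWBALL_LOCAL_SECRET_KEY",
    region_name="snow",  # placeholder; the device is not a normal AWS region
)

# The AMI must be one of the images loaded onto the device when the
# Snowball job was created (e.g. Ubuntu Server 16.04 or CentOS 7).
response = ec2.run_instances(
    ImageId="ami-00000000",      # placeholder local image ID
    InstanceType="sbe1.medium",  # Snowball Edge instance family (assumed)
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])
```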

It’s obvious that running a virtual instance provides more processing capability than could be offered through Lambda functionality.  Lambda code written in Python is reactive, triggered by an event on the Snowball, such as a PUT of new data into the local bucket.  An instance, on the other hand, could be used to do regular processing and upload the results to another location, or work as part of a cluster of Snowballs and expose a different interface to the data (there’s a rough sketch of such a regular job below).  It also allows users to run commercial software.
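For example, a long-running instance could periodically summarise what has landed on the device via the local S3 adapter — something a reactive Lambda function isn’t well suited to.  The sketch below assumes a local S3 endpoint, bucket name and credentials purely for illustration.

```python
# Hypothetical sketch of a scheduled job an EC2 instance on the device
# could run: scan the local bucket via the Snowball's S3 adapter,
# summarise the staged objects and write a report object back.
# Endpoint, bucket and credentials are assumptions for illustration.
import json
import time

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://192.168.1.100:8080",  # local S3 adapter (assumed)
    aws_access_key_id="SNOWBALL_LOCAL_ACCESS_KEY",
    aws_secret_access_key="SNOWBALL_LOCAL_SECRET_KEY",
)

BUCKET = "edge-data"  # placeholder bucket name


def summarise_bucket():
    """Count objects and total bytes currently staged on the device."""
    total_objects, total_bytes = 0, 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET):
        for obj in page.get("Contents", []):
            total_objects += 1
            total_bytes += obj["Size"]
    return {"objects": total_objects, "bytes": total_bytes, "ts": time.time()}


if __name__ == "__main__":
    report = summarise_bucket()
    s3.put_object(
        Bucket=BUCKET,
        Key="reports/latest.json",
        Body=json.dumps(report).encode("utf-8"),
    )
```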

Private Cloud

I expect the question many people will be asking (and the prediction many will be making) is whether this could be the start of a push by AWS into the data centre and towards private cloud.  Until now, AWS hasn’t really offered anything for on-premises deployments, other than perhaps the storage gateway, which is just another process for moving data into S3.  Features and products outside of AWS core data centres have been focused on funnelling data into the Amazon mothership.

But is this really Amazon’s strategy?  Running AWS as a service is a very different proposition from selling infrastructure.  *aaS offerings provide the ability to obfuscate the inner workings of the platforms.  For all we know, AWS could be held together with string and sealing wax, although it’s unlikely to be the case.  Running as a service rather than selling hardware does allow Amazon to control the hardware, the software development, the software rollout and the use cases.  I’d posit that being in the infrastructure business is much more challenging.  It also puts limits on scale and introduces risk.  For AWS, introducing a new service is relatively simple.  It can be basic at first and easily extended over time, just like Snowball.  Bringing a new hardware product to market is a bit more challenging.

Azure Stack

The most likely comparison that will be made with Snowball Edge is with Azure Stack from Microsoft.  This is intended to offer the same look and feel as Azure in public cloud while providing some of the remote benefits we’ve already discussed.  The “intelligent edge” is no doubt going to be a target for all cloud providers.  Microsoft, though, is a very different company, with a history of developing on-premises solutions.  As well as the intelligent edge, Microsoft has many customers running Windows Server, and these would be great customers to convert to Azure Stack, with the future promise of moving to public Azure.

The Architect’s View

I might be wrong, but I don’t see this Snowball Edge announcement as the start of AWS moving into the data centre.  Instead, it looks to me like another way of improving the process of getting into public cloud.  It makes public cloud more attractive, and from AWS’ perspective, consistent API endpoints on Snowball Edge (for EC2 and S3) make developing data collection and analysis platforms at the edge a whole lot easier.  If this is where most future data will be generated, then having a product in this space makes complete sense.

Where does that leave on-premises compute and private cloud?  Personally, if I were an HCI vendor, I’d be more worried about Amazon’s approach with VMware in developing VMware Cloud on AWS (VMC).  That’s because the route to “cloudifying” traditional apps is much more likely to go this way.  VMC lets customers remove the capex cycle of buying and supporting hardware platforms, without changing the deployment experience.  It’s a win-win for AWS, because customers are on its hardware platform and will definitely look at rewriting applications for native cloud when that time comes.  In the meantime, they simply avail themselves of AWS hardware.

Of course we could look back in 12 months’ time and see that AWS did indeed release an on-premises HCI solution and all this supposition will be moot.  Wouldn’t that be fun?

Copyright (c) 2007-2019 – Post #B904 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission. Photo credit iStock.