VMware Project Pacific – First Impressions

Chris Evans | Containers, Enterprise, Tech Field Day, Virtualisation, VMware

Single pane of glass, one ring to rule them all, or a consistent, uniform management plane.  We’ve had many attempts at aligning infrastructure and application management over the years.  As we saw with SMI-S and ViPR, the results have been less than successful.  Unfortunately, there are usually too many vested interests at play for a single standard to gain real traction.

So, with the announcement of Project Pacific, has VMware cracked the single management plane conundrum?  Could the next version of vSphere, with native support for both VMs and containers, be the solution customers have been looking for?

Application Frameworks

VMware has dominated the server virtualisation market over the last 15 years.  ESX, then ESXi, has been the standard architecture, with rivals such as Hyper-V and KVM failing to make much of a dent in market share.  In fact, ESXi and the vSphere ecosystem are now being adopted by public cloud providers as an on-ramp to the public cloud.  All the major western CSPs (AWS, Google Cloud and Microsoft Azure) have announced plans to run (or are already running) vSphere on their infrastructure.

However, server virtualisation isn’t the only game in town.  Modern containerisation, popularised by Docker and industrialised by Kubernetes, is gaining ground and offers a real alternative for packaging applications.  Containers are lightweight, easy to deploy and offer significant advantages in maintenance and general operation.

The Kubernetes Challenge

Unfortunately, installing and managing Kubernetes is hard.  This isn’t surprising, as the origin of the software is Google, where Site Reliability Engineers would breeze through this task as a matter of course.  That’s not the case for the typical enterprise.  This is not because enterprise IT teams couldn’t build or manage Kubernetes solutions (and many do).  The difference is that enterprises prefer packaged and supported platforms where vendors take ownership of development and support.  This is one of the reasons why VMware vSphere was so successful in the first place.

With the complexity of Kubernetes representing a roadblock to container adoption, VMware clearly saw an opportunity to be the supplier of preference for the Kubernetes ecosystem, in much the same way that vSphere delivers on server virtualisation.  Even better, if the two solutions (server virtualisation and containers) could be integrated into the same platform, customers would have the ability to mix and match workloads through a consistent API and GUI.

And so, Project Pacific was born.

Project Pacific

Project Pacific was announced at VMworld 2019 and billed as the most extensive change to ESXi in ten years.  ESXi and vSphere are extended to create a single management plane and API that addresses both containers and virtual machines.  A Kubernetes Pod becomes a “first-class citizen”, just like a virtual machine, and is directly integrated into ESXi.

ESXi Extensions

How has this integration been achieved?  From published material and Tech Field Day presentations, we can see that a form of Kubernetes has been integrated right into the ESXi kernel. 

ESXi is based on a proprietary “Linux-like” kernel (the vmkernel) that virtualises the hardware devices of a physical server.  Privileged hardware access from guest operating systems is intercepted and handled by the vmkernel.  The guest thinks it sees physical hardware but is simply receiving responses from a software instantiation of storage, networking and other devices.  (Note: the implementation of virtualisation is much more complex than this, but it is simplified here for clarity.)
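
As a toy illustration of that idea (conceptual only, this is nothing like how the vmkernel is actually written), a virtual device is just software answering in place of real hardware:

```python
# Conceptual sketch only: a toy model of device emulation, not VMware
# vmkernel code.  The "guest" issues an I/O request; the hypervisor
# intercepts it and answers entirely from software.

class VirtualDisk:
    """Software stand-in for a physical disk presented to a guest."""

    def __init__(self, size_bytes: int):
        self.size_bytes = size_bytes
        self.blocks = {}                      # sparse, in-memory backing store

    def handle_read(self, offset: int, length: int) -> bytes:
        # The guest believes this response came from real storage hardware.
        return self.blocks.get(offset, b"\x00" * length)

    def handle_write(self, offset: int, data: bytes) -> None:
        self.blocks[offset] = data


disk = VirtualDisk(size_bytes=10 * 1024**3)
disk.handle_write(0, b"boot sector")
print(disk.handle_read(0, 11))                # b'boot sector'
```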

Each virtual machine has a vmm (virtual machine monitor) and a vmx (virtual machine executable) process that handle all of the other subprocesses needed to support a running VM.  To implement Kubernetes, VMware introduced a new process called CRX (the container runtime executive), which manages the processes associated with a Kubernetes Pod.  Each ESXi server also runs a new agent called the spherelet, analogous to the kubelet in standard Kubernetes and playing a role similar to hostd, the existing ESXi host management daemon.
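
To make the mapping concrete, here is a rough conceptual model of the process layout described above (names only, not VMware internals):

```python
# Conceptual model of the Project Pacific process layout described in
# the text, not VMware internals: each VM is served by a vmm/vmx pair,
# each Pod by a CRX, and the spherelet acts as the per-host agent in
# the way the kubelet does on a standard Kubernetes node.

from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    processes: tuple = ("vmm", "vmx")     # per-VM support processes

@dataclass
class Pod:
    name: str
    runtime: str = "crx"                  # container runtime executive

@dataclass
class ESXiHost:
    agent: str = "spherelet"              # analogous to the kubelet
    vms: list = field(default_factory=list)
    pods: list = field(default_factory=list)

host = ESXiHost(vms=[VM("db01")], pods=[Pod("web-5d9fc")])
print(host.agent, [v.name for v in host.vms], [p.name for p in host.pods])
```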

Micro-Kernel

What’s interesting about this approach is how VMware has chosen to implement what would otherwise be a bare-metal or virtualised Linux node running Kubernetes.  In the Project Pacific implementation of ESXi, the O/S kernel used to support Kubernetes Pods is a cut-down version of Linux, derived from Photon OS, part of VMware’s earlier work on containerisation.  (Note: this isn’t VMware’s first attempt at embedding containers.  Check out Project Bonneville for some additional background.)

This micro-kernel implementation has been highly para-virtualised.  This means software-based device drivers are used to simplify the kernel build and remove unnecessary support for devices that will never be visible to this lightweight O/S.  Para-virtualisation also allows VMware to implement faster hooks between the CRX and the ESXi kernel.  Because of this, the time to spin up a Pod under Project Pacific is quoted as being in the hundreds of milliseconds.
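
That figure should be easy to sanity-check on any cluster.  Below is a rough timing sketch using the official Kubernetes Python client; the namespace, Pod name and image are placeholder assumptions, and the measured numbers will obviously vary with the platform underneath:

```python
# Rough Pod start-up timing sketch using the official Kubernetes Python
# client (pip install kubernetes).  Namespace, Pod name and image are
# placeholders; adjust for your own cluster.

import time
from kubernetes import client, config, watch

config.load_kube_config()                     # uses your local kubeconfig
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="timing-test"),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="pause", image="k8s.gcr.io/pause:3.2")
    ]),
)

start = time.monotonic()
v1.create_namespaced_pod(namespace="default", body=pod)

# Watch until the Pod reports Running, then record the elapsed time.
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default",
                      field_selector="metadata.name=timing-test",
                      timeout_seconds=60):
    if event["object"].status.phase == "Running":
        print(f"Pod running after {time.monotonic() - start:.3f}s")
        w.stop()
```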

Direct Boot

In the Tech Field Day videos, Jared Rosoff references the term “direct boot”.  If we imagine the traditional boot process for an operating system, the hardware BIOS hands over control to the O/S boot process, which goes about discovering devices and building in-memory structures to support the hardware.  This process is needed because the hardware may have changed since the last boot.

In contrast, VMware is creating a very static structure with CRX and the micro-kernel infrastructure.  The boot process completes very quickly because there are few devices to discover, and what does exist is implemented in software.  Alternatively, the in-memory structures could be pre-created and the micro-kernel started without any need for a boot process at all.
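
The difference is easy to picture.  A conceptual sketch (illustration only; real boot code looks nothing like this):

```python
# Conceptual illustration only: contrasting a discovery-style boot with
# a "direct boot" in which the hypervisor hands over pre-built state.

import time

def discovery_boot(bus: list) -> dict:
    """Traditional path: probe each device and build tables at boot."""
    table = {}
    for device in bus:
        time.sleep(0.01)                  # stand-in for slow hardware probing
        table[device] = {"probed": True}
    return table

def direct_boot(prebuilt: dict) -> dict:
    """Direct boot: structures already exist, so there is nothing to probe."""
    return prebuilt

bus = ["pvscsi", "vmxnet3"]               # typical para-virtual devices
prebuilt = {d: {"probed": False} for d in bus}

t0 = time.monotonic(); discovery_boot(bus)
print(f"discovery boot: {time.monotonic() - t0:.3f}s")
t0 = time.monotonic(); direct_boot(prebuilt)
print(f"direct boot:    {time.monotonic() - t0:.6f}s")
```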

Performance

[Image courtesy of VMware Inc.]

An interesting aspect of using ESXi as the dispatch manager for processes is an increase in performance compared to either virtualised or bare-metal containers.  VMware claims that Project Pacific Kubernetes sees a 30% improvement over virtualised K8s and an 8% improvement over bare metal, attributing this to the NUMA-aware scheduling efficiencies of ESXi compared to native Linux.  It would be interesting to see some real-world examples of this performance improvement, as I’m sure the results will be application-specific.

Storage & Networking

The implementation of storage and networking within Project Pacific is designed to align directly with the way virtual machines consume these resources.  This means NSX for networking, and any existing supported vSphere datastore for Kubernetes storage (vVols, vSAN, external arrays).
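
If that datastore model holds, requesting persistence from Kubernetes should look entirely conventional.  A sketch using the official Kubernetes Python client; the StorageClass name “vsan-default” is a placeholder assumption, standing in for whatever vSphere storage policy is exposed to the cluster:

```python
# Sketch: requesting persistent storage from Kubernetes against a
# vSphere-backed StorageClass, using the official Python client.
# "vsan-default" is a placeholder; in practice the StorageClass would
# map to a vSphere storage policy (vVols, vSAN or an external array).

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="mongo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="vsan-default",
        resources=client.V1ResourceRequirements(
            requests={"storage": "20Gi"}
        ),
    ),
)

v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```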

I’m interested in seeing exactly how the storage will be delivered here because security and persistence will be key.  Could it be possible (for example), to store data from a MongoDB database onto a datastore and move that encapsulation from a VM to a container, completely seamlessly?

Detail

There’s still a lot more to understand about the implementation of Project Pacific.  I do recommend checking out the Tech Field Day presentations that provide some additional background information. (The first of the TFD videos is embedded here – I recommend watching them all).

Caveats & Challenges

Implementing Kubernetes natively is seen as a hard challenge.  Of course, over the years, there have been many hard technology challenges (think storage area networks), but in reality the issue is simply one of skills and experience.  Enterprises prefer to invest in technology and support rather than maintain armies of highly trained administrators.  This makes perfect sense, because the cost of people doesn’t scale well, while the pro-rata cost of technology decreases over time.

If VMware can offer the same look and feel for containerised applications as they achieved for server virtualisation, then Project Pacific could be a big hit with the enterprise.  However, there are a few caveats:

Divergence – I’ve yet to see anything that details how the open-source development of Kubernetes will be aligned with the vSphere-specific implementation.  Will VMware upstream changes into the main Kubernetes code, or is this the start of a forked implementation?  The mechanics of exactly how (and how quickly) new Kubernetes features are implemented in vSphere could be critically important.  vSphere customers are typically more cautious about upgrades than the open-source community.

VMware has indicated that it will be possible to run a guest Kubernetes cluster on the vSphere infrastructure.  This can be a customer-specific version, allowing developers to work with the latest releases of Kubernetes if necessary.

Integration – putting virtual machines and containers onto the same platform is like running a motorway with juggernauts (VMs) and motorcycles (containers).  How will the two interact?  VMware has stated that resource rules and limits will apply to a CRX Pod just as they do to a virtual machine (see the sketch after this list), but what happens when thousands of containers are started every hour or day?

Security & Data Management – I’d like to understand more about how VMware has implemented security credentials and data management within Project Pacific.  Specifically, how will existing persistent storage be associated with a container/Pod?  There’s also a very important question to ask about how data protection will work.  I’m guessing a Pod will look like a VM, so it can be backed up in a similar way.  Will it be possible to move workloads (mainly the data) easily between platforms, or will this result in lock-in?
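
On the resource-rules point above, the sketch below shows the kind of per-Pod requests and limits that would presumably carry across to a CRX Pod.  These are standard Kubernetes constructs expressed via the official Python client; the values and image are illustrative only:

```python
# Sketch: standard per-Pod resource requests and limits, the kind of
# resource rules VMware says will apply to CRX Pods just as they apply
# to VMs.  Values and image are illustrative placeholders.

from kubernetes import client

limited_pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="limited"),
    spec=client.V1PodSpec(containers=[
        client.V1Container(
            name="app",
            image="nginx:1.17",
            resources=client.V1ResourceRequirements(
                requests={"cpu": "250m", "memory": "128Mi"},
                limits={"cpu": "500m", "memory": "256Mi"},
            ),
        )
    ]),
)
```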

The Architect’s View

I can see why VMware is selling Project Pacific as such a big upgrade to vSphere.  The engineering changes are significant.  For enterprises that don’t want to build Kubernetes from scratch, this solution looks extremely attractive.  As with any technology, though, the implied lock-in could be a problem if the VMware implementation of Kubernetes diverges heavily from the open-source version.  Having said that, we see plenty of open-source solutions that are subsequently packaged differently for the enterprise.

There will be lots to digest on Project Pacific and Tanzu as the technologies are rolled out and adopted in the enterprise.  One key factor could be how VMC (VMware Cloud on AWS) evolves with the ability to run virtual instances and containers in the public cloud.  Is this an on-ramp to native cloud migration, or is vSphere a viable long-term management and platform solution for public and private cloud?  We will have to wait and see.


Copyright (c) 2007-2019 Brookend Limited. No reproduction without permission in part or whole. Featured image and other images copyright of VMware Inc. Post #BC8D.