Containers vs Virtualisation

Chris Evans | Containers, Virtualisation

With the announcement of the release of Docker 1.0, Linux containers are one of IT’s current hot topics.  How do Docker and containers in general fit into the world of virtualisation, and is the technology simply riding a hype curve?

Background

The idea of containers isn’t a new one.  Sun Microsystems’ Solaris operating system introduced the idea of containers (or Zones) as early as 2004, and features such as Linux control groups have allowed process group isolation since 2007.  Containers are probably best described as operating system-level virtualisation.  Rather than create a completely separate virtual machine (VM) instance for each new application (as traditional virtualisation would do), containers allow multiple isolated user-space instances to run on the same Linux or Unix-based machine, all sharing a single kernel.  To understand how this is achieved, we need to know how operating systems like Linux divide up processes and virtual memory.  In order to implement tight security and fault tolerance, code is executed either in kernel mode (or kernel space) or user mode (sometimes called userland).  This segregation allows sensitive or privileged tasks, such as process scheduling and device driver support, to run in the kernel, while application functions run in user mode.  User-mode processes communicate with and use the kernel through the kernel API and system calls.
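
To make the user mode/kernel mode boundary a little more concrete, here’s a minimal sketch in Python, assuming a Linux host on x86-64.  The os.getpid() wrapper and a raw system call via libc both cross into the kernel and return the same answer; the syscall number 39 is specific to x86-64 and the snippet is purely illustrative.

```python
# A minimal sketch of the user-mode/kernel-mode boundary on Linux.
# Both calls below end up in the same kernel code path: os.getpid() is a
# thin user-mode wrapper, while libc.syscall(39) issues the system call
# directly (39 is SYS_getpid on x86-64; the number differs on other
# architectures).
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)

SYS_getpid = 39  # x86-64 only; adjust for other architectures

pid_via_wrapper = os.getpid()               # user-mode wrapper around the syscall
pid_via_syscall = libc.syscall(SYS_getpid)  # explicit transition into the kernel

print(pid_via_wrapper, pid_via_syscall)     # same PID, same kernel, two routes in
assert pid_via_wrapper == pid_via_syscall
```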

This method of operation will be familiar to anyone with a mainframe background.  On the z/OS operating system (and previous variants such as MVS/XA, MVS/ESA and OS/390), user processes run in an address space with multiple tasks (TCBs) providing multi-tasking support, but share a common “kernel” through libraries on the SYSRES, or system residence volume.  Privileged instructions are executed using supervisor calls (SVCs), which run in supervisor mode, the equivalent of running a process in the kernel.  Thus each address space is logically isolated from the others using virtual memory addressing; however, all address spaces and tasks are processed, or “dispatched”, on the same z/OS instance.

Traditional Virtualisation

Containers implement virtualisation by effectively running multiple copies of userland on the same operating system instance.  These copies all use the same kernel and so have similar dependencies and functionality.  Compare this to traditional server virtualisation, where each virtual server deploys its own complete copy of the operating system, including its own kernel and libraries.
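
A rough way to see both halves of this claim is to compare the kernel and process views inside and outside a container.  The sketch below assumes Docker is installed on a Linux host and can pull the public “alpine” image; it is an illustration, not a recommended workflow.

```python
# A rough illustration that containers share the host kernel but get their
# own isolated userland.  Assumes Docker is installed on a Linux host and
# the public "alpine" image can be pulled.
import subprocess

def run(cmd):
    return subprocess.check_output(cmd, shell=True).decode().strip()

host_kernel = run("uname -r")
container_kernel = run("docker run --rm alpine uname -r")

print("host kernel:     ", host_kernel)
print("container kernel:", container_kernel)  # same string: one shared kernel

# The process view, by contrast, is isolated: inside the container only the
# container's own processes are visible.
print(run("docker run --rm alpine ps"))
```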

The difference between the two forms of virtualisation starts to become clear.  Containers allow the user to run multiple similar instances of an operating system or application within the same O/S whereas server virtualisation creates machines that are logically isolated and can run different platforms.  Therefore, containers can be used to provide highly efficient deployments of similar applications that all use the same kernel code.  Server virtualisation allows each instance to run entirely separately, supporting many different (and non-Linux) operating systems.

There are some obvious benefits and disadvantages in using containers over VMs:

  • Containers can be created almost instantly – as fast as spawning a new Linux process.  This makes them excellent for scenarios where many transient, temporary instances need to be created and destroyed.
  • Containers are “lightweight”, sharing the same kernel and libraries and taking very little additional disk space.
  • Containers scale well – Linux already handles very large numbers of processes efficiently, and containers are essentially groups of processes.

However:

  • Containers all run on the same O/S instance, so if that O/S goes down or is rebooted, they all go down.
  • Containers can’t run other operating systems like Windows (or anything not based on the Linux kernel).
  • Containers aren’t great for permanent data storage as they are easy to destroy, so persistent data needs to be held outside the container, for example on a host volume (see the sketch after this list).
  • Containers aren’t as flexible as VMs in terms of resilience or portability (think vMotion).
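
On the storage point, the usual technique is to map a host directory (or volume) into the container, so the data outlives the container itself.  The sketch below is illustrative only: it assumes Docker on a Linux host with the public “alpine” image available, and uses a throwaway temporary directory as the host volume.

```python
# A minimal sketch of keeping container data outside the container itself.
# A host directory is mapped into the container with "-v"; the container is
# destroyed (--rm) but the file it wrote survives on the host.
import subprocess
import tempfile
import os

host_dir = tempfile.mkdtemp()

subprocess.check_call(
    "docker run --rm -v {0}:/data alpine sh -c 'echo hello > /data/out.txt'"
    .format(host_dir),
    shell=True,
)

# The container is gone, but the data persisted on the host volume.
with open(os.path.join(host_dir, "out.txt")) as f:
    print(f.read().strip())   # -> hello
```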

Application Use

So, why would anyone use containers?  The most obvious benefit is high scalability.  Imagine running many web server instances, each with their own database.  A traditional deployment might place them all on a single VM, making it difficult to manage and prioritise the workload generated by each site.  Containers provide more flexibility in workload management without having to resort to deploying many virtual machines.  This kind of implementation is effectively PaaS, or Platform as a Service, where the container doesn’t need to be maintained or patched as this is handled by the owning operating system.
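
As a rough illustration of that packing density, the sketch below starts several similar web server containers on one host, each with its own port mapping and memory limit.  It assumes Docker on a Linux host with the public “nginx” image; the names, ports and memory values are made-up examples rather than recommendations.

```python
# A rough sketch of packing several similar web server instances onto one
# host while keeping some control over the resources each can consume.
import subprocess

sites = [
    {"name": "site-a", "port": 8081, "memory": "128m"},
    {"name": "site-b", "port": 8082, "memory": "128m"},
    {"name": "site-c", "port": 8083, "memory": "256m"},
]

for site in sites:
    # Detached container, host port mapped to the web server, memory capped.
    subprocess.check_call(
        "docker run -d --name {name} -p {port}:80 -m {memory} nginx"
        .format(**site),
        shell=True,
    )
    print("started {name} on port {port}".format(**site))
```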

Where does Docker fit in?  Well, the Docker tools simply make the process of creating and managing containers easier, but ultimately build on existing containerisation software such as LXC, or on Docker’s own container library, libcontainer.

The Architect’s View™

Containers and Docker aren’t going to change the world and won’t replace traditional server virtualisation, as there are limitations to the way containers can be used.  However, they do represent a new opportunity to scale environments more effectively and potentially reduce VM sprawl.  But this will require developers to understand the differences in the deployment model for software and application execution.  I’ll be doing more work in the coming weeks with “How To’s” on getting started with Docker and other container solutions.

Related Links

Comments are always welcome; please read our Comments Policy.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2009-2021 – Post #4025 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.