I’ve been using ARM technology since the mid-1980s, although at the time I didn’t realise it. ARM (originally Acorn RISC Machine, later Advanced RISC Machine) is a series of processor architectures that initially developed from work at Acorn Computers in the UK. The earliest implementation was as a co-processor for the wildly successful BBC Micro. From those humble beginnings, Arm Holdings has created a business designing IP cores that system designers then use to build Systems on Chip (SoCs), Systems in Package (SiPs) and custom silicon for low-powered devices, mobiles and smartphones.
Probably the best-known examples of ARM technology today are Apple iPhones and iPads and the Raspberry Pi. Apple iOS-based devices use the “A” series of SoCs, the current version of which (the A13) is used in the iPhone 11. The Raspberry Pi 4 uses a Broadcom BCM2711 SoC, built around a quad-core ARM Cortex-A72 processor. Both use the 64-bit ARMv8-A architecture (or variants of it).
ARM designs use the RISC, or Reduced Instruction Set Computer, architecture. RISC designs exploit the concept of fewer, simpler processor instructions, each of which executes in fewer clock cycles than those of complex instruction set (CISC) designs such as Intel x86. The invention of RISC is credited to John Cocke, who worked at IBM’s Thomas J. Watson Research Center in Yorktown Heights.
(Side Note: I recommend reading “Computer Wars: The Post-IBM World” by Ferguson and Morris for more background on RISC development).
ARM designs don’t provide the same “straight line” performance for single-threaded applications compared to the Intel x86 processors we more commonly use today. Instead, the RISC concept and ARM designs are focused on being more resource-efficient than their Intel counterparts.
The fact that Intel designs are great for single-threaded applications shouldn’t be a surprise. The x86 family derives from the 8086, which was designed as a general-purpose processor, and x86 didn’t go multi-core until 2005 (the first multi-core processor was IBM’s POWER4, introduced in 2001).
Today’s computing is moving towards microservices and parallel processing with containers and Kubernetes. This trend towards greater parallelisation was first established with server virtualisation. Containerisation introduces the possibility of managing thousands of processes and threads on a single operating system – just the kind of workload that is suited to running across processors with multiple cores. The widespread use of Linux as the operating system for cloud-native applications complements parallel processing: Linux, a Unix-like system, was designed from the outset as a true multi-tasking operating system.
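As a sketch of the style of workload this favours – illustrative only; the function and numbers below are my own invention, not drawn from any real benchmark – a Go program can fan a job out across every available core:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelSum adds the integers in [0, n) by splitting the range into
// one chunk per available core and summing the chunks concurrently.
func parallelSum(n int) int {
	workers := runtime.NumCPU() // one goroutine per logical core
	chunk := n / workers

	out := make(chan int, workers)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if w == workers-1 {
			hi = n // last worker picks up any remainder
		}
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			s := 0
			for i := lo; i < hi; i++ {
				s += i
			}
			out <- s // send this chunk's partial sum
		}(lo, hi)
	}
	wg.Wait()
	close(out)

	total := 0
	for s := range out {
		total += s
	}
	return total
}

func main() {
	// Sum of 0..999999 is 499999500000, however many cores run it.
	fmt.Println(parallelSum(1_000_000))
}
```

The point isn’t the arithmetic; it’s that the same binary naturally spreads across 4 cores or 64, which is exactly where many-core designs earn their keep.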
Amazon Web Services has seen the opportunity to use ARM in its infrastructure-as-a-service (IaaS) offerings. AWS introduced the first Graviton-based EC2 (A1) instances at re:Invent in November 2018. A1 instances are built on the Graviton chip’s sixteen 64-bit ARM Cortex-A72 cores running at 2.3GHz.
Graviton2, introduced at re:Invent 2019, is based on ARM Neoverse N1 cores, which scale from 8 to 16 cores per chip and up to 128 cores per socket in server architectures. ARM claims 30% more performance per watt compared to the Cortex-A72, while AWS claims that instances based on the Graviton2 will deliver 7x the performance of A1 instances and significant performance benefits over existing 5th-generation instances (M5, C5 and R5).
Naturally, these savings are dependent on the application type; however, AWS is planning to use Graviton2-based instances to run existing AWS services such as Elastic Load Balancing and Amazon ElastiCache. Graviton2-based instances are not available yet, but I expect they will be popular.
Another company looking to take advantage of ARM technology is Bamboo Systems, formerly Kaleao. Although the details on the company are currently quite minimal, I did have a recent briefing with the Bamboo team. There does seem to be some potential here to build rack-scale systems based on ARM. However, the specific detail will be critical. I’ll post more on this once I have it.
The use of ARM processors won’t be without challenges. AWS gained the ability to build chips and servers based on ARM through acquisitions like Annapurna Labs, and it has the economies of scale to deploy ARM in the parts of the AWS infrastructure where it is a good fit, like those already mentioned. Enterprises, by contrast, will depend on incumbent hardware vendors to build solutions they can deploy.
Unfortunately, ARM isn’t binary compatible with the x86 architecture. Applications must be recompiled to run on ARM, and although that’s not a big hurdle, persuading vendors to produce ARM-based versions of their databases and other applications might be a little harder to achieve.
Then there’s the small problem of understanding the performance differences. While AWS quotes generational performance improvements, there are no clear indications of which applications will work more efficiently on ARM, compared to x86. Companies like Bamboo will have to produce some useful tools that help developers understand why ARM might work more efficiently for their platform.
The Architect’s View
Despite the theory that public cloud abstracts us from the underlying hardware, in practice we’re still tightly coupled to processor and server architectures when developing applications. The ARM architecture could provide tactical savings, for example, in front-end web servers or infrastructure delivery like DNS and DHCP. It will be interesting to see if ARM coupled with composable architectures like Liqid could provide any power savings over x86. From the presentations at Tech Field Day, I believe Liqid hasn’t seen any demand for this, so the idea might be a little way off.
It’s good to have choices, but as always, the benefits of new or different architectures need to be good enough to counteract the inertia of change. This area will definitely be one to watch in 2020.
Post #3257. Copyright (c) 2020 Brookend Ltd. No reproduction in whole or part without permission.