Edge Computing – Reality or Myth?

Chris Evans – Edge Computing, Enterprise

As legacy vendors battle against the grip of the public cloud, edge computing is once again gaining momentum.  This technology has been discussed for years but never really gained mainstream attention.  Is “Edge Computing” a reality, a myth, or simply something we’ve been doing for years?

Definitions

It’s hard to pin down a specific definition of what edge computing means today.  Computing outside core data centres has existed for decades, with retail stores and banks being typical historical examples.  Today, of course, the options for edge solutions are much greater because computing technology is more ubiquitous and cheaper than ever.

Edge Computing solutions are generally autonomous installations of ruggedised hardware capable of running in locations far less hospitable than a data centre.  Modern edge also encompasses higher data flows (in and out) than ever before.  In 2018, Gartner predicted that by 2025, 75% of all data would be created at the edge.  If this prophecy is to hold true, we have a lot of work to do in the next three years.  Remember, of course, that much of this data will be machine-generated, and perhaps this is where the modern edge is different in terms of the volume of data being created.

Hardware

Much of the edge discussion continues to focus on hardware, specifically servers and networking. Dell Technologies, for example, announced the XR4000 in October 2022.  This solution has a “stackable” multi-node option and is only 14” deep, making it practical to deploy in locations without racking. 

However, this isn’t a new market for Dell.  Back in February 2016, we had a presentation on Dell’s VRTX converged platform at a Tech Field Day event in Austin.  This solution had multiple nodes, shared storage, and an excellent remote management interface. 

As another example, back in 2020, we saw how edge computing could be delivered with lightweight Intel NUC nodes.  Scale Computing demonstrated the HE150 (at another Tech Field Day event).  We still have one as a lab environment.  Back in May 2022, Scale announced Fleet Manager, a SaaS solution to manage large-scale deployments of HC3 nodes, including the HE150. 

Hardware Requirements

Is rugged and stackable hardware all the edge needs?  We don’t think so.  Design and functionality are critical requirements for technology at the edge.  Back in October 2019, we spoke to Scale Computing CTO Phil White to understand the experiences of deploying thousands of nodes across hundreds of physical locations, many of which won’t be visited by a technician for up to six months.

What’s clear from this discussion is the additional cost the Edge introduces.  When computing hardware sits in core data centres, economies of scale and consolidation keep the physical management overhead low.  When physical infrastructure is distributed, support costs escalate significantly.  Therefore, edge hardware needs to meet the following requirements.

  • Modular – components need to be easily replaced in the field, with minimal or no downtime to operations.  Modular designs should make it simple to identify and replace failed components, or those due for upgrade, such as controllers/servers, storage, and networking.  Hardware maintenance should be so simple that on-site staff can do the work without a qualified technician. 
  • Resilient – edge solutions must be fault tolerant and resilient.  This means being capable of sustaining component failures that may not be remedied for some time.  Of course, there’s a delicate balance to strike between deploying lots of redundant hardware and relying on highly reliable components. 
  • Autonomous – Where component failure occurs, systems need to deliver autonomous recovery (such as RAID rebuilds) to automatically return to a resilient state.  HCI has been a significant benefit here.
  • Efficient – cost is a big factor in large-scale edge deployments.  All efficiency savings result in a magnified benefit for the business.  Edge designs should be focused on continual performance improvements, with power and space reductions over time.
  • Remotely managed/monitored – We’ll discuss software in a moment, but from a pure hardware perspective, solutions must provide high-quality telemetry, including features like ambient temperature monitoring (as many will be deployed in non-data centre locations) and remote/automated upgrades (see the sketch after this list).  These interfaces must be 100% secure, as many edge networks will be less trusted than core locations.
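
As a rough illustration of the remote monitoring point, here is a minimal telemetry sketch in Python.  The collector URL, certificate paths, and payload fields are hypothetical, and the thermal-zone path assumes a Linux-based node; a real agent would use whatever sensors and secure transport the hardware vendor provides.

```python
# Minimal edge-node telemetry sketch (illustrative only).
# Assumes a Linux thermal-zone sensor and a hypothetical HTTPS collector endpoint.
import json
import socket
import time

import requests  # pip install requests

COLLECTOR_URL = "https://telemetry.example.com/v1/nodes"    # hypothetical endpoint
CLIENT_CERT = ("/etc/edge/node.crt", "/etc/edge/node.key")  # mutual-TLS node identity


def read_ambient_temp_celsius() -> float:
    """Read the first thermal zone exposed by the kernel (path varies by platform)."""
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0


def send_telemetry() -> None:
    payload = {
        "node": socket.gethostname(),
        "timestamp": int(time.time()),
        "ambient_temp_c": read_ambient_temp_celsius(),
    }
    # The client certificate authenticates the node; the collector should reject
    # anything that cannot present a trusted identity.
    resp = requests.post(COLLECTOR_URL, json=payload, cert=CLIENT_CERT, timeout=10)
    resp.raise_for_status()


if __name__ == "__main__":
    while True:
        try:
            send_telemetry()
        except Exception as exc:  # never let a transient failure kill the agent
            print(f"telemetry send failed: {exc}")
        time.sleep(60)
```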

We can already see from the examples discussed that design is a big focus for edge vendors.  We think that the metrics of cost and efficiency should be clearly conveyed to potential customers, including being able to demonstrate improved efficiency with each product generation.

Software Requirements

Anyone who’s managed technology remotely will know the frustration of poorly implemented KVM and remote access consoles.  Modern solutions are better than ever at providing remote access, monitoring, and hardware visibility.  However, what happens if a remote O/S upgrade fails?  How easy is it to revert to the prior image?  Can this process be achieved automatically? 

At the most basic level, fleet management tools are required that can automate and manage the rollout of upgrades, patches, new operating systems, and new packaged applications.  These tools need to be constructed in a way that integrates with the hardware, making it possible to totally re-image a server remotely if all else fails.  The aim here is to minimise the need for a skilled technician to attend site in all but the most extreme cases. 
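
To make the re-imaging idea concrete, here is a minimal sketch of an A/B-style upgrade with automatic rollback.  The EdgeNode methods (write_image, set_next_boot, is_healthy, and so on) are hypothetical placeholders rather than any specific vendor’s API; the point is the pattern of applying an image to a standby slot, health-checking, and reverting automatically if the check never passes.

```python
# Sketch of an A/B image upgrade with automatic rollback (illustrative only).
# The node object and its methods are hypothetical, not a real vendor API.
import time


class UpgradeFailed(Exception):
    pass


def upgrade_node(node, image_url: str, health_timeout_s: int = 600) -> None:
    """Write the new image to the standby slot, reboot into it, and verify health.

    If the node never reports healthy, fall back to the previous slot so the
    site keeps running without a technician visit.
    """
    node.write_image(slot="standby", url=image_url)  # hypothetical call
    node.set_next_boot(slot="standby")
    node.reboot()

    deadline = time.time() + health_timeout_s
    while time.time() < deadline:
        if node.is_healthy():            # e.g. services up, storage online
            node.commit_boot_slot()      # make the new image permanent
            return
        time.sleep(15)

    # Health check never passed: boot back into the known-good image.
    node.set_next_boot(slot="active")
    node.reboot()
    raise UpgradeFailed(f"{node.name}: rolled back to previous image")
```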

Why do we want this?  Obviously, there’s a cost implication to consider.  It can prove very expensive to maintain teams of mobile engineers, especially across large geographies with many diverse locations.  The second aspect is understanding how service levels will be met.  If a high proportion of failures starts to require onsite attendance, then applications and systems could be down for extended periods, costing the business money simply because of an engineer’s travel time.

Application Requirements

Extending the software discussion, appropriate application design becomes vital at the edge.  Imagine orchestrating the update to thousands of remote databases.  What checks and balances need to be in place?  In terms of application design, it may prove more practical to consider using cloud-based database platforms like FaunaDB or MongoDB Atlas, although there’s an availability aspect to consider should these systems experience an outage.  Kubernetes and lightweight application frameworks will also be important.  This leads us on to the discussion of data.
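
One way to build those checks and balances into a large rollout is a staged (canary) approach: update a small batch of sites first, verify health, and only then widen the blast radius.  The sketch below assumes hypothetical apply_update and site_is_healthy helpers supplied by the fleet-management tooling.

```python
# Canary-style rollout across many edge sites (illustrative sketch).
# apply_update() and site_is_healthy() are hypothetical helpers.
from typing import Callable, Iterable, List


def staged_rollout(
    sites: List[str],
    apply_update: Callable[[str], None],
    site_is_healthy: Callable[[str], bool],
    batch_sizes: Iterable[int] = (1, 10, 100),
) -> None:
    """Apply an update in ever-larger batches, halting on the first failure."""
    remaining = list(sites)
    for size in batch_sizes:
        batch, remaining = remaining[:size], remaining[size:]
        for site in batch:
            apply_update(site)
        failed = [s for s in batch if not site_is_healthy(s)]
        if failed:
            raise RuntimeError(f"halting rollout, unhealthy sites: {failed}")
        if not remaining:
            return
    # Final wave: everything left once the canary batches have passed.
    for site in remaining:
        apply_update(site)
```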

Data Requirements

Whether in the public cloud or on-premises in core data centres, data security is always top of the agenda.  At the edge, the challenge of securing data is even greater.  IT organisations need to guard against data loss (DLP) while ensuring data can be efficiently pulled back into core locations where required. 

Data management at the edge poses one big problem – as we look out from our core data centres, how can we trust the incoming data to be accurate? 

Imagine if hackers compromised an edge network and fed in deliberately distorted or inaccurate telemetry data or, worse, falsified financial transactions.  How could a business trust this data or even identify that it was affecting essential operations?

For decades, EPOS systems have relied on trusted device status, with robust key management features that guarantee a device passing data genuinely belongs on the network.  These features should be enabled for all edge devices by default as part of a zero-trust strategy.  This invariably means some degree of hardware integration.
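
As a minimal sketch of the idea, the example below uses a per-device HMAC key to let the core verify that incoming records really came from a known device and weren’t tampered with in transit.  The key registry is a stand-in; in practice, keys would be provisioned into a TPM or secure element rather than held in application code.

```python
# Sketch: authenticate data coming back from an edge device with a per-device key.
# In practice the key would live in a TPM/secure element, not a Python variable.
import hashlib
import hmac
import json

DEVICE_KEYS = {"store-0042": b"per-device-secret"}  # hypothetical key registry


def sign_record(device_id: str, record: dict) -> dict:
    """Run on the device: attach an HMAC so the core can verify origin and integrity."""
    body = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEYS[device_id], body, hashlib.sha256).hexdigest()
    return {"device": device_id, "body": record, "mac": tag}


def verify_record(message: dict) -> bool:
    """Run at the core: reject anything whose MAC does not match the device's key."""
    key = DEVICE_KEYS.get(message["device"])
    if key is None:
        return False
    body = json.dumps(message["body"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])
```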

Finally, architects must think about how to manage data protection and restore service in the event of a complete systems failure.  Security is also a critical focus, although we believe security requirements aren’t a single, separate issue but are distributed across all the sections we’ve discussed.   

The Architect’s View®

As a computing location, the Edge has existed for decades.  Modern edge requirements differ in that they experience the three “Vs” of data – Volume, Velocity and Variety.  The opportunities for the edge are varied and complex.  If we believe the hype around data volumes, then we won’t have the network capability to move data into core locations, so edge computing will be essential.  Public cloud vendors are already making noise about edge solutions.  Perhaps initiatives like Dell’s Project Frontier will eventually solve the issues we’ve discussed here.  Edge could then be significant enough to figure in the strategy of every business. 

So, to answer our original question, is “the Edge” a myth?  Most of the requirements we’ve discussed here apply to data and infrastructure wherever we run it.  As a result, we don’t think the edge is a specific location, but an extension of distributed computing.  This is part of the spectrum of solutions modern businesses are expected to deploy.  Perhaps we need a better term to describe the diversity of computing used by enterprises today, which, naturally, would include the Edge.   


Copyright (c) 2007-2023 – Post #26e3 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.