I’ve been pondering the recent ransomware attacks and thinking about what this means for the future of IT. Is ransomware a scenario we should simply accept, or can we do more to protect against an increasingly sophisticated and evolving bane of technology and business?
In “On the Origin of Species”, Charles Darwin creates a framework and scientific theory that describes the evolution of nature and the biological world. Simplifying for a moment, we can generalise that species evolve to better survive in their environments – or die out. There are thousands of incredible examples of adaptation in the natural world, from the colour-changing mimic octopus to the cheetah, capable of running at 75 miles per hour.
In our connected IT world, we see the same process taking place in the battle between IT organisations in business and ransomware hackers. Technology continues to evolve and protect against infiltration, using advanced AI techniques to detect network intruders. At the same time, the hackers develop increasingly sophisticated attack techniques (for example, supply-chain attacks such as the SolarWinds compromise).
Survival of the Fittest
However, I don’t think all hacks are that sophisticated. Using the Darwinian example again, the lion or cheetah only has to pick off the weakest member of the herd rather than chase a specific animal. Hackers only have to find companies with vulnerable networks – and of course, data valuable enough to be encrypted and extorted for money.
How have we ended up in this situation? Like many global challenges, the emergence and “success” of ransomware is a multi-layered and incremental problem.
Network Ubiquity – before the early 1990s and the development of the World Wide Web, global network presence simply didn’t exist. The Internet connected some large organisations and mainly academic institutions. I was lucky to get a feeling for what global connectivity meant through using JANET in the UK in the mid-1980s. At the time, the network had connections to some US universities, with no login restrictions on connecting to remote UNIX machines. Today, through one means or another, almost every PC, server and edge device has some form of network connectivity, frequently exploiting the Internet for simplicity and cost-efficiency.
Technical Debt – this can manifest itself in many ways. In some scenarios, shortcuts have been taken that require future rework and increased expense, with the result that IT organisations retain systems and software longer than they should. Older systems (especially unpatched ones) are more vulnerable by nature, but the cost of transformation and rewriting can be high – or in some cases, the work is impossible if the original source code is missing. I’ve worked in businesses where IT claimed to have “one of everything” (and in many cases, that was true). This also included servers that had no apparent ownership, which would be troubling in today’s ransomware age.
Complexity – modern IT can build systems with a broader choice of technology than ever before. Businesses still have mainframes, departmental computers, modern servers, the public cloud and other platforms to choose from. Applications can be deployed on bare metal, virtualised, containerised or serverless. With such a diverse set of choices, it’s impossible for any individual to understand the interactions and risk points of so many platforms.
Lack of skills and understanding – IT has gone through a process of continuous abstraction away from how the core aspects of computing work. Most of the time, this evolution is a Good Thing. Nobody wants to program in machine code if they don’t need to! On a more serious note, abstraction at the operating system level allows the same software to run across multiple hardware variants. Virtualisation enables hardware optimisation and many side benefits like increased availability. Containerisation continues that process, but how many of today’s programmers understand what’s going on underneath all those layers of indirection?
While we don’t all need to be mechanics to drive a car, a degree of understanding about how cars work certainly makes the journey more enjoyable.
Poor Process – probably the biggest problem of all is the lack of process and adherence to standards. We’ve all seen this in daily operations, including poor password standards, sharing of privileged accounts, lack of patching and upgrades, and insufficient auditing, to name but a few.
Following a process and keeping to standards is often seen as pedantic and detrimental to getting the job done, when in fact, following standards and policy should be applauded (assuming they are effective).
The Cost of Computing
It’s a truism to say that the cost of computing has decreased every year since commercial computing began in the 1950s. Bureau services existed because buying a computer was too expensive for most businesses. Wind forward to the 2020s, and computing can be acquired by credit card on an hourly basis from cloud service providers like Azure, AWS and Google.
So, why raise the subject of the cost of computing as a problem?
When we look at all of the issues already raised, we can see that platform diversity, complexity and network ubiquity all combine to create highly complex computing environments. It’s incredibly easy to cut corners and not spend time putting in place, for example, effective backup and disaster recovery. It’s easy to assume that strong passwords alone guard against server intrusion.
As we move to consume IT resources as services, I believe that many businesses falsely assume the infrastructure is secure, reliable, and protected as part of the service offering.
I am sure that internally, all of the cloud providers deliver secure solutions and spend thousands of hours reviewing, revising and updating their systems to resolve exploits as they are discovered. However, I am equally sure that many businesses do not perform the same level of due diligence because the costs appear too high to justify – even when that spending would be entirely reasonable.
As a general observation, I think businesses underestimate the risk of a ransomware attack and the subsequent impact it has on ongoing business. Ransomware insurance may offer some degree of financial protection, but reputational damage is probably worse than the initial economic costs. For example, will SolarWinds be viewed in the same light again? Will Exagrid convince customers that their Retention Time-Lock technology is good enough when the company itself paid out for a ransomware hack? In both cases, can existing and prospective customers be 100% sure that hackers left nothing behind that could compromise their systems in the future?
A Marie Kondo Approach to Ransomware Management
What can be done? As this article title states, the ransomware problem will never be solved. Ransomware attacks will evolve in complexity. Ransomware protection will evolve, too, always trying to plug the leaks in IT systems.
As IT professionals, we all have a role to play. Perhaps we can apply the KonMari approach to IT as an ongoing process to declutter. This means both decommissioning old systems and refreshing technologies, but at the same time, questioning whether we need to introduce yet another programming language or technology platform. If we do, how will the old technology get refreshed and replaced?
The Architect’s View™
As well as simplification, data protection needs to provide that last line of defence, with the immutability that tapes offered 30 years ago. There are plenty of solutions on the market today to do this (we’ll review more in our revised Modern Data Protection e-book). So, there’s no reason not to have a fallback remedy in place.
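As a sketch of what that tape-style immutability looks like in a modern service – using AWS S3 Object Lock purely as one illustrative example, not an endorsement of a specific product – a backup bucket can be configured so that objects cannot be deleted or overwritten until a retention period expires, even by an administrator:

```shell
# Object Lock must be enabled when the bucket is created
aws s3api create-bucket \
    --bucket example-backup-vault \
    --object-lock-enabled-for-bucket

# Apply a default COMPLIANCE-mode retention of 30 days: backup objects
# cannot be deleted or re-encrypted until the retention date passes,
# regardless of the credentials used.
aws s3api put-object-lock-configuration \
    --bucket example-backup-vault \
    --object-lock-configuration \
    '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'
```

Similar write-once-read-many (WORM) controls exist across most modern backup platforms; the key design point is that even a stolen administrator credential – typically the first thing ransomware goes after – cannot shorten the retention window.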
I believe we need a root and branch review of IT and how services are built, managed and delivered. We need new frameworks that are squarely focused on data, data management and security at the core. Everything else is ephemeral by comparison because the value, as always, to businesses, is their data.
Copyright (c) 2007-2021 – Post #be3d – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.