In recent months, Pure Storage, NetApp, and VAST Data have all produced sustainability reports highlighting each company’s approach to managing the impact of technology on the environment. Sustainability is part of a wider remit to report on ESG – Environmental, Social and Governance characteristics of business. We take a quick look at what sustainability in IT will mean over the next evolution of the industry.
Information technology continues to become increasingly important to the global population and the world economy in general. Around 3 billion people are active on Facebook each month (although this appears to be peaking), and across the world, data centres account for an increasing percentage of electricity consumption. Housebuilding in London is now allegedly being restricted by the growth of data centres, while existing facilities struggle with the effects of climate change. Technology will continue to exert an increasing influence on our daily lives, with an equivalent impact on the environment.
ESG (Environmental, Social and Governance) reports enable businesses to demonstrate their credentials in aspects deemed to represent good corporate citizenship. Environmental looks at all aspects of how a business interacts with the environment, which could include the consumption of raw materials, recycling and waste management, the operation and efficiency of buildings, and policies on climate in general.
Social covers social responsibility, including engagement with local communities, donating to and assisting good causes, ethical investment, and engaging with vetted subcontractors (for example, ensuring payment of a living wage and having a policy on modern slavery).
Governance covers the transparent and equitable operation of the business across areas such as finance and accounting, recruitment and career advancement and diversity policies.
In this article, we’re focusing on the environmental aspects of business.
E is for Environmental
What are the aims of establishing environmental policies within a business? As a rule, a company should be looking to minimise its environmental impact. This includes building efficiencies into products sold, the ongoing maintenance, replacement and repair of products, and the operational infrastructure used to support the business.
For companies that manufacture products, this can include:
- Reducing or removing the use of rare earth metals or other materials that are damaging to extract or damaging to the environment as waste.
- Recycling and reusing as much hardware as possible, sourcing raw materials from existing brownfield sources.
- Increasing product reliability to reduce component failures.
- Making products simple to repair or upgrade without a complete replacement.
- Designing for longevity – removing the “planned obsolescence” factor.
In the past, building in obsolescence might have been seen as good for business. After all, we’ve all experienced the 3-year refresh cycle of storage hardware, pushed by vendors looking to justify upgrades through artificial maintenance cost structures. TCO (total cost of ownership) models can easily be manipulated to make a technology refresh look attractive by simply excluding a few variables that would otherwise change the equation.
As customers, this is arguably where we need to review our existing TCO models and see how they apply from a sustainability perspective. If, as consumers, we also want to demonstrate good sustainability governance, then we need to rethink what’s important for our businesses to explain to our own customers.
As an example, is it better to deploy a more power-efficient solution that is in place for longer (say 5-6 years rather than refreshed after three) because the impact on the environment from manufacturing and waste is lower?
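The trade-off above can be sketched as simple arithmetic: embodied (manufacturing) impact plus operational impact over the service life. All figures below are illustrative assumptions, not vendor data; the point is the shape of the calculation, not the numbers.

```python
# Back-of-envelope comparison of two refresh strategies for a storage array.
# All numbers are illustrative assumptions, not vendor figures.

def lifetime_impact(embodied_kgco2e, power_watts, grid_kgco2e_per_kwh, years):
    """Total kgCO2e over the service life: manufacturing plus operation."""
    hours = years * 365 * 24
    operational = (power_watts / 1000) * hours * grid_kgco2e_per_kwh
    return embodied_kgco2e + operational

# Strategy A: refresh every three years (two arrays over six years).
strategy_a = 2 * lifetime_impact(embodied_kgco2e=5000, power_watts=1200,
                                 grid_kgco2e_per_kwh=0.2, years=3)

# Strategy B: one slightly less power-efficient array kept for six years.
strategy_b = lifetime_impact(embodied_kgco2e=5000, power_watts=1400,
                             grid_kgco2e_per_kwh=0.2, years=6)

print(f"3-year refresh: {strategy_a:.0f} kgCO2e")
print(f"6-year life:    {strategy_b:.0f} kgCO2e")
```

With these (assumed) inputs, the longer-lived but less efficient system still comes out ahead, because a second round of manufacturing impact is avoided. Change the grid carbon intensity or the embodied figure and the answer can flip, which is exactly why the TCO model matters.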
No sustainability targets will be met without metrics and new measures on environmental costs. But like all TCO models, there’s tremendous variability in what should be included or excluded as parameters. Here are some that we think create a good starting point.
Longevity & Reliability
Hardware needs to be as reliable as possible and survive in place as long as possible (without replacement). In the storage world, we’re used to seeing MTBF and AFR figures for hard drives and SSDs. Backblaze regularly posts drive reliability statistics that show some interesting data (like this one positing that SSDs are more reliable than HDDs). Unfortunately, server and server component statistics don’t seem to be widely published. This is “justified” on the basis that server configurations can vary widely. But vendors must know how well their products perform simply by looking at the aggregate maintenance history of supported products in the field.
- When Did Hard Drives Get Workload Rate Limits?
- Conflating Reliability and Endurance in SSDs
- Flash Capacities and Failure Domains
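For readers comparing datasheets, the MTBF and AFR figures mentioned above are two views of the same number. Assuming a constant failure rate (the standard exponential model, not any particular vendor's methodology), the conversion looks like this:

```python
import math

def afr_from_mtbf(mtbf_hours):
    """Annualised Failure Rate implied by an MTBF figure,
    assuming a constant failure rate (exponential model)."""
    hours_per_year = 365 * 24  # 8,760 hours
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# A datasheet MTBF of 1.2 million hours implies roughly a 0.73% AFR.
print(f"{afr_from_mtbf(1_200_000):.2%}")
```

Field data such as the Backblaze statistics often shows AFRs above the datasheet-implied figure, which is one reason published fleet numbers are more useful than specifications alone.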
Operability & Compatibility
In some respects, this ties in with the longevity discussion. The useful life of hardware isn’t solely dictated by the MTBF before it breaks. Obsolescence is built into software too. Anyone looking to create a VMware environment, for example, will know that drivers and platform support for older hardware drops off quickly. Is it really that hard to support older generations of what is effectively, in most cases, the same product? HBAs, NICs and CNAs are good examples here where perfectly serviceable hardware can’t be used. We’ll touch more on this area in a moment when we discuss software.
There are also many other aspects to operability. In two recent podcasts looking at PCI Express, we learned that PCIe devices are forward and backwards compatible, removing any need to keep servers and components in lockstep (excluding any performance issues). Pure Storage built upgradability into the FlashArray products (and now FlashBlade), so controllers and media can be replaced, reused and redistributed over time.
Efficiency
What makes a product efficient? Modern IT focuses on the power/cooling aspects of hardware but doesn’t make it easy to compare platforms. At a component level, we can compare CPUs, DRAM, NICs and storage devices. At the server level, we can even look at power supplies, fans, and motherboards. The greatest challenge here is the difference between processor architectures that can make these comparisons complex due to the need to measure application performance.
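One way to cut through the component-level comparisons is to measure at the application level: useful work delivered per watt. The figures below are illustrative assumptions, but they show why a platform with lower absolute throughput can still win on efficiency.

```python
# Comparing platforms on application-level performance per watt.
# Throughput and power figures are illustrative assumptions, not benchmarks.

platforms = {
    "x86 server": {"ops_per_sec": 500_000, "watts": 450},
    "Arm server": {"ops_per_sec": 380_000, "watts": 250},
}

for name, p in platforms.items():
    ops_per_watt = p["ops_per_sec"] / p["watts"]
    print(f"{name}: {ops_per_watt:,.0f} ops/watt")
```

In this sketch, the Arm system delivers fewer operations per second but more per watt, so the "right" answer depends on whether the workload can be scaled out across more, smaller nodes.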
What if you don’t sell hardware? What if you’re a software-only vendor? How does this affect sustainability? As this article shows, programming languages are not all the same. There’s a huge trade-off between performance and flexibility. But even within this relatively controlled test, language efficiency varied when looking at execution time, DRAM usage and processor usage.
- Will TCO Drive Software Defined Storage?
- Processors back under the spotlight for 2019
- Are ARM Processors Ready for Data Centre Primetime?
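The language-efficiency studies referenced above compare execution time, memory and processor usage for the same task. The same style of measurement can be reproduced in a few lines; this sketch uses Python's standard `time` and `tracemalloc` modules on a toy workload.

```python
# Measuring execution time and peak memory for a single task --
# the kind of per-language metric the efficiency studies compare.
import time
import tracemalloc

def workload():
    # Toy task: sum of squares below one million.
    return sum(i * i for i in range(1_000_000))

tracemalloc.start()
start = time.perf_counter()
result = workload()
elapsed = time.perf_counter() - start
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"time: {elapsed * 1000:.1f} ms, peak memory: {peak / 1024:.1f} KiB")
```

Run the same task in C, Go or Rust and the time and memory profiles differ dramatically, which is the trade-off between performance and flexibility the article describes.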
The efficiency of software is a complex and broad topic that doesn’t just include the choice of programming language but also encompasses virtualisation techniques, the choice of processor architecture (RISC versus VLIW), the use of custom processors such as DPUs and GPUs, and, most importantly of course, how resources are used.
As an example, VAST Data’s report majors on the efficiency with which Universal Storage consumes storage resources to improve performance and longevity. Storage vendors rewrote I/O stacks in dedicated arrays during the transition to all-flash systems. WEKA started from scratch, building a solution that bypasses much of the Linux kernel, producing a much more efficient I/O process with high performance and low latency.
In many scenarios, low-power processing (like Arm) could be more efficient than using the latest and greatest Intel CPUs. Offloading to DPUs could be more power and performance efficient than using general CPU threads. Doing the calculations to prove these theories will not be simple.
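A rough model of the DPU question looks like this. All of the power figures are assumptions for illustration; in practice they vary by generation, workload and utilisation, which is why the calculation is harder than it looks.

```python
# Does offloading the I/O stack to a DPU save power?
# A rough model with illustrative figures -- real numbers vary widely.

cpu_watts_per_core = 10   # assumed marginal power of a busy x86 core
cores_freed = 4           # cores no longer running the I/O stack
dpu_watts = 25            # assumed added power draw of the DPU card

net_saving = cores_freed * cpu_watts_per_core - dpu_watts
print(f"net saving: {net_saving} W per server")
```

Here the offload saves power, but free fewer cores (or fit a hungrier card) and the sign flips; the freed cores also only count as a saving if they do useful work or allow server consolidation.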
Then we need to think about supportability. As mentioned earlier, vendor support for hardware can drop off quickly. Is this genuinely because regression testing for new features is so hard, or because vendors want to move customers on to bigger and better things? In this respect, Open Source has a strong position, offering longer support lifetimes than commercial vendors choose to provide.
In general, we can say that all IT systems need to be rewritten or at least “re-imagined” over time. Twenty years seemed like a reasonable refresh cycle for storage systems, based on the rapid development of new media, for example. Mainframe architectures were superseded by Linux and Unix platforms based on x86 and other processor architectures. In turn, we’ve seen x86 become dominant due to server virtualisation, followed by containerisation. Serverless architectures may be the next greatest thing, but that transition (and all the others) only makes sense if the underlying application code is efficient in the first place.
Standards
OK, so we have some yardsticks with which to measure vendors, but none of this is worthwhile without standards that can be used to compare like for like. Reviewing our referenced vendors, VAST Data doesn’t include any standards in its sustainability report but does make several comparisons with competing vendors.
NetApp follows ISO 14001:2015, which covers the establishment of an environmental management system (EMS). The company is clearly committed to internal efficiency and minimising the environmental impact of operations. However, under “what’s next” in the Environmental section of the 2021 report, we see the following:
“In the fiscal year ending 2022, we aim to calculate the energy use and emissions impact of our products.”
So far, I’ve not been able to find any related material that demonstrates how far this activity has progressed. So, although we can be confident that NetApp is working efficiently on its internal processes, we have yet to see how this translates to the product level.
Pure Storage talks about technology sustainability in a section of its annual ESG report, but this doesn’t highlight any standards. The final sections of the document reference the Global Reporting Initiative, but the mapping isn’t easy to follow and could be clearer.
The Architect’s View®
In this brief look at sustainability, we can see green shoots of reporting and business change based on reducing environmental impact. Both VAST Data and Pure Storage highlight product features that demonstrate both customer-focused efficiencies and environmental benefits. This is a “win-win” for businesses looking to demonstrate environmental leadership by using these products. We expect NetApp will show something similar soon.
Unfortunately, there is no easy way to compare products (trust the unvalidated vendor comparisons at your peril), so we’re reliant on proper analysis to understand the winners and losers in this area.
Then there’s the elephant in the room we haven’t discussed: the public cloud. Amazon has an entire website dedicated to sustainability, Microsoft Azure publishes similar collateral online, and Google discusses its goals and achievements in detail.
With so much money available, the hyper-scalers can invest in renewable energy, develop more power-efficient hardware and optimise parts of the infrastructure (for example, with bespoke software) in ways that individual enterprises just can’t match. This challenge represents both a risk and an opportunity for traditional vendors, where ESG credentials become a differentiating factor but demand more research and development investment.
There’s still a long way to go in developing practical frameworks to monitor both vendor operational sustainability and sustainability within products. This is an area we’re likely to return to many times in the coming years.
Copyright (c) 2007-2022 – Post #2c76 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.