The IT world does an amazing job of producing, shipping and delivering components and platforms. However, the global chip shortage has demonstrated the need for a solid and reliable supply chain. How should we be thinking about our storage and other infrastructure, considering the ongoing problems that are likely to run into 2022 and beyond?
The current issues with semiconductor production are due in part to the COVID-19 pandemic, trade tensions between China and the US, freak weather, and facility fires. This “perfect storm” of global challenges has resulted in unpredictable production of key components used across a wide range of industries, including motor manufacturing and some IT infrastructure products such as storage systems and network switches.
We’ve seen statements from leading component vendors including Intel (Pat Gelsinger), NVIDIA (Jensen Huang), IBM (Arvind Krishna) and AMD (Lisa Su) that indicate the crisis will continue into 2022 and take some time to stabilise. This is because the operational processes required to tool up and manufacture components have an extended lead time.
Supply-chain management is an interesting challenge for IT vendors. Most don’t produce the components used in servers, networking, and storage equipment. Instead, their process is one of final assembly and testing, essentially integrating hardware that is manufactured in the Far East. As a result, managing the supply chain is a critical process.
Ten years ago, Thailand suffered significant flooding that impacted the hard drive industry. We must remember that this was a time when flash storage was still in the early stages of adoption, so most enterprise data still resided on hybrid or all-HDD systems.
I remember having discussions with vendors on supply availability. EMC (then still independent) indicated to us that drives could still be sourced, although choice was limited and some drives would be substituted based on available capacities. The result for many consumers was a significant price hike from the major storage providers.
For vendors like EMC, IBM, HPE and NetApp, existing contracts, negotiations, and sheer size would have dictated the ability to keep supplies in place as much as possible. Similarly, for the customers of those companies, the largest purchasers would be more likely to get priority. Outside of that circle, the rest of the storage market had to “make do”. No company personified this more than Backblaze, a vendor of online data protection (and now object storage) solutions. As this post shows, the answer for the company was to buy up portable drives and extract the HDDs, a process Backblaze calls drive farming. Probably more telling is the graph in this follow-up post from Backblaze, showing the longer-term impact of the drive shortage on pricing.
Back in 2011, few IT organisations would have adopted the idea of software-defined storage for enterprise solutions. Most would still have depended on vendors and their existing relationships to get through the challenges of HDD shortages. Today, almost all storage solutions are based on off-the-shelf components and software.
The move to SDS has happened at the enterprise level, with customers choosing to adopt technologies like Ceph or buying solutions from vendors that are essentially commodity hardware sold with software. Ten years on from the HDD shortage, we’re also experiencing a broader set of issues that impact the core components of the infrastructure we deploy.
For modern deployments, IT organisations and end-users have three main choices.
- Vendor-supplied solutions – traditional storage hardware products.
- SDS – build from commodity components.
- Public Cloud – move to the public cloud.
The traditional storage appliance market has looked static for several years, although it still represents billions of dollars in investments each year. Software-defined has become increasingly popular, driven by new solutions like object storage, vendors creating more commercial SDS products and, of course, open-source storage solutions. The public cloud has also been a massive reservoir for data, typically unstructured content placed into object stores.
What choices do IT organisations have with respect to the current supply chain challenges? Unless you’re the size and scale of a Fortune 500 company, it’s unlikely that your organisation will have the buying power to gain priority access to storage and server components. The only answer here is to build internal inventory ahead of time, which means locking in capital. In times of global supply chain disruption, software-defined storage and self-build infrastructure look like a riskier business than other approaches.
An alternative option is to move some data to the public cloud. Typically, this strategy is helpful for unstructured and less active data. Block storage in the cloud isn’t designed for global access but is tied to the virtual instances running applications. This means there is little or no scope to move transactional-type data without moving the application too. (Note: there are some exceptions; some vendors offer co-located storage.) So, moving to the cloud is an option if applications can be migrated too, or if the data being transferred is appropriate to access remotely.
The third scenario is to rely on established vendors with solid supply chain stability. This will include companies that develop custom solutions and so have a vested interest in the component supply chain model. This last point is vital because some vendors will have the flexibility to swap out components or systems for others, depending on the nature of any shortage.
When today’s businesses depend on technology so heavily, picking the right solution to run your operations isn’t just a matter of finding the best technical fit. CTOs must find the right path that navigates between technology, risk, and cost. What does this mean for modern IT?
- Diversity – a multi-vendor strategy is one approach to managing cost and may help with global supply-chain issues. IT organisations should also look at how diversification across on-premises and public cloud can help.
- Standardisation – a multi-platform supply strategy works if applications and data are standardised. The biggest risk of standardisation is discounting the ability to use vendor-specific services by standardising to the “lowest common denominator”. Standardisation also applies to internal systems deployment. In our last blog, for example, we highlighted the benefit of a single, scalable FlashSystem O/S platform.
- Planning – understand long-term business growth and future requirements. Learn how the business wants to consume technology and build offerings that provide flexibility and security of supply.
One final point is to ensure good relationships with suppliers and internal customers alike. IT vendors want to deliver the right solutions for their customers. With a greater understanding of demand, vendors have more opportunities to meet requirements and build that into their own supply chain process.
Question your on-premises vendor on the strength of their supply chain, how they are mitigating challenges and what alternatives they have to offer. Look at the wider market to see which vendors are able to deliver their products in times of supply chain challenges.
IT organisations want to deliver the best and most cost-effective solutions for their customers. With a greater understanding of the needs of the business, this challenge becomes easier to mitigate.
The Architect’s View™
The current chip shortage will eventually seem like a distant memory as inventories return to normal. However, lessons will be learned on how to maintain security of supply for IT services. The supply chain challenge is always one to watch, making it as important as picking the right technology itself.
Copyright (c) 2007-2021 – Post #bae3 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.