Remember when converged infrastructure came in? We were told that packaging was the next big thing and that doing things in separate towers of skills was bad. Now it appears we’ve developed Logo Addiction and should be looking to build things from scratch again.
The basis of this thesis is a presentation at Interop ITX by Peyton Maynard-Koran, as reported by Marcia Savage at Network Computing. Maynard-Koran works for Whole Foods/Amazon, so it’s easy to assume the company has a vested interest in driving traffic to AWS. But is the premise itself correct? Should we be building from scratch again?
I was never a fan of converged infrastructure and wrote relatively little about it when it emerged some 9-10 years ago. The inherent advantages of packaging (which is effectively what most CI solutions are) were always offset by other issues of ongoing management. I see some advantage in having a vendor certify the components, but with my enterprise background it made little sense to me. Some related posts from the archive:
- Will NetApp Build a Hyper-Converged Appliance?
- Rumours of Cisco Acquiring Nutanix – Hyper-Converged Consolidation?
- Why HCI Data Protection is Different
Now, whether I’m right or wrong doesn’t really matter. The industry has embraced CI as a delivery model because it helps reduce cost, typically in people resources. The same logic can be applied to HCI, which has a more interesting set of use cases, depending on the architecture and customer size.
Skill Acquisition Costs
Why are CI and HCI seen as being so attractive? Because they reduce the skill level needed to operate them. HCI removes the need to have dedicated storage people, for example. CI removes the architects and engineers who would have spent time researching solutions and components, then testing them together to get the best of each technology area. CIOs love the idea of CI/HCI because it reduces their costs. Full-stack vendors like Cisco, HPE and Dell EMC love the idea of CI/HCI because it allows them to sell all of their products together, without having to compete at each layer of the infrastructure.
Why reduce headcount? Because good people are expensive. Domain experts cost money to hire and have the luxury of moving to companies with the most interesting problems to solve. Multi-domain experts are even harder to find and retain. The inevitable drive to remove cost from the business means that wherever possible, businesses will look to employ lower-cost workers. Whether this stifles innovation or creativity doesn’t really matter – usually because CIOs don’t stick around long enough to see the consequences of their actions.
The alternative is for IT to go back to basics, use commodity hardware, Open Source software and understand everything in detail. Maynard-Koran claims that commodity hardware can be 70-80% cheaper. Theoretically, a lot of Open Source software is free – you just have to pay for support.
The case for commodity hardware has been around for years, ever since the x86 architecture became good enough to drive custom hardware designs out of platforms like storage. In many industry verticals, like HPC (or any IT business at scale), commodity makes sense. There aren’t always the business drivers to justify buying a vendor-branded solution. But using commodity hardware isn’t that simple and does have its own challenges.
Throat to Choke
I don’t believe that commodity has taken off in the enterprise to anywhere near the same degree as in certain verticals. That’s because there are other factors at work here, most of them based around risk and mitigation. If I buy a solution from a vendor, I’m betting (a) that there are lots of companies running the same platform, so bugs and issues will be found more easily, and (b) that if the platform fails, I will have someone to blame. Obviously, point (a) could equally be made for Open Source. But with commodity hardware, there may be very few companies running the same configuration. The second point (b) is more relevant: in the enterprise, businesses want someone to “blame” if things go wrong.
I deliberately put the blame word in quotes, because it’s about taking responsibility as much as laying the fault at somebody else’s door. Enterprises already have support issues with application software as the turnover of people resources causes the loss of specific site knowledge. Why create the same issue with infrastructure?
The Architect’s View™
There will be a spectrum of scenarios where it makes sense to use either self-built or vendor-built solutions. The decision isn’t binary across the enterprise but should be made tactically where appropriate, while retaining strategic flexibility. In the same way we moved to and from centralised computing, we’ll also mix vendor and home-grown solutions where the cost and supportability factors favour one over the other. The skill is in not creating islands of future technical debt, as so often happens; the real challenge is finding people with the vision to avoid building that future burden in the first place.
Comments are always welcome; please read our Comments Policy. If you have any related links of interest, please feel free to add them as a comment for consideration.
Copyright (c) 2007-2032 – Post #5EA5 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.