Owning the Technology Ecosystem

Chris Evans | Cloud, Enterprise, HCI, Opinion

For the last 10 to 15 years, I’ve been a user of Apple products.  This includes iMac, MacBook, iPhone, iPad and the associated accessories they require.  In common with many vendors, Apple loves to lock users into an ecosystem of connected software tools that complement the hardware.  Take a picture on your phone, for example, and within a few short minutes, it’s available on the desktop.

This “ease of use” functionality is a form of lock-in we see in the enterprise time and time again.  Public cloud is just the latest instantiation of this process that has existed since IBM mainframe days.  Should we see this as a good or bad thing, or is it just a symptom of how IT is delivered and developed?

Lock-in

It’s worth considering for a moment whether we should see lock-in as good or bad.  From a customer perspective, there are significant benefits:

  • One throat to choke – in the event of any issues, a single vendor is on the hook and that vendor should know the interactions between each piece of the infrastructure.  There’s no room for playing the blame game and batting a problem between suppliers.
  • Validated – components should be tested and validated to work with each other.  This should apply even if the vendor uses components from another manufacturer.
  • Integration – individual component solutions should be well integrated.  A good example of this is using the same security (keys and identity management) across all components of an ecosystem.  Another is having consistent logging and monitoring.

For the vendor, lock-in provides ongoing revenue and the chance to introduce “stickiness” to the environment.  With many components delivered from a single supplier, removing one piece to replace it with something else can be challenging, if not impractical or impossible.  Think here, for example, of the benefit of using integrated storage with HCI.  Why use an external storage array in this type of configuration?

When Lock-in Goes Bad

Of course, lock-in can also be an issue.  Vendors may not have the perfect set of product features in every area.  Pricing may be a problem, with other vendors offering better value for money for some aspect of their platform.  Then we have to consider how lock-in will affect future technical debt.  This usually occurs when IT organisations invest heavily in specific platform features that aren’t available elsewhere, but are later duplicated (sometimes in superior ways) by the rest of the market.

Lock-in per se isn’t a bad thing.  We just have to understand the implications it brings.

Ecosystem

How does that apply to the ecosystems or architectures we deploy?  It’s not enough for vendors to build an architecture and expect that, at that point, their work is done.  Using IBM for two examples, we can see both good and bad in architecture deployments.

From the 1960s to the 1980s, IBM owned the mainframe market and enterprise applications.  IBM set the standards, and other vendors developed “plug-compatible” products.  If IBM brought out a new solution or feature, there would be a lead time before third-party vendors could reverse-engineer to the new standard, by which time IBM had gained a foothold in that market.  Of course, there were examples where IBM was beaten at its own game, for instance when Gene Amdahl left IBM and set up a rival company that developed better hardware virtualisation.

The second IBM example is the Personal Computer.  In 1981, IBM created this market but quickly lost it to clones and vendors that developed compatible products.  IBM tried to regain control of the PC architecture with MCA (Micro Channel Architecture) but lost out to the industry’s open alternatives, EISA and later PCI.  Intel now effectively owns the ecosystem around PC/server architectures.

There are some good references to how IBM fared in the late 1980s and into the 1990s in the book “Computer Wars: The Post-IBM World”.

Modern Ecosystems

What examples are there of modern architectures that look to control an entire ecosystem?  The first and most obvious is VMware.  The initial premise of server virtualisation and a hypervisor has extended into application management, desktop, storage, networking and now containerised workloads.  VMware is pushing the boundaries with hybrid cloud too. 

[Image: Early Nutanix marketing, 2011]

Nutanix has attempted to follow a similar path, providing the option to remove the “vTax” and buy the entire ecosystem as Nutanix Enterprise Cloud.  What started out as a “no SAN” HCI solution has morphed into an entire architecture that provides everything an enterprise business needs to deliver IT, including hybrid cloud.

Public Cloud

Probably the biggest ecosystem play is the public cloud.  Each player offers a proprietary solution that implements common capabilities in slightly different ways.  Pricing is managed in such a way as to make it unattractive to move to another cloud – egress charges on moving data out of the platform, for example.
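To give a rough sense of that disincentive, here’s a minimal Python sketch that estimates the one-off egress bill for moving a dataset out of a cloud platform.  The per-GB rate is an assumed, illustrative figure, not any provider’s actual price list – check current data-transfer pricing before drawing conclusions.

```python
# Back-of-the-envelope egress cost when migrating data off a public cloud.
# The rate below is purely illustrative (an assumption), not a quote.

ILLUSTRATIVE_EGRESS_RATE_PER_GB = 0.09  # assumed USD per GB transferred out


def migration_egress_cost(dataset_tb: float,
                          rate_per_gb: float = ILLUSTRATIVE_EGRESS_RATE_PER_GB) -> float:
    """Estimate the one-off egress charge to move a dataset out of a cloud."""
    dataset_gb = dataset_tb * 1024
    return dataset_gb * rate_per_gb


if __name__ == "__main__":
    for size_tb in (10, 100, 500):
        cost = migration_egress_cost(size_tb)
        print(f"{size_tb:>4} TB -> approx ${cost:,.0f} in egress charges")
```

Even at a modest assumed rate, a few hundred terabytes of data quickly turns into a five-figure bill just to leave, before any re-platforming work has started.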

Public cloud vendors continually reduce prices, offer long-term pricing incentives and keep adding new proprietary features.  Amazon Web Services is arguably the best at this process, releasing literally hundreds of new features and products every year.

Innovation

The last point about new products and features is probably the most important to maintaining ecosystem lock-in.  Vendors simply cannot afford to sit still but have to continually innovate, both improving existing offerings and bringing new ones to market.  This continuous process of innovation is a key aspect of the architectures developed by vendors from mainframe days onwards. 

Customer Strategies

How does that help the customer?  Over the past 20 years or so, a dual or multi-vendor strategy has been one way of avoiding lock-in.  The trouble with this approach is the need to “dumb down” to the lowest common denominator or risk losing access to certain features that are unique to one platform or another.  However, when it works, a multi-vendor strategy can provide cost benefits and a certain degree of agility.

The trade-off, though, is the increased overhead of design.  Now we need “super architects” who can see the bigger picture and build across multiple clouds.  This strategy is only valid if the economies of scale deliver savings that offset the cost of designing for multi-cloud.

Vendor Solutions

There are vendor solutions that help avoid some lock-in situations.  Containerisation, and specifically Kubernetes, provides the ability to make applications portable.  However, this shifts the burden of interoperability onto networking and persistent storage (or data management), as the sketch below illustrates.
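As a small illustration of where that storage burden surfaces, the following Python sketch (assuming the official kubernetes client library is installed and a working kubeconfig is available) lists a cluster’s StorageClasses and flags provisioners that look provider-specific.  The PROVIDER_HINTS list is illustrative, not exhaustive.

```python
# Minimal sketch: show where a "portable" Kubernetes cluster still depends
# on a specific cloud, by inspecting StorageClass provisioners.
# Assumes the official "kubernetes" Python client and a valid kubeconfig.

from kubernetes import client, config

# Substrings of well-known provider-specific provisioners (illustrative only).
PROVIDER_HINTS = ("aws", "gke", "gce", "azure", "csi.vsphere")


def main() -> None:
    config.load_kube_config()              # use the current kubeconfig context
    storage_api = client.StorageV1Api()

    for sc in storage_api.list_storage_class().items:
        provisioner = sc.provisioner or ""
        tied = any(hint in provisioner.lower() for hint in PROVIDER_HINTS)
        status = "provider-specific" if tied else "probably portable"
        print(f"{sc.metadata.name:30} {provisioner:40} {status}")


if __name__ == "__main__":
    main()
```

The application manifests may move cleanly between clusters, but every volume bound to a provider-specific provisioner is data that has to be migrated separately – which is exactly where the lock-in reappears.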

Many vendors are introducing “managers of managers” that can orchestrate multiple environments, on and off the cloud.  But this just seems like another version of lock-in at a higher level.

The Architect’s View

I don’t have a problem with using the ecosystem of a single vendor, either personally or in the enterprise.  My only caveat is to know what my lock-in is, and how I can mitigate it in the future.  When a vendor stops innovating and the ecosystem starts to stagnate, that’s perhaps the time to move on and look at what else is available in the market.

As a footnote, if you want a good example of building a proprietary ecosystem, watch Peter DeSantis’ opening Monday night keynote from AWS re:Invent 2019.  This is a master class in building a solution – namely high-speed networking and dedicated hardware to build HPC and ML/AI environments that will be hard to move off.  Enjoy…


Copyright (c) 2007-2019 Brookend Ltd, no reproduction without permission, in part or whole.  Post #24CF.