Windows Server, Release Cycles and HCI

Chris Evans – Software-Defined Storage, Virtualisation

The discussion on Storage Spaces Direct (S2D) seems to have generated a lot of interest and a number of defensive posts, whose authors seem to think that what has been written here is either wrong or misinformation.  Actually, the issue is poor communication on Microsoft's part.  However, what has subsequently been said has left more questions than answers.

Release Bifurcation

Windows Server builds have been split into long-term releases and more "dynamic" releases – the Long-Term Servicing Channel (LTSC) and the Semi-Annual Channel (SAC) respectively.  LTSC (most recent release WS2016) continues to provide long-term product support with the existing release cadence (3-4 years).  SAC delivers date-stamped releases every 6 months, the first of which is 1709 (additional info).
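
The date-stamped version that identifies a given build (1607 for the WS2016 LTSC release, 1709 for the first SAC release) is recorded in the Windows registry.  The sketch below shows one way to read it; it's a minimal example that assumes a Windows host, and it uses Python's standard winreg module rather than the more natural PowerShell.  The value names are the usual ones under the CurrentVersion key.

```python
# Minimal sketch: report which Windows Server release a host is running.
# Assumes a Windows host; ReleaseId reads "1607" on WS2016 (LTSC) and
# "1709" on the first SAC release. Some values may be absent on older builds.
import winreg

def windows_release_info():
    key_path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        info = {}
        for name in ("ProductName", "ReleaseId", "CurrentBuildNumber", "InstallationType"):
            try:
                info[name], _ = winreg.QueryValueEx(key, name)
            except FileNotFoundError:
                info[name] = None  # value not present on this build
        return info

if __name__ == "__main__":
    for name, value in windows_release_info().items():
        print(f"{name}: {value}")
```

On a WS2016 (LTSC) host this would report ReleaseId 1607, while a SAC deployment would show 1709 or later – a quick way to check which release a server is actually running.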

So all that seems good, until you read this post, which implies WS2016 is effectively SAC release 1607. That means the SAC channel is derived from the current W2K16 codebase. However, S2D has been "removed" from the 1709 release for being too buggy.

“What do you prefer: a buggy S2D release or wait 6 months for a high quality product?” (Tech Coffee blog)

So does that mean the current implementation of S2D in 1607 (WS2016) is also buggy?  Should existing customers of WS2016 be worried that the current implementation of Storage Spaces Direct is unstable?  Does it mean MSFT tried to make big changes to S2D that haven't worked within their self-imposed timescale?  If the idea of SAC is to produce a more dynamic release channel, then MSFT needs to get their act together. You can't decide to work to a 6-month schedule and then start removing components because they're not ready. If you can't cope with the release cycle, don't move to it.

I would suggest, based on the timing of this post and the comments at the end, that Microsoft tried to get new S2D features (e.g. deduplication) into 1709, and that either this failed or someone decided the container story was more important.

Choosing a Build

As a customer, should I be going down the route of LTSC or SAC?  It looks to me as though, if I'm a traditional customer, LTSC makes more sense.  I should expect to keep existing features and see incremental value with each upgrade.  SAC seems more experimental.  Removing features is reminiscent of when NetApp removed certain core components from Data ONTAP in an attempt to move to a more consistent codebase.  However, there could be a more subtle play going on here.  Microsoft has made Storage Spaces Direct part of their HCI offering that also runs Hyper-V.  There could be an intention to remove S2D in order to re-introduce it as a paid component.

Has Microsoft ever charged extra for a core feature and licensed it separately?  I genuinely don't know the answer.  However, compare this with VMware and vSphere from an HCI perspective.  We see two hypervisors (Hyper-V on W2K16, ESXi), both capable of running HCI.  We see one vendor charging for HCI features (VMware with Virtual SAN) and the other giving them away.  Does Microsoft sense an opportunity to charge more money, especially with the move to Azure Stack?

The Architect’s View®

I guess all we can do is wait and find out.  The official line appears to be that enhancements to S2D weren't good enough to make 1709, so the whole feature was pulled.  Pulling entire features from a 6-monthly release seems to me just to add to customer confusion.  Who will trust SAC if they can't deploy it every 6 months without losing critical features?

I think Microsoft is trying to act like the Linux distributions (and now Docker) in becoming more agile.  It seems that there are growing pains with this approach.  Let's hope Microsoft gets the strategy right, otherwise the fight for the HCI market will be over before it has started.

Copyright (c) 2009-2022 – Post #F393– Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.