Why Have VVOLs Taken So Long to Deliver?

Chris Evans | Storage, Virtualisation

Another VMworld has come and gone and still we haven’t seen the production deployment of VVOLs.  To recap, VVOLs (Virtual Volumes) are the next evolution in how virtual machines are packaged on storage, replacing the current model in which VMs reside as files within a datastore.  The main benefit is the ability to apply storage performance and availability policies to an individual VM object rather than to an entire datastore, as we do today.  Although we’ve seen demonstrations of VVOLs for some time, VVOL code has not yet made it into a GA vSphere release.  Presumably this will change with the release of vSphere 6.0 sometime next year.  The challenges behind what seems like a trivial change in storage deployment are pretty immense and require work on both the hypervisor and the storage side.
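To make the distinction concrete, here is a minimal sketch of the shift from one policy per datastore to one policy per VM object.  The class and field names are my own invention for illustration and are not the actual SPBM or VASA interfaces:

```python
# Illustrative data model only: names and fields are invented for this
# sketch and are not the actual vSphere SPBM or VASA APIs.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    iops_limit: int          # per-object performance cap
    replicated: bool         # replicate this object?
    snapshot_schedule: str   # e.g. "hourly", "daily"

# Today: one policy effectively applies to the whole datastore, so every
# VM placed on it inherits the same service level.
datastore_policy = StoragePolicy(50_000, True, "daily")

# With VVOLs: each VM (in practice, each of its storage objects) carries
# its own policy, which the array is expected to honour per object.
vm_policies = {
    "sql-prod-01": StoragePolicy(10_000, True, "hourly"),
    "web-test-07": StoragePolicy(500, False, "daily"),
}
```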

Storage Changes

From the perspective of the storage array, a VVOL looks like little more than a small LUN (at least on block-based storage).  Many references to VVOLs talk about a protocol endpoint (which is still, presumably, an existing LUN) plus a number of objects behind it that represent the VVOLs themselves.  For the storage vendor this means amending the array design and potentially supporting significantly more storage objects than before, where each object could also be a snapshot or a replica.  Rather than thinking in thousands of LUNs, systems could have to support hundreds of thousands of VVOL objects, which represents a big overhead in DRAM/cache requirements and in the in-memory structures used to track them.  This change alone could add significant overhead and impact array performance.
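Some back-of-envelope arithmetic shows the scale of the problem.  The per-object metadata figure below is purely an assumption for illustration, not a vendor number:

```python
# Back-of-envelope sizing only; the per-object figure is an assumption,
# not vendor data.
objects_per_vm = 5            # config VVOL, swap, data disks, snapshots...
vm_count = 20_000
metadata_per_object_kb = 4    # assumed in-memory structure per object

total_objects = objects_per_vm * vm_count                     # 100,000
dram_overhead_mb = total_objects * metadata_per_object_kb / 1024

print(f"{total_objects:,} objects -> ~{dram_overhead_mb:,.0f} MB of metadata")
# 100,000 objects -> ~391 MB of metadata held purely to track objects;
# an array designed around a few thousand LUNs may never have budgeted
# cache or DRAM for this, before snapshots and replicas multiply it further.
```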

Then we have to think about other system functions such as replication, (array-based) snapshots and VAAI offload, all of which need to be reworked to operate at this finer level of granularity.  Again, this can represent a significant design problem for array vendors.  Think also about array connectivity (how each object will be addressed over Fibre Channel, FCoE, iSCSI and NFS) and how those objects will be queued on a connected port.  Finally, there’s the core benefit of VVOLs: the ability to apply independent QoS (Quality of Service) to each object.  Many arrays today don’t even offer this for traditional LUNs.
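For per-object QoS specifically, the array effectively needs a rate limiter per VVOL rather than per LUN.  Here is a minimal sketch assuming a simple token-bucket scheme; the mechanism and names are my own choice for illustration, not any vendor’s implementation:

```python
import time

class TokenBucket:
    """Per-object IOPS limiter: tokens refill at `rate` per second."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.capacity = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False            # I/O has to be queued or delayed

# One limiter per VVOL rather than per LUN: the scale problem in miniature,
# since the array may need hundreds of thousands of these structures.
limiters = {
    "sql-prod-01/disk0": TokenBucket(rate=10_000, burst=2_000),
    "web-test-07/disk0": TokenBucket(rate=500, burst=100),
}
```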

Hypervisor Changes

Naturally there are many changes on the hypervisor side.  The most obvious is the addressability of VVOLs through existing storage protocols (the other end of the issue faced by the storage array), but the interaction between hypervisor and array also needs to be strengthened to ensure both are working in harmony rather than against each other.  As a case in point, imagine running Storage DRS in vSphere while the array attempts to balance performance and latency at its end.  If policies on the two components are misaligned, sDRS could attempt to move data around to fix a throughput issue that was deliberately introduced by the array, resulting in even more performance problems.
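A toy model of those two uncoordinated control loops is sketched below; the thresholds are invented for the example and don’t correspond to real sDRS or array defaults:

```python
# Toy model of two uncoordinated control loops; the numbers are invented.
ARRAY_THROTTLE_MS = 20        # latency the array injects to cap a noisy VM
SDRS_LATENCY_LIMIT_MS = 15    # threshold at which sDRS decides to migrate

def array_observed_latency(vm: str) -> int:
    # Array view: "this VM exceeds its fair share, throttle it on purpose."
    return ARRAY_THROTTLE_MS

def sdrs_wants_to_move(latency_ms: int) -> bool:
    # sDRS view: "latency is high, so the datastore must be overloaded."
    return latency_ms > SDRS_LATENCY_LIMIT_MS

for cycle in range(3):
    latency = array_observed_latency("noisy-vm")
    if sdrs_wants_to_move(latency):
        print(f"cycle {cycle}: sDRS migrates noisy-vm; "
              "the destination array throttles it again")
```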

Features such as VASA will need both to expose additional data and to allow the hypervisor to specify QoS requirements at the VVOL level, with the array feeding back when policy settings can’t be achieved.
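A hypothetical sketch of that feedback loop is below; the class and function names are mine and don’t reflect the real VASA provider interface, but they illustrate the principle of the array reporting non-compliance rather than silently degrading service:

```python
# Hypothetical exchange only: class and function names are invented and do
# not reflect the real VASA provider interface.
class ArrayProvider:
    """Stand-in for the array-side (VASA-like) provider."""
    def __init__(self, max_iops_per_object: int = 5_000):
        self.max_iops = max_iops_per_object

    def check(self, policy: dict):
        if policy["iops_limit"] > self.max_iops:
            return False, f"array caps per-object QoS at {self.max_iops} IOPS"
        return True, None

def provision_vvol(provider: ArrayProvider, policy: dict) -> dict:
    """Hypervisor side: request a VVOL and surface any non-compliance."""
    ok, reason = provider.check(policy)
    if not ok:
        # The array feeds back that the requested policy can't be achieved,
        # so the VM shows as non-compliant rather than silently degraded.
        return {"status": "non-compliant", "reason": reason}
    return {"status": "compliant", "policy": policy}

print(provision_vvol(ArrayProvider(), {"iops_limit": 10_000}))
```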

The Architect’s View®

VVOLs seem simple on the surface; however, what’s simple in concept is rarely simple in execution.  VVOLs have taken time to arrive and, even then, functionality may initially be limited to VM addressability rather than full QoS.  I imagine we will see a league table of vendors able to support full VVOL capabilities, and that will provide a good indicator of today’s advanced versus legacy storage architectures.

Related Links

Comments are always welcome; please read our Comments Policy.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2009-2022 – Post #3460 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.