Will TCO Drive Software Defined Storage?

Chris Evans | Software-Defined Storage

This post was originally published on 12 March 2014 and has since been updated to reflect changes in the industry since then.

With the imminent announcement of pricing for VMware vSAN (formerly Virtual SAN), there has never been a more complex landscape for storage in the enterprise.  But how many future decisions will be based on technology, and how many on understanding the TCO of the solutions being deployed?

Rumour has it that vSAN will be priced at about $2,500 per CPU.  That’s on top of existing licensing and storage hardware (flash and disk).  Presumably this means that on a 3-node cluster with 4 CPUs per node, vSAN will add an additional $30,000 in costs at list price.  Anyone who follows enterprise storage pricing would expect vendors to offer pricing around the $5/GB mark, so for the same price as licensing vSAN, we could go out and purchase a 6TB+ array.  This doesn’t seem much in terms of capacity, but to be fair, there are many small array vendors out there who could supply a huge amount of storage capacity for $30,000.  This is rather a simplistic example, so perhaps we need to look at things in more detail.
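The arithmetic behind that comparison can be sketched in a few lines.  This is purely illustrative, using the rumoured list price and the typical $5/GB array figure quoted above; the variable names are my own.

```python
# Rough comparison: vSAN licensing cost vs. the SAN capacity that same
# money would buy, using the figures quoted in this post (illustrative only).

VSAN_LIST_PRICE_PER_CPU = 2500   # USD, rumoured list price per CPU
NODES = 3                        # example 3-node cluster
CPUS_PER_NODE = 4                # 4 CPUs per node
SAN_PRICE_PER_GB = 5             # USD/GB, typical enterprise array pricing

# Total licence cost for the cluster: 12 CPUs x $2,500 = $30,000
vsan_licence_cost = VSAN_LIST_PRICE_PER_CPU * NODES * CPUS_PER_NODE

# Capacity purchasable for the same outlay at $5/GB
equivalent_san_gb = vsan_licence_cost / SAN_PRICE_PER_GB

print(f"vSAN licence cost: ${vsan_licence_cost:,}")
print(f"Equivalent SAN capacity: {equivalent_san_gb / 1000:.0f} TB")
```

Running this reproduces the $30,000 licence cost and the 6TB+ equivalent capacity used in the example above.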

Note: vSAN 6.6 list pricing is currently $2,495 per CPU for Standard, $3,995 for Advanced and $5,495 for Enterprise, in addition to vSphere licences themselves.

Looking at the costs originally included in this post, we can see that vSAN pricing has remained unchanged since launch. However, media costs have reduced significantly and for capacity, some vendors are quoting $1/GB for flash-based systems. In relative terms, therefore, vSAN has become more expensive than traditional SAN.
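The shift in relative cost can be shown with the same arithmetic.  This sketch assumes the licence figures above are unchanged and uses the $1/GB flash figure quoted; the helper function is my own naming.

```python
# How the "equivalent capacity" comparison shifts as media prices fall,
# with vSAN licensing unchanged since launch (figures from this post).

def equivalent_capacity_tb(licence_cost_usd, price_per_gb):
    """Capacity (in TB) a buyer could purchase instead of the licence."""
    return licence_cost_usd / price_per_gb / 1000

# Same 3-node, 4-CPU-per-node cluster at $2,500/CPU as in the example above
licence_cost = 2500 * 3 * 4

capacity_2014 = equivalent_capacity_tb(licence_cost, 5)  # $5/GB at launch
capacity_now = equivalent_capacity_tb(licence_cost, 1)   # ~$1/GB flash today

print(f"At $5/GB: {capacity_2014:.0f} TB")
print(f"At $1/GB: {capacity_now:.0f} TB")
```

With media at $1/GB, the same $30,000 licence spend now forgoes roughly 30TB of flash capacity rather than 6TB, which is the sense in which vSAN has become relatively more expensive.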

Storage Options

Today users have multiple ways to implement storage in a virtual data centre.

Hyper-Converged – In this instance, storage and compute are merged into the same physical server chassis.  Software within the hypervisor kernel (e.g. vSAN) or in a VM (e.g. Nutanix) provides virtual storage resources based on the disks and flash in the server itself.  There are software-only solutions (e.g. Maxta, now defunct) and packaged solutions such as Nutanix and HPE SimpliVity.

From a total cost of ownership perspective, these solutions can be implemented without needing separate storage skills or teams.  There may also be savings in data centre space (facilities) and environmental costs.  On the negative side, the storage can’t (typically) be used for other purposes, and there is greater potential for impact and risk in busy environments, where it is important to understand how storage and compute workloads are prioritised within the same infrastructure.

Hyper-Converged Plus – There are also now solutions that extend the hyper-converged paradigm.  NetApp has its own flavour of HCI that uses dedicated storage nodes running SolidFire software.  Datrium has introduced DVX, an architecture that splits storage into host-based and shared components.

Simple-SAN – In this instance, rather than deploying a highly complex storage array, the solution is to deploy hardware that provides the basic shared storage requirements of availability, resilience and performance.  In many cases, these arrays can be managed by the same team that handles virtualisation, as they are easy to deploy and generally don’t need much management once in place.

Probably one of the best examples here is ISE from X-IO (now part of Violin Systems).  ISE systems are “black-box” implementations of storage capacity using hard drives and flash.  Technology within the array controls and manages transient and permanent device failures, resulting in a much lower maintenance cycle.  Calling ISE simple is unfair, as there are many more features in the product than high availability, but this is one of its strengths.  Simple SANs can deliver cost savings by reducing management overhead and eliminating the need for high-end skills.

Complex-SAN – The traditional method of storage deployment is to use high-end arrays with plenty of resilience and availability built in, as well as complex functionality such as block-level tiering, compression and de-duplication.  Of course, the hardware cost per TB is high, and the skills required to maintain the hardware can be expensive too.  However, many end users look at these solutions for more than just virtualisation and value availability over the cost of deployment.

So, which solution is right for you?  There are technical merits and issues with each, but without an idea of sensitivity to cost (whether that’s infrastructure, skills or facilities), the decision becomes much more difficult to make.  There are also a number of platforms that don’t fit easily into the above definitions, including DDN’s VMstore (formerly Tintri).

Cloud Complexity

Public cloud is also adding a new level of complexity to SDS.  Many vendors are moving to deploy their solutions in public cloud, either in virtual instances or natively with support from the vendors.  This moves solutions from capital to operational expenditure, which can make solution pricing easier to compare.

The Architect’s View

Storage for the Software Defined Data Centre is becoming more complex than ever.  Developing and maintaining a TCO model is essential in order to evaluate the myriad options available to customers.  TCO and requirements together form the basis of sound purchasing decisions.  It’s good to see intense vendor competition, as this drives the market forward, making storage a key feature of the SDDC to come.

Comments are always welcome; please read our Comments Policy.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2007-2018 – Post #C998 – Chris M Evans, first published on https://www.architecting.it/blog, do not reproduce without permission.