One of the interesting takeaways from VMworld EMEA this year was the number of companies looking to solve the “storage problem” in virtualisation. I’m sure many of our traditional storage vendors will say that no problem exists; however, there is a perception that storage, in particular Fibre Channel, is a costly solution that requires expensive administrators to keep in order. As an aside, I don’t believe that’s true, but it has done no harm for the legions of skilled storage administrators who’ve done handsomely out of the industry over the last 10-15 years (myself included, I may add).
But getting back to the point in hand, there is now a range of hardware/software and software-only solutions looking to do away with centralised storage and distribute those resources throughout the virtual infrastructure. Anyone following virtualisation will know that VMware are pushing their own technology in the form of Virtual SAN (or VSAN, possibly the worst acronym choice ever), which is currently in public beta. However, there are other, much more mature solutions that have been out there for some time, including Nutanix (hardware/software), Simplivity (hardware/software), Atlantis Computing (software), Virsto (software, now VMware-owned), Scale Computing (hardware/software), Pivot3 (hardware/software) and ScaleIO (although that’s a presumption on my part, as it’s not a released product). It’s also possible to use things like HP’s StoreVirtual VSA, which can exist in a multi-node architecture, and there are some vendors touching the edges of this converged model, like Tintri and Violin Memory. The latest company to join this group is Maxta Inc, which formally came out of “stealth” on 12 November 2013 with its software-only solution, MxSP.
So what does Maxta offer? In a deployment very similar to other solutions, MxSP is a virtual machine/appliance that sits on the virtual infrastructure and owns the storage resources for that hypervisor installation. Guest VMs access the storage presented back to the hypervisor as an NFS share. For data resilience between nodes, a private connection is required between them, either via a physical switch or a VLAN. As MxSP replicates data in a RAID-1 style, this network is likely used for both inter-node communication and data transfer. All this seems simple enough, and at some stage I’ll do a more comprehensive hands-on review. However, in the meantime here are a few things to think about when reviewing these products.
- How much resource is required on each hypervisor, including compute, disk capacity and memory?
- How is resource usage managed and monitored (including how resources can be constrained)?
- What features are available for data reduction (thin provisioning, compression, dedupe)?
- How does the solution cope with hardware and software failure?
- How many redundant copies of data are being kept, and how are they replicated (sync/async)?
- What happens if a whole node fails while data is being accessed from elsewhere in the cluster? Does that affect performance?
- Are automated rebuilds across the remaining cluster in place to re-establish data integrity?
- Can I even vary the number of copies of data to increase my resiliency?
- What level of data protection (local and remote) is available?
- What level of data validation is in place (data scrubbing, CRC checking etc.)?
- How do I manage the performance of a single node?
- How do I manage the performance of storage resources across the cluster?
- What automated load-balancing algorithms are in place?
- What are the limits of scalability on the solution?
- How easy is it to add and remove resources? Can I add a node and/or just disk non-disruptively?
- What happens if I want to take a node out of the cluster? Can I drain its resources and move my data elsewhere?
- What security controls are in place on the appliance/software?
- What hypervisors are supported?
- Can I run different versions of the same hypervisor and/or a heterogeneous hypervisor environment at the same time?
- What support is there for hypervisor features such as FT & HA? How do these integrate?
- What is the hardware support matrix/HCL?
- How is the software supplied? Is it a “black-box” VM or can the user configure it?
- Does the vendor produce hardened versions?
- Could nodes in a cluster run differing versions of the software to allow me to upgrade without an outage?
- What level of tolerance is accepted in version differences?
- What is the licensing model used? How is licensing affected by usable/used capacity?
- How easily can licences be tuned up/down?
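To make the resiliency questions above more concrete, here is a minimal sketch of a RAID-1 style synchronous write path across cluster nodes, with a tunable copy count and reads served from any surviving replica. This is purely illustrative of the general technique, not Maxta’s actual implementation; the class names, node names and placement scheme are my own assumptions.

```python
# Illustrative sketch only: RAID-1 style mirroring across cluster nodes,
# as a distributed storage layer might implement it. Not any vendor's
# actual design - placement and failure handling here are assumptions.

class Node:
    """One hypervisor host contributing local disk to the cluster."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}   # block address -> data
        self.alive = True

    def write(self, addr, data):
        if not self.alive:
            raise IOError(f"node {self.name} is down")
        self.blocks[addr] = data

class MirroredVolume:
    """A volume keeping N synchronous copies of each block."""
    def __init__(self, nodes, copies=2):
        self.nodes = nodes
        self.copies = copies   # tunable redundancy (see checklist above)

    def placement(self, addr):
        # Simple deterministic placement: hash the block address to
        # pick `copies` distinct nodes.
        start = hash(addr) % len(self.nodes)
        return [self.nodes[(start + i) % len(self.nodes)]
                for i in range(self.copies)]

    def write(self, addr, data):
        # Synchronous (RAID-1 style) mirroring: the write is only
        # acknowledged once every replica has landed.
        for node in self.placement(addr):
            node.write(addr, data)
        return True

    def read(self, addr):
        # Any surviving replica can serve the read.
        for node in self.placement(addr):
            if node.alive and addr in node.blocks:
                return node.blocks[addr]
        raise IOError("all replicas lost")

nodes = [Node(f"esx{i}") for i in range(3)]
vol = MirroredVolume(nodes, copies=2)
vol.write("blk-42", b"guest VM data")
vol.placement("blk-42")[0].alive = False   # simulate a node failure
data = vol.read("blk-42")                  # served from the mirror copy
```

Even this toy version surfaces the trade-offs in the checklist: every extra copy multiplies the inter-node traffic on that private network, and a node failure leaves blocks under-replicated until some rebuild process re-establishes the copy count.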
The Architect’s View
It’s good to see these new solutions attacking some of the issues encountered in delivering storage for virtual environments. However, centralised storage has had many years of evolution, providing a solid basis for data protection and availability. Any new solution needs to be reviewed in that light: what is being lost and gained by distributing storage at the VM level? Am I still delivering a solution that is as resilient as the one it replaces?
One interesting thought is where these solutions are headed. I’m sure Nutanix and Simplivity could provide software-only versions of their solutions, but enrobing them in hardware provides better, more controllable performance (and the ability to add more margin). So, will we see more vendor-agnostic solutions? I think the answer has to be yes: VMware are challenging with Virsto and VSAN; I expect Atlantis to do something in the server space; EMC will do something with ScaleIO. Interoperability will be the key here, rather than vendor-specific lock-in.
I’m hoping to review Maxta soon; in the meantime, check out some of the other good links I’ve listed here.
- ScaleIO – EMC’s New Baby
- Reflections on VMworld 2013
- Maxta presents software defined storage for vSphere challenging traditional SAN/NAS (UP2V Blog)
- Introduction to Maxta Storage (Willem ter Harmsel Blog)
- Maxta Debuts Server Side Distributed Storage Virtualization (Wahl Network Blog)
- Maxta chucks vSAN out of stealth and into El Reg review suite (The Register)
Comments are always welcome; please indicate if you work for a vendor as it’s only fair. If you have any related links of interest, please feel free to add them as a comment for consideration.
Subscribe to the newsletter! – simply follow this link and enter your basic details (email addresses not shared with any other site).
Copyright (c) 2013 – Brookend Ltd, first published on http://architecting.it, do not reproduce without permission.