HP & Violin?

I found the following article from last week’s “The Register” an interesting one:


In it, Chris Mellor talks about HP producing an Oracle Exadata competitor by integrating Violin Memory’s all-SSD storage array.  Folks may remember that I predicted exactly this setup in the following post:


While attending an HP event last year it became obvious to me that (for some customers at least) the ability to include a solid state array in a virtualised infrastructure would provide the perfect opportunity to deliver high performance virtual machines.  There’s a lot of talk at the moment about how virtualisation moves on from the 30% of low-hanging server fruit that has been virtualised to date.  I think a combination of SSD-based storage and a virtual platform can be one of the catalysts to reach those “hard to virtualise” configurations.

So imagine: in 3U you can provision up to 10TB of storage with 200,000 random write IOPS.  With a decent blade server to match, this could easily start to virtualise those difficult applications.  Now of course the fly in the ointment here will be cost: does the TCO for this kind of configuration justify the expense?  In addition, would it be acceptable to place many high performance (and presumably high importance) applications on the same infrastructure?
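To make the capacity-versus-IOPS trade-off concrete, here’s a back-of-envelope sizing sketch in Python using the 3U figures quoted above. The per-VM requirements are purely hypothetical placeholders for a “hard to virtualise” workload; substitute your own profiles.

```python
# Back-of-envelope sizing for a 3U all-SSD shelf, using the figures
# quoted above (10 TB, 200,000 random write IOPS). Per-VM numbers
# below are illustrative assumptions, not vendor data.

ARRAY_CAPACITY_TB = 10.0      # usable capacity per 3U shelf
ARRAY_WRITE_IOPS = 200_000    # random write IOPS per 3U shelf

VM_CAPACITY_GB = 500          # assumed footprint of one heavy VM
VM_WRITE_IOPS = 5_000         # assumed sustained write IOPS per VM

vms_by_capacity = int(ARRAY_CAPACITY_TB * 1000 / VM_CAPACITY_GB)
vms_by_iops = int(ARRAY_WRITE_IOPS / VM_WRITE_IOPS)

# The shelf supports whichever constraint bites first.
vms_supported = min(vms_by_capacity, vms_by_iops)
print(f"capacity-bound: {vms_by_capacity} VMs, "
      f"IOPS-bound: {vms_by_iops} VMs -> {vms_supported} VMs per 3U")
```

With these assumed profiles the shelf is capacity-bound, which is exactly the kind of arithmetic a TCO comparison would need to run per workload.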

I’d love to see HP produce some TCO materials for these kinds of configurations.  In my opinion, using SSD arrays in this fashion has to be the way forward, rather than placing SSDs into what are essentially legacy architectures, where low-latency I/O is inevitably hampered by I/O from traditional disks.

One other thought.  HP definitely have technology based on memristors under development.  Is the use of Violin Memory a stopgap until that technology can be brought to market?  Even if it is, this announcement could make Violin one of the hottest properties of 2011.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • Sam

    What about HP’s relationship with FusionIO? Could that have any bearing here? I certainly agree with you that there is a requirement for, shall we say, non-mechanical disk for high performance environments – and virtual desktop deployments. Would a P4000 array (or arrays) with FusionIO meet this requirement?

    • http://www.brookend.com Chris Evans


      The difference with Fusion-io is that the card sits physically within the server, so the device is impossible to share between VMware nodes in a cluster.  Obviously the Violin Memory array could be shared by a cluster for resiliency.  Imagine a server or blade failing with the active data on Fusion-io; you’d have to remove the card and reseat it in another server to get your data back, which isn’t practical.  However, where that level of resiliency isn’t a problem, I guess a standard array with a Fusion-io card could work; the missing piece is the ability to move data to/from the flash card from the P4000 as it becomes active/inactive.



  • http://www.staticnat.com Josh

    First off, great articles! I’ve been watching this come down the pike for a while. As for the Fusion-io solution not working with VMware due to not being shared storage, I have always said this is where LeftHand can shine. The Fusion-io cards allow high-I/O storage in each server, tied together via RAID 5 over the network. Without the Fusion-io component, LeftHand has always seemed like a cool idea that can’t really keep up. Just my 10 cents.

  • Cleanur


    Assuming you want to use Fusion-io as a read/write cache and not read-only like PAM, the local Fusion-io card could be utilised as an extent to the node’s internal file system, with P4000 Network RAID mirroring or striping with parity across the other nodes in the cluster. Probably a more expensive solution in terms of development and $ per TB than a shared access layer, but much more integrated. Conversely, introducing it as a PAM-type card sounds like a bit of a no-brainer from an implementation perspective.
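The “striping with parity across nodes” idea above can be sketched in a few lines of Python: an XOR parity strip, held on one node, lets the cluster rebuild any single lost node’s strip from the survivors. Node names and strip sizes here are illustrative only, not how the P4000 actually implements Network RAID.

```python
# Sketch of RAID 5-style parity across nodes: equal-length data
# strips live on separate nodes; their XOR lives on a parity node.
# Losing any one data strip, it can be rebuilt from the rest.

from functools import reduce

def xor_parity(strips):
    """XOR all strips together to form (or reapply) the parity strip."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), strips)

def rebuild(strips, parity, lost_index):
    """Recover the strip on a failed node from the survivors plus parity."""
    survivors = [s for i, s in enumerate(strips) if i != lost_index]
    return xor_parity(survivors + [parity])

data = [b"node0blk", b"node1blk", b"node2blk"]   # strips on three nodes
parity = xor_parity(data)                        # stored on a fourth node

assert rebuild(data, parity, lost_index=1) == data[1]
```

The same XOR property is why a single node (or local flash card) failure is survivable, at the cost of writing parity over the network on every update.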

  • Michael Nauen

    Isn’t the storage market moving more towards virtual storage appliances?

    I think we will see more comparisons now, although NetApp is starting with a virtual appliance.

    An independent x86-based storage system could be built with the following components:
    1.) x86 OEM server (Dell, HP, IBM, Fujitsu …)
    2.) Fusion-io card
    3.) RAM
    4.) Virtual storage appliance (EMC, NetApp, DataCore, FalconStor, Seanodes …)
    5.) Fast interconnects (InfiniBand, 40 and 100 Gigabit)

    With Seanodes you could use the local disks or Fusion-io cards in an x86 server.

    What about memristors?

    • http://www.brookend.com Chris Evans


      I agree we will see more virtual appliances. However, they do have a few drawbacks. Firstly, you could deploy them on any infrastructure alongside other virtual machines, so how does performance work? How would you measure performance and therefore compare different VSAs? In my experience, some are efficient; others are seriously memory-hungry.

      How does support work when any components can be used? Who can you blame if a particular HBA or NIC isn’t compatible?

      You are right, a storage appliance could easily be built from the components you reference (including memristors when they become commercially available).  However, for these products to be more than lab experiments there needs to be some standardisation to enable proper support.  I’ve a blog post or two coming up to discuss exactly this in the near future.
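As a rough illustration of how one might put a number on the VSA performance question above, here is a minimal random-write IOPS probe in Python. fio would be the usual tool for a real comparison; this sketch, its file path, and its block size are assumptions, and buffered writes will flatter the figure unless something like O_SYNC is added.

```python
# Crude random-write IOPS probe: issue synchronous 4 KB writes at
# random offsets within a test file for a fixed interval and count
# completions. Point `path` at a file on the VSA-backed datastore.
# Note: writes here are buffered by the OS; open with os.O_SYNC on
# Linux for a figure closer to the storage layer itself.

import os
import random
import tempfile
import time

def random_write_iops(path, file_size=64 * 1024 * 1024,
                      block_size=4096, duration=2.0):
    """Return approximate random-write IOPS measured against `path`."""
    block = os.urandom(block_size)
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        os.ftruncate(fd, file_size)
        ops, deadline = 0, time.monotonic() + duration
        while time.monotonic() < deadline:
            offset = random.randrange(file_size // block_size) * block_size
            os.pwrite(fd, block, offset)
            ops += 1
        return ops / duration
    finally:
        os.close(fd)

with tempfile.NamedTemporaryFile() as f:
    print(f"~{random_write_iops(f.name, duration=1.0):.0f} IOPS")
```

Running the same probe against different VSAs on identical hardware at least gives a like-for-like number, which is exactly what vendor datasheets rarely provide.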

