
Enterprise Computing: Cisco, IBM, Sun & EMC – A Busy Week


It’s certainly been a busy week in the world of enterprise computing.

First, Cisco announced their Unified Computing System – blade servers to you and me. UCS integrates blade servers with management functionality and the Unified Fabric.  What’s interesting is that Fibre Channel gets pushed out at this point in favour of either iSCSI or FCoE.  Whilst there’s plenty of commentary in the blogosphere on the server implications of this announcement, I’m more interested in the trend it sets for storage and in particular the move away from Fibre Channel connected devices.

I’ve commented before that I didn’t see a great need to shift to FCoE, as it introduced additional cost and unnecessary technology change.  Clearly, if you’re re-architecting a datacentre based on Cisco UCS, then FCoE will likely be the protocol of choice.  I’m not aware of any vendors actually shipping storage arrays that support FCoE (I know Netapp and EMC have stated they will support it – they did so last October).

Perhaps this indicates even further the move to commoditisation of storage components.

Next there’s the rumour that IBM are looking at acquiring Sun Microsystems.  There’s no doubt that Sun are cheap; at the height of the dot-com boom they were trading at $257.25 a share (1 September 2000).  By October 2007 they were at less than a tenth of that figure ($24.92), and earlier this week they were at less than a quarter of that, making Sun worth less than they paid to acquire StorageTek in 2005.
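As a quick sanity check on those share-price figures, the decline can be worked through in a few lines:

```python
# Back-of-envelope check of the Sun share prices quoted above.
peak = 257.25      # USD per share, 1 September 2000 (dot-com peak)
oct_2007 = 24.92   # USD per share, October 2007

ratio = oct_2007 / peak
print(f"Oct 2007 price is {ratio:.1%} of the peak")      # under a tenth
print(f"'Less than a quarter' of that means under ${oct_2007 / 4:.2f} a share")
```

$24.92 is roughly 9.7% of $257.25, so "less than a tenth" holds, and a quarter of the 2007 figure puts the recent price below about $6.23.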

What would IBM get?  There are the obvious MySQL and Java components from the “classic” Sun business, but what about storage?  Well, there are the StorageTek libraries – but IBM already have a business selling ATLs and multiple (and competing) drive formats – and tape doesn’t have a long-term strategic future in anyone’s business.  Then there are the enterprise storage arrays – rebadged Hitachi boxes.  Could this be the opportunity for IBM to finally shelve the DS8000 dinosaurs, or would Hitachi run a mile from IBM?  Just think how EMC would react if HP, IBM and HDS were all selling the competition to DMX.  The rest of the range is pretty generic modular stuff, but does include the 7000 series, which IBM could use to replace their Netapp N-series relationship.

Finally, EMC announced upgraded capacities for their Enterprise Flash Drives.  These come in 200GB and 400GB models, keeping pace with existing traditional HDDs.  If anyone is prepared to say, I’d be interested to know how much EFD prices have dropped (per GB) since their introduction.  Hopefully DMX-5 (DMX-V) will provide granular access to these devices.
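The per-GB comparison is simple enough to sketch; the prices below are purely hypothetical placeholders, since real EFD pricing wasn’t public at the time:

```python
def price_per_gb(price_usd: float, capacity_gb: int) -> float:
    """Cost per gigabyte for a single drive."""
    return price_usd / capacity_gb

# Hypothetical list prices for illustration only -- not real EFD figures.
launch = price_per_gb(14000, 73)   # assumed 73GB EFD at introduction
today = price_per_gb(18000, 200)   # assumed 200GB model now
drop = 1 - today / launch
print(f"${launch:.0f}/GB -> ${today:.0f}/GB, a {drop:.0%} drop per GB")
```

With those assumed numbers the per-GB cost roughly halves even as the headline drive price rises – which is why capacity bumps matter as much as list-price cuts.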

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • Geoff

    The immediate benefit of FCoE needs to be viewed from the server side–a single I/O interface (CNA) rather than a multiplicity of NICs and HBAs–reducing data center spaghetti. By extension, FCoE provides a single fabric (LAN and SAN) rather than separate networking infrastructures. This vision will be executed over time as networking infrastructures (both LAN and SAN) are upgraded to accommodate unification over DC Ethernet (lossless). DC Ethernet significantly reduces cost, complexity, and overhead.

    The storage target will still be delivered FC. Whether the FC frames are delivered over DC Ethernet or directly over Fibre is a fairly moot point right now, but will become relevant once the native channel interfaces on the arrays become FCoE channels. Then it is simply a matter of convenience, not necessity, as any FC target can plug into an FCoE network. Remember that the payload will not change and will remain FC, such that the target retains all its properties and capabilities from a functional standpoint. Nothing really changes on the target.

  • Geoff

    Yes, but we need something more scalable and I/O optimized than VMFS for what you are describing. The RDM-with-NPIV solution today, bypassing VMFS, is suboptimal from a management and integration standpoint, but is recommended for structured/random I/O and performance. For these requirements, I can see the value of FC-8 on the storage channels on the arrays. Again, the interface is irrelevant (native FC or FCoE), as long as it is a low-latency I/O stack (i.e. no TCP/IP in the I/O path) and ~10Gbps for concurrent access requirements. Here is where you want to continue using LUN-level intelligence from the array vendors for migrations, snaps, and DR requirements for large structured data sets. DDN and 3Par arrays are good for these workloads, with good toolkits. These workloads also do not tend to run in VMs as of yet. Cisco’s UCS changes that, which leads me to hope that the VMFS scalability limits and feature set will be much further enhanced as well. The vStorage API is an interesting convergence point, at least for managing the structured data sets. Still not sure how to properly handle very large unstructured data sets on VMFS. NFS?

  • Ced

    Hi Chris,

    “and tape doesn’t have a long-term strategic future in anyone’s business”

    How do you see the future of tape? Because so far, I haven’t found any VTL-based solution capable of providing as much storage density per square metre.

    With an SL8500 (the one I know), you have 8,500 tapes of up to 1TB each (with T10000B). That’s 8.5PB. Have a look at how many racks of disks you would need for this. I’m pretty sure the floor space required would be much higher.

    Also, in a green strategy to reduce power consumption (to fit more stuff in your DC :) ), VTL-based solutions are still a long way from matching the power consumption of a library.

    The only ‘unknown’ is how tape will evolve in the future to provide higher density.
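Ced’s density argument can be sketched as a back-of-envelope calculation. The tape-side figures come from the comment; the disk-side assumption (roughly 150 × 1TB drives per rack) is a placeholder for illustration:

```python
# Rough floor-space comparison behind the 8.5PB SL8500 figure.
tape_slots = 8500
tape_tb = 1.0                       # T10000B cartridge, up to 1TB
library_pb = tape_slots * tape_tb / 1000
print(f"SL8500: {library_pb:.1f}PB in a single library footprint")

# Assumed disk density for illustration: ~150 x 1TB drives per rack.
drives_per_rack = 150
racks_needed = tape_slots * tape_tb / drives_per_rack
print(f"Equivalent raw disk capacity: ~{racks_needed:.0f} racks")
```

Under those assumptions, matching one library’s raw capacity takes dozens of disk racks, which is the floor-space (and power) gap Ced is pointing at.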

  • Pingback: Enterprise Computing: The Long Term Future of Tape « The Storage Architect

  • Chris Evans


    I’d agree with you on the consolidation front; however, a year ago when FCoE was being discussed, for individual servers there was little benefit in consolidating into a single “I/O” device: no cost saving, more management overhead, and so on.

    Since then, the Cisco UCS announcement, plus the clear direction we’re heading in – most servers virtualised and sitting above an “Open Systems Mainframe” style architecture – means the implementation of FCoE makes perfect sense. In that scenario, I’d agree with you that FCoE is much more preferable than a multiplicity of connections; it’s like going back to ESCON and EMIF of 20 years ago. Personally, I think this trend should be sounding alarm bells for the storage vendors. The clear message is that their hardware is not important; higher-level intelligent functionality like snapshots and replication will be done at the hypervisor level and/or guest level. So, as long as the hardware delivers and can be managed, then who cares where it comes from?

  • Chris Evans

    d_ced – I’ll reply to this with a post, as I think your comment merits a more detailed response.

