
Enterprise Computing: Is DMX The Worst Array for Wastage?


As part of my work with Storage Fusion, I’m reporting on capacity and utilisation figures for different array vendors.  Part of this includes analysing exceptions and wastage.  DMX and Symmetrix seem to have more issues than any other platform.  Is this by design?

I guess I should define what I mean by exception and wastage.  For exceptions, I’m referring to inconsistencies in the configuration which create a logical error.  This may include things that are just generally untidy (a LUN mapped to four ports but only masked/zoned on two), things that cause a potential problem (single pathing of a host to a LUN), or things that are simply wrong (a BCV smaller than the source LUN, which can therefore never be replicated to).

Wastage refers to things in the configuration that can be reclaimed – storage mapped to a port which has no masking, for example.
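As an illustrative sketch only (this is not the Storage Fusion tooling, and the record layout is invented for illustration), the mapped-but-not-masked wastage check and the partial-masking exception both reduce to comparing the set of devices mapped to ports against the masking database:

```python
from collections import Counter

# Hypothetical sketch of two configuration checks. A real tool would
# build these sets by parsing symdev/symmaskdb output; here the
# (device, port) pairs are supplied directly.

def find_wastage(mapped, masked):
    """Return devices mapped to a port but absent from all masking
    records, i.e. storage that can be reclaimed.

    mapped: set of (device, port) pairs from the mapping config
    masked: set of (device, port) pairs from the masking database
    """
    masked_devices = {dev for dev, _ in masked}
    return sorted({dev for dev, _ in mapped if dev not in masked_devices})

def find_partial_masking(mapped, masked):
    """Return devices mapped to more ports than they are masked on
    (e.g. mapped to four ports but masked on only two)."""
    map_count = Counter(dev for dev, _ in mapped)
    mask_count = Counter(dev for dev, _ in masked)
    return sorted(dev for dev, n in map_count.items()
                  if 0 < mask_count[dev] < n)

mapped = {("0A1B", "FA-3A"), ("0A1B", "FA-4A"), ("0A1C", "FA-3A")}
masked = {("0A1B", "FA-3A")}
print(find_wastage(mapped, masked))          # 0A1C is reclaimable wastage
print(find_partial_masking(mapped, masked))  # 0A1B is an untidy exception
```

The point of the sketch is that both categories are mechanically detectable once the configuration is exported, which is exactly why they show up in automated analysis.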

DMX and Symmetrix issues account for a significant percentage of the error codes we have developed, and more inconsistent forms of configuration are uncovered each week.

I have some theories as to why these might be occurring.

  • Complexity.  DMX configurations can be wildly complex.  For example, there are 37 (yes, 37) different LUN types we track; these include the obvious Unprotected, Std and RDF1+Mir, but there are also things like RDF2+Mir and RDF2-Mir (is that the same as DVD+RW and DVD-RW?) and RDF1-BCV+R-5.  There are also all the different replication types: SRDF/A, SRDF/S, SRDF/Star, SRDF/AR, Cascaded SRDF, Concurrent RDF and so on.  The same thing applies to local mirroring – BCVs, Clones and Snaps.  DMX/Symmetrix offers a lot of options, but therefore also provides the ability to create complicated and unwieldy configurations.
  • History.  The DMX/Symmetrix line has grown up over many years.  The original design concepts are still there – use of hypers, mirror positions and so on.  New features have been layered over this original design, including features such as thin provisioning and snapshots.
  • Management.  DMX still doesn’t offer an IP-based interface to manage the array.  All configuration is in-band through gatekeepers and command devices.  The CLI (Solutions Enabler) is now vast and expansive; however, even basics such as LUN mapping are handled via configuration changes, whereas masking is done by command.

What of the competitors?  USP/XP configuration is a breeze with Storage Navigator/CVAE, and replication functions can only be assigned to a LUN once it has been presented.  NKOTB like 3Par and XIV have much more streamlined and effective management interfaces.  Even Clariion and EVA preclude many error-prone configurations simply by the way LUNs are created as needed.

All in all, DMX is getting long in the tooth.  Hopefully DMX-5 will remedy some of these problems.

By the way, EMC arrays produce the most recoverable storage, so if you have DMX or Symmetrix arrays and want to get some storage back, drop me a line.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • http://blogs.rupturedmonkey.com Nigel

    Ooooh I feel a large backlash coming your way ;-)

    But hey, may be Cisco will revamp the DMX line when they buy EMC?

  • tonyasaro

    Very informative. Do you think on some level that IT professionals like the complexity and being the EMC experts in their companies? I remember that one IT professional said to me that they didn’t want to use 3Par because it might end up with him losing his job – because it was so easy. However, in contrast to that, another IT pro told me that he would like to see the day when storage was as easy to configure and support as consumer electronics.

  • http://storagezilla.typepad.com Storagezilla

    Around management I was working on a POC here one weekend and needed to add devices, create device groups and perform a number of SRDF personality swaps.

    I don’t fly Symms day to day so it should have sucked to have been me.

    But it didn’t suck to be me as I just downloaded and installed the Symmetrix Management Console (SMC) from Powerlink and did it all with a few clicks and without the assistance of a Symm admin.

    I covered this last year.


    There are two products DMX customers should be using if they’re not already using them. Virtual Provisioning is one, SMC is the other.

    What I don’t know about configuring the Symm via SymmCLI can fill volumes but if I can manage a Symmetrix like I can manage a mid-range array and do so through a browser anyone can.

  • tonyasaro

    Agreed. I think a shift from low-level to more strategic value is inevitable. We have seen this in other areas of technology – networking, operating systems, databases, etc.

  • http://chucksblog.emc.com Chuck Hollis

    Hi Chris

    I was a bit confused. I think it would be helpful to differentiate “how the array works” from “how people choose to use the array”.

    Many of the customers I work with prefer to have unused swing capacity around for a variety of reasons of their choosing, which has little or nothing to do with the underlying characteristics of the array itself.

    Put differently, DMX can be run at extremely high utilization, if a customer chooses. However, many don’t — even if they know they can.

    And ‘Zilla is right, SMC (which has been out for quite some time), makes DMX config a breeze, and eliminates many of the issues you raise.

    Even I’ve done it — but not on a production machine :-)

    — Chuck

  • http://www.storagenerve.com Devang

    You point out a few things in there that are so-called limitations of the Symm DMX platform.

    The problem overall has been the Enginuity code: it was designed back in the days of Moshe, with Symm2 and onwards, for a true SCSI backplane. The same code was ported over to the Symm 5.5 with the LVD backplane, and again to the DMX and the DMX-3/DMX-4 platforms. Added features, added design, but the core remains the same.

    With DMX-5 or DMX-5V (??) we might see some additional features related to interfaces and how data structurally works.

    Added support for SMC is good; a while back I wrote a post about it. Here is the link

  • Chris Evans

    Oh, Mark – and isn’t SMC just using SYMCLI under the covers to do its dirty work? The early version I saw didn’t support everything that could be done in SYMCLI – I assume that has changed?

  • http://storagezilla.typepad.com Storagezilla

    Yes it is an easy to use front end to SymCLI and there’s nothing wrong with that.

    How many custom scripts have Symm admins written to cover their specific corner case?

    Hundreds? Thousands? Tens of thousands?

    Why would I not leverage that functionality?

    You can’t just ditch that and tell people you want them to do it in a more limited way. We’d have customers screaming blue murder. But with SMC and ECC (I think the limit for SMC is what you said) people don’t have to worry about SymCLI.

    As for Kevin, I’ve worked with him, he’s forgotten more about Symmetrix than you or I will ever know and is a fine engineer. I don’t know what you discussed and won’t comment on that.

    EMC can make tools like SMC and technologies like Virtual Provisioning available but we can’t force any customer to use them no matter how many sessions at EMC World we run or how many times we say they *should* be using them.

    New Symm users start using them on day 0, with long time Symm customers there’s an aspect of “But we’ve always done it this way and we’re not changing that.”

  • http://storagezilla.typepad.com Storagezilla

    I’d argue that a lot of the scripts have more to do with facilitating their application setup and unique requirements than any limitation on the part of the storage array; if I’m reading this correctly, you’re saying the DMX is too flexible and SymCLI is too rich.

    You know, I don’t know how well putting rule checking restrictions on what DMX customers can and can’t do with their systems would play.

    That’s not being snitty. Were we to poll a room full of Symm guys and say “we don’t want to allow you to do bleh”, I’d get a multitude of reasons why they’ll want to.

    We’ve been down this road.

    Limitations on mid-range products I get: they’re all about limiting what could go wrong, as the margins don’t pay for the support calls. But DMX admins consider themselves to be the experts on what they’re doing and why they’re doing it.

    I’m not saying poorly thought out configurations don’t happen, they do, but I’d be cautious as to how I’d have the system decide what a poorly thought out configuration in a DMX was.

  • http://thestorageanarchist.com the storage anarchist

    FWIW, I think many of you have an out-dated view of SMC.

    First, while initially it was modelled to be a GUI interpretation of all of the CLI commands, it actually doesn’t use the CLI to operate – it uses the same SymAPI infrastructure that SymCLI uses to accomplish its tasks.

    Second, it does everything you can do with the CLI, AND MORE. Today’s SMC includes multiple wizards to automate tasks. For example, you can set up specific “profiles” for new storage allocations, and it will automate the steps for you.

    Third, as Mark implies, with merely a basic understanding of the allocation methodologies employed for Symm, just about anyone (with the appropriate credentials) can manage a Symm.

    Fourth, SMC reduces CLI complexity significantly – for example, setting up a SNAP operation that takes 21 CLI commands can be done in 3 clicks with SMC.

    Fifth, SMC ships with each new revision of Enginuity, and fully supports all the latest features of each release at GA – no more waiting for new features to propagate to Symm Manager.

    Sixth: with the latest release of ControlCenter (6.1), SMC essentially replaces Symm Manager altogether. SMC is responsible for all storage allocation and device management, and ControlCenter will launch-in-place SMC as you drill down into tasks on specific arrays. As such, it is no longer “ECC Lite” – it is in fact “ECC’s Device Manager for Symmetrix”.

    Seventh: did I mention that SMC is free to anyone with a Symm Manager license?

    Eighth: The next release of SMC adds even more automation and simplification, especially in the area of mapping and masking where the entire process has been streamlined such that it now takes a fraction of the clicks and elapsed time that competitor products require.

    Ninth: The idea of a “symantec check” is a good one – and you’re right, nobody does a great job at this. Given that Symms are typically bigger and support more hosts/applications, the issue you raise is all the more acute; I’ll be sure that our engineers take your input into consideration – we’re always looking for enhancements that can improve the practical application of Very Large Arrays.

    Thanks to all!!!

  • http://chucksblog.emc.com Chuck Hollis

    Good point Chris

    I think you particularly (because of your line of work) get exposed to many situations that — politely put — might need some professional help.

    In my travels, I’ve seen customers screw up just about every storage array that’s out there. The EPIC FAIL modes change, but the root cause remains the same :-)

    — Chuck

  • NA

    As has been said before, Virtual Provisioning is an awesome piece of software (resolves a lot of DMX limitations), but with the license cost, I don’t really see it as an option for many customers.

    Here’s an array… if you want it to be easily manageable, pay this “extremely high” cost. Perhaps the non-thin-provisioning portion should be built in for free to make this an appropriate argument.

    DMX is probably a lot easier to manage with a vSeries or SVC in front of it too.

  • http://www.techmute.com Techmute

    As Zilla indicated on the previous post, yes, Virtual Provisioning is (most likely) priced similarly to other thin-provisioning solutions on the marketplace. I guess where I take issue is when VP is thrown out there as a method to make overall storage management of a DMX easier.

    Not everyone is interested in thin-provisioning (or paying the cost of it), but I’m sure there are a lot of customers who’d like DMX management made easier. Lumping “easy storage management” in with “thin provisioning” is a slightly bitter pill.

    @Chris/Zilla – A lot of the home-grown scripts are to enforce standards, and to prevent common-sense errors… and to make DMX management easier. To indicate that the scripts are mostly written to resolve corner cases seems incorrect, from what I’ve seen.

    I still prefer SYMCLI over SMC and ECC simply because it requires you to know exactly what you’re doing. I’ve seen mistakes made because people assume things about both GUIs. Not to mention, it is quicker (for me) to hunt about a SYMCLI Quick Reference than poke around a GUI :-).

    Shoot, if EMC combined SYMCLI with a CLI environment like Cisco’s, I’d be in heaven.

  • http://www.techmute.com Techmute

    Regarding wastage-

    “Wastage refers to things in the configuration that can be reclaimed – storage mapped to a port which has no masking, for example.”

    I’m pretty sure this is something that is fairly easily discovered through SYMCLI. Once again, it isn’t something that ECC/SMC will show, but at some point familiarity with the CLI should be assumed.

    “So, for exceptions, I’m referring to inconsistencies in the configuration which create a logical error… LUN mapped to four ports but only masked/zoned on two”

    It basically all ties back to the management toolset. I think that the GUIs make it easy for novice storage administrators to make these types of errors. Selecting all the HBAs and Storage Ports in ECC, for example, will create masking entries that are never used… so even if SMC and ECC make it “easy” to perform a lot of tasks, if someone doesn’t know precisely how to interpret these GUIs, then inconsistent configurations result.

    I’m surprised no one has commented on LUN addressing yet…

  • http://www.techmute.com Techmute

    Host/dynamic LUN addressing (requires 72 code I believe) makes many of the LUN addressing issues moot. It is still a pain for VMware farms where all the hosts have to see the same LUN # for each device, but all-in-all it is a great feature that (significantly) reduces mapping complexities.

    The symmask parameters for it are -lun or -dynamic_lun depending on what you’re trying to do. I believe that if you use this feature, you can map and mask in one command via the CLI.

  • http://www.storagenerve.com Devang

    For the designers and strategists, one piece of advice: let’s try the new DMX with a concept where SYMAPI is not on the Service Processor, or maybe a slight variation where there are multiple paths to SYMAPI which can be used to talk to the Symmetrix DMX. I have personally seen cases with allocation or provisioning where the service processor is hung or crashed, or SymmWin is not working or can’t obtain a lock for multiple reasons.

    If your SYMAPI database has crashed or corrupted, or the service processor has crashed, you will lose all configuration and communication access to the DMX… I am only talking about interfacing with the DMX, not losing complete access from servers to the disk.

    But truly, the DMX with its current design – failover, redundancy, cache mirroring, RAID 5/6, SRDF – is great, but your single point of configuration failure is your SP? Any EMC experts, correct me if I am wrong!

  • http://storagezilla.typepad.com Storagezilla

    NA: A DMX-4 can support up to 64000 volumes. An SVC cluster can support up to 8000 volumes, 2000 volumes per IO group.

    That’s 8 SVC clusters consisting of 64 nodes to front end a DMX-4.

    I don’t see the management win when I now have nodes coming out my ears, and IO groups which now have to be managed as well.

  • http://storagezilla.typepad.com Storagezilla

    Techmute: I get where you’re coming from with VP but I will say that I was able to allocate volumes using VP without knowing a thing about what was going on underneath.

    Right click on the Symm and select Create Device or something like that. There’s no way in hell I’d have been able to have created the Symm devices I needed without SMC and VP.

    Or getting someone out of bed.

  • http://thestorageanarchist.com the storage anarchist

    Just curious:

    How much would these complaints be reduced if Symmetrix VP were provided with the system at no additional cost? Is this a pricing issue, or is it something more?

    And as to radically simplifying the tasks of mapping and masking storage to multiple ESX servers – another very good point. I’ll ask our engineers if they can have a look at that as well…

  • http://www.techmute.com Techmute

    Zilla, Anarchist:

    I’m not arguing against how easy VP makes provisioning on the DMX, it is more the feasibility of getting it into a pre-existing environment.

    Anarchist: “How much would these complaints be reduced if Symmetrix VP were provided with the system at no additional cost? Is this a pricing issue, or is it something more?”

    From my standpoint, it is almost completely a cost issue. For new DMX purchases, there is little problem in getting the VP license rolled in (or even “sunk”)… with existing DMXes, that isn’t an option. If you’re in an environment where the thin-provisioning functionality isn’t going to be used (due to politics, uncertainty, the length of time it takes to actually purchase disk), then it is impossible to sell the VP license based simply off of “it’ll make my job a lot easier even without the TP portion.”

    Without VP, the difference in the amount of time it takes to provision CLARiiON storage compared to DMX storage is staggering.

    “And as to radically simplifying the tasks of mapping and masking storage to multiple ESX servers – another very good point. I’ll ask our engineers if they can have a look at that as well…”

    Please do. I can’t imagine we’re unique, but between VMware (and Windows, for that matter) needing to see LUN IDs less than FF, and the size of the ESX farms, it is quite unwieldy… especially if you’re using dynamic/host LUN IDs for the farms to meet the LUN ID requirement. It can be scripted, but even then, that is a ton of symconfigure and symmask commands.
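    The constraint Techmute describes – every host in an ESX farm must see a shared device at the same host LUN ID, and that ID must stay below FF – reduces to picking the lowest ID that is free on every host in the farm. A minimal sketch (the in-use maps are hypothetical; a real script would harvest them per initiator from the masking database):

```python
# Pick the lowest host LUN ID (below 0xFF) that is free on every
# host in a farm, so a shared device can be masked consistently.
# The per-host "in use" sets here are illustrative stand-ins for
# what a real script would parse out of the masking database.

def common_free_lun_id(in_use_per_host, max_id=0xFF):
    """in_use_per_host: {hostname: set of LUN IDs already assigned}."""
    taken = set().union(*in_use_per_host.values())
    for lun_id in range(max_id):
        if lun_id not in taken:
            return lun_id
    raise RuntimeError("no LUN ID below %#x is free on all hosts" % max_id)

farm = {
    "esx01": {0, 1, 2},
    "esx02": {0, 1, 3},
    "esx03": {0, 1},
}
print(hex(common_free_lun_id(farm)))  # 0x4 -- 2 and 3 are taken somewhere
```

    The union across hosts is what makes the ID consistent farm-wide: an ID free on one host but taken on another is unusable for a shared device, which is exactly why doing this by hand across a large farm gets unwieldy.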

  • http://www.techmute.com Techmute

    “I think the view is that EMC are giving a solution and savings in one hand then taking it back in the other by charging a licence that could negate a significant amount of the savings.”

    Or, as I’ve heard it put… “Any technology that reduces the amount of storage needed will cost enough to make up the lost profit… unless it is a 3rd party technology.”

  • http://storagezilla.typepad.com Storagezilla

    Okay so we have a good discussion here.

    Chris wants some form of semantic rule checking so people can’t paint themselves into a corner or at least be notified if they’re about to.

    What else do people want around management?

    Tell me what will make Symm management better?

  • SRJ


    Your straw-man SVC example is disingenuous and silly, no?

    Your facts aren’t even correct…the SVC only supports 4096 mDisks per cluster – you’re confusing them with vDisks. So if someone were actually as dumb as you make them out to be, and they actually wanted to provision 64000 (9GB?) LUNs to the SVC, they would need (16) 8-node clusters…not 8.

    But seriously…are you serious? Seriously?!? C’mon…

    That would be like me saying you need 14 DMXs to back-end one SVC cluster, since the SVC can manage 8PB of capacity.

    I could be wrong on max usable capacity of the DMX (585.91TB?)…just illustrating here. Doesn’t matter…it’s an equally ridiculous claim. Let’s both not make ridiculous claims…


  • http://storagezilla.typepad.com Storagezilla

    Yeah, let’s c’mon – so I mixed up a larger vDisk number with a smaller mDisk number.

    That makes it all even *less* compelling than I thought it was.

    The ridiculous claim is that it’s easier/less expensive to manage a DMX through a lot of small SVCs than just through the DMX itself.

    I don’t want to pay for virtual provisioning so I’m going to buy, manage and pay maintenance on a lot of SVC nodes instead?

    Same for vSeries? Which is what? Two nodes a cluster?

    Seriously? Even early on it’s not an OpEx benefit. It’s a *cost deferral*, because as you grow you have to keep adding more up front. All of which you have to buy, manage, and pay maintenance on.

    Mid-range market? Sure knock yourself out. Front ending something the size of a DMX? Why don’t we just dig a pit, throw your money into it and set it on fire.

    That’s probably more cost effective in the long run.

  • http://thestorageanarchist.com the storage anarchist

    SRJ – indeed, ‘Zilla is exaggerating, but only to make a point.

    I’ll tone it back a bit…

    There are in fact a large number of Symms running north of 32,000 devices, and the AVERAGE number of exported LUNs across all installed DMX4s is more than an 8-node SVC cluster can pass through (mDisks) or aggregate and export (vDisks).

    And if you use SVC to provide thin provisioning, let’s not overlook the additional capacity on the array required to host the metadata – it may be small, but it’s not insignificant.

  • SRJ

    Sorry for the late response – thought I had e-mail notifications enabled for these comments…

    ‘zilla: I know it makes it less compelling. I made that point for you because it doesn’t matter – it’s meaningless. The *real* point is that if someone wanted to front-end a DMX with the SVC, they would *never* create 64000 volumes (mDisks) to present to the SVC. They would create 1 volume per RAID array…well within the capabilities of a single cluster to manage. It is ridiculous to present the SVC in the light you did, using totally incorrect assumptions (a straw man).

    Anarchist’s point is much more coherent…(refreshing)…but I suspect that if he only looked at open systems customers, those averages would come way down. Nonetheless – I certainly wouldn’t argue that the SVC can handle *every* situation. And in my opinion, <1% is a relatively small price to pay to get thin provisioning…but BarryB is right – it does exist. Significant? Not in my opinion.

    ‘zilla – Who said it would be less expensive??? It may/may not be, but no one made that argument. This discussion was all about ease of management. NA’s point (rightly made) was that it would be easier to manage storage behind the SVC than it is to manage the DMX. Then you built a straw-man architectural example of the SVC in order to discredit his assertion, which I corrected you on. NA’s point still stands. And there are many, many large enterprise customers using the SVC in front of enterprise-class arrays, DMX included.

    Now, we could have the cost discussion, but that’s a long, and probably unproductive one in a forum like this. You’ll make the argument that you have to buy SVC nodes and licenses, etc… I’ll make the counterpoint that you don’t have to pay for PowerPath, you only have to buy copy services once (and only for the usable capacity you actually need), you can replicate from a DMX to a non-DMX, you can non-disruptively relocate volumes from the DMX to a cheaper tier-2/3 platform as your workload requirements change, etc, etc, etc, etc, etc… As a matter of fact – the ROI is usually satisfied *solely* by the elimination of PowerPath! In the end though, that’s not an argument (valid as it may be) anyone was trying to make.

    Sorry man – that fire pit is much better fueled by cancelled EMC software maintenance contracts…

  • http://thestorageanarchist.com the storage anarchist

    Point of clarification – most of the Symms with large volume counts are Open Systems, not mainframe…

    Unless I’ve been misled, the oft-overlooked limitation of SVC is that you can only nondisruptively relocate LUNs (vDisks) within a single node pair (2000 vDisks), not across the entire cluster (8000 vDisks max). These sorts of limits tend to fit better within much smaller environments (I’d call them “mid-range”, but Farley has already smacked-up the idea of putting SVC in front of weed-whackers).

    And for the record, you don’t have to buy PowerPath to work with Symm or CLARiiON (the free version of PP handles path failover but does not offer load balancing).

    Nor do you have to buy local or remote replication software for capacity that you don’t plan to replicate on a Symm. Yeah- you used to, but no more. You can even exclude all your SATA capacity from SRDF & TF if you’re not going to replicate it – and if you do want to replicate it, SATA capacity counts half against the capacity-based pricing.

    So, with all due respect SRJ, the economics of SVC vs. DMX these days are not necessarily all that they’re hyped up to be.

  • SRJ

    Thanks for the clarifications…a few of my own:

    You can now relocate vDisks between node pairs (IO Groups) with a feature included with the current version of code, 4.3.x (a free upgrade for all). Even before this was possible though, it was never as big a deal as you guys made it out to be, even in the larger SVC shops. It was more of a convenience factor. But the whole point of this forum is about how inconvenient the Symmetrix is to manage…

    I knew about the free version of PP, but what enterprise customer doesn’t want load balancing? SDD is free and it does both.

    Didn’t know about the new licensing scheme on the Symm. With that knowledge, I’ll take the opportunity to call IBM out on the carpet, whose licensing policies on the DS8000 are still idiotic. Is anyone listening at IBM? If you can do capacity-based licensing on the SVC, why not for the DS8000 as well?

    tsa – does that licensing model apply to all features on the Symm, or just replication?

  • techfreeze


    I have to add some real-world color regarding SVCs and even other competitive gear. I had the pleasure of managing 5+PB of EMC, HP, HDS, Sun, IBM, and NetApp arrays (85% EMC gear). As an outsourcer we looked at the possibility of virtualization via HDS, Invista, and IBM SVC. I reference these other arrays because I don’t want you to think I only know EMC. Not the case; I have actually deployed several SVC clusters in my day and feel there is a place for virtualization. You might agree that my pain points as an outsourcer around tech refreshes and the process to on-board/off-board clients in a leveraged environment might make virtualization a likely choice. Yes, IBM referenced the savings of using SDD vs. PP as HUGE savings. I had my own challenges around that cost, but when you leverage PPME (Migration Enabler – online) and even encryption through PP, I quickly realized IBM’s SDD really wasn’t an equal. I find that if you ask “can yours do that?”, there were a lot of “no”s on IBM’s part. Again, some of these features might be a moot point if you are not using them, but I like the fact that if my business needs change and these features are needed, it is a quick license. IBM would require another solution. I keep thinking of IBM’s innovate commercials and trying to figure out if they will add some innovation to their multi-pathing software. You have to admit that EMC has raised the bar with their PP capabilities.

    I have worked with most of the IBM product suite, from the DS3000/4000/6000 through the 2105s and 2107s, from the RDACs to the SDDDSMs, as a customer of IBM and as a Business Partner. I couldn’t help but get frustrated at the limited scalability and flexibility. I will give you a few examples. Have you ever tried to create multiple LUNs/devices through the SM GUI? You can’t do it. Well, not quite – you can through scripts. How about reporting? That is not pretty either. Perhaps this has changed in the 8 months I have been away, but I doubt it. The mid-range LSI arrays scale to “x” number of drives and then you had to figure out a couple of things. They generally resorted to: 1) What new frame should I buy? 2) How shall I migrate my data?

    EMC at least gives you options or at the very least prolongs this discussion.

    I don’t care what mid-range IBM array you look at when you can only scale to 224 DDMs. Not to mention IBM charges for storage partitions, or what we call Storage Groups (free with EMC). You can knock PP cost, but I think charging for partitions or storage groups is ridiculous. If you see a common theme here, I can see why IBM went down the SVC route. This was to help protect existing IBM customer investments, or at the very least make all of these migrations a little easier. Think about it: what else was left to do? Your arrays don’t scale and they lack a lot of features/functionality that, ironically, were placed into the feature set of the SVC. I think the right choice would be to make the mid-range arrays more scalable through DIP (it’s okay, take some ideas from EMC).

    I can even turn my conversation to UVM/UVR from HDS. If anyone has gone through the process of virtualization, we all know any in-band solution requires an outage. I have to give IBM props on the SVC: a customer could deploy this if they wanted to. Put that against HDS, and that is not possible. The whole process to virtualize is not as easy as the IBM counterpart. You will be working a nice Excel spreadsheet through the migration process just to keep track of your Ext CU:LDEV, ExtAG, Ports, Trgt CU:LDEV, Target Array Group, Host Mode, etc.

    In short, I can generally leverage real-world consulting experience and discourage virtualization. It just adds another layer of management. Don’t forget you still have to manage the underlying arrays and perform maintenance, so the point that you can minimize other vendor skillsets would be false. It can still add another layer of complexity, because you can’t just present an entire array with data on it to the SVC or HDS and expect it to run perfectly. IBM and HDS have best practices on how their own arrays need to be carved out and presented to the SVC/HDS. No one has “x” TBs lying around so you can migrate all array devices/data just to perform maintenance (please don’t tell customers this, as it is just not realistic). Data mobility – yes, this is true; however, EMC offers this capability (Virtual LUN) as well, with no need to virtualize externally.

    Let the debate continue… :) Good blogs so far.

  • InsaneGeek

    Maybe it’s just me, but not being forced into a GUI-based interface is one of the things that I really like about the DMX. I’ve used a number of interfaces that work well enough, until some manager comes along and says “can you give me this report?”. Then it gets massively hard on the supposed “easy GUI”.

    You want to know all the BCV’s that never sync’d?

    symbcv list | grep NeverEstab
    or, if Windows:
    symbcv list | findstr NeverEstab

    or, even easier, if you are Excel-friendly:
    “symbcv -output xml list > filename.xml”, then open filename.xml in Excel.

    This gives you a list of everything; it even creates filters so with a single click you can switch from Never Established to Split, etc. (you can do the same thing with snap, clones, devices, etc).

    I’m not going to say that the DMX is point-and-drool easy, but for me typing “symbcv list” is just as easy as (or easier than) opening a browser and clicking through menus to find the same info… and more often than not I can’t export that output.
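The one-liners above can be wrapped into a tiny report script. Note this is only a sketch: the sample data below is a made-up stand-in for real symbcv output (the actual column layout varies by Solutions Enabler version), so treat the format as an assumption.

```shell
#!/bin/sh
# Sketch: filter BCV pair listings for never-established pairs, in the
# spirit of "symbcv list | grep NeverEstab". The sample below is invented
# (device, BCV label, pair state); real symbcv output differs by version.
symbcv_output() {
cat <<'EOF'
0012 BCV001 Synchronized
0034 BCV002 NeverEstab
0056 BCV003 Split
0078 BCV004 NeverEstab
EOF
}

# List the pairs that have never synced, then count them
symbcv_output | grep NeverEstab
echo "Never established: $(symbcv_output | grep -c NeverEstab)"
```

In practice you would replace the `symbcv_output` function with a call to the real `symbcv list`, and the same grep/count pattern carries over unchanged.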

    Additionally, ECC will tell you all the above in a tabular form that is CSV-exportable. Also, if you want to know about LUNs that are mapped to a channel but not masked to any host, you click the “free space” button and it shows an overview of all unused capacity on the frame; then click “unallocated device” to show the individual LUNs. It’s not a new feature; it’s been in there for a number of years.

    None of these things seem to require rocket-scientist-level knowledge, or uber-secret back-alley guru knowledge (at least to me they don’t); nor would I consider typing “symbcv list”, etc., a script.

  • SRJ


    Wow – this topic keeps getting broader and broader! I like it..

    I admit that EMC’s PP has some semi-cool features. I’ve just never met a customer who actually uses the ones you mention. Having managed PBs of EMC gear, I’m sure you have… I’ve never talked to a customer who was interested in doing encryption with a very expensive, vendor-specific path management software product.

    It’s interesting that you make the point that IBM would require multiple products to meet some of the feature/function requirements that are met with a single EMC product. I don’t deny that this is true in certain cases, but it is absolutely equally true the other way around in yet other cases. I can’t tell you how many times I’ve put together proposals for customers listing out the 5 or 6 products that EMC is proposing, right next to the 1 or 2 products that IBM is proposing for the same requirements. I don’t have blinders on…I grant your point, but you’ve got to grant the opposite as well if you’re being honest.

    About the DS3/4/5000… I also agree that the GUI management and reporting capabilities are on the weak side. If you want really useful data, you need TPC (or now, TSPC). But again, you could make a similar point about the GUI that comes with the AX/CX products…or most other low-end or mid-range products. Sure they don’t all have the SAME limitations, but they all have their own limitations that get alleviated by purchase of some SRM software.

    You said – “The mid-range LSI arrays scale to “x” number of drives and then you had to figure out a couple things.. They generally resorted to 1). What new frame should I buy? 2). How shall I migrate my data?”

    Really?! How can you not say the EXACT same thing about the CLARiiON?? That statement is incredible…

    And FYI, IBM has mid-range arrays that start as small as 5 drives and scale to 840 drives. The LSI line in particular scales to as large as 448 drives in a single system…

    All due respect, but IBM’s arrays (not mine, I don’t work for IBM and never have) start smaller, scale larger, and can provide more features/functions than CLARiiONs any day of the week. With IBM you have options in the mid-range… DS3/4/5/6, N Series, XIV, and SVC+anything else. You might be able to pick ONE product line from IBM’s line and compare it to EMC in a way that favors EMC, but IBM can always do the same. Sorry man…

    And no, you don’t really need to manage the arrays behind the SVC. At most, you’ll need to handle a component failure or upgrade some firmware. Unless your environment is insane, that shouldn’t happen very often. I’ve got several customers with both EMC and IBM disk arrays behind the SVC that don’t know any more than the absolute basics of those arrays. No need to learn CLIs, copy services, scripting, etc., for all those arrays. The argument is very valid indeed. And again, yes, I have customers buying an entire extra DS3400 just to use as scratch capacity for off-the-cuff migrations. I have competed against EMC in price-war deathmatches, and it is COMMONLY cheaper for a customer to buy SVC with an entire EXTRA DS3400 to use just for scratch capacity (if they need it) than to buy a competing solution from EMC. I’ve done it…several times.

    Your last point is confusing to me…can you clarify? EMC can move a LUN between different boxes (Symm to CX3) with no downtime? I wasn’t aware that this was possible without a virtualization solution…

    Thanks for the discussion. Sorry to everyone for the meandering conversation…I never intended to do anything other than correct a false caricature about the SVC.

  • techfreeze


    I like a good debate, as it is typical for words to be read backwards or not at all. Let’s go back to exactly what was said, and you can help me understand where your interpretations went awry. Apparently you didn’t know PP is open source? It works with all the major vendors. I can’t say every customer requires encryption at the source, but those moving into PCI compliance Level 2 need this capability, and furthermore an easy transition doesn’t hurt.

    I never worked for IBM, but I did work for an IBM business partner as well, managed every IBM array to date, and I think I am hearing a little more sales than actual deployment here. Your mid-range GUI comparison is absolutely incorrect.

    1.) Can you tell me if you think SMGUI has good reporting?
    *I would be amazed if you agreed with this. Maybe I have fallen off my rocker, but I don’t think so.

    2.) Can you confirm that you can get good performance metrics?
    *I would love to hear this response. I will answer that from real-world experience, and the answer is NO.
    *IBM will always push TPC because there is no good reporting. So you start with TPC Basic, which allows you to discover the IBM array or a third-party array, but if you want performance and reporting, well, time to pay again: let’s get you into TPC Standard. I do want to give credit where credit is due: TPC isn’t all that intuitive, but TPC Reporter (free) is probably the best all-inclusive reporting engine I have seen.
    *EMC can push you to ECC, but that’s not appropriate in many cases. Navisphere Analyzer can be leveraged for performance metrics. Yes, there is a very small cost for Analyzer, so I will state that before you counter. Keeping it honest, SRJ :)

    3.) Can you tell me that you don’t charge for Partitions (i.e. number of attached hosts: 4, 8, 16, 32, 64, 128)?
    *So I guess the cost of PP and the cost of partitions is a wash?
    *What if I have a 32-host partition license and I need to add 1 more host? In what increment do I buy my next partition license? With the purchase of a mid-range DS array you get 1 partition (default), but you have to purchase with growth in mind. You can license later, but 99.999% of the time you have unused partitions because you buy more than what is needed.
    *EMC doesn’t charge you for connecting hosts to the CX array.
    *Just so you know, EMC doesn’t charge for PP on the CX4-120.
    *You can use MPIO if you so choose.

    4.) How well does QOS work on your mid-range DS Line?
    *Again, the answer is? I will let you respond :)

    5.) How many LUNs can you create at once through the SMGUI?
    *Only one at a time, unless you want to use rlogin and script this. I can provide the script for the viewers, but I can’t imagine running it every time.
    *This is more aesthetics, but I can’t tell you how many times I wanted to knock this out quickly and just sat and clicked over and over. I learned quickly to script this, but what about those that aren’t so savvy, which is a large customer base? They sure weren’t thinking about those consumers!

    6.) Do you monitor your mid-range DS3000-5000 arrays?
    *Again, the answer is NO.
    *This requires either a CE to come out, or you have the customer make a phone call and send over the subsystem profile. If the situation requires more in-depth support, let’s console into the array, launch a series of command-line commands, and send the output to IBM. In my days supporting and then selling IBM product, I never had one customer say that approach was proactive in any manner.

    7.) Can you support Solid State today?
    *Not a need for everyone, I know, but it shows that you are limited until LSI integrates this into their product line. I suspect it is coming.
    *It’s nice to know this technology can be leveraged in the mid-range arrays.

    8.) Can your mid-range DS Line support 960 drives?
    *Again, not everyone needs 951TB RAW, but if you need to scale to that capacity when you consolidate (that is the key here), it is nice to have the scalability.
    *How many DS5300s do you need to reach 960 drives?
    *First, IBM will sell you SVC, because this fixes the inability to scale the mid-range array to 960 drives.
    *Second, it will take 4 DS5000 arrays (IBM stated the initial release supports 256 drives, so perhaps you are stating it now supports 448; we will give IBM the benefit of the doubt, which would still make it three arrays).
    *My math is decent, but I would say power/cooling/floor space/rack space might be a premium concern for most consumers in today’s market. You agree?

    9.) Can the DS Line do FC and iSCSI?

    10.) Does IBM offer LP ATA (5400rpm)?

    11.) Can IBM migrate a LUN from a different raid group (in IBM’s terms, “array”) or protection type natively, without introducing any cost to the consumer?
    *Your answer is yes (with FlashCopy), but no, not without introducing cost.
    *This is natively supported by EMC at no cost to the customer.
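    On the scripting point in 5.) above (creating LUNs one at a time through SMGUI), a batch approach might look like the sketch below. Be warned this is only an illustration: the “create logicalDrive” statement is an approximation of the Storage Manager script language, so verify the exact syntax against the SMcli documentation for your firmware level before feeding it to an array.

```shell
#!/bin/sh
# Generate a batch of logical-drive creation statements instead of clicking
# through SMGUI one LUN at a time. The statement syntax is an approximation
# of the Storage Manager script language -- verify before use.
gen_luns() {
  i=1
  while [ "$i" -le 5 ]; do
    echo "create logicalDrive array[2] capacity=100GB userLabel=\"lun_$i\";"
    i=$((i + 1))
  done
}
gen_luns
# The generated script would then be fed to the array, e.g.:
#   sh gen_luns.sh > luns.script && SMcli <array-ip> -f luns.script
```

    Generating the statements first (rather than running them inline) means you can eyeball the whole batch before anything touches the array.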

    With all due respect, SRJ, I think you are missing the boat on many comparative aspects. Let’s say the customer starts with a DS4700, equivalent to our CX4-120 product line. What is your option when you scale past 112 drives? You buy another DS4700, or a 4800, or a 5000, whatever your growth requires. My point is that once in a CX, when you reach 120 drives, you scale out in the same array. You can go from 120 drives all the way to 960 drives without purchasing another array or introducing an SVC to help with migrations. This is a perfect example of what every IBM business partner, including myself, did: we took a weak architectural point and sold an SVC to get around this flaw. This didn’t solve anything for the customer; it really just offered up a bandaid for the limited scalability/flexibility that required the SVC to be introduced in the first place.

    You mention going from 5 drives to 840 drives. That comment is true, but you are mixing LSI and NetApp together; that would be your DS3000 through the N7000. I didn’t intermix our EMC CX with our EMC NS product line, so let’s stay with CX to DS. The whole Unified Storage Platform is another debate; I am just focusing on CX versus DS.

    I challenge you on your comment “All due respect, but IBM’s arrays (not mine, I don’t work for IBM and never have) start smaller, scale larger, and can provide more features/functions than clarriions any day of the week. With IBM you have options in the mid-range… DS3/4/5/6, N Series, XIV, and SVC+anything else. You might be able to pick ONE product line from IBM’s line and compare it to EMC in a way that favors EMC, but IBM can always do the same.”

    I listed 10 points in which I am comparing apples to apples, and you still stand by your comment that IBM can do the same? Perhaps my past experience escapes me, but even when I follow that up with IBM data sheets, I still don’t see how it matches up 1-for-1. I see a lot of advantages that would benefit an EMC customer comparatively.

    *Scale larger? 840 drives, right? How about 960 for the CX, and I didn’t even intermix product lines. BTW, our NS-960 scales to 960 drives, which is comparable to the N7000.

    *start smaller? Our AX line starts with 4 drives (which I don’t know why this really matters sooooo much). Yes there are customers that only want 1TB and this fits the mold but not a huge win for either side.

    *provide more features/functionality? Without bringing in another product like the SVC, do you still believe? I listed 10 of them that I would love for you to counter.

    I can give credit where credit is due, but you are really drinking the IBM Kool-Aid. I think IBM has some great products, but it amazes me when comments are made in regard to scalability, flexibility, features, and functionality, and then you want to go head to head. I like the fire and I share that same hunger, but I am a little more cautious about throwing out competitive comparisons, because I know where EMC, HDS, IBM, HP, SUN, and NetApp might each fall short. Each vendor has shortfalls; it depends on whether the customer sees them as a risk or concern, and if not, happy selling.

    You wanted an answer on Virtual LUN, but I’m not sure how you concluded that I said this works across frames. I never stated as such. I talked about consolidation vs. virtualization. My comment was that we don’t have to virtualize, because we can scale to almost 1PB in the mid-range array and move data between tiers online without any downtime. There is a lot you can do internal to the array. If I have a host with a RAID-5 LUN used for Test/Dev and I want to change the protection type, I can do that with the click of a button, and the migration to the RAID-1 LUN occurs with no impact to the application and no downtime. I didn’t even have to introduce the SVC into the mix to achieve data mobility.

    In short, SRJ, I think this is a constructive debate, and if we want to go into the bits/bytes this might get even better :) I know you are a big-time IBM VAR, but I might get you to agree to some distinct advantages that EMC offers. As I have stated before, virtualization has its place, and I think IBM has done some great things with the SVC; I don’t think EMC or any other vendor would disagree. I know I would love to talk to your customers that purchased a scratch DS3400 as a cost-competitive alternative. I am more than confident I can educate them on when it makes sense and the true TCO/ROI. Not sure how the EMC VAR missed that opportunity, but I won’t let it happen on my watch :)

    Great Debate and looking for your response.

  • Pingback: Enterprise Computing: Storage Wastage - A Reclaim Challenge « The Storage Architect()

  • techfreeze

    sorry guys… I will stay within the realm of the discussion.

  • SRJ


    I *really* should be going to bed now, but this can’t wait. Here we were having a civilized discussion, and you had to go and throw the “s” word at me! Trust me – I have spent MANY more years deploying this stuff than I have spent selling it (in a pre-sales technical role…I’m still DEFINITELY not a “s”ales guy). I pride myself on never letting sales guys get away with half the crap that comes out of their mouths. So let’s keep the polemics out of the discussion please. (grin)

    Now…with all due respect to you as well, considering your time working for an IBM business partner, and having managed “every IBM array to date”, your lack of knowledge on the basics of the systems is pretty astounding.

    I don’t mean for that to sound rude or to convey a sense of disbelief that you’ve managed these systems before… you clearly have! It’s just that your level of experience is probably not as extensive as you seem to believe… or you’ve been drinking the EMC Kool-Aid long enough to seriously affect your memory. =)

    Before I start responding to your points, I want to make it clear that I completely reject the notion that I can’t compare IBM’s entire midrange line (including the SVC and N Series) to EMC’s line. Customers care about SOLUTIONS…not products. Point products are meaningless. The fact that IBM has a much more complete line of products with which to create solutions is a strength. Even so, I’ll do my best to play by your arbitrarily limited rules, just for the sport of it…even though it’s not “real-world.” I’ll answer your questions using only the LSI products, but will offer side points using other products where applicable. I’ll also interject a few questions of my own…..fair?

    I’ll just respond to your points in order:

    First, I did not know that PP was open source. Cool. But that doesn’t make it free… (yes, I know there is a free crippled version… is that the open source version?). RDAC and SDD are free… not some crippled version… and they don’t get more expensive as the number of processors in my hosts increases. I grant that encryption capability is cool, just not really relevant.

    1. I grant this. No, the reporting is not good without purchasing another tool. Same goes for the CLARiiON.

    2. Partially true. Real-time performance monitoring is built right in and easy to use. The boxes are also capable of collecting good historical performance metrics…and it’s trivial to enable them. The limiting factor is the fact that customers don’t have access to a software tool to interpret the historical data. I agree that this is dumb. But any business partner worth their salt would do this for free for their customers…they all have access to the free tool. Surely you used it in your past…?

    However, your characterization of TPC purchasing requirements is completely false. TPC is a modular product, so if all you want to manage and report on is disk, you only need to purchase “TPC for Disk.” No need to purchase “TPC for Fabric” or “TPC for Data” (the 3 of which, combined, make up TPC Standard Edition). It would be interesting to compare the costs of TPC for Disk vs Navisphere Analyzer, keeping in mind that the capabilities of TPC for Disk are more comparable to a subset of ECC. These products just don’t line up properly for a good comparison.

    3. It is true that IBM charges for Storage Partitions (host connections) on the DS3/4/5000 products.
    * I never said the cost of PP vs Storage Partitions was a wash. But since you mentioned it….yes, the costs of Storage Partitions are usually much cheaper than the total cost of PP.
    * 33rd host would require the next license increment, which is 33-64 hosts. How much would it cost to license PP for a single new 16-way AIX host? How much does that increase my yearly software maintenance? With IBM, the answer is $0 to both questions.
    * EMC may not charge per host to connect, but they do charge per host for the Navisphere Manager Suite…and it ain’t cheap. The tiered pricing structure is also somewhat inconvenient – similar to the way you tried to mischaracterize TPC.
    * Note that the number of included Storage Partitions varies by model….it is not “1” as you state.
    * Also note that the SVC completely eliminates the need for these licenses.
    * Is it true that EMC charges for additional software just to manage replication?

    4. Interesting question. How well does it work on the CLARiiON? =) How many CLARiiON customers actually turn it on?
    * Note that QOS works extremely well on the SVC and N Series… I’d put them up against the CLARiiON any day of the week. Further note that rate limiting, etc., can be accomplished elsewhere in the solution (Cisco MDS).

    5. I don’t understand why you keep saying this….it’s completely false! You can *absolutely* create multiple arrays and logical volumes at once from the GUI. It’s called “Automatic Configuration.” If you really want to read about it, I can find the exact chapter and section in the RedBook for you. Wow – the depths to which you EMCers will go to spread FUD!

    6. False, false, and false. It’s called RSM…available since January 2007. How long has it been since you worked for that IBM partner again? ;)

    7. You suspect correctly that this is coming… later this year. Interestingly, not a single customer has asked for this yet, but I agree that it is comforting to know that it can be supported if required. For an equally (ir)relevant point: can your CLARiiON support disk encryption today?
    * Note that the SVC can support flash behind it, and even within the new nodes. The N Series can also support flash behind them.

    8. So glad you asked! =) No, the LSI line cannot support 960 drives, thank God. However, you correctly point out that the N Series can scale to MORE than 960 drives… not 840 as I stated. (840 is supported in the 6000 series, not the 7000. The 7000 series supports 1,176… so yes indeed, larger than any EMC midrange product.)
    * You mention that it’s important to have the scalability to grow that large. Would you agree that it’s *kind of* important to be able to scale performance right along with that capacity??? I can’t wait to hear your response to this. The CX4-960 only supports that many drives because it puts 120 drives on each loop pair. This is flat-out *ridiculous*. If you honestly claim that performance scales even CLOSE to linearly with capacity, then we can end this discussion right here; you’re not being reasonable. The CLARiiON is so under-architected in the back-end as to make a 960-drive configuration all but laughable. The DS5000’s performance scales linearly with the number of drives… How? Because it has enough loop pairs to handle it. A maxed-out DS5300 has less than half the number of disks per loop! Even the DMX requires a MINIMUM of 4 disk directors and at least 2 channel directors to handle a 960-drive configuration. You’re gonna tell me that the Windows-based CX4 can somehow magically push the same number of disks with substantially fewer back-end resources? I don’t think so, man…
    * Yes, the DS5300 supports 448 drives now.
    * Yes, the SVC is a perfectly reasonable way to scale capacity while also scaling performance! It’s funny that you think there is something wrong with this. The DS line isn’t “unable” to scale to 960 drives… it’s just that the design engineers aren’t yes-men for the marketing department. It’s an enforced limitation (limited by design), not a forced limitation (limited by some other factor).
    * Yes, it would take multiple DS systems to get to 960 drives. But I can play this game too… The difference is that the DS boxes could actually push them all. How many CLARiiONs would it take to get to 1,176 drives? (N7900) And then how many CLARiiONs would it take to actually push that many drives? =)
    * Yes, power/cooling/floor space/rack space are important to customers. And so is performance utilization. Are you going to tell me that a customer is going to be HAPPY that he bought 960 drives but can only get the performance of half that many?!
    * Speaking of rack/floor space: note that the DS disk trays hold 16 disks per 3U, whereas the CLARiiON trays only hold 15 disks per 3U.
    * Speaking of performance: how much cache is available for actual user data on those CLARiiONs, anyway? Seems like that Windows OS eats up an awfully large chunk of the cache your customers pay for! When a customer buys 16GB (for example) of cache on a DS, they actually get to use it all… On a CLARiiON they would get, what, 9GB? Or is it 6.7GB? Ouch!
    * While we’re at it, is the storage processing (heavy lifting) done with a general-purpose processor in those big CX4s? Or high-performance ASICs?
    * I can’t seem to stop on this topic of performance… What happens to performance when using snapshots? What about when one of the vault drives fails?

    9. Coming soon to the DS5000. New SVC nodes can also do iSCSI. N Series dominates the multi-protocol game….sorry!

    10. No. And again, not a single customer of mine is asking for it. If we’re playing the “Green” game: does EMC offer cheaper/greener media like tape? Optical?

    11. Again – completely false. The DS line supports DAE (Dynamic Array Expansion) to increase the size of the RAID array; DVE (Dynamic Volume Expansion) to increase the size of a logical volume…(this is different/better than LUN concatenation!); DRM (Dynamic RAID Level Migration) to change the RAID type/configuration; DSS (Dynamic Segment Size Migration) to change the data stripe size; and even Dynamic Mode Switching for replication. ALL WITHOUT DOWNTIME. ALL FREE. I seriously cannot believe you didn’t know about these, knowing as much about these systems as you are leading me to believe.
    * Can the CLARiiON dynamically change segment sizes? Dynamically switch from synchronous to asynchronous replication as bandwidth limitations dictate?
    * By the way, the DS can do all of these without needing to move/copy the LUN… can the CLARiiON?
    * Can the CLARiiON non-disruptively expand and use a remote mirrored volume without destroying the mirror, or would you have to completely re-sync the volume after expansion? Of course not…
    * When you add more drive trays later, can you then re-arrange the disks in order to provide tray enclosure loss protection where you didn’t have it before? Nope. I can.
    * Don’t even get me started on the additional functionality that the SVC brings to this point…that would get really ugly really fast for EMC.

    So, all due respect techfreeze – I don’t think I’ve missed any boat. Let me address your hypothetical scenario of the DS4700 (which is most definitely NOT comparable to the CX4-120 in any respect other than drive count…see point above).

    When a customer maxes out the DS4700, they would simply purchase new DS5000 controllers, power down the DS4700, re-cable all the disk trays to the new DS5000 controllers, power up the system, and get back to work. I’m starting to sound like a broken record here, but I REALLY can’t believe you didn’t know this. DACstore keeps all the array information on all of the drives, which enables things like this. No SVC required. Although the SVC is awfully helpful for migrating from a CX4-120 to a much better-performing IBM array! =) If you were selling SVC just to handle an upgrade from a DS4700 to a DS4800 or DS5000, I can see why you no longer work for an IBM partner – that capability is inherently available because of DACstore!!! I now understand why you have so many misconceptions about IBM products…

    To take this a step further… what if a customer has an old CX3 that they want to continue to utilize for the time being, but they just purchased a shiny new CX4 for new workloads? What if they want to move an array (or a bunch of them) from the CX3 to the new CX4? With the DS line, you could simply pop the drives out of the old system, pop them into the new system, and away you go! The DS line is WAY more flexible than the CLARiiON for things like this.

    You are correct – I mis-stated the 840 drive count. It’s actually 1,176 drives. And I admit that I thought all CLARiiONs required at least 5 vault drives (which you shouldn’t really use, and even if you did, you’d have to accept a bunch of limitations). Which system can accept only 4 drives? Also note that the SVC has no real drive limitation… its capacity limitation is 2PB (for now).

    I *absolutely* stand by my claim that I can make points in favor of the IBM products that EMC cannot match…even if I stick just to the DS line! EVERY vendor can play that game against ALL OTHER vendors! The point is that it’s a silly game. It’s the *solution* that matters…not the point products.

    So – if your goal is to get me to agree that EMC point products have some features that a certain IBM point product doesn’t have, you win. I admit it, but you have to admit the same in reverse if you’re being honest!

    You seem to grant that the SVC is a good product, but at the same time you want to artificially limit its applicability to a solution in order to win a feature comparison battle (which you can’t win, btw). You say “virtualization has its place.” What place is that? What place *isn’t* that? I’m interested to hear an EMC perspective on this, given the fact that they’re the only major storage vendor to not “get” storage virtualization.

    Consolidation does not equal virtualization. Migrating between tiers in a single box is just not impressive…it’s old news. Migrating non-disruptively between DMX and CX would be impressive. I can do that for you if you can’t…no problem! Migrating non-disruptively from a 5-yr-old box to its replacement box is helpful and a real money-saver. Doing replication between a DMX and a DS4000 is a game-changer. Sorry man, but you just can’t compare the feature sets! You say “we don’t have to virtualize because we can scale (not really) and migrate data between internal tiers.” I say “we don’t need to buy an over-priced midrange array that can pseudo-scale, because I can virtualize and get *real* scalability from midrange disk, etc…!”

    Thanks for the thoughtful reply…although I feel sorry for the people reading this forum for its intended topic! But heck – we’ve gotten away with it so far…

  • SRJ


    Chapter 5.3.3 of the DS4000 Redbook. See it here:


  • Pingback: Tony Asaro’s Blog Bytes » Blog Archive » External Blog Posts You Should Read()

  • Pingback: David Merrill’s Blog » Blog Archive » Defining the soft costs()

  • Pingback: Enterprise Computing: 63% Of Firms Failing to Manage Storage Resources « The Storage Architect()

  • http://www.us.logicalis.com/our-partners/ibm.aspx Logicalis IBM Partners

    I couldn’t say it’s the worst, but you sure bring up fine points!

  • Pingback: IBM DS3400 Hard Drive Array Problems()

  • Chris Evans

    Probably! However I’m about to go on holiday – and it’s all true!! :-)

  • Chris Evans


    I believe some storage professionals do garner a certain level of comfort from being an “expert” in a particular subject. However experience has told me that no-one is indispensable. I’d rather have the lower layers of storage sorted and configured and concentrate on more pressing issues, like de-dupe, archiving, virtualisation etc, than the basic bits and bytes.

  • Chris Evans

    Mark, I really like SMC – I saw it a few years ago in Cork. Is there still an 8-array limit per SMC installation? This was one of the only drawbacks I saw in it at the time – oh, and I prefer to call it ECC-Lite, much to the chagrin of Kevin doing the presentation (Mr. “I only like RAID-1”).

  • Chris Evans


    There’s no doubt DMX/Symmetrix provides plenty of flexibility to allow customers to configure the array how they choose. On the one hand that’s a good thing; however, it also means there’s scope to deploy configurations that are logically inconsistent. Here are some additional examples:

    arrays with tens of thousands of hypers, combined back into LUNs and then metas to get “better” performance
    arrays with RAID-5 and RAID-1 LUNs carved from the same physical disks
    arrays with BCV copies literally years old which can’t be used because they’ve never been synced, or because the primary/BCVs are different sizes

    My point is that most people either don’t have the time or aren’t organised enough to enforce their own standards to ensure things like this don’t happen. From experience, DMX seems to be the array type which generates the most issues. ECC/SMC don’t provide that “semantically” correct check of a configuration against a set of customer-defined rules or against common-sense rules.
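    As a toy illustration, one of the exceptions above (a BCV smaller than its source, so the pair can never fully establish) is exactly the kind of rule that can be checked mechanically. The three-column sample layout here (device, source MB, BCV MB) is invented for the sketch; a real check would parse something like “symbcv -output xml list” instead.

```shell
#!/bin/sh
# Toy "semantic" rule check: flag BCV pairs whose BCV device is smaller
# than its source device. The sample data layout (device, source MB,
# BCV MB) is made up for illustration only.
pairs() {
cat <<'EOF'
0012 8632 8632
0034 8632 4316
0056 17264 17264
EOF
}

# Emit one exception line per undersized BCV
pairs | awk '$3 < $2 { print "EXCEPTION: BCV for device " $1 " is smaller than its source" }'
```

    The same pattern extends to any customer-defined rule that can be expressed as a predicate over the configuration listing – single-pathed LUNs, mapped-but-unmasked devices, and so on.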

  • Chris Evans

    I think I said I liked SMC – however it isn’t anything more than a SYMCLI GUI. People write scripts because the vendor supplied software doesn’t do what they want.

    I wasn’t being negative about Kevin – I agree with your comments on his ability – however I was making a joke about his dislike of us calling SMC ECC-Lite. I should have added the appropriate smiley to my comment. The guys in Cork were more than helpful on my last visit there; let’s not misconstrue this as anything personal.

  • Chris Evans

    Chuck – I don’t disagree; there are lots of issues with other platforms, and it’s definitely easy to mess up every type of array. This is not purely a DMX/EMC issue.

  • Chris Evans

    Techmute – that’s why I was referring to “semantically” correct. It is possible to configure an array correctly but in a way that is unacceptable to that customer. A simple example is dual pathing: yes, it’s technically acceptable to have one path, but it’s unacceptable to a customer not to have dual paths…

  • Chris Evans

    Ohh, LUN addressing, don’t get me started…there’s a benefit USP/XP had over DMX/Symmetrix long long back….

  • Chris Evans


    You may remember I’ve lambasted HDS for trumpeting the benefits of UVM/virtualisation without admitting that this is a chargeable product. VP/TP is great in concept to begin with but (and a future post coming up) VP/TP isn’t a panacea for removing waste and overallocation, especially as filesystems fragment (HDS being the worst at this with their big chunks). I think the view is that EMC are giving a solution and savings in one hand then taking it back in the other by charging a licence that could negate a significant amount of the savings. It’s a bit like the UK government realising that diesel does more MPG than petrol, so they can tax it higher! Personally, if it was free, I’d use it wherever I could.
