
Enterprise Computing: Understanding The Cost Of Storage


There was an interesting question posed this afternoon by Ditchboy on ITToolbox.  The gist of the question was why enterprise storage is so expensive when 1TB drives cost less than £100 apiece.  It's a good question and one I'm asked quite often.

To us seasoned professionals, the answer may be obvious; here’s the response I posted:

If you go to Best Buy or the like and buy a 1TB hard drive, sure, you can get it at a bargain price.  But what warranty does it come with?  Nothing beyond a three-year return-to-vendor if it fails, and no guarantee whatsoever for the actual data on your drive.

If you want something faster and more reliable than SATA, you can go for SAS or FC, and perhaps increase the drive speed, but the cost will increase as a consequence.

If you put the drive into a chassis, you add the cost of the chassis itself.  In return, the chassis may give you more reliability: perhaps RAID, perhaps hot-swappable drives.

Push the price higher to a modular array and you get more features: read/write cache, higher-quality power supplies, UPS/battery backup.

Higher-priced arrays may also come with other features: web-based management, sync/async replication, snapshots.  You'll also get better connectivity: FC rather than iSCSI or NAS, and active/passive multipathing.

Then there are enterprise arrays: more reliability, more scalability, and more data protection via predictive failure analysis, automated drive swap/replacement, call-home support, additional monitoring, predictive performance algorithms, multi-host connectivity, active-active multipathing, thin provisioning, snapshots, clones, multiprotocol support, tiered storage and SSDs.

So you can see, as the price goes up, so do availability, performance, scalability and reliability.
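To make the progression concrete, here's a rough back-of-the-envelope sketch of cost per *usable* terabyte once redundancy and a chassis enter the picture.  Every figure below (drive price, enclosure price, RAID layout) is a hypothetical assumption for illustration, not a vendor quote:

```python
# Illustrative only: all prices and overheads below are hypothetical
# assumptions, not vendor figures.

def cost_per_usable_tb(drive_count, drive_tb, drive_cost,
                       parity_drives=0, hot_spares=0, enclosure_cost=0.0):
    """Cost per usable TB once redundancy and chassis costs are included."""
    usable_tb = (drive_count - parity_drives - hot_spares) * drive_tb
    total_cost = drive_count * drive_cost + enclosure_cost
    return total_cost / usable_tb

# A bare 1TB desktop drive: no redundancy, no chassis.
bare = cost_per_usable_tb(drive_count=1, drive_tb=1, drive_cost=100)

# Twelve drives in a RAID-6 set with one hot spare, plus a
# (hypothetical) £4,000 enclosure with dual controllers and cache.
array = cost_per_usable_tb(drive_count=12, drive_tb=1, drive_cost=100,
                           parity_drives=2, hot_spares=1,
                           enclosure_cost=4000)

print(f"Bare drive: £{bare:.0f} per usable TB")   # £100
print(f"Array:      £{array:.0f} per usable TB")  # several times higher
```

The point isn't the exact numbers; it's that redundancy, spares and the enclosure all eat into usable capacity while adding cost, which is exactly the availability and reliability you're paying for.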

I think that pretty much sums it up: you get what you pay for.  However, the key is making sure you're not paying for things you don't need; only a small percentage of servers need high-performance access.  High availability can be achieved without the expense of high performance, and costs can be reduced by using modular products and space-reducing features such as thin provisioning and archiving.
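Thin provisioning is a good example of that cost reduction.  Servers are typically allocated far more capacity than they actually write, so buying only the written capacity defers spend.  A minimal sketch, with entirely made-up allocation figures:

```python
# Hypothetical figures: a sketch of why thin provisioning reduces cost.
# With thick provisioning you buy every promised GB on day one; with
# thin provisioning you buy only what has actually been written.

allocated_gb = [500, 500, 1000, 2000]   # capacity each server was promised
written_gb   = [120,  80,  300,  450]   # capacity each has actually written

thick_needed = sum(allocated_gb)        # thick: purchase all promised space
thin_needed  = sum(written_gb)          # thin: purchase only written space

print(f"Thick provisioning: {thick_needed} GB purchased up front")
print(f"Thin provisioning:  {thin_needed} GB purchased up front")
print(f"Purchase deferred:  {thick_needed - thin_needed} GB")
```

The deferred capacity can be bought later, when drives are cheaper, or never, if the servers don't grow into their allocations.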

Don’t pay for what you don’t need.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.
  • J

    There’s one key point that you missed…along the way, there’s the vendor testing to make sure that it WORKS. Qualification is a key component when you move up the value chain, because the latest and greatest don’t always just work at release. At least not reliably that is.

  • Pingback: BotchagalupeMarks for February 27th - 16:44 | IT Management and Cloud Blog

  • http://media.seagate.com/center/storage-effect Pete Steege

    On the mark, Chris! There’s an army of Seagate employees working with each of the large storage makers qualifying drives, testing software, analyzing to incredible detail possible failure scenarios. Multiply that investment on the storage makers’ side.

    A fraction of the value of storage is in the physical disk drive.

  • http://rich.whiffen.org Rich Whiffen

    You also can't easily share, subdivide or move that 1TB between servers like you can in an enterprise-class array. I don't find it tough to sell the features and abilities of enterprise arrays to mid-to-large customers right now, but I think that's going to change over time.

    In the past, enterprise arrays were the only sane way to attach large quantities of storage to a server. But that's not exactly true anymore, which is what I think is at the heart of the '1TB drive at Best Buy' question, even if they don't phrase it that way. When you can buy a 16-drive 2U server and replicate the data in software painlessly for failover (like in SQL 2005, and soon Exchange), why would you bother with an enterprise-class array? Lots of reasons, really, but it's not as simple as 'more capabilities' anymore. Don't get me wrong, I'm not advocating servers with lots of local storage instead of arrays, but that's the attack vector I see questions coming from next.

    Perhaps Microsoft, Sun, the Linux camp, or some startup will come out with the game-changing 'internal cloud storage' system that allows the local storage in your data center to form a storage cloud. I'm sure something like this already exists; I haven't taken the time to dig. Everything old is new again: you turn your servers with lots of local storage into an old-school Banyan VINES network. Ah well, just as long as I'm not the guy who has to run from machine to machine replacing all those drives…


  • Pingback: Rich Whiffen » Blog Archive » The next assalt on storage arrays…

  • Chris Evans

    Good point, J (Julie). I rushed off my list rather quickly and perhaps with a little thought I might have remembered that. Part of the benefit of Enterprise and modular storage arrays is the testing that was performed by the vendor to ensure data integrity. You are right that this is a huge benefit and consequently is part of what you’re paying for.
