Cache or Tier – Does it Matter?

Chris Evans – Storage, Tech Field Day

Who would have thought such an innocent comment would create a stream of conversation?  Well, that’s what happened after I called out the Avere folks on the use of the term “tiered file system” during their presentation at Storage Field Day 11.  We even followed the discussion with a podcast, in an attempt to flesh out exactly what the differences might be.  Tiering and caching are similar but subtly different techniques.  Should we care what they are and choose one over the other, or does it not really matter?

Caching

Caching has been around almost as long as computers themselves, and exists because it is impractical to keep both programs and data permanently in system memory.  This is true for a number of reasons:

  • Fast memory (e.g. DRAM) is volatile.  Turn off the power and your data is gone.  One way or another you need external persistent storage to store your data.
  • Fast memory is finite (and expensive).  There’s a limit to how much memory we can deploy in a single server, and historically it has been expensive (certainly compared to external storage).
  • Most of our data is inactive.  It’s impossible to access all of our data all of the time.  Even if we could, we wouldn’t, because data is accessed in structured ways, such as reading rows from a table or loading an index.

Caching places data onto a faster storage medium than the one it is normally stored on, in order to improve application performance.  Crucially, the cache holds a copy of data stored elsewhere, so we don’t need to build in high availability, unless we are caching writes that haven’t yet been committed to external disk.  This means cached data can sit in memory, on fast storage (like SSDs) or, in the case of Avere, on an appliance local to the application.  In sizing a cache we have to look at how big our working set is.  The working set defines the active data and could be 10-20% of the total persistent storage size.  Undersize the cache and all the benefit could be lost, because data will simply be swapped in and out of the cache continually, rather than re-accessed from the cache itself – a behaviour sometimes called cache thrashing, illustrated in the sketch below.
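
To make the mechanics concrete, here is a minimal sketch of a read-through LRU cache in Python.  This is not Avere’s implementation – the class, capacity and block names are all invented for illustration – but it shows both the copy semantics and the thrashing behaviour of an undersized cache.

```python
from collections import OrderedDict

class ReadCache:
    """Minimal read-through LRU cache.  Entries are copies of data that
    lives permanently on a slower backing store, so losing the cache
    loses nothing (cached writes would be a different story)."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store          # the persistent source of truth
        self.entries = OrderedDict()          # key -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)     # fast path: refresh LRU position
            self.hits += 1
            return self.entries[key]
        self.misses += 1                      # slow path: fetch from backing store
        data = self.backing[key]
        self.entries[key] = data              # keep a copy for next time
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used copy
        return data

# An undersized cache thrashes: cycling a 10-block working set through a
# 5-entry cache evicts every block before it is read again - zero hits.
store = {f"block{i}": f"data{i}" for i in range(10)}
cache = ReadCache(capacity=5, backing_store=store)
for _ in range(3):
    for i in range(10):
        cache.read(f"block{i}")
print(cache.hits, cache.misses)               # 0 30
```

Size the same cache at ten entries and every pass after the first is served entirely from the cache – the whole benefit hinges on capacity versus working set.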

Tiering

Tiering was introduced to permanently place data on the most appropriate type of storage, based on a balance of cost, availability and performance.  Test and development environments might justify only tier 2 storage, whereas OLTP databases might justify tier 1 or even tier 0 (arbitrary names, defined however you choose).  In tiering, data is stored only once, on a single tier, and can be moved around depending on the “temperature” of the data.  As data becomes hotter (i.e. more active), moving it to a faster tier becomes easier to justify.

The exact definition of tiers is entirely self-defined.  Traditionally tiers were based around HDD speed: 15K RPM drives were tier 1, 10K RPM drives tier 2 and so on, although another option could be to use 7.2K RPM drives for tier 2.  Tiers can also vary by drive capacity, providing different I/O density ratios.  As flash has become more widely adopted, first tier 0 and then tier 1 has come to be seen as the flash tier.  Who knows how this could change in the future; we’re almost certain to see tiering with multiple types of flash or new technologies.  Tiering technologies have typically been reactive, moving data between tiers over time based on measured historical performance.  This can be a real problem if the active working set is continually moving.  Tiering also started off as a rather static process; over time automated tiering arrived, first at the LUN/volume level, then eventually at the block or sub-LUN level.  The sketch below shows the basic placement decision.
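
As a sketch of that reactive placement decision – with tier names and temperature thresholds that are entirely made up, since tiers are self-defined – the logic amounts to little more than bucketing blocks by measured access frequency:

```python
# Hypothetical tier names and thresholds - tiers are, as noted above,
# entirely self-defined.
TIERS = [
    ("tier0-flash",   1000),   # promote above 1,000 accesses/day
    ("tier1-15krpm",   100),
    ("tier2-7.2krpm",    0),   # everything else lands on slow disk
]

def place(block_temperatures):
    """Reactive tiering: choose one tier per block from measured
    historical access counts (the block's "temperature").  Unlike a
    cache, each block lives on exactly one tier - no copies."""
    placement = {}
    for block, accesses_per_day in block_temperatures.items():
        for tier, threshold in TIERS:
            if accesses_per_day >= threshold:
                placement[block] = tier
                break
    return placement

print(place({"lun0/blk7": 2400, "lun0/blk8": 150, "lun1/blk1": 3}))
# {'lun0/blk7': 'tier0-flash', 'lun0/blk8': 'tier1-15krpm',
#  'lun1/blk1': 'tier2-7.2krpm'}
```

The weakness described above falls straight out of this: placement is only as good as yesterday’s access counts, so a working set that moves daily stays one step ahead of the tiering engine.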

Pros & Cons

Caching is a great technique for gaining overall performance at a relatively small cost.  However, if your data isn’t in the cache, you have to suffer a slow first (read) I/O, with subsequent I/Os completing faster.  To guarantee that every I/O gets a fast response every time, the data has to be placed on a fast tier.  Therein lies the trade-off – the cost of a cheaper/smaller cache versus placing all data on a fast tier, which is typically more expensive.  Neither solution is best on its own – it depends on the data – and in most cases both caching and tiering are used together.
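
The trade-off is easy to put numbers on.  Assuming purely illustrative figures of 100μs for a flash cache hit and 5ms for a disk read behind it, the average latency falls quickly as the hit rate rises – but the unlucky misses still wait the full 5ms, which is the guarantee only a fast tier can make:

```python
def effective_latency(hit_rate, cache_us, backend_us):
    """Average read latency for a cache sitting in front of slower storage."""
    return hit_rate * cache_us + (1 - hit_rate) * backend_us

# Assumed illustrative figures: 100us flash cache hit, 5,000us (5ms) disk read.
for hit_rate in (0.0, 0.5, 0.9, 0.99):
    avg = effective_latency(hit_rate, 100, 5_000)
    print(f"{hit_rate:4.0%} hit rate -> {avg:5,.0f}us average read")
# 0% -> 5,000us; 50% -> 2,550us; 90% -> 590us; 99% -> 149us
```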

The Architect’s View

What about Avere?  Is their product a tiered file system?  Actually it’s more like a cache, because data eventually cascades down to the persistent storage layer.  However, depending on timings, data could persist in the cache almost “forever”, and so the boundaries are blurred.  Which brings us back to the original question – does it matter?  Personally, no, it doesn’t matter, except to understand where my data is should I have a hardware or networking failure.  If the appliance dies, what do I lose?  Perhaps more importantly, when extending a cache over distance, how do I maintain integrity between local and remote access?  Maybe that’s a conversation for another day…

Further Reading

You can catch all the videos at the Tech Field Day SFD11 website, or check out my page of links.

Comments are always welcome; please read our Comments Policy first.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Disclaimer:  I was personally invited to attend Storage Field Day 11, with the event teams covering some of my travel and accommodation costs.  However I was not compensated for my time.  I am not required to blog on any content; blog posts are not edited or reviewed by the presenters or the respective companies prior to publication.  

Copyright (c) 2009-2020 – Chris M Evans, first published on https://www.architecting.it/blog, do not reproduce without permission. Post #43b2.