Object Storage Essential Capabilities #4 – Performance

Chris Evans | Object Storage

This is the fourth in a series of posts covering object storage requirements.  Other posts in this series can be found at Object Storage Capabilities Series.

Object stores are seen as repositories for large volumes of data, much of which can be inactive.  As a result, performance is probably not top of the list of requirements.  However, rather than being a store for data we may never use again, object stores are increasingly being used as active archives, as back-ends for CDN solutions, or as targets for content that may previously have been stored on a file server.  This means that performance is actually an important characteristic of an object storage platform.

Defining Performance

As noted in previous posts, objects are generally read and written in their entirety, rather than in part.  An object store request will return the entire object, unless the access method allows partial access (for example, an HTTP Range request).  As a result, the latency and throughput metrics we see applied to block-storage solutions are defined slightly differently for object stores.

  • Time to First Byte – this describes the time taken to retrieve the first byte of data from an object.  In effect, it is a measure of the time taken to start accessing the object itself, and is in some ways analogous to traditional storage latency.  Typical values are measured in milliseconds.
  • Throughput – this is, as we would expect, a measure of the bandwidth of an object store.  Here we can break the definition down into the performance of a single-stream request and the overall throughput of the system.  This leads to the next metric.
  • Concurrency – perhaps not thought of as a typical performance metric, concurrency is important in object stores as it allows us to see the overall capability of the system.  We can define concurrency simply as the number of concurrent streams an object store can serve at any one time.
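
To make the first two metrics concrete, here is a minimal sketch in Python using the requests library.  The endpoint URL and object name are hypothetical placeholders; the sketch measures time to first byte (approximated at the arrival of the first chunk) and single-stream throughput for one GET request.

```python
# Sketch: measuring time-to-first-byte and single-stream throughput for
# one object GET over HTTP.  The URL below is a hypothetical placeholder.
import time
import requests

OBJECT_URL = "https://objectstore.example.com/bucket/large-object.bin"

start = time.monotonic()
resp = requests.get(OBJECT_URL, stream=True)  # stream=True defers the body download
# Note: many S3-compatible stores also honour partial access via an HTTP
# Range header, e.g. headers={"Range": "bytes=0-1048575"} for the first 1 MiB.
chunks = resp.iter_content(chunk_size=1 << 20)  # read in 1 MiB chunks

first = next(chunks)                # arrival of the first chunk
ttfb = time.monotonic() - start     # approximates time to first byte

total = len(first)
for chunk in chunks:                # drain the rest of the object
    total += len(chunk)
elapsed = time.monotonic() - start

print(f"TTFB: {ttfb * 1000:.1f} ms")
print(f"Throughput: {total / elapsed / 1e6:.1f} MB/s over {total} bytes")
```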

Looking at the definitions, it’s clear that reducing the time taken to access an object is important.  From the first byte onwards, the challenge is streaming the data fast enough.  At this point, the latency of individual I/O operations is less relevant than with traditional storage, because the client requesting the data doesn’t need to process each block before requesting the next.  It simply has to process the entire data set.
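
Concurrency can be estimated in the same spirit.  The sketch below issues multiple GET requests in parallel with a thread pool and reports aggregate throughput; the object URLs, object count and stream count are assumptions for illustration only.

```python
# Sketch: estimating aggregate throughput across N concurrent GET streams.
# The URLs and worker count are illustrative, not a vendor-specific API.
import time
import requests
from concurrent.futures import ThreadPoolExecutor

URLS = [f"https://objectstore.example.com/bucket/object-{i}.bin" for i in range(32)]

def fetch(url: str) -> int:
    """Download one object in full, returning its size in bytes."""
    resp = requests.get(url, stream=True)
    return sum(len(chunk) for chunk in resp.iter_content(chunk_size=1 << 20))

start = time.monotonic()
with ThreadPoolExecutor(max_workers=16) as pool:  # 16 concurrent streams
    total_bytes = sum(pool.map(fetch, URLS))
elapsed = time.monotonic() - start

print(f"{total_bytes / elapsed / 1e6:.1f} MB/s aggregate across 16 streams")
```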

Scaling Performance

Scaling the performance of an object store can be a challenge.  As capacity scales, any node could be handling I/O requests that generate a lot of cross-node traffic.  The more nodes there are, the more traffic could be generated.  This is where the internal design of a solution becomes really important.  We have to look at two components of the architecture here.  First is metadata, which describes the objects held in the object store.  The ability to access metadata quickly is key, because this is the first stage in accessing or storing the object itself.  As a result, vendors keep metadata both distributed and in memory to make lookups extremely quick.  The same scenario applies to writes, where the platform needs to quickly generate and distribute metadata across nodes.
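
As a simplified illustration of distributed metadata lookup, the sketch below uses a basic hash ring, so any node can compute where an object’s metadata record lives without consulting a central index.  The node names and single-point ring are assumptions for illustration; this is not any particular vendor’s implementation, which would add virtual nodes, replication and membership changes.

```python
# Sketch: locating an object's metadata owner with a simple hash ring.
# Illustrates the "compute, don't look up" principle only.
import bisect
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]  # hypothetical cluster

def _hash(key: str) -> int:
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

# Place each node at a point on the ring, sorted by hash value.
ring = sorted((_hash(n), n) for n in NODES)
points = [p for p, _ in ring]

def metadata_owner(object_key: str) -> str:
    """Return the node holding metadata for object_key: the first ring
    point at or after the key's hash, wrapping around the ring."""
    idx = bisect.bisect(points, _hash(object_key)) % len(ring)
    return ring[idx][1]

print(metadata_owner("bucket/photos/img-0001.jpg"))
```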

For actual data, the speed at which objects are located becomes critical.  For both read and write, data protection algorithms are extremely important.  For example, erasure coding techniques need to be quick in their ability to transform data into components for writing across the infrastructure, while reading needs fast reconstitution of that data.
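
To show the shape of those read and write paths, here is a deliberately simplified sketch of a "k data + 1 parity" scheme.  Production systems use proper Reed-Solomon erasure codes (often in schemes such as 10+6 spread across nodes); single XOR parity is used here only to illustrate the transform-on-write and reconstruct-on-read steps.

```python
# Sketch: simplified erasure coding with k data fragments plus one XOR
# parity fragment.  Real platforms use Reed-Solomon codes, not XOR parity.

def encode(data: bytes, k: int = 4) -> list:
    """Split data into k equal-sized fragments plus one XOR parity fragment."""
    size = -(-len(data) // k)  # ceiling division
    frags = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = bytearray(size)
    for frag in frags:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return frags + [bytes(parity)]

def reconstruct(frags: list) -> list:
    """Rebuild a single missing fragment (marked None) by XOR of survivors."""
    missing = frags.index(None)
    size = len(next(f for f in frags if f is not None))
    rebuilt = bytearray(size)
    for f in frags:
        if f is not None:
            for i, b in enumerate(f):
                rebuilt[i] ^= b
    frags[missing] = bytes(rebuilt)
    return frags

fragments = encode(b"object payload " * 1000)  # write path: 4 data + 1 parity
fragments[2] = None                            # simulate a failed node
assert reconstruct(fragments) == encode(b"object payload " * 1000)  # read path
```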

Hardware Considerations

Naturally, most object stores are rated on simple metrics like $/GB.  When storing billions of objects and petabytes of data, cost becomes a consideration.  However, any single archive will have both active and inactive data, so tiering can place data on the most appropriate storage based on performance requirements.  Faster flash storage can be used to cache more frequently accessed data.
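
As a sketch of the caching idea, the class below implements a small least-recently-used read cache in front of a slower backing tier.  The backend_get callable is a hypothetical stand-in for a read from the HDD tier; a real flash tier would of course persist data and apply richer placement policies.

```python
# Sketch: an LRU read cache fronting slower bulk storage, the same idea
# as using flash to hold frequently accessed objects.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity_bytes: int, backend_get):
        self._cache = OrderedDict()     # key -> object bytes, in LRU order
        self._capacity = capacity_bytes
        self._used = 0
        self._backend_get = backend_get  # hypothetical slow-tier read call

    def get(self, key: str) -> bytes:
        if key in self._cache:
            self._cache.move_to_end(key)   # fast path: mark recently used
            return self._cache[key]
        data = self._backend_get(key)      # slow path: read from HDD tier
        self._cache[key] = data
        self._used += len(data)
        while self._used > self._capacity:  # evict least recently used
            _, old = self._cache.popitem(last=False)
            self._used -= len(old)
        return data

# Usage (illustrative): cache = ReadCache(64 * 2**30, hdd_tier_read)
```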

Vendor Implementations

IBM’s Cloud Object Storage is based on technology acquired from Cleversafe, which was bought towards the end of 2015 for around $1.3 billion.  Cleversafe divided data access and data storage functions into two separate hardware components, called Accessers and Slicestors respectively (there is also a management function).  Each Accesser is effectively stateless, performing the task of erasure coding data and dispersing it across, or retrieving it from, the persistent Slicestors.  At the time of the acquisition, Cleversafe offered some SSD-based Slicestors; however, IBM now offers only systems with hard drive storage (although these use SSDs for boot).  Accessers use SSDs for local storage.

DataDirect Networks is well known for its high-performance storage platforms.  Its WOS (Web Object Scaler) platform claims 9ms read and 25ms write latency using WOS High Performance or Archive appliances and a highly distributed SAS back-end configuration.

Scality RING implements performance and scale-out through a three-tier architecture, which divides the platform into an Access Layer, a Protection Layer and a Storage Layer.  As a software solution, RING depends on the hardware offered by partners.  HPE, for example, offers nodes based on Apollo 4500 servers, which can use a mix of HDD and SSD storage.

Cloudian HyperStore scales performance with each node added to a cluster (an approach common across most solutions).  Both the 1500 and 4000 series appliances provide flash SSDs for storing metadata, although general capacity is delivered from HDDs.

The Architect’s View

There is very little information available on object storage performance.  This is unfortunate, as the future is likely to see many applications where active object stores are needed.  With Software-Defined Storage implementations, some of the performance capability is delivered by the underlying hardware (and some by the efficiency of the software), again making it harder to compare vendors.  However, it would be useful to have some generic benchmarking available, so that object storage capabilities at scale can be assessed.

Comments are always welcome; please read our Comments Policy.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2007-2020 – Post #FDDB – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.