Ondat Tops Kubernetes Database Performance Testing

Chris Evans | Cloud-Native, Container-Attached Storage, Data Practice: Data Storage, Databases, Enterprise, Kubernetes, Lab Work, Software-Defined Storage

In January 2021, we published a report in conjunction with Ondat that compared the relative performance of container-native storage solutions.  This work used an industry-standard benchmark tool (fio) to compare Ondat with a range of open-source storage software.  In our latest work, we compare commercial and open-source solutions running popular database platforms to offer a closer approximation to real-world expectations. 

Background

Container-native storage (CNS) describes a set of solutions built to provide data storage services within Kubernetes clusters.  The underlying physical storage could be hard drives, SSDs, or block devices from deployments in the public cloud.  In each case, the benefit of using container-native storage is to add resiliency, efficiency, and data protection features into the cluster for persistent data. 

Naturally, the benefit of Kubernetes (and containers in general) is to run applications with high efficiency and the lowest level of overhead.  These are some of the central tenets of containerisation compared to running applications within virtual servers or even on bare metal hardware. 

It’s clear then that storage performance is a crucial success factor in delivering efficient containerised applications.  Modern NVMe SSDs, for example, can each deliver close to a million IOPS and several gigabytes per second of throughput.  However, there’s a big difference between deploying on a single SSD and building out a scalable and resilient storage infrastructure. 

You can read more about why and how Kubernetes storage performance should be tested in the two blog posts written to accompany the benchmarking work.

The two posts provide the depth and detail needed to understand both our original report and the data presented in the latest analysis. We also have an in-depth evaluation eBook of container-native solutions, which can be purchased online.

Initial Testing

In the report released in January 2021, we looked at four platforms – StorageOS (now Ondat), OpenEBS, Rook/Ceph and Longhorn.  In that review, StorageOS was the only commercial platform under comparison.  The testing used a lab test bed consisting of 10GbE networking, NVMe SSDs and Dell R640 servers.  The tests covered typical generic workloads – random and sequential I/O, reads and writes – each at a range of block sizes.  The work concluded that StorageOS (now Ondat) performed best across all test cases.
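
As an illustration of what those generic fio profiles look like, the short Python sketch below sweeps the random/sequential and read/write combinations at a few block sizes.  The device path, runtime, queue depth and block sizes are our own assumptions for illustration, not the exact job parameters used in the report.

# Illustrative sweep of generic fio workload profiles.
# Device path, runtime, queue depth and block sizes are assumptions,
# not the report's actual test parameters.
import itertools
import subprocess

DEVICE = "/dev/nvme0n1"          # hypothetical raw test device (writes are destructive)
PATTERNS = ["read", "write", "randread", "randwrite"]
BLOCK_SIZES = ["4k", "64k", "1m"]

for rw, bs in itertools.product(PATTERNS, BLOCK_SIZES):
    subprocess.run(
        [
            "fio",
            f"--name={rw}-{bs}",
            f"--filename={DEVICE}",
            f"--rw={rw}",
            f"--bs={bs}",
            "--ioengine=libaio",
            "--iodepth=16",
            "--direct=1",
            "--runtime=300",
            "--time_based",
            "--output-format=json",
        ],
        check=True,
    )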

Updated Testing

From a raw performance standpoint, fio is an excellent tool for gaining initial insights into the relative performance of storage devices.  Of course, storage systems are much more complex and designed to cater for a variety of workload types.  These capabilities are delivered through features such as advanced caching and data I/O profile analysis. 

When we look at the performance of applications, developers want predictable I/O throughput and latency with a high level of determinism, meaning the I/O response should be delivered with as little variation as possible.  In our first tests, we looked at IOPS, bandwidth and latency.  In the latest series of tests, we again look at throughput and latency while examining the outliers in performance to measure predictability. 
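
To make that idea of determinism concrete, a simple approach is to compare the average latency against a high-percentile outlier.  The rough Python sketch below shows the calculation on a synthetic list of per-request latencies; the numbers and the nearest-rank percentile method are illustrative, not taken from the report.

# Minimal sketch: quantify latency determinism by comparing the mean
# against a high-percentile outlier.  The sample latencies are synthetic.
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

latencies_ms = [1.2, 1.3, 1.1, 1.4, 1.2, 9.8, 1.3, 1.2, 14.5, 1.1]

mean = statistics.mean(latencies_ms)
p99 = percentile(latencies_ms, 99)
print(f"mean={mean:.2f} ms  p99={p99:.2f} ms  outlier ratio={p99 / mean:.1f}x")

The lower the ratio between the outlier percentile and the mean, the more predictable the platform’s response times.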

The latest testing compares five container-native storage platforms – Ondat, Portworx, OpenEBS, Rook/Ceph and Longhorn.  Each is tested with PostgreSQL, Redis and MongoDB.  Each database is driven by the benchmark tool most pertinent to it: pgbench for PostgreSQL, memtier for Redis and YCSB for MongoDB. 
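
For readers unfamiliar with these tools, the sketch below shows roughly how pgbench might be driven against an in-cluster PostgreSQL service.  The service name, credentials, scale factor and client counts are placeholders for illustration and are not the configuration used in the report.

# Hedged sketch: driving pgbench against an in-cluster PostgreSQL service.
# Host name, user, database, scale factor and client counts are placeholders.
import subprocess

HOST = "postgres.bench.svc.cluster.local"   # hypothetical service name
DB, USER = "benchdb", "bench"

# One-off data load at a chosen scale factor.
subprocess.run(["pgbench", "-h", HOST, "-U", USER, "-i", "-s", "100", DB], check=True)

# Timed run: 32 clients, 4 worker threads, 10 minutes, per-statement latency report.
subprocess.run(
    ["pgbench", "-h", HOST, "-U", USER, "-c", "32", "-j", "4", "-T", "600", "-r", DB],
    check=True,
)

memtier_benchmark and YCSB follow a similar pattern: load a dataset, then run a timed, multi-client workload and record throughput and latency percentiles.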

Objectivity

In designing the tests, we have aimed to ensure that the comparison and results are as objective and fair as possible.  For example, each configuration runs the database application on the same node as the primary data mirror.  All performance testing is run from the Kubernetes master to minimise the resource impact on the worker nodes. 
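
As an illustration of that placement constraint, the sketch below uses the official kubernetes Python client to pin a database pod to a named worker node with a nodeSelector.  The node name, namespace, image and credentials are assumptions for illustration and not the lab configuration.

# Illustrative placement sketch: pin a database pod to the worker node
# holding the primary data mirror.  Node name, namespace, image and
# password are assumptions, not the lab's actual values.
from kubernetes import client, config

config.load_kube_config()
api = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="postgres-primary", namespace="bench"),
    spec=client.V1PodSpec(
        node_selector={"kubernetes.io/hostname": "worker-1"},
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="postgres",
                image="postgres:14",
                env=[client.V1EnvVar(name="POSTGRES_PASSWORD", value="example")],
            )
        ],
    ),
)

api.create_namespaced_pod(namespace="bench", body=pod)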

Each test is run for sufficient time and with enough I/O to remove the effects of caching and ensure data is written to and read from physical media.  Each test is run multiple times to validate consistent results.  We’ve also used the latest releases from each vendor available at the time of testing. 
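
One simple way to apply that repeat-and-validate approach is to run each benchmark several times and check the run-to-run spread, as in the rough sketch below.  The wrapper script name and the 5% acceptance threshold are assumptions for illustration.

# Rough sketch of repeat-and-validate: run a benchmark several times and
# flag the result set if the run-to-run spread is too wide.
# The script name and 5% threshold are assumptions.
import statistics
import subprocess

RUNS = 5

def run_once() -> float:
    """Run one benchmark pass; assumes the script prints a single ops/sec figure."""
    out = subprocess.run(["./run-benchmark.sh"], capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

results = [run_once() for _ in range(RUNS)]
spread = statistics.stdev(results) / statistics.mean(results)
print(f"runs={results}  relative spread={spread:.1%}")
if spread > 0.05:
    print("Warning: runs vary by more than 5%; repeat the test before accepting the result.")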

Results

We don’t want to totally pre-empt the findings, so if you want the full test results, we recommend registering for and downloading the report, which is available through the Ondat website.  However, we can say that the high-level view shows the commercial solutions performed much better than the open-source offerings.  This outcome mirrors the results we saw in the first set of testing.  The addition of Portworx made for a close-run outcome; however, Ondat achieved better results, especially with respect to outlier determinism. 

The Architect’s View®

Our first set of testing in January 2021 set the baseline for container-native storage performance benchmarking.  That work was always going to be an initial exploration of application behaviour.  At the time, we indicated that a better measurement of performance would be to use real-world application platforms, of which structured databases were likely to be the first next step.  This work is now complete and continues the testing timeline.  As we’ve already said, please go to the Ondat website and download the report to learn more.

Where could we go next?  The testing so far has focused on performance in controlled environments.  We know that all storage systems experience failures, so another scenario is to examine how well each platform maintains consistent performance while handling media and node failures.  Another report (already completed and to be published soon) will look at storage choices in AWS with respect to running the Ondat solution.  We intend to extend this work to other cloud platforms in the future. 

As the focus on sustainability and efficiency grows in 2023 and beyond, picking the right container-native storage solution will be an important aspect of deploying efficient container-based applications at scale.  The testing we’ve performed and the results in the report highlight that performance and efficiency matter significantly when using Kubernetes and must be taken into account as part of a TCO design for Kubernetes applications. 

Copyright (c) 2007-2022 Brookend Limited. No reproduction without permission in part or whole. Post #59e9.