The Renaissance of Data ONTAP

Chris Evans

The genesis of NetApp, initially known as Network Appliance, is ONTAP, or Data ONTAP.  NetApp developed the ONTAP “storage operating system” to run on its original filer platforms.  It’s amazing to think that the underlying technology from 1992 still drives the majority of the company’s storage platforms today.  How has ONTAP managed to survive so long, and what can we expect in the future?

Background

This post isn’t meant to be a history lesson on the development of ONTAP.  However, some features of the platform have been serendipitous in keeping ONTAP relevant over the years.  WAFL, or Write Anywhere File Layout, was a core development that laid data out across disks without ever updating blocks in place.  Instead, as files and blocks are updated, the new data is written to free space.

Figure 1 – There’s lots of similarity between WAFL and waffles…

WAFL enables efficient snapshots. All of the content that relates to a specific copy of a volume can be mapped out through a series of pointers to the individual blocks of data.  When a block on disk is no longer associated with a snapshot or volume, it’s simply released and reclaimed.
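
To make the write-anywhere idea concrete, here’s a minimal Python sketch (my own illustration, not NetApp’s code).  Updates always land in fresh blocks, and a snapshot is nothing more than a frozen copy of the volume’s pointer map:

```python
# A toy write-anywhere volume: updates never overwrite a block in place,
# and a snapshot is just a frozen copy of the pointer map (no data copied).

class ToyWafl:
    def __init__(self):
        self.blocks = {}       # physical block number -> data
        self.next_pbn = 0      # next free physical block number
        self.active = {}       # logical block -> physical block (live volume)
        self.snapshots = {}    # snapshot name -> frozen pointer map

    def write(self, lbn, data):
        """Write logical block `lbn` to fresh free space, never in place."""
        pbn, self.next_pbn = self.next_pbn, self.next_pbn + 1
        self.blocks[pbn] = data
        self.active[lbn] = pbn          # old physical block is left intact

    def snapshot(self, name):
        """Copy pointers only, pinning the blocks they reference."""
        self.snapshots[name] = dict(self.active)

vol = ToyWafl()
vol.write(0, "v1")
vol.snapshot("snap1")                    # snap1 pins the block holding "v1"
vol.write(0, "v2")                       # the rewrite goes to a new block
print(vol.blocks[vol.snapshots["snap1"][0]])   # -> v1 (still readable via the snapshot)
print(vol.blocks[vol.active[0]])               # -> v2 (live volume sees the new data)
```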

Many of the features of ONTAP rely on WAFL.  However, there is a disadvantage to always writing to free space: at some point, garbage collection has to reclaim released blocks and make them available for re-use.  In the early days of ONTAP, running at a high percentage of disk utilisation could cause slowdowns as garbage collection struggled to keep up with the write I/O rate.  This meant ONTAP wasn’t always the fastest solution for block storage.
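
Continuing the toy model, here’s a sketch of that reclamation step: any physical block no longer referenced by the live volume or by any snapshot goes back to the free pool.  Again, this is an illustrative simplification, not the actual WAFL garbage collector:

```python
# A toy reclamation pass: sweep the block store and free anything not
# referenced by the live pointer map or by any snapshot's map.

def reclaim(blocks, pointer_maps):
    """Free every physical block unreachable from all pointer maps."""
    live = set()
    for pmap in pointer_maps:
        live.update(pmap.values())
    for pbn in list(blocks):
        if pbn not in live:
            del blocks[pbn]    # block returns to the free pool
    return blocks

blocks = {0: "v1", 1: "v2", 2: "stale"}
active = {0: 1}                # live volume points at block 1
snap1 = {0: 0}                 # a snapshot still pins block 0
print(reclaim(blocks, [active, snap1]))   # block 2 is reclaimed -> {0: 'v1', 1: 'v2'}
```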

Flash

In the flash world, writing 4K blocks to disk aligns nicely with NAND flash technology.  This is perhaps one of the most interesting pieces of serendipity in ONTAP’s development.  Nobody could have predicted the flash revolution and known in advance that 4K would be an ideal unit of write I/O.  That said, WAFL is well placed to take advantage of the alignment.  AFF (All-Flash FAS) systems still achieve high levels of performance and feature prominently in industry-standard benchmarks.
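
A quick worked example of the alignment point: an aligned 4KiB write maps onto exactly one 4KiB flash page, while the same write shifted by even 512 bytes spans two.  The page size here is an assumption for illustration; real NAND geometries vary:

```python
# Aligned vs. misaligned writes against 4 KiB flash pages: an aligned 4 KiB
# write programs exactly one page; shifting it by 512 bytes spans two.
# PAGE is an illustrative assumption; real NAND geometries vary.

PAGE = 4096

def pages_touched(offset, length=PAGE):
    first = offset // PAGE
    last = (offset + length - 1) // PAGE
    return last - first + 1

print(pages_touched(8192))         # aligned 4 KiB write    -> 1 page
print(pages_touched(8192 + 512))   # misaligned 4 KiB write -> 2 pages
```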

Renaissance Period

Of course, things haven’t always been smooth sailing for ONTAP.  The merging of Spinnaker technology to create Clustered ONTAP represented a significant diversion, and an issue for customers, as some features were removed before being added back into the core product line.  Taking that into consideration, though, ONTAP is perhaps in a renaissance period, helped by the applicability of the technology to public cloud and edge use cases.

Figure 2 – Vault Edge

Edge

The reason this opinion holds merit lies in how ONTAP is being used today.  At NetApp Insight in December 2018, I spotted an interesting use of the technology in the form of ONTAP Select.  This is a purely software-defined (SDS) version of ONTAP that can run on small hardware footprints.

Remember that ONTAP has always been software-defined.  One of the earliest features of the platform was the ONTAP Simulator, which provided 100% compatibility with production appliance deployments and could be used for testing and education.

ONTAP Select runs on commodity x86 hardware under multiple hypervisors.  Figure 2 shows an edge use case suited to remote locations with no data centre facilities: a ruggedised hardware appliance from Vector Data called Vault Edge.  The platform runs either KVM or vSphere ESXi, onto which ONTAP Select can be installed.

Cloud

In the public cloud, ONTAP can be deployed in a virtual machine as Cloud Volumes (AWS), and natively in Azure and GCP (see previous posts on this).  For the native deployments, NetApp and the cloud provider deliver the maintenance and support, charging by the amount of capacity used.  In AWS, multiple licence models apply, including BYOL (bring your own licence).

The internal development model for ONTAP is changing to adapt to the requirements of the public cloud.  This means being able to bring new features to the cloud platform without having to go through huge maintenance releases.  I hope to be able to talk in more detail about this in a future podcast.

Why This Matters

OK, this sounds like an ONTAP love-fest, but the aim of this post is to highlight a number of things:

  • ONTAP was always software-defined and in the long run, that’s added to the longevity of the platform. Eventually, SDS will be the standard model for all storage solutions, even if they are delivered as appliances.
  • The demands of the cloud are driving new modes (think agile) of development that benefit all customers because for storage to work in the cloud, it has to be seamless. Downtime for upgrades and maintenance isn’t acceptable.
  • A single, consistent storage platform and interface across many use cases helps to simplify operations and allows data to efficiently move between edge, core and cloud locations.

ONTAP becomes the underpinning of a data framework that, if you’ve bought into NetApp’s technologies, allows the same operational processes to be used wherever data is created and consumed.  This message isn’t lost on the industry; it’s why the secondary storage vendors have been so successful to date: they offer a similar paradigm of consistency across edge, core and cloud.

The Architect’s View®

Looking back 10-15 years, it was definitely the case that NetApp offered up ONTAP as the solution for any storage problem, even when it wasn’t the right tool for the job.  In fact, the company led with ONTAP as the main product on its home page.  Contrast that with today, where the discussion at NetApp is squarely about data.

ONTAP is an enabling technology, but it doesn’t have to lead the discussion.  Even so, the storage operating system has managed to continue evolving over time.  This is because, even in 1992, Data ONTAP was a software solution that just happened to run on (mostly) generic hardware.  Today we prize the ability to eliminate proprietary hardware from storage solutions and to add value through software.

Other than the product developers themselves, I guess few of us know how much or how little code remains from that original platform developed by Dave Hitz & Co.  However, it doesn’t really matter because the future is about data and storage products are just an enabler to that goal.

Copyright (c) 2007-2024 – Post #9886 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission.