The Three Facets of Backup

Chris Evans | Cloud Storage, Data Management, Data Protection

[Image: real tape drive]

All technology evolves over time and backup is no different.  I started protecting data with tapes – the kind stored on reels, as shown in this lovely picture I took at the Computer History Museum recently.  At Cloud Field Day 3 last week we saw three different facets of how backup is delivered.  Each tackles the problem of data protection in its own way.

Heritage – Veritas

It would be easy to think of Veritas as a legacy backup vendor, but that paints too simple a picture.  As I mentioned in my preview post, Veritas has a long heritage in data management and backup software, with products such as Netbackup and Backup Exec.  Most recently, Veritas has moved to the appliance model and released a new solution called CloudPoint that is architected for the public cloud.

From a positive perspective, tools like Netbackup have a wealth of support and enterprise-level scalability.  Backup Exec targeted the small to mid-sized enterprise, but offered features such as bare metal restore, back when that was a thing.  While we’ve – in general – moved towards server virtualisation and different forms of backup, there is still a lot of legacy out there.  This means there’s still a need for platforms like Netbackup that can cover the gamut of older operating systems and applications.

CloudPoint

CloudPoint is a new backup solution from Veritas aimed at cloud-native workloads.  The solution works by taking snapshots, either from on-premises storage arrays or of instances running in the public cloud.  Cloud providers archive snapshots from primary storage to secondary storage such as object stores, which means a snapshot-based backup is better protected than snapshots on traditional storage, where the snapshot typically lives on the same system as the primary data.
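
To make the mechanism concrete, here is a minimal sketch of the kind of cloud-native snapshot call a tool like this orchestrates.  This is not CloudPoint code; it uses AWS via boto3 as an example, and the volume ID and tags are hypothetical.

```python
# Minimal sketch of a cloud-native snapshot, the primitive a tool like
# CloudPoint orchestrates. Not CloudPoint code; the volume ID, region
# and tags below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Create a point-in-time snapshot of an EBS volume. AWS stores the
# snapshot blocks in its own S3-backed repository, separate from the
# primary volume - the "secondary storage" property discussed above.
response = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",                 # hypothetical volume
    Description="Crash-consistent backup of app-server-01",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "retention", "Value": "30d"}],
    }],
)
print("Snapshot started:", response["SnapshotId"], response["State"])
```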

The CloudPoint GUI can be run anywhere and, in general, multiple cloud workloads can be administered from a single location.  In practice, CloudPoint does need to be deployed in each cloud where additional functionality, such as application-consistent snapshots, will be used.

The demo at CFD3 showed an interesting product, but one that seems to be lacking certain core features.  There’s still work to be done on CloudPoint, even though it is a 2.0 release.  I’ve looked at the installation process in Microsoft Azure and it’s not really that straightforward.  The marketplace entry isn’t a self-contained VM image, but rather instructions on building an Ubuntu VM, downloading the software from a Veritas account, installing a bulky 1.5GB Docker image and then running it to extract the containers necessary to bring up the software.

This isn’t what I’d call cloud native – scripting the install, for instance, would be tricky.  I certainly couldn’t build backup on demand.  Bear in mind also that this is now the customer’s VM to maintain, network harden and patch.
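
As a rough illustration of why unattended automation is awkward, a script would have to reproduce each of those manual steps itself.  The sketch below is purely illustrative: the download URL, archive name and image tag are hypothetical placeholders, not the actual Veritas artefacts or options.

```python
# Rough sketch only: what an unattended CloudPoint install would have to
# automate. URL, archive name and image tag are hypothetical placeholders.
import subprocess

# 1. The installer is not baked into the marketplace image, so the script
#    must first fetch it from a Veritas account - an authenticated step
#    that is awkward to automate cleanly.
subprocess.run(
    ["curl", "-fLo", "cloudpoint.tar.gz",
     "https://example.veritas.com/downloads/cloudpoint.tar.gz"],  # placeholder URL
    check=True,
)

# 2. Load the ~1.5GB Docker image onto the freshly built Ubuntu VM.
subprocess.run(["docker", "load", "-i", "cloudpoint.tar.gz"], check=True)

# 3. Run the loaded image, which in turn unpacks and starts the containers
#    that make up the product.
subprocess.run(
    ["docker", "run", "-d", "--name", "cloudpoint-installer",
     "veritas/cloudpoint:2.0"],   # hypothetical image tag
    check=True,
)
```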

Having said all that, I managed to get CloudPoint installed relatively quickly and it works.  The software can be driven from Netbackup, which is good for heritage organisations, but it isn’t yet clear how data could be made available cross-platform, e.g. backed up by Netbackup and restored by CloudPoint.

Rubrik

Rubrik has been around since 2014 and in a few short years has managed to release four versions of the Rubrik Data Management Platform.  The initial focus for the company was on delivering a backup appliance that worked with virtualised environments.  This allowed very quick implementation with platforms like VMware vSphere – provide a set of credentials, point at the vCenter server and you’re done.  Rubrik manages the appliance, so software upgrades and patches are the responsibility of the vendor.

Under the covers, Rubrik is much more complex than a simple hardware device.  The architecture implements a scale-out file system for storing data that allows multiple snapshots of virtual machines to be stored and tracked with no performance penalty.  Data protection definitions are much simpler than in legacy products.  Administrators simply set the service requirements of backup as SLAs, rather than spending time and effort scheduling backups to optimise network throughput.  This level of simplicity is something we’re seeing in other products like CloudPoint, because the old restrictions on network capability no longer exist.
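
To show the difference in approach, an SLA-style policy reads as a declaration of outcomes rather than a schedule.  The following is a minimal sketch of the idea, not Rubrik’s actual API; the field names are hypothetical.

```python
# Minimal sketch of declarative, SLA-style protection versus job
# scheduling. Field names are hypothetical, not Rubrik's API.
from dataclasses import dataclass


@dataclass
class BackupSLA:
    name: str
    snapshot_every_hours: int          # how often a point-in-time copy is taken
    keep_days: int                     # local retention before expiry
    archive_to_cloud_after_days: int   # when copies move to an object store


@dataclass
class ProtectedObject:
    name: str
    sla: BackupSLA


# The administrator states the required service level ...
gold = BackupSLA("gold", snapshot_every_hours=4, keep_days=30,
                 archive_to_cloud_after_days=7)

# ... and simply assigns it; the platform decides when and where each
# backup actually runs, rather than the admin building schedules.
vms = [ProtectedObject("sql-prod-01", gold),
       ProtectedObject("web-frontend", gold)]

for vm in vms:
    print(f"{vm.name}: snapshot every {vm.sla.snapshot_every_hours}h, "
          f"retain {vm.sla.keep_days}d, "
          f"archive after {vm.sla.archive_to_cloud_after_days}d")
```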

Cloud & Polaris

Since its initial release, the Rubrik platform has been extended both to protect cloud workloads and to use the public cloud as a target for data protection.  IT organisations can also extend backup to branch offices with a virtual appliance, allowing data to be protected at the edge and centralised into a core location.  This is all done with a single consistent view of metadata.

The metadata component is where the long-term value of Rubrik lies.  Whilst data is being protected against loss, the metadata collected across all systems and platforms offers a wealth of other uses, including eDiscovery, data governance, analytics and archival.  Polaris is a new offering from Rubrik that brings together the data spread across multiple locations into a single SaaS-based tool.  I’ll cover more on this in a separate post.

We shouldn’t forget that Rubrik has also started to expand its base of supported platforms, including NoSQL and eventually consistent databases, through the acquisition of Datos IO.

Druva

We started this discussion with on-premises heritage technology, then moved to new technology and the cloud.  What happens if you dispense with hardware altogether and run data protection entirely as a cloud service?  This is exactly the approach Druva has taken.  The company has evolved from an endpoint backup service to one that covers the enterprise and, now, cloud-native workloads.

As a solution that is 100% cloud-based, Druva can take advantage of the economies of scale offered by the public cloud.  Object stores provide scalable capacity for data, which can be archived over time to Glacier.  Configuration is stored in native databases (DynamoDB, in the case of AWS).  Bandwidth to process backups is spun up on demand by starting EC2 instances as they are required.  Druva also uses other services, such as MySQL (RDS) and platform-specific authentication services.  Each of these services is built dynamically across multiple public cloud regions.
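
To illustrate what “spun up on demand” can look like in practice, here is a minimal sketch of the pattern – my own illustration, not Druva’s implementation.  The queue name, AMI ID and instance type are assumptions.

```python
# Illustration of the on-demand scaling pattern described above, not
# Druva's implementation. Queue URL, AMI ID and instance type are
# hypothetical placeholders.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/backup-jobs"  # placeholder

# How much backup work is currently waiting?
attrs = sqs.get_queue_attributes(
    QueueUrl=QUEUE_URL,
    AttributeNames=["ApproximateNumberOfMessages"],
)
pending = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

# Launch just enough worker instances to drain the queue; when idle they
# can be terminated again, so the customer pays only for the processing
# capacity actually consumed.
workers_needed = min(10, max(1, pending // 100)) if pending else 0
if workers_needed:
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical worker image
        InstanceType="c5.large",
        MinCount=workers_needed,
        MaxCount=workers_needed,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "backup-worker"}],
        }],
    )
```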

As a backup architecture, the Druva platform solves many of the problems enterprises typically faced.  The creeping increase in primary storage capacity would eventually drive demand that breached SLAs on the backup infrastructure.  Now, scaling is automatic.  Resources aren’t purchased ahead of need and can even be spun down during the day when not in use, so the entire infrastructure is scaled as efficiently as possible.  Users of the service are charged on consumption, aligning costs more closely with the use of the public cloud itself.

Three Facets

The challenge for Veritas is going to be how it bridges the gap between legacy and new applications.  Customers have a wealth of backup data that can’t be discarded; backups are retained for years, in many cases acting as a de facto archive.  As applications become more mobile, backup needs to adapt to meet these needs.  At this point there seems to be no ability to move backups between Netbackup, Backup Exec and CloudPoint; each is effectively a separate product and repository.  Effectively managing data across these boundaries will be key.

Rubrik has gone a step further and centralised the metadata associated with data protection.  This makes search and discovery much simpler.  The physical data needed to perform restores is distributed across the enterprise and public cloud, providing the capability to quickly bring applications back to life through features like instant restore.  As a data management platform, it will be interesting to see where the company heads next.  To date, everything Rubrik operates on is point-in-time data from applications running on primary storage, so the data management aspects are based on secondary copies.  What about moving to manage and orchestrate production workloads?

Finally, we see Druva essentially dispensing with data protection hardware altogether.  As a result, the business’s data is in one place, from both a metadata and a physical content perspective.  This has positive and negative consequences.  On the plus side, the data can be deduplicated and optimised, then delivered wherever it is needed from a single central location.  On the negative side, funnelling terabytes of backup data from one or two very large data centres probably isn’t the best use case for Druva.  Instead, the solution is best suited to organisations with many smaller operations that can take advantage of the public cloud’s ubiquitous nature.

The Architect’s View

How do we see this playing out?  Veritas is in a stage of transformation, while Rubrik and Druva are in expansion mode.  Much of what we’ve discussed covers the data protection aspects of these companies, whereas long-term data retention and management will carry the greatest value.  That means extending support for unstructured data, adding content analysis and making much of this data more portable.  Incidentally, some of these vendors are already doing, or working on, these features.  Perhaps the future belongs to the vendor that can provide the most value from a company’s data, offsetting the cost of management and enabling reuse of content.  I guess we will have to wait and see.

Comments are always welcome; please read our Comments Policy.  If you have any related links of interest, please feel free to add them as a comment for consideration.  

Copyright (c) 2007-2019 – Post #D9E8 – Brookend Ltd, first published on https://www.architecting.it/blog, do not reproduce without permission. Photo credit iStock.