Velero 1.3: Improved CRD Backups/Restores, Multi-Arch Docker Images, and More!

Steve Kriss
March 02, 2020

Velero’s voyage continues with the release of version 1.3, which includes improvements to CRD backups and restores, multi-arch Docker images including support for arm/arm64 and ppc64le, and many other usability and stability enhancements. This release includes significant contributions by community members, and we’re thrilled to be able to partner with you all in continuing to improve Velero.

Let’s take a deeper look at some of this release’s highlights.

Custom Resource Definition Backup and Restore Improvements

This release includes a number of related bug fixes and improvements to how Velero backs up and restores custom resource definitions (CRDs) and instances of those CRDs.

We found and fixed three issues around restoring CRDs that were originally created via the v1beta1 CRD API:

  • The first issue affected CRDs that had the PreserveUnknownFields field set to true. These CRDs could not be restored into 1.16+ Kubernetes clusters, because the v1 CRD API does not allow this field to be set to true. We added code to the restore process to check for this scenario, to set the PreserveUnknownFields field to false, and to instead set x-kubernetes-preserve-unknown-fields to true in the OpenAPIv3 structural schema, per Kubernetes guidance (see the sketch below). For more information on this, see the Kubernetes documentation.
  • The second issue affected CRDs without structural schemas. These CRDs need to be backed up and restored through the v1beta1 API, since all CRDs created through the v1 API must have structural schemas. We added code to detect these CRDs and always back them up and restore them through the v1beta1 API.
  • Finally, related to the previous issue, we found that our restore code was unable to handle backups with multiple API versions for a given resource type, and we’ve remediated this as well.
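To make the first fix concrete, here is a minimal Go sketch (an illustration of the approach, not Velero’s actual code) of how a v1beta1 CRD with PreserveUnknownFields set to true can be rewritten into a form the v1 CRD API accepts, using the types from the k8s.io/apiextensions-apiserver module:

```go
package main

import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
)

// fixPreserveUnknownFields clears spec.preserveUnknownFields and expresses the
// same intent via x-kubernetes-preserve-unknown-fields in the OpenAPIv3
// schema, which is the only form the v1 CRD API allows.
func fixPreserveUnknownFields(crd *apiextv1beta1.CustomResourceDefinition) {
	if crd.Spec.PreserveUnknownFields == nil || !*crd.Spec.PreserveUnknownFields {
		return // nothing to do
	}
	preserve := false
	crd.Spec.PreserveUnknownFields = &preserve

	// Make sure a schema object exists before setting the extension field.
	if crd.Spec.Validation == nil {
		crd.Spec.Validation = &apiextv1beta1.CustomResourceValidation{}
	}
	if crd.Spec.Validation.OpenAPIV3Schema == nil {
		crd.Spec.Validation.OpenAPIV3Schema = &apiextv1beta1.JSONSchemaProps{}
	}
	xPreserve := true
	crd.Spec.Validation.OpenAPIV3Schema.XPreserveUnknownFields = &xPreserve
}
```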

We also improved the CRD restore process to enable users to properly restore CRDs and instances of those CRDs in a single restore operation. Previously, users found that they needed to run two separate restores: one to restore the CRD(s), and another to restore instances of the CRD(s). This was due to two deficiencies in the Velero code. First, Velero did not wait for a CRD to be fully accepted by the Kubernetes API server and ready for serving before moving on; and second, Velero did not refresh its cached list of available APIs in the target cluster after restoring CRDs, so it was not aware that it could restore instances of those CRDs.

We fixed both of these issues by (1) adding code to wait for restored CRDs to be “ready” before moving on, and (2) refreshing the cached list of APIs after restoring CRDs, so that any instances of newly restored CRDs can subsequently be restored. Both steps are sketched below.
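For the curious, here is a hedged Go sketch of what those two steps can look like using client-go and the apiextensions clientset (pre-1.18 signatures, without contexts, as current at the time of this release); it illustrates the general technique rather than Velero’s actual implementation:

```go
package main

import (
	"fmt"
	"time"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/discovery"
)

// waitForCRDReady polls until the API server reports the CRD as Established,
// i.e. it is ready to serve instances of the new resource type.
func waitForCRDReady(client apiextclient.Interface, name string) error {
	return wait.PollImmediate(time.Second, time.Minute, func() (bool, error) {
		crd, err := client.ApiextensionsV1beta1().CustomResourceDefinitions().Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range crd.Status.Conditions {
			switch cond.Type {
			case apiextv1beta1.Established:
				if cond.Status == apiextv1beta1.ConditionTrue {
					return true, nil
				}
			case apiextv1beta1.NamesAccepted:
				if cond.Status == apiextv1beta1.ConditionFalse {
					return false, fmt.Errorf("CRD %s: names not accepted: %s", name, cond.Message)
				}
			}
		}
		return false, nil // not established yet; keep polling
	})
}

// refreshAPIs drops the cached discovery data so the next lookup of available
// APIs sees any resource types registered by the newly restored CRDs.
func refreshAPIs(dc discovery.CachedDiscoveryInterface) {
	dc.Invalidate()
}
```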

With all of these fixes and improvements in place, we hope that the CRD backup and restore experience is now seamless across all supported versions of Kubernetes.

Multi-Arch Docker Images

Thanks to community members @Prajyot-Parab and @shaneutt, Velero now provides multi-arch container images by using Docker manifest lists. We are currently publishing images for linux/amd64, linux/arm64, linux/arm, and linux/ppc64le in our Docker repository.

Users don’t need to change anything other than updating their version tag: the v1.3 image is velero/velero:v1.3.0, and Docker will automatically pull the proper architecture for the host.

For more information on manifest lists, see Docker’s documentation.

Bug Fixes, Usability Enhancements, and More!

We fixed a large number of bugs and made some smaller usability improvements in this release. Here are a few highlights:

  • Support private registries with custom ports for the restic restore helper image (PR #1999, @cognoz)
  • Use AWS profile from BackupStorageLocation when invoking restic (PR #2096, @dinesh)
  • Allow restores from schedules in other clusters (PR #2218, @cpanato)
  • Fix memory leak & race condition in restore code (PR #2201, @skriss)

For a full list of all changes in this release, see the changelog.

An Update on CSI Snapshot Support

Previously, we had planned to release version 1.3 around the end of March, with CSI snapshot support as the headline feature. However, because we had already merged a large number of fixes and improvements since 1.2 and wanted to make them available to users, we decided to release 1.3 early, without CSI snapshot support.

Our priorities have not changed, and we’re continuing to work hard on adding CSI snapshot support to Velero 1.4, including helping upstream CSI providers migrate to the v1beta1 CSI snapshot API where possible. We still anticipate releasing 1.4 with beta support for CSI snapshots in the first half of 2020.

Join the Movement – Contribute!

Velero is better because of our contributors and maintainers. It is because of you that we can bring great software to the community. Please join us during our online community meetings every Tuesday and catch up with past meetings on YouTube on the Velero Community Meetings playlist.

You can always find the latest project information at velero.io. Look for issues on GitHub marked Good first issue or Help wanted if you want to roll up your sleeves and write some code with us.

You can chat with us on Kubernetes Slack in the #velero channel and follow us on Twitter at @projectvelero.

Getting Started

To help you get started, see the documentation.