
Customize Velero Install

Plugins

During install, Velero requires that at least one plugin is added (with the --plugins flag). Please see the documentation under Plugins.
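
For example, to install with the AWS object storage plugin (the plugin image and tag below are illustrative; use the plugin and version that match your provider and your Velero release):

velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.2.0 \
  --bucket <YOUR_BUCKET> \
  --secret-file ./credentials-velero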

Install in any namespace

Velero is installed in the velero namespace by default. However, you can install Velero in any namespace. See run in custom namespace for details.
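
A minimal sketch (remember to point later CLI commands at the same namespace, either with -n <NAMESPACE> or via velero client config set namespace=<NAMESPACE>):

velero install --namespace <NAMESPACE> [other install flags]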

Use non-file-based identity mechanisms

By default, velero install expects a credentials file for your velero IAM account to be provided via the --secret-file flag.

If you are using an alternate identity mechanism, such as kube2iam/kiam on AWS, Workload Identity on GKE, etc., that does not require a credentials file, you can specify the --no-secret flag instead of --secret-file.
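
For example, when the Velero pod obtains credentials from kube2iam via a pod annotation (the annotation and role value below are illustrative; consult your identity mechanism's documentation for what it requires):

velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.2.0 \
  --bucket <YOUR_BUCKET> \
  --no-secret \
  --pod-annotations iam.amazonaws.com/role=<ROLE_ARN>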

Enable restic integration

By default, velero install does not install Velero’s restic integration. To enable it, specify the --use-restic flag.

If you’ve already run velero install without the --use-restic flag, you can run the same command again, including the --use-restic flag, to add the restic integration to your existing install.
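
For example, assuming the rest of your original install flags stay the same:

velero install --use-restic [other original install flags]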

Enable features

New features in Velero will be released as beta features behind feature flags which are not enabled by default. A full listing of Velero feature flags can be found here.

Enable server side features

Features on the Velero server can be enabled using the --features flag to the velero install command. This flag takes as its value a comma-separated list of feature flags to enable. For example, CSI snapshotting of PVCs can be enabled using the EnableCSI feature flag in the velero install command as shown below:

velero install --features=EnableCSI

Feature flags passed to velero install will be passed to the Velero deployment and also to the restic daemon set, if the --use-restic flag is used.

Similarly, features may be disabled by removing the corresponding feature flags from the --features flag.

Enabling and disabling feature flags will require modifying the Velero deployment and also the restic daemonset. This may be done from the CLI by uninstalling and re-installing Velero, or by editing the deploy/velero and daemonset/restic resources in-cluster.

$ kubectl -n velero edit deploy/velero
$ kubectl -n velero edit daemonset/restic

Enable client side features

For some features it may be necessary to use the --features flag with the Velero client. This may be done by passing --features on every command run using the Velero CLI, or by setting the features in the Velero client config file using the velero client config set command as shown below:

velero client config set features=EnableCSI

This stores the config in a file at $HOME/.config/velero/config.json.

All client-side feature flags may be disabled using the command below:

velero client config set features=
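
To confirm what is currently stored, you can read the value back (assuming your CLI version includes the velero client config get subcommand):

velero client config get features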

Customize resource requests and limits

At installation, Velero sets default resource requests and limits for the Velero pod and the restic pod, if you are using the restic integration. In Velero versions before 1.4.2, restic pod defaults were not set at install.

Default resource requests and limits

Setting          Velero pod defaults   restic pod defaults
CPU request      500m                  500m
Memory request   128Mi                 512Mi
CPU limit        1000m (1 CPU)         1000m (1 CPU)
Memory limit     256Mi                 1024Mi

Install with custom resource requests and limits

You can customize these resource requests and limits when you first install using the velero install CLI command.

velero install \
  --velero-pod-cpu-request <CPU_REQUEST> \
  --velero-pod-mem-request <MEMORY_REQUEST> \
  --velero-pod-cpu-limit <CPU_LIMIT> \
  --velero-pod-mem-limit <MEMORY_LIMIT> \
  [--use-restic] \
  [--default-volumes-to-restic] \
  [--restic-pod-cpu-request <CPU_REQUEST>] \
  [--restic-pod-mem-request <MEMORY_REQUEST>] \
  [--restic-pod-cpu-limit <CPU_LIMIT>] \
  [--restic-pod-mem-limit <MEMORY_LIMIT>]

Update resource requests and limits after install

After installation you can adjust the resource requests and limits in the Velero Deployment spec or restic DaemonSet spec, if you are using the restic integration.

Velero pod

Update the spec.template.spec.containers.resources.limits and spec.template.spec.containers.resources.requests values in the Velero deployment.

kubectl patch deployment velero -n velero --patch \
'{"spec":{"template":{"spec":{"containers":[{"name": "velero", "resources": {"limits":{"cpu": "1", "memory": "256Mi"}, "requests": {"cpu": "1", "memory": "128Mi"}}}]}}}}'

restic pod

Update the spec.template.spec.containers.resources.limits and spec.template.spec.containers.resources.requests values in the restic DaemonSet spec.

kubectl patch daemonset restic -n velero --patch \
'{"spec":{"template":{"spec":{"containers":[{"name": "restic", "resources": {"limits":{"cpu": "1", "memory": "1024Mi"}, "requests": {"cpu": "1", "memory": "512Mi"}}}]}}}}'

Additionally, you may want to update the default Velero restic pod operation timeout to allow larger backups more time to complete. You can adjust this timeout by adding the - --restic-timeout argument to the Velero Deployment spec. The default is 60 minutes in Velero versions before 1.4.2, and 240 minutes in Velero 1.4.2 and later.

NOTE: Changes made to this timeout value will revert to the default value if you re-run the Velero install command.

  1. Open the Velero Deployment spec.

    kubectl edit deploy velero -n velero
    
  2. Add - --restic-timeout to spec.template.spec.containers.args.

    spec:
      template:
        spec:
          containers:
          - args:
            - --restic-timeout=240m
    

Configure more than one storage location for backups or volume snapshots

Velero supports any number of backup storage locations and volume snapshot locations. For more details, see about locations.

However, velero install only supports configuring at most one backup storage location and one volume snapshot location.

To configure additional locations after running velero install, use the velero backup-location create and/or velero snapshot-location create commands along with provider-specific configuration. Use the --help flag on each of these commands for more details.
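
For example, a second AWS backup storage location might look like the following (the location name, bucket, and region are placeholders):

velero backup-location create secondary \
  --provider aws \
  --bucket <SECOND_BUCKET> \
  --config region=<REGION>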

Do not configure a backup storage location during install

If you need to install Velero without a default backup storage location (without specifying --bucket or --provider), the --no-default-backup-location flag is required for confirmation.
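
A minimal sketch of such an install, assuming you also skip credentials and volume snapshot configuration:

velero install \
  --plugins velero/velero-plugin-for-aws:v1.2.0 \
  --no-default-backup-location \
  --no-secret \
  --use-volume-snapshots=false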

Install an additional volume snapshot provider

Velero supports using different providers for volume snapshots than for object storage – for example, you can use AWS S3 for object storage, and Portworx for block volume snapshots.

However, velero install only supports configuring a single matching provider for both object storage and volume snapshots.

To use a different volume snapshot provider:

  1. Install the Velero server components by following the instructions for your object storage provider

  2. Add your volume snapshot provider’s plugin to Velero (look in your provider’s documentation for the image name):

    velero plugin add <registry/image:version>
    
  3. Add a volume snapshot location for your provider, following your provider’s documentation for configuration:

    velero snapshot-location create <NAME> \
        --provider <PROVIDER-NAME> \
        [--config <PROVIDER-CONFIG>]
    

Generate YAML only

By default, velero install generates and applies a customized set of Kubernetes configuration (YAML) to your cluster.

To generate the YAML without applying it to your cluster, use the --dry-run -o yaml flags.

This is useful for applying bespoke customizations, integrating with a GitOps workflow, etc.
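
For example, to capture the generated manifests in a file for review or for committing to a Git repository (provider and plugin values are illustrative):

velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.2.0 \
  --bucket <YOUR_BUCKET> \
  --secret-file ./credentials-velero \
  --dry-run -o yaml > velero.yaml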

If you are installing Velero in Kubernetes 1.14.x or earlier, you need to use kubectl apply's --validate=false option when applying the generated configuration to your cluster. See issue 2077 and issue 2311 for more context.
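
For example, assuming the generated configuration was saved to velero.yaml as above:

kubectl apply --validate=false -f velero.yaml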

Use a storage provider secured by a self-signed certificate

If you intend to use Velero with a storage provider that is secured by a self-signed certificate, you may need to instruct Velero to trust that certificate. See use Velero with a storage provider secured by a self-signed certificate for details.
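
As a sketch, assuming your Velero version supports the --cacert install flag described in that document (all other values are illustrative):

velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.2.0 \
  --bucket <YOUR_BUCKET> \
  --secret-file ./credentials-velero \
  --cacert <PATH_TO_CA_BUNDLE>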

Additional options

Run velero install --help or see the Helm chart documentation for the full set of installation options.

Optional Velero CLI configurations

Enabling shell autocompletion

Velero CLI provides autocompletion support for Bash and Zsh, which can save you a lot of typing.

Below are the procedures to set up autocompletion for Bash (including the difference between Linux and macOS) and Zsh.

Bash on Linux

The Velero CLI completion script for Bash can be generated with the command velero completion bash. Sourcing the completion script in your shell enables velero autocompletion.

However, the completion script depends on bash-completion, which means that you have to install this software first (you can test if you have bash-completion already installed by running type _init_completion).

Install bash-completion

bash-completion is provided by many package managers (see here). You can install it with apt-get install bash-completion or yum install bash-completion, etc.

The above commands create /usr/share/bash-completion/bash_completion, which is the main script of bash-completion. Depending on your package manager, you have to manually source this file in your ~/.bashrc file.

To find out, reload your shell and run type _init_completion. If the command succeeds, you’re already set; otherwise, add the following to your ~/.bashrc file:

source /usr/share/bash-completion/bash_completion

Reload your shell and verify that bash-completion is correctly installed by typing type _init_completion.

Enable Velero CLI autocompletion for Bash on Linux

You now need to ensure that the Velero CLI completion script gets sourced in all your shell sessions. There are two ways in which you can do this:

  • Source the completion script in your ~/.bashrc file:

    echo 'source <(velero completion bash)' >>~/.bashrc
    
  • Add the completion script to the /etc/bash_completion.d directory:

    velero completion bash >/etc/bash_completion.d/velero
    
  • If you have an alias for velero, you can extend shell completion to work with that alias:

    echo 'alias v=velero' >>~/.bashrc
    echo 'complete -F __start_velero v' >>~/.bashrc
    

bash-completion sources all completion scripts in /etc/bash_completion.d.

Both approaches are equivalent. After reloading your shell, velero autocompletion should be working.

Bash on macOS

The Velero CLI completion script for Bash can be generated with velero completion bash. Sourcing this script in your shell enables velero completion.

However, the velero completion script depends on bash-completion, which you therefore have to install first.

There are two versions of bash-completion, v1 and v2. V1 is for Bash 3.2 (which is the default on macOS), and v2 is for Bash 4.1+. The velero completion script doesn’t work correctly with bash-completion v1 and Bash 3.2; it requires bash-completion v2 and Bash 4.1+. Thus, to be able to correctly use velero completion on macOS, you have to install and use Bash 4.1+ (instructions). The following instructions assume that you use Bash 4.1+ (that is, any Bash version of 4.1 or newer).

Install bash-completion

As mentioned, these instructions assume you use Bash 4.1+, which means you will install bash-completion v2 (in contrast to Bash 3.2 and bash-completion v1, in which case velero completion won’t work).

You can test if you have bash-completion v2 already installed with type _init_completion. If not, you can install it with Homebrew:

brew install bash-completion@2

As stated in the output of this command, add the following to your ~/.bashrc file:

export BASH_COMPLETION_COMPAT_DIR="/usr/local/etc/bash_completion.d"
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"

Reload your shell and verify that bash-completion v2 is correctly installed with type _init_completion.

Enable Velero CLI autocompletion for Bash on macOS

You now have to ensure that the velero completion script gets sourced in all your shell sessions. There are multiple ways to achieve this:

  • Source the completion script in your ~/.bashrc file:

    echo 'source <(velero completion bash)' >>~/.bashrc
    
    
  • Add the completion script to the /usr/local/etc/bash_completion.d directory:

    velero completion bash >/usr/local/etc/bash_completion.d/velero
    
  • If you have an alias for velero, you can extend shell completion to work with that alias:

    echo 'alias v=velero' >>~/.bashrc
    echo 'complete -F __start_velero v' >>~/.bashrc
    
  • If you installed velero with Homebrew (as explained above), then the velero completion script should already be in /usr/local/etc/bash_completion.d/velero. In that case, you don’t need to do anything.

The Homebrew installation of bash-completion v2 sources all the files in the BASH_COMPLETION_COMPAT_DIR directory, which is why the latter two methods work.

In any case, after reloading your shell, velero completion should be working.

Autocompletion on Zsh

The velero completion script for Zsh can be generated with the command velero completion zsh. Sourcing the completion script in your shell enables velero autocompletion.

To do so in all your shell sessions, add the following to your ~/.zshrc file:

source <(velero completion zsh)

If you have an alias for velero, you can extend shell completion to work with that alias:

echo 'alias v=velero' >>~/.zshrc
echo 'complete -F __start_velero v' >>~/.zshrc

After reloading your shell, velero autocompletion should be working.

If you get an error like complete:13: command not found: compdef, then add the following to the beginning of your ~/.zshrc file:

autoload -Uz compinit
compinit