Documentation for version v1.3.1 is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.
You can use the `velero bug` command to open a GitHub issue by launching a browser window with some prepopulated values. Values included are the OS, CPU architecture, `kubectl` client and server versions (if available), and the `velero` client version. This information isn't submitted to GitHub until you click the Submit new issue button in the GitHub UI, so feel free to add, remove, or update any of the information.
Some general commands for troubleshooting that may be helpful:

- `velero backup describe <backupName>` - describe the details of a backup
- `velero backup logs <backupName>` - fetch the logs for this specific backup. Useful for viewing failures and warnings, including resources that could not be backed up.
- `velero restore describe <restoreName>` - describe the details of a restore
- `velero restore logs <restoreName>` - fetch the logs for this specific restore. Useful for viewing failures and warnings, including resources that could not be restored.
- `kubectl logs deployment/velero -n velero` - fetch the logs of the Velero server pod. This provides the output of the Velero server processes.
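For example, a typical debugging session for a failed backup might chain these commands together (the backup name `nginx-backup` below is only an illustration):

```bash
# List backups and find the one that failed (name below is illustrative)
velero backup get

# Inspect its status, including errors and warnings
velero backup describe nginx-backup

# Pull the full logs for that backup
velero backup logs nginx-backup

# Check the Velero server logs for anything else around the same time
kubectl logs deployment/velero -n velero
```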
You can increase the verbosity of the Velero server by editing your Velero deployment to look like this:
```
kubectl edit deployment/velero -n velero
...
   containers:
   - name: velero
     image: velero/velero:latest
     command:
       - /velero
     args:
       - server
       - --log-level # Add this line
       - debug       # Add this line
...
```
Because of how Kubernetes handles Service objects of `type=LoadBalancer`, when you restore these objects you might encounter an issue with changed values for Service UIDs. Kubernetes automatically generates the name of the cloud resource based on the Service UID, which is different when restored, resulting in a different name for the cloud load balancer. If the DNS CNAME for your application points to the DNS name of your cloud load balancer, you'll need to update the CNAME pointer when you perform a Velero restore.

Alternatively, you might be able to use the Service's `spec.loadBalancerIP` field to keep connections valid, if your cloud provider supports this value. See the Kubernetes documentation about Services of Type LoadBalancer.
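As a sketch, a Service that pins the load balancer address might look like the following. The name, selector, ports, and IP are all placeholders, and not every cloud provider honors `spec.loadBalancerIP`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                  # placeholder name
spec:
  type: LoadBalancer
  loadBalancerIP: 192.0.2.127   # placeholder; must be an address your provider can allocate
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```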
Velero reports `custom resource not found` errors when starting up.

Velero's server will not start if the required Custom Resource Definitions are not found in Kubernetes. Run `velero install` again to install any missing custom resource definitions.
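To check which Velero CRDs are currently present, you can filter on the `velero.io` API group (the group name assumed here is the one a default install uses):

```bash
# List the CustomResourceDefinitions in the velero.io API group
kubectl get crd -o name | grep velero.io
```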
`velero backup logs` returns a `SignatureDoesNotMatch` error.

Downloading artifacts from object storage utilizes temporary, signed URLs. In the case of S3-compatible providers, such as Ceph, there may be differences between their implementation and the official S3 API that cause errors.

Here are some things to verify if you receive `SignatureDoesNotMatch` errors.
Velero cannot currently resume backups that were interrupted. Backups stuck in the `InProgress` phase can be deleted with `kubectl delete backup <name> -n <velero-namespace>`. Backups in the `InProgress` phase have not uploaded any files to object storage.
Steps to troubleshoot when Velero is not publishing Prometheus metrics:

First, confirm that the Velero server pod exposes the port on which the metrics server listens (8085 by default):
```yaml
ports:
- containerPort: 8085
  name: metrics
  protocol: TCP
```
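If your Prometheus uses annotation-based scraping, also confirm the scrape annotations on the Velero pod template; if they are missing or point at the wrong port, scraping fails silently. As a sketch, assuming the default port 8085:

```yaml
# Pod template annotations that an annotation-based Prometheus scrape config looks for
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8085"
  prometheus.io/path: "/metrics"
```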
```
$ kubectl -n <YOUR_VELERO_NAMESPACE> port-forward <YOUR_VELERO_POD> 8085:8085
Forwarding from 127.0.0.1:8085 -> 8085
Forwarding from [::1]:8085 -> 8085
...
```
Now, visiting http://localhost:8085/metrics in a browser should show the metrics that are being scraped from Velero.
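With the port-forward still running, you can also check from the command line. The `velero_` metric-name prefix below is an assumption about how the server names its metric families; adjust the filter if your version differs:

```bash
# Fetch the metrics endpoint and show Velero's own metric families
curl -s http://localhost:8085/metrics | grep '^velero_'
```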