Velero node-agent is a DaemonSet hosting the data movement modules that do the concrete work of backups and restores. Depending on data size, data complexity, and resource availability, the data movement may take a long time and consume significant resources (CPU, memory, network bandwidth, etc.) during backup and restore.
Velero data movement backups and restores support constraining the nodes on which they run. This is helpful, for example, to keep data movement off nodes that host critical workloads, or to steer it toward nodes with sufficient resources.
Velero introduces a new section in the node-agent ConfigMap, called `loadAffinity`, through which users can specify the nodes on which data movement is or is not allowed to run, in affinity and anti-affinity flavors.
If the ConfigMap does not exist, it should be created manually, in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one ConfigMap in each namespace, applying only to the node-agent in that namespace. The name of the ConfigMap should be specified in the node-agent server parameter `--node-agent-configmap`.
The node-agent server checks these configurations at startup time. Users can therefore edit this ConfigMap at any time, but the node-agent server must be restarted for the changes to take effect.
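For example, one way to restart the node-agent pods so a new configuration is picked up (a standard kubectl rollout, assuming Velero is installed in the `velero` namespace):

```bash
kubectl rollout restart daemonset/node-agent -n velero
```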
Users can specify the ConfigMap name during Velero installation via the CLI:

```bash
velero install --node-agent-configmap=<ConfigMap-Name>
```
Affinity configuration allows the data movement to run on the specified nodes. There are two ways to define it:

- `MatchLabels`: The labels defined in `MatchLabels` imply a `LabelSelectorOpIn` operation by default, so in the current context they are treated as affinity rules. In the sample below, data movement runs on nodes with the label `beta.kubernetes.io/instance-type` of value `Standard_B4ms` (run data movement on `Standard_B4ms` nodes only).
- `MatchExpressions`: The labels are defined in the `Key` and `Values` of `MatchExpressions`, and the `Operator` should be defined as `LabelSelectorOpIn` or `LabelSelectorOpExists`. In the sample below, data movement runs on nodes with the label `kubernetes.io/hostname` of value `node-1`, `node-2`, or `node-3` (run data movement on `node-1`, `node-2`, and `node-3` only).

Anti-affinity configuration prevents the data movement from running on the specified nodes. It is defined as follows:

- `MatchExpressions`: The labels are defined in the `Key` and `Values` of `MatchExpressions`, and the `Operator` should be defined as `LabelSelectorOpNotIn` or `LabelSelectorOpDoesNotExist`. In the sample below, data movement is disallowed on nodes with the label `xxx/critial-workload`; a minimal sketch of this rule on its own follows this list.
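For illustration, a minimal ConfigMap expressing only the anti-affinity rule above might look like this (a sketch; the full sample later on this page combines it with affinity rules):

```json
{
    "loadAffinity": [
        {
            "nodeSelector": {
                "matchExpressions": [
                    {
                        "key": "xxx/critial-workload",
                        "operator": "DoesNotExist"
                    }
                ]
            }
        }
    ]
}
```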
To create the ConfigMap, save a configuration like the samples below to a JSON file and then run the following command:
```bash
kubectl create cm <ConfigMap name> -n velero --from-file=<json file name>
```
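For example, if the sample is saved as `node-agent-config.json` and the ConfigMap is to be named `node-agent-config` (both names are arbitrary choices here):

```bash
kubectl create cm node-agent-config -n velero --from-file=node-agent-config.json
```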
To provide the ConfigMap to node-agent, edit the node-agent DaemonSet and add the `- --node-agent-configmap` argument to the spec:

```bash
kubectl edit ds node-agent -n velero
```

Then add `- --node-agent-configmap` to `spec.template.spec.containers`:

```yaml
spec:
  template:
    spec:
      containers:
      - args:
        - --node-agent-configmap=<ConfigMap name>
```
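One way to verify the argument was added (a quick check using kubectl's JSONPath output; the container index assumes node-agent is the first container in the pod template):

```bash
kubectl get ds node-agent -n velero -o jsonpath='{.spec.template.spec.containers[0].args}'
```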
Here is a sample of the ConfigMap with `loadAffinity`:
```json
{
    "loadAffinity": [
        {
            "nodeSelector": {
                "matchLabels": {
                    "beta.kubernetes.io/instance-type": "Standard_B4ms"
                },
                "matchExpressions": [
                    {
                        "key": "kubernetes.io/hostname",
                        "values": [
                            "node-1",
                            "node-2",
                            "node-3"
                        ],
                        "operator": "In"
                    },
                    {
                        "key": "xxx/critial-workload",
                        "operator": "DoesNotExist"
                    }
                ]
            }
        }
    ]
}
```
This sample demonstrates how to use both `matchLabels` and `matchExpressions` in a single `loadAffinity` element.
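To preview which nodes selectors like these would match, you can list nodes by label (an illustrative check; the label values depend on your cluster):

```bash
kubectl get nodes -l beta.kubernetes.io/instance-type=Standard_B4ms
kubectl get nodes -l 'kubernetes.io/hostname in (node-1,node-2,node-3)'
```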
```json
{
    "loadAffinity": [
        {
            "nodeSelector": {
                "matchLabels": {
                    "beta.kubernetes.io/instance-type": "Standard_B4ms"
                }
            }
        },
        {
            "nodeSelector": {
                "matchExpressions": [
                    {
                        "key": "kubernetes.io/os",
                        "values": [
                            "linux"
                        ],
                        "operator": "In"
                    }
                ]
            },
            "storageClass": "kibishii-storage-class"
        }
    ]
}
```
This sample demonstrates how `loadAffinity` elements with and without the `storageClass` field work together. If the volume mounted by a VGDP (Velero Generic Data Path) instance is created from StorageClass `kibishii-storage-class`, its pod runs on Linux nodes. All other VGDP instances run on nodes whose instance type is `Standard_B4ms`.
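Conceptually, the second element above corresponds to standard Kubernetes node-affinity terms on the VGDP pod, along the lines of the following sketch (shown for intuition only; this is not Velero's literal generated pod spec):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/os
          operator: In
          values:
          - linux
```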
```json
{
    "loadAffinity": [
        {
            "nodeSelector": {
                "matchLabels": {
                    "beta.kubernetes.io/instance-type": "Standard_B4ms"
                }
            },
            "storageClass": "kibishii-storage-class"
        },
        {
            "nodeSelector": {
                "matchLabels": {
                    "beta.kubernetes.io/instance-type": "Standard_B2ms"
                }
            },
            "storageClass": "worker-storagepolicy"
        }
    ],
    "backupPVC": {
        "kibishii-storage-class": {
            "storageClass": "worker-storagepolicy"
        }
    }
}
```
By design, the Velero data mover supports using a different StorageClass to create the backupPVC. In this example, if the backup target PVC's StorageClass is `kibishii-storage-class`, its backupPVC uses StorageClass `worker-storagepolicy` instead. Because the final StorageClass is `worker-storagepolicy`, the backupPod uses the affinity from the `loadAffinity` element whose `storageClass` field is set to `worker-storagepolicy`, so the backupPod is assigned to nodes whose instance type is `Standard_B2ms`.
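To check which StorageClass a backup target PVC uses, and therefore which `loadAffinity` element and `backupPVC` mapping will apply (a generic kubectl query; the names are placeholders):

```bash
kubectl get pvc <pvc-name> -n <namespace> -o jsonpath='{.spec.storageClassName}'
```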
```json
{
    "loadAffinity": [
        {
            "nodeSelector": {
                "matchLabels": {
                    "beta.kubernetes.io/instance-type": "Standard_B4ms"
                }
            },
            "storageClass": "kibishii-storage-class"
        }
    ],
    "restorePVC": {
        "ignoreDelayBinding": false
    }
}
```
The StorageClass referenced by this example uses delayed volume binding:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kibishii-storage-class
parameters:
  svStorageClass: worker-storagepolicy
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
Here the restorePVC is created from StorageClass `kibishii-storage-class`, whose `volumeBindingMode` is `WaitForFirstConsumer`. Although `loadAffinityPerStorageClass` has a section matching this StorageClass, `ignoreDelayBinding` is set to `false`, so the Velero exposer waits until the target pod is scheduled to a node and returns that node as the `SelectedNode` for the restorePVC. As a result, the `loadAffinityPerStorageClass` setting does not take effect.
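You can confirm a StorageClass's binding mode with a quick check (assuming the StorageClass name from this example):

```bash
kubectl get sc kibishii-storage-class -o jsonpath='{.volumeBindingMode}'
```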
Now consider the same StorageClass with immediate volume binding:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kibishii-storage-class
parameters:
  svStorageClass: worker-storagepolicy
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
Because the StorageClass's `volumeBindingMode` is `Immediate`, even though `ignoreDelayBinding` is set to `false`, the restorePVC is not created according to the target pod's placement. The restorePod is assigned to nodes whose instance type is `Standard_B4ms`.
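To observe where the VGDP pods actually land, list the pods in the Velero namespace along with their nodes (a generic check; pod names vary per backup or restore):

```bash
kubectl get pods -n velero -o wide
```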