As the use of Kubernetes grows significantly in production environments, backing up the configuration of these clusters is becoming ever more critical. In previous blogs, we have talked about the difference between stateful and stateless workloads, but this blog focuses on backing up the cluster config, not the workloads.
When running a Kubernetes cluster, the Kubernetes Master Node holds all the configuration data, including the worker node inventory, application configurations, network settings and more. This data is critical to restore in the event of a master node failure.
For us to understand what we need to back up, first we need to understand what components Kubernetes needs to operate.
etcd is one of Kubernetes’ key components: it serves as Kubernetes’ backing store, and all cluster data lives there. etcd is an open-source, distributed key-value store used for persistent storage of all Kubernetes objects, such as deployment and pod information. In a typical kubeadm-built cluster, etcd runs on the master node. This component is critical for backing up Kubernetes configurations.
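Before taking a snapshot, it helps to confirm where etcd listens and which certificates it uses. As a hedged sketch (not an official procedure): on a kubeadm-built master node, the etcd static pod manifest, by default /etc/kubernetes/manifests/etcd.yaml, records the client URL and certificate paths that etcdctl needs. The helper below greps them out; the manifest path and flag names are assumptions based on kubeadm defaults.

```shell
#!/bin/sh
# Pull the client URL and certificate flags out of the etcd static pod
# manifest. Manifest location and flag names assume kubeadm defaults.
etcd_client_flags() {
    manifest="$1"
    grep -oE -- '--(advertise-client-urls|cert-file|key-file|trusted-ca-file)=[^ ]+' "$manifest"
}

# On a real master node this would typically be invoked as:
#   etcd_client_flags /etc/kubernetes/manifests/etcd.yaml
```

The paths this prints are exactly the values passed to etcdctl in the backup script later in this post.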
Another key component is the certificates. By backing up the certificates, we can easily restore a master node. Without the certificates, we would need to recreate the cluster from scratch.
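The certificate copy step can be sketched as a small function. This is a minimal sketch, assuming the kubeadm default certificate directory /etc/kubernetes/pki; it is parameterized so the paths can be adjusted per distribution.

```shell
#!/bin/sh
# Copy the cluster certificate directory into a backup folder.
# The default paths in the comments assume a kubeadm install.
backup_certs() {
    pki_dir="$1"      # e.g. /etc/kubernetes/pki (kubeadm default, an assumption)
    backup_dir="$2"   # e.g. /root/backup
    mkdir -p "$backup_dir"
    cp -r "$pki_dir" "$backup_dir/"
}

# On a master node this would typically be:
#   backup_certs /etc/kubernetes/pki /root/backup
```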
We can back up all the key Kubernetes Master Node components using a simple script like the one below.
#!/bin/bash
# K8S backup script
# David Hill 2019

# Create the local backup directory if it does not already exist
sudo mkdir -p backup

# Backup certificates
sudo cp -r /etc/kubernetes/pki backup/

# Make etcd snapshot
sudo docker run --rm -v $(pwd)/backup:/backup \
    --network host \
    -v /etc/kubernetes/pki/etcd:/etc/kubernetes/pki/etcd \
    --env ETCDCTL_API=3 \
    k8s.gcr.io/etcd-amd64:3.2.18 \
    etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
    --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
    snapshot save /backup/etcd-snapshot-latest.db
The script above does two things:
- It copies all the certificates.
- It creates a snapshot of the etcd keystore.
These are all saved in a directory called backup.
After running the script, we have several files in the backup directory. These include certificates, snapshots and keys required for Kubernetes to run.
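Before handing the directory to a backup agent, it is worth confirming the run actually produced everything. A hedged sanity check, assuming the file names produced by the script above (a pki directory and etcd-snapshot-latest.db):

```shell
#!/bin/sh
# Confirm the backup directory contains both the certificates and the
# etcd snapshot before an agent-based backup picks it up. File names
# match the script above and are assumptions about your layout.
verify_backup() {
    dir="$1"   # e.g. /root/backup
    [ -d "$dir/pki" ] || { echo "missing certificate directory"; return 1; }
    [ -f "$dir/etcd-snapshot-latest.db" ] || { echo "missing etcd snapshot"; return 1; }
    echo "backup looks complete"
}

# Typical usage on the master node:
#   verify_backup /root/backup
```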
We now have a backup of the critical Kubernetes Master Node configuration. The issue is that this data is all stored locally. What happens if we lose the node completely? This is where Veeam comes in. By using Veeam Agent for Linux, we can easily back up this directory and store it in a different location, and we can manage and store copies of that data in multiple locations, like a scale-out backup repository and the cloud tier leveraging object storage.
Veeam Agent for Linux
When configuring a backup job, we only want to back up the directory where the Kubernetes configuration data is stored. By running the script above, we store all that data in /root/backup. This is the directory we are going to back up in this example.
Walking through the backup job for our master node, two options we must select are File Level Backup and the directory to back up:
Once the job has run, we can open the backup and check to be sure all the files we requested to be backed up are included.
We now have a successful backup of our Kubernetes Master Node configuration. We can offload this backup to an object storage repository for off-site backup storage.
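One practical refinement before offloading: because the script overwrites etcd-snapshot-latest.db on every run, bundling each run into a timestamped archive leaves an immutable restore point for the agent job to pick up. A hedged sketch; the /root/backup path and naming convention are assumptions.

```shell
#!/bin/sh
# Bundle the backup directory into a timestamped tarball so every run
# leaves a standalone restore point. Paths and naming are assumptions.
archive_backup() {
    src_dir="$1"    # e.g. /root/backup
    dest_dir="$2"   # e.g. /root/backup-archives
    stamp=$(date +%Y%m%d-%H%M%S)
    mkdir -p "$dest_dir"
    tar -czf "$dest_dir/k8s-backup-$stamp.tar.gz" -C "$src_dir" .
    echo "$dest_dir/k8s-backup-$stamp.tar.gz"
}

# Typical usage on the master node:
#   archive_backup /root/backup /root/backup-archives
```

Pointing the file-level backup job at the archive directory instead of /root/backup then preserves a history of restore points rather than only the latest one.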
In the next blog, the topic will be restoring this configuration data in the event of a failure or the loss of a master node.