Working with Kubernetes and Terraform Part 3: Installing Kasten using Terraform

In this three-part series, I will explain how to use Kubernetes (K8s) and Terraform (TF) together to set up a Kubernetes cluster, manage applications and install Kasten. We will of course keep data management best practices in mind for every step. Installing Kasten in the cluster is also a great example of how Terraform can be used when managing cloud resources outside the cluster.

In the first part, we discussed the concepts behind Terraform and Kubernetes, their similarities & differences, and how to use the two in harmony. In the second part, we shared a hands-on example for setting up a Kubernetes cluster on AWS EKS with Terraform. And lastly, in this third part, we will use Terraform to install Kasten and set up an S3 export location. You can also find all the code on GitHub.

Kasten is a perfect example of a service that works together with resources outside K8s. We’ll run through an installation of Kasten in our Terraform-managed environment.

Install Kasten K10 using Terraform

If you look at the Kasten documentation for installation on AWS, you will find this command:

helm install k10 kasten/k10 --namespace=kasten-io \
  --set secrets.awsAccessKeyId="${AWS_ACCESS_KEY_ID}" \
  --set secrets.awsSecretAccessKey="${AWS_SECRET_ACCESS_KEY}"

…and apparently, we need AWS credentials. Following the principle of least privilege, we should create a new user and limit its access to what Kasten really needs. But creating an IAM user, policies, and everything this user accesses? That sounds like a job for Terraform. So in this case I’d argue it’s justified to manage the whole Kasten installation via Terraform.

To avoid the issue with platform-level infra and app-level infra outlined earlier, we create an “applications” Terraform project in a new directory. Feel free to take another look at the GitHub repository for the complete structure.

Again, we need some boilerplate, which you can put in the main.tf file:

provider "aws" { # Region neesds to match the region of your cluster! region = "eu-central-1" } # Now we also make use of the kubernetes and helm providers provider "kubernetes" { config_path = "~/.kube/config" } provider "helm" { kubernetes { config_path = "~/.kube/config" } } locals { tags = { Project = "Terraform K8s Example Applications" Terraform = "True" } }

Since you may install different applications on this level, it makes sense to put all the code related to Kasten in a separate kasten.tf file or even in its own Terraform module.
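If you prefer the module route, the call site could look roughly like this; the module path and inputs are placeholders I made up for illustration, not part of the example repository:

# Hypothetical module call -- adjust the source path and inputs to your own layout
module "kasten" {
  source = "./modules/kasten"

  tags = local.tags
}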

We know that we need an IAM user and credentials, so we’ll create that first:

resource "aws_iam_user" "kasten" { name = "kasten" tags = local.tags } # Minimal set of permissions needed by K10 for integrating with AWS EBS # See: resource "aws_iam_user_policy" "kasten" { name = "kasten" user = aws_iam_user.kasten.name policy = <<JSON { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:CopySnapshot", "ec2:CreateSnapshot", "ec2:CreateTags", "ec2:CreateVolume", "ec2:DeleteTags", "ec2:DeleteVolume", "ec2:DescribeSnapshotAttribute", "ec2:ModifySnapshotAttribute", "ec2:DescribeAvailabilityZones", "ec2:DescribeSnapshots", "ec2:DescribeTags", "ec2:DescribeVolumeAttribute", "ec2:DescribeVolumesModifications", "ec2:DescribeVolumeStatus", "ec2:DescribeVolumes", "ec2:ResourceTag/*" ], "Resource": "*" }, { "Effect": "Allow", "Action": "ec2:DeleteSnapshot", "Resource": "*", "Condition": { "StringLike": { "ec2:ResourceTag/Name": "Kasten: Snapshot*" } } } ] } JSON } resource "aws_iam_access_key" "kasten" { user = aws_iam_user.kasten.name }

EKS also supports using IAM Roles with K8s Service Accounts (IRSA), but we will stick with IAM users here to stay a little more general.
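Just as a rough illustration of what the IRSA route involves: the service account running the workload gets annotated with an IAM role ARN. This is only a sketch with a placeholder ARN; in practice you would follow Kasten’s own IAM-role instructions rather than creating the service account by hand:

# Sketch only: IRSA means annotating the workload's service account with a role ARN.
# The ARN below is a placeholder; K10's Helm chart manages its own service account.
resource "kubernetes_service_account" "irsa_example" {
  metadata {
    name      = "irsa-example"
    namespace = "kasten-io"
    annotations = {
      "eks.amazonaws.com/role-arn" = "arn:aws:iam::123456789012:role/kasten-k10-example"
    }
  }
}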

Instead of installing the Helm chart with the helm CLI, we add a resource for it in our TF code. This way, we can reference the credentials from above directly. We can also create the required namespace the same way.

resource "kubernetes_namespace" "kasten" { metadata { name = "kasten-io" } } resource "helm_release" "kasten" { name = "k10" repository = "" chart = "k10" namespace = kubernetes_namespace.kasten.metadata[0].name set { name = "secrets.awsAccessKeyId" value = aws_iam_access_key.kasten.id } set { name = "secrets.awsSecretAccessKey" value = aws_iam_access_key.kasten.secret } }

We can now again run terraform init and terraform apply in the new project. Again, the apply might take a while.
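For reference, these are simply the standard commands, run from the new project directory:

terraform init
terraform apply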

If everything goes well, Kasten should now be running in your cluster. You can check by running helm list -A. We can now also access the Kasten web interface:

kubectl --namespace kasten-io port-forward service/gateway 8080:8000
# then open:

Keep this command running in the background; we will use the web UI again later.

Deploying an Example Application

In the interest of having some data that we can back up, let’s quickly create a demo workload. The following commands will install a PostgreSQL database using helm:

helm repo add bitnami
kubectl create namespace demo-app
helm install demo-db bitnami/postgresql --namespace=demo-app

The install command will log some helpful commands for connecting to the database. We will copy and run them like this:

export POSTGRES_PASSWORD=$(kubectl get secret --namespace demo-app \
  demo-db-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)

kubectl run demo-db-postgresql-client --rm --tty -i --restart='Never' \
  --namespace demo-app \
  --image docker.io/bitnami/postgresql:11.11.0-debian-10-r22 \
  --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- \
  psql --host demo-db-postgresql -U postgres -d postgres -p 5432

Next, we execute a tiny bit of SQL to leave some data behind. Don’t worry if this seems confusing; it’s just so we can later confirm that backups work. Bear with me here.

CREATE TABLE demo(id VARCHAR);
INSERT INTO demo (id) VALUES ('a test entry');
SELECT * FROM demo;

Quit with

\quit

Backup Policies as Code

If we now log into the Kasten web UI, we should see our “demo-app” pop up as an “Unmanaged” application. We could now create a policy directly in the web UI, but in the spirit of IaC, let’s create the policy in code as well.

We create a new file called backup-policy.yaml and define our policy (for more details, see the docs):

apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: demo-app-backup-policy
  namespace: kasten-io
spec:
  comment: Backup policy for the demo-app
  frequency: "@hourly"
  retention:
    hourly: 24
    daily: 7
  actions:
    - action: backup
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: demo-app

And apply it using kubectl apply -f backup-policy.yaml. This reflects my point earlier about not forcing Terraform into workflows that are not designed for it. I am sure there is some way to represent this policy in Terraform code, but what would we gain from it?
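For the curious: the kubernetes provider’s kubernetes_manifest resource could carry the same document, so a Terraform version is possible. Here is a sketch of what it might look like, though we do not use it in this series:

# Sketch: the same policy expressed via the kubernetes provider
# (shown only for comparison, not used in this walkthrough)
resource "kubernetes_manifest" "demo_app_backup_policy" {
  manifest = {
    apiVersion = "config.kio.kasten.io/v1alpha1"
    kind       = "Policy"
    metadata = {
      name      = "demo-app-backup-policy"
      namespace = "kasten-io"
    }
    spec = {
      comment   = "Backup policy for the demo-app"
      frequency = "@hourly"
      retention = {
        hourly = 24
        daily  = 7
      }
      actions = [
        { action = "backup" }
      ]
      selector = {
        matchLabels = {
          "k10.kasten.io/appNamespace" = "demo-app"
        }
      }
    }
  }
}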

In the Kasten web UI, the demo-app should now have moved from “Unmanaged” to “Non-Compliant”, which should change to “Compliant” as soon as the first backup task has executed.

Setting up an S3 Export

We now back up our demo-app within the cluster. But what if there is an issue with the entire cluster? Kasten addresses this concern with the ability to export your data. For our purposes, we will set up an S3 location profile and export.

To do that, let’s first go back to our Kasten Terraform file and create an S3 bucket for that purpose:

resource "aws_s3_bucket" "kasten_export" { bucket_prefix = "kasten-export-" acl = "private" # We do this so that we can easily delete the bucket once we are done, # leave this out in prod force_destroy = true tags = local.tags }

With some slight modifications, we could easily create the bucket in another region or even another AWS account.
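For example, a second, aliased AWS provider would let Terraform place the export bucket in a different region. A sketch, with the alias and region chosen arbitrarily:

# Hypothetical second provider for a different region
provider "aws" {
  alias  = "backup_region"
  region = "eu-west-1"
}

resource "aws_s3_bucket" "kasten_export_remote" {
  provider      = aws.backup_region
  bucket_prefix = "kasten-export-"
  force_destroy = true
  tags          = local.tags
}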

Kasten encourages the use of a separate IAM user for the location profile, so we will again create an IAM user and assign the necessary permissions:

resource "aws_iam_user" "kasten_export" { name = "kasten" tags = local.tags } resource "aws_iam_user_policy" "kasten_export" { name = "kasten-export" user = aws_iam_user.kasten_export.name policy = << JSON { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:PutBucketPolicy", "s3:ListBucket", "s3:DeleteObject", "s3:DeleteBucketPolicy", "s3:GetBucketLocation", "s3:GetBucketPolicy" ], "Resource": [ "${aws_s3_bucket.kasten_export.arn}", "${aws_s3_bucket.kasten_export.arn}/*" ] } ] } JSON } resource "aws_iam_access_key" "kasten_export" { user = aws_iam_user.kasten_export.name

To use the user’s credentials in K8s, we create a secret:

resource "kubernetes_secret" "kasten_export" { metadata { name = "k10-s3-secret" namespace = "kasten-io" } data = { aws_access_key_id = aws_iam_access_key.kasten_export.id aws_secret_access_key = aws_iam_access_key.kasten_export.secret } type = "secrets.kanister.io/aws" }

And an output to get the name of the bucket which we will need later:

output "kasten_export_bucket_name" { value = aws_s3_bucket.kasten_export.id }

We can now terraform apply again.
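Since we declared an output for the bucket name, terraform output can print it for the next step:

terraform output kasten_export_bucket_name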

To tell Kasten about the location profile we create a file called applications/kasten-export-profile.yaml. Because we are switching from Terraform to YAML again, you will now have to manually copy the bucket name and region:

apiVersion: config.kio.kasten.io/v1alpha1
kind: Profile
metadata:
  name: s3-export
  namespace: kasten-io
spec:
  type: Location
  locationSpec:
    credential:
      secretType: AwsAccessKey
      secret:
        apiVersion: v1
        kind: Secret
        name: k10-s3-secret
        namespace: kasten-io
    type: ObjectStore
    objectStore:
      objectStoreType: S3
      # NOTE: Name and region must be manually updated!
      name: kasten-export-00000000000000000000000000
      region: eu-central-1

The profile can then be created using: kubectl apply -f kasten-export-profile.yaml. If you look at the web UI, you will see the location profile appear in the settings.

Our original backup policy can now be extended by an additional export action. The final policy should look like this:

# See:
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: demo-app-backup-policy
  namespace: kasten-io
spec:
  comment: Backup policy for the demo-app
  frequency: "@hourly"
  retention:
    hourly: 24
    daily: 7
  actions:
    - action: backup
    - action: export
      exportParameters:
        frequency: "@hourly"
        profile:
          name: s3-export
          namespace: kasten-io
        exportData:
          enabled: true
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: demo-app

After running kubectl apply -f backup-policy.yaml you should now see the updated policy in the web UI. Future snapshots will automatically be exported to S3.

Confirming Working Restores

To conclude we will demonstrate that we now have a cluster with properly secured data storage by deleting and restoring our demo-app.

NOTE: If you are following along and no automatic hourly backup job has executed so far, you can manually trigger one by clicking “run-once” on the backup policy in the web UI.
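If you would rather stay on the command line, K10 also documents a RunAction resource for triggering a policy run. A sketch based on the K10 API docs; verify the schema against the version you are running:

apiVersion: actions.kio.kasten.io/v1alpha1
kind: RunAction
metadata:
  generateName: run-demo-app-backup-
spec:
  subject:
    kind: Policy
    name: demo-app-backup-policy
    namespace: kasten-io

Because it uses generateName, create it with kubectl create -f rather than kubectl apply.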

So let’s just delete everything:

kubectl delete namespace demo-app

You can now see in the AWS EKS dashboard that the workload is gone. Additionally, the EBS volume that K8s created in the background to store the database’s data is also gone.
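If you prefer the CLI over the dashboard, something along these lines confirms it (the volume filter relies on the standard tags the EBS provisioner adds; adjust it to however your volumes are tagged):

kubectl get all --namespace demo-app
aws ec2 describe-volumes \
  --filters "Name=tag:kubernetes.io/created-for/pvc/namespace,Values=demo-app"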

But do not worry: by navigating to Applications → Removed in the Kasten web UI, we can easily restore our beloved demo-app. Click Restore and select the restore point. In the restore panel, recreate the namespace by clicking “Create a New Namespace” and entering demo-app again, then confirm by clicking Restore.

After the restore job has finished, we can connect to the database again with the same commands we used to create our demo entry in the first place.

export POSTGRES_PASSWORD=$(kubectl get secret --namespace demo-app \
  demo-db-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)

kubectl run demo-db-postgresql-client --rm --tty -i --restart='Never' \
  --namespace demo-app \
  --image docker.io/bitnami/postgresql:11.11.0-debian-10-r22 \
  --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- \
  psql --host demo-db-postgresql -U postgres -d postgres -p 5432

Then, in the SQL prompt, we run SELECT * FROM demo; and, yes, indeed, our data is back:

postgres=# SELECT * FROM demo;
      id
--------------
 a test entry
(1 row)

Quit with

\quit

Conclusion

For everyone following along, you can now run terraform destroy in both projects to delete all resources again: first in the applications project, then in the cluster project. Or, you know, continue from here and build something great.
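Assuming you kept the directory names from the repository (applications for this part; the cluster project directory is whatever you named it in part two), that boils down to:

cd applications && terraform destroy
cd ../cluster && terraform destroy   # adjust to your cluster project's directory name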

As I said before, all code from these posts can also be found on GitHub. Feel free to fork the repo and use it as a foundation for your own projects.
