
How to Clean Up Veeam Kasten for Kubernetes Orphaned Volumes on GKE k8s 1.28.x

KB ID: 4610
Product: Veeam Kasten for Kubernetes
Published: 2024-06-12
Last Modified: 2024-06-12

Purpose

This guide provides instructions for cleaning up Veeam Kasten for Kubernetes provisioned PVs that are not deleted and become orphaned volumes on the GKE k8s 1.28 release.

It has been observed that on GKE clusters running k8s 1.28, Veeam Kasten for Kubernetes provisioned PVs that use the in-tree provisioner cannot be deleted via “kubectl delete pvc <pvcname>”, resulting in volume sprawl that requires manual remediation. Restoring from snapshots/backups still functions.

There is no risk of data loss or inability to recover from existing backups.

This issue affects only the in-tree provisioner “kubernetes.io/gce-pd” on GKE k8s 1.28.x.
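To confirm whether a cluster is affected, check which provisioner the StorageClass in use relies on; a value of kubernetes.io/gce-pd indicates the in-tree provisioner described here. This is a minimal sketch; the StorageClass name "standard" simply matches the example output later in this guide and may differ in your environment.

# List every StorageClass and the provisioner it uses.
kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner

# Inspect a single StorageClass ("standard" is only an example name).
kubectl get storageclass standard -o jsonpath='{.provisioner}{"\n"}'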

Solution

Identification of the Issue

This section demonstrates how to check for orphaned volumes in the environment and clean them up.

Environment

  • GKE k8s 1.28.x

  • Kasten K10 Version: 6.5.2

  • In-tree provisioner “kubernetes.io/gce-pd”

 

Step 1: Protect Workloads Using Veeam Kasten for Kubernetes and Verify Any Orphaned PVs
  1. Run the Policy to take a Snap+Export and capture the result. The snapshot and export complete successfully in the Dashboard.
    Despite the reported success in the Dashboard, the report shows a failed status after the export completes, with orphaned volumes left behind:
status:
  message: 'error getting deleter volume plugin for volume "kio-4e1c8777bcd411eeb71cde3cda5267f6-0":
    no volume plugin matched'
  phase: Failed

 

  2. Check for failed PVs to trace the orphaned volumes:
kubectl get pv | grep -i failed
kubectl describe pv <pv-id>

Example output:

kubectl get pv |grep -i failed
kio-4e1c8777bcd411eeb71cde3cda5267f6-0     8589934592   RWO            Delete           Failed   kasten-io/kio-4e1c8777bcd411eeb71cde3cda5267f6-0   standard                39m
kio-e747f57abcd111eeb71cde3cda5267f6-0     8589934592   RWO            Delete           Failed   kasten-io/kio-e747f57abcd111eeb71cde3cda5267f6-0   standard                57m
kubectl describe pv kio-4e1c8777bcd411eeb71cde3cda5267f6-0
Events:
  Type     Reason              Age   From                         Message
  ----     ------              ----  ----                         -------
  Warning  VolumeFailedDelete  15m   persistentvolume-controller  error getting deleter volume plugin for volume "kio-4e1c8777bcd411eeb71cde3cda5267f6-0": no volume plugin matched
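As an optional convenience, the failed PVs, their claims, and the GCE disks backing them can be listed in a single pass using standard kubectl output formatting; the command below is a sketch based on the PV fields shown above.

# Show each Failed PV together with its claim and the GCE persistent disk backing it.
kubectl get pv -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,CLAIM:.spec.claimRef.name,PD-NAME:.spec.gcePersistentDisk.pdName | grep -i failed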
Step 2: Attempt to Delete the Orphaned PVC

Using the following command, attempt to delete the identified orphaned PVC:

kubectl delete pvc <pvc-id> -n kasten-io

Example:

kubectl delete pvc kio-4e1c8777bcd411eeb71cde3cda5267f6-0 -n kasten-io

The underlying PV remains in the Failed phase with the same error in its status:
status:
  message: 'error getting deleter volume plugin for volume "kio-4e1c8777bcd411eeb71cde3cda5267f6-0":
    no volume plugin matched'
  phase: Failed
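To see the outcome of the delete attempt, the two commands below (a simple sketch) check whether the PVC is gone and whether the corresponding PV is still present in the Failed phase:

# Check for remaining Kasten-provisioned PVCs and for PVs stuck in the Failed phase.
kubectl get pvc -n kasten-io
kubectl get pv | grep -i failed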
Step 3: Restore Workload

Restore the workload from the Veeam Kasten for Kubernetes snapshot/export; the restore completes successfully.


Orphaned Volume Clean-Up Procedure

The following steps outline the process to clean up any orphaned volumes.

  1. Identify the GCE disk names for the orphaned volumes.

kubectl get pv --selector k10pvmatchid \
-o jsonpath='{.items[?(@.status.phase == "Failed")].spec.gcePersistentDisk.pdName}'
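Before deleting anything, it can help to review each failed PV alongside the disk that backs it; the jsonpath range expression below is a sketch that prints one name/disk pair per line.

# Print each Failed PV and its backing GCE disk, one pair per line, for review.
kubectl get pv --selector k10pvmatchid \
  -o jsonpath='{range .items[?(@.status.phase == "Failed")]}{.metadata.name}{"\t"}{.spec.gcePersistentDisk.pdName}{"\n"}{end}'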
  2. Clean up the orphaned disks in GCE.

disks=$(kubectl get pv --selector k10pvmatchid -o jsonpath='{.items[?(@.status.phase == "Failed")].spec.gcePersistentDisk.pdName}')
for disk in $disks; do
  gcloud compute disks delete "$disk" --quiet
done
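If gcloud cannot infer a disk's zone, the deletion above may fail, because --quiet suppresses the interactive zone prompt. The variant below is a sketch that first looks up each disk's zone with gcloud compute disks list and passes it explicitly; adjust it to your project as needed.

# Delete each orphaned disk, resolving its zone first so it can be passed explicitly.
disks=$(kubectl get pv --selector k10pvmatchid -o jsonpath='{.items[?(@.status.phase == "Failed")].spec.gcePersistentDisk.pdName}')
for disk in $disks; do
  zone=$(gcloud compute disks list --filter="name=${disk}" --format="value(zone.basename())")
  gcloud compute disks delete "$disk" --zone "$zone" --quiet
done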
  3. Clean up the failed Veeam Kasten for Kubernetes PV resources in the k8s cluster.

failedpvs=$(kubectl get pv --selector k10pvmatchid -o jsonpath='{.items[?(@.status.phase == "Failed")].metadata.name}')
for failedpv in $failedpvs; do
  kubectl delete pv "$failedpv"
done
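After both loops complete, re-running the selector query is a quick way to confirm the cleanup; an empty result means no Failed Kasten-provisioned PVs remain.

# Should print nothing once all orphaned PVs have been removed.
kubectl get pv --selector k10pvmatchid -o jsonpath='{.items[?(@.status.phase == "Failed")].metadata.name}'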