Exports Don't Work After Veeam Kasten for Kubernetes Reinstall

KB ID: 4609
Product: Veeam Kasten for Kubernetes
Published: 2024-06-13
Last Modified: 2024-06-13

Critical Warning

The solution provided in this article results in data removal. These steps should only be followed when testing or evaluating Veeam Kasten for Kubernetes or if the existing data is no longer required.

If you have any questions, please contact Support.

Challenge

Description: Jobs fail to run after reinstalling Veeam Kasten for Kubernetes on the same cluster

Veeam Kasten for Kubernetes backups can be exported to any S3-compatible object store to keep secondary copies and to satisfy long-term retention requirements. Both metadata and data are saved in a location defined by the cluster ID. To prevent accidental loss, if Veeam Kasten for Kubernetes is uninstalled and reinstalled on the same cluster, it will refuse to overwrite the existing data and will display errors when new backups are initiated from the new installation.
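
For illustration, the exported data in the bucket is organized into per-cluster directories. The layout below is a sketch based on the cleanup steps later in this article; only the K10/<cluster-id> path is described there, and any deeper structure may differ:

    <bucket-name>/
        K10/
            <cluster-id>/          # data for this cluster (after a reinstall, the previous installation's data)
            <other-cluster-id>/    # data exported by any other clusters sharing the bucket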

Error: "Failed to connect to backup repository: invalid repository password"

Veeam Kasten for Kubernetes uses passkeys (automatically generated or user-defined) to protect metadata and data. Because the passkey is unique to each Veeam Kasten for Kubernetes deployment, and because the previous deployment's data still exists in the same backup location, a reinstalled instance runs into conflicts when it tries to use that location.

Solution

To resolve this failure, the data written by the previous installation must be removed from the object storage location.

Part 1: Extract Cluster ID

First, extract the cluster ID of the current cluster; because the reinstall happened on the same cluster, this ID identifies the directory in the bucket that holds the old, conflicting data (see Part 3). The cluster ID can be retrieved in two ways (a short terminal sketch follows the list):

  1. Cluster ID Extraction with CLI:
     kubectl get namespace default -ojsonpath="{.metadata.uid}{'\n'}"
  2. Veeam Kasten for Kubernetes Dashboard: append settings/support to the end of the URL.
     Example:
     https://cluster.example.com/k10/#/settings/support
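
If you are working from a terminal, the following is a minimal sketch that captures the cluster ID for the later cleanup steps (the variable name is illustrative):

    # The cluster ID is the UID of the default namespace
    CLUSTER_ID=$(kubectl get namespace default -ojsonpath="{.metadata.uid}")
    # Prints a UUID; this value names the directory under the K10 prefix that is cleaned up in Part 3
    echo "${CLUSTER_ID}"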
    

Part 2: Identify Bucket Name

Retrieve the name of the S3 bucket used as the export location:

  1. Within Settings, view Locations.
  2. Identify the name under Bucket Name.
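
Alternatively, the bucket name can usually be read from the Kasten location profile objects via the CLI. The command below is a sketch that assumes the profiles are exposed as profiles.config.kio.kasten.io resources in the kasten-io namespace; confirm the resource name for your version with kubectl api-resources before relying on it:

    # List Kasten location profiles; the export profile's spec contains the bucket name
    kubectl get profiles.config.kio.kasten.io -n kasten-io -o yaml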

Part 3: Remove Existing Data in Bucket

  1. Open the S3 Console.
  2. Select the bucket identified in the previous section.
  3. Open the K10 directory.
  4. Open the directory whose name matches the Cluster ID identified in Part 1.
  5. Delete all objects within that directory.
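
If the object store is AWS S3 or exposes an S3-compatible endpoint through the AWS CLI, the same cleanup can be performed from a terminal. The bucket name and cluster ID below are placeholders, and the K10/ prefix is assumed to match what the console shows; verify both, and do not touch directories belonging to any other clusters that may share the bucket:

    # List the per-cluster directories under the K10 prefix and confirm the one matching the cluster ID from Part 1
    aws s3 ls s3://<bucket-name>/K10/

    # Remove that cluster's directory and everything beneath it
    aws s3 rm s3://<bucket-name>/K10/<cluster-id>/ --recursive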