Kasten now supports exporting metrics from its embedded Prometheus instance to external backends using Prometheus's remote_write capability. This enables the collection, aggregation, and visualization of single-cluster and multi-cluster metrics in monitoring tools such as Grafana Cloud and Datadog.
Enterprises managing complex Kubernetes environments with Kasten often require centralized observability across multiple clusters. A common challenge has been the inability to aggregate Prometheus metrics from Kasten deployments into external monitoring platforms running outside the clusters.
To address this, enhancements have been introduced to expose Kasten's Prometheus metrics externally, enabling seamless integration with centralized visualization and alerting tools. This improvement simplifies multi-cluster monitoring and provides more granular operational insight.
For more information on Grafana and Kasten, refer to: KB4635: How to Install Grafana with K10 Dashboard and DataSources Pre-provisioned
To facilitate monitoring for multiple clusters and enable customers to aggregate metrics in external backends like Grafana Cloud, we enabled Prometheus’s remote_write capability in the Kasten deployment. This was accomplished via Helm values, allowing cluster administrators to specify remote endpoints, authentication details, and optional filtering or relabeling rules directly in their deployment configuration. By setting these values during installation or upgrade, users can ensure that their Prometheus instance forwards metrics to a centralized backend.
A key aspect of this configuration is cluster labeling. Through Helm values, users specify a unique identifier for each cluster (the clusterName entry), which is then applied as a label to all exported metrics via external_labels. This labeling ensures that when metrics from multiple clusters arrive at the remote backend, they can be filtered, grouped, and visualized per cluster. Additionally, Kasten automatically attempts to add a unique cluster UID label (cluster_uid) to each metric, providing an extra layer of identification and helping distinguish clusters even when their names are similar.
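For example, the same metric exported from two clusters might arrive at the backend looking like this (metric value and label values are purely illustrative):

```
k10_backup_duration_seconds{cluster_name="prod-east", cluster_uid="<uid-of-prod-east>"} 41.7
k10_backup_duration_seconds{cluster_name="prod-west", cluster_uid="<uid-of-prod-west>"} 39.2
```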
Create a new file with the name: k10-remote-write-values.yaml. Paste in the code below and replace the placeholder values. Give the cluster a name of your choosing in the clusterName entry.
clusterName: "" # REQUIRED: Enter a cluster name
prometheus:
  server:
    remote_write:
      - url: <insert remote backend url>
        basic_auth:
          username: <insert username> # Remote backend ID
          password: <insert API key> # Remote backend API key
clusterName is required when remote_write is enabled; the deployment will fail without it. Its value appears as the cluster_name label on all exported metrics, and a unique cluster_uid label is added automatically.
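Once metrics from several clusters land in the shared backend, these labels let you scope queries per cluster. For example, in a Grafana PromQL editor (the metric name below is one of Kasten's exported series, taken from the verification step later in this article; adapt it to the metric you care about):

```
# Average backup duration for one cluster
avg(k10_backup_duration_seconds{cluster_name="prod-east"})

# Compare the same metric across all clusters
avg by (cluster_name) (k10_backup_duration_seconds)
```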
For enhanced security, you can store credentials in a Kubernetes Secret and reference them using bearer_token_file:
kubectl create secret generic prometheus-remote-write-token \
  --from-literal=token=<your-bearer-token> \
  -n kasten-io
Then update k10-remote-write-values.yaml to use extraSecretMounts:

clusterName: "" # REQUIRED: Enter a cluster name
prometheus:
  server:
    remote_write:
      - url: <insert remote backend url>
        bearer_token_file: /etc/prometheus-secrets/token
    extraSecretMounts:
      - name: remote-write-token
        mountPath: /etc/prometheus-secrets
        secretName: prometheus-remote-write-token
        readOnly: true
To mount only a specific key from the secret, add a subPath entry:
clusterName: ""
prometheus:
  server:
    remote_write:
      - url: <insert remote backend url>
        bearer_token_file: /etc/prometheus-secrets/bearer-token
    extraSecretMounts:
      - name: remote-write-token
        mountPath: /etc/prometheus-secrets
        subPath: token # Mount specific key from secret
        secretName: prometheus-remote-write-token
        readOnly: true
Optionally, add basic metric filtering, for example:

prometheus:
  server:
    remote_write:
      - url: <insert remote backend url>
        # ... auth configuration ...
    metricDrop:
      - k10_debug_.*
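The drop entries appear to be regular expressions matched against metric names (note the .* in the pattern above). Assuming that is the case, multiple metric families can be filtered at once; the second pattern below is purely illustrative:

```yaml
prometheus:
  server:
    metricDrop:
      - k10_debug_.* # drop all Kasten debug metrics
      - go_gc_.*     # illustrative: drop Go runtime GC metrics
```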
A complete example, including the optional manual cluster UID override:

clusterName: ""
prometheus:
  server:
    clusterUIDOverride: "" # Optional: manual cluster UID
    remote_write:
      - url: <insert remote backend url>
        basic_auth:
          username: <insert username>
          password: <insert API key>
Apply the new remote_write configuration with the following helm command:

helm upgrade k10 kasten/k10 -n kasten-io -f k10-remote-write-values.yaml

If Kasten was installed from a local copy of the chart, point the upgrade at that path instead:

helm upgrade k10 ./helm/k10 -n kasten-io -f k10-remote-write-values.yaml
For a fresh installation, add the Kasten Helm repository and install with the same values file:

helm repo add kasten https://charts.kasten.io/
helm repo update

# Replace "8.0.12" with your desired Kasten version
helm install k10 kasten/k10 -n kasten-io \
  -f k10-remote-write-values.yaml \
  --set global.image.tag="8.0.12" \
  --create-namespace
Verify that the values were applied:

helm get values k10 -n kasten-io

Look for your new remote_write configuration in the output:
clusterName: prod-k10-demo
prometheus:
  server:
    remote_write:
      - url: ...
kubectl get configmap k10-k10-prometheus-config -n kasten-io -o yaml | less
Output should include:
external_labels:
  cluster_uid: <uuid> # Auto-detected from namespace UID (if lookup succeeded)
  cluster_name: <your cluster name>
remote_write:
  - url: https://...
    basic_auth:
      username: <remote backend username>
Check that the Prometheus server pod is running:

kubectl get pods -n kasten-io | grep prometheus
Expected output:
prometheus-server-xxxxx-xxxxx 2/2 Running 0 2m
kubectl logs -n kasten-io -l component=server,app=prometheus -c prometheus-server --tail=50
Look for these indicators (you may need to increase the --tail value in the kubectl command):

- "Starting WAL watcher" with your Grafana URL
- "Starting scraped metadata watcher" for your remote endpoint
- "Done replaying WAL" (indicates historical data was sent)

To confirm that the series are queryable, you can query the Prometheus API directly:

curl http://<prometheus-url>/api/v1/series?match[]=k10_backup_duration_seconds
While it is technically possible to manually edit the Prometheus ConfigMap using kubectl edit, this approach is generally discouraged. Manual edits can be overwritten by subsequent Helm upgrades, leading to lost configuration and potential outages. Additionally, ConfigMap editing can be error-prone, especially when dealing with remote_write or relabeling rules. Instead, all configuration should flow through Helm values, ensuring consistency, safety, and easy rollback or upgrades.
The end result of both approaches is the same: Prometheus exports metrics to a remote backend with proper labeling. kubectl edit is a manual, in-place edit of the live configuration, while the Helm values approach is a declarative, template-driven configuration applied at install or upgrade time.
For quick, temporary, or emergency changes, kubectl edit may suffice; for production, use Helm values: the configuration can be stored in version control and is automatically re-applied during every install and upgrade.
If this KB article did not resolve your issue or you need further assistance with Veeam software, please create a Veeam Support Case.