# Prometheus with Kubernetes
Because the Ametnes Prometheus service usually runs outside your workload clusters, run a Prometheus Agent (or Prometheus) inside each Kubernetes cluster and use remote_write to ship metrics to your Ametnes Prometheus endpoint.
## Prerequisites
- Prometheus provisioned on Ametnes Platform.
- A metrics collection agent installed in each cluster that will send metrics to Ametnes Prometheus.
- Cluster RBAC and networking that allow the in-cluster Prometheus to scrape targets and reach your Ametnes endpoint.
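For illustration, the in-cluster agent typically needs read access to pod metadata for service discovery. A minimal RBAC sketch (the `prometheus-agent` name and `monitoring` namespace are assumptions — adjust to your deployment):

```yaml
# Minimal read-only access for pod service discovery (names are illustrative)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-agent
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-agent
subjects:
  - kind: ServiceAccount
    name: prometheus-agent
    namespace: monitoring
```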
## Scrape workloads and remote write
Install and run a Prometheus Agent (or Prometheus) inside each Kubernetes cluster. The in-cluster agent discovers local targets and forwards metrics to Ametnes Prometheus.
```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2

remote_write:
  - url: "https://<prometheus-username>:<prometheus-password>@<ametnes-prometheus-endpoint>/api/v1/write"
```
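With that configuration in place, one way to validate and run it locally is sketched below. This assumes a Prometheus binary (v2.32 or later, which introduced agent mode) and a config file named `prometheus.yml`; in production you would more likely deploy via the prometheus-community Helm chart or the Prometheus Operator.

```shell
# Validate the configuration before shipping it
promtool check config prometheus.yml

# Run Prometheus in agent mode: scrape + remote_write only, no local querying
prometheus --enable-feature=agent --config.file=prometheus.yml
```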
## Common target pattern
- Set `prometheus.io/scrape: "true"` on pods you want scraped.
- Optionally set `prometheus.io/path` and `prometheus.io/port`.
- Expose a Prometheus-compatible metrics endpoint in your workload.
- If your endpoint path is not the default, set `prometheus.io/path` to the actual path.
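A pod following this pattern might look like the sketch below (the `my-app` name, image, port `8080`, and metrics path are illustrative):

```yaml
# Illustrative pod annotated for scraping by the in-cluster agent
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"   # only needed if not the default
    prometheus.io/port: "8080"
spec:
  containers:
    - name: my-app
      image: my-app:latest
      ports:
        - containerPort: 8080
```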
## Kubernetes Dashboards
After connecting Grafana to Prometheus, import community Kubernetes dashboards from the dotdc/grafana-dashboards-kubernetes project.
### Using the UI
- Open the repository and download the dashboard JSON files you need (from the repo `dashboards` layout, or follow the project README for the recommended set).
- In Grafana, go to Dashboards → New → Import.
- Click Upload dashboard JSON file and select a downloaded `.json` file, or paste the file contents into the import box.
- Select your Prometheus data source when prompted.
- Click Import.
Repeat for additional dashboards from the same repository as needed. Adjust variables (cluster, namespace, etc.) in each dashboard if the template defines them.
### Using Terraform
For a repeatable, reviewable workflow, managing dashboards as Infrastructure as Code (IaC) is usually the better option.
Store dashboard JSON files in your repository (for example under dashboards/kubernetes/) and let Terraform import them:
```hcl
terraform {
  required_providers {
    grafana = {
      source  = "grafana/grafana"
      version = "~> 3.0"
    }
  }
}

provider "grafana" {
  url  = var.grafana_url
  auth = "${var.grafana_user}:${var.grafana_password}"
}

locals {
  kubernetes_dashboards = fileset("${path.module}/dashboards/kubernetes", "*.json")
}

resource "grafana_dashboard" "kubernetes" {
  for_each    = local.kubernetes_dashboards
  config_json = file("${path.module}/dashboards/kubernetes/${each.value}")
  overwrite   = true
}

variable "grafana_url" {
  type = string
}

variable "grafana_user" {
  type = string
}

variable "grafana_password" {
  type      = string
  sensitive = true
}
```
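To apply this, the Grafana credentials can be supplied through Terraform's standard `TF_VAR_` environment variables instead of being hard-coded (the endpoint and credential placeholders below are illustrative):

```shell
export TF_VAR_grafana_url="https://<grafana-endpoint>"
export TF_VAR_grafana_user="<grafana-username>"
export TF_VAR_grafana_password="<grafana-password>"

terraform init    # downloads the grafana provider
terraform plan    # review the dashboards to be created
terraform apply
```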
## Validation checklist
- In the Prometheus UI (or Grafana Explore), the in-cluster scrape targets you expect are present and healthy.
- Remote write succeeds (no sustained failures on the agent).
- Imported dashboards show panels populated from your Prometheus data source.
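As a quick end-to-end spot check (a sketch — substitute your real Ametnes endpoint and credentials), the standard Prometheus HTTP query API can confirm that remote-written series are queryable:

```shell
# Query the `up` series through the Prometheus HTTP API;
# a non-empty result confirms metrics are arriving via remote_write
curl -s -u "<prometheus-username>:<prometheus-password>" \
  "https://<ametnes-prometheus-endpoint>/api/v1/query?query=up"
```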