Creating Custom Grafana Dashboards for Kubernetes Resource Monitoring

Introduction

In modern DevOps workflows, monitoring Kubernetes clusters is crucial to ensure optimal performance, resource allocation, and overall system health. While tools like Prometheus and Grafana provide powerful insights, default dashboards may not always meet the needs of different teams.

In this post, I’ll walk you through the process of creating custom Grafana dashboards to monitor Kubernetes resources, making monitoring data more accessible and actionable for different stakeholders.

Why Custom Dashboards?

A one-size-fits-all dashboard doesn’t always work in dynamic environments. Different teams require different levels of detail:

  • Developers might want insights into application performance and error rates.
  • SREs and Ops teams need deep infrastructure metrics like CPU, memory, and pod statuses.
  • Management and Business teams may prefer high-level overviews of system health.

By creating role-specific visualizations, we can provide each team with the data they need.

Setting Up Grafana for Kubernetes Monitoring

Step 1: Install Prometheus and Grafana in Kubernetes

If you haven’t already installed Prometheus and Grafana, you can deploy them using Helm:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
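
Before moving on, it's worth confirming that the stack came up cleanly (pod names vary slightly between chart versions):

kubectl get pods -n monitoring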

After installation, forward the Grafana service to access the UI:

kubectl port-forward svc/monitoring-grafana 3000:80 -n monitoring

Now, open http://localhost:3000 and log in with:

  • Username: admin
  • Password: Retrieve it using:
kubectl get secret --namespace monitoring monitoring-grafana -o jsonpath="{.data.admin-password}" | base64 --decode

Step 2: Configure Prometheus as the Data Source

Once inside Grafana:

  1. Go to Configuration → Data Sources.
  2. Click Add data source and select Prometheus.
  3. Set the URL to http://monitoring-kube-prometheus-prometheus.monitoring.svc.cluster.local:9090 (run kubectl get svc -n monitoring to confirm the exact service name; it differs between chart versions).
  4. Click Save & Test to verify the connection.

Now, Grafana can fetch Kubernetes metrics from Prometheus.
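
If you prefer configuration as code, Grafana can also provision data sources from YAML files placed under /etc/grafana/provisioning/datasources/ (the Grafana bundled with kube-prometheus-stack usually arrives with this Prometheus data source already provisioned). A minimal sketch, assuming the service URL from step 3:

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://monitoring-kube-prometheus-prometheus.monitoring.svc.cluster.local:9090
    isDefault: true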

Step 3: Creating a Custom Dashboard

We’ll manually create a Kubernetes Resource Monitoring dashboard in Grafana.

Adding a CPU Usage Panel

  1. Go to Dashboards → New Dashboard.
  2. Click Add a New Panel.
  3. Under Query, select Prometheus as the data source.
  4. Enter the following PromQL query to monitor CPU usage per namespace:
sum(rate(container_cpu_usage_seconds_total{namespace!='', container!=''}[5m])) by (namespace)
  • In the Legend format, enter {{namespace}} to label the graph properly.
  • Click Apply.
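
If the panel comes up empty, it helps to test the query against Prometheus directly before troubleshooting the dashboard. Port-forward the Prometheus service (the name below assumes the install from step 1; confirm with kubectl get svc -n monitoring):

kubectl port-forward svc/monitoring-kube-prometheus-prometheus 9090:9090 -n monitoring

Then open http://localhost:9090, paste the query into the expression box, and check that it returns one series per namespace.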

Adding a Memory Usage Panel

  1. Add another panel in the same dashboard.
  2. Use the following PromQL query to monitor Memory usage per namespace:
sum(container_memory_usage_bytes{namespace!='', container!=''}) by (namespace)
  • Set the Legend format to {{namespace}}.
  • Click Apply.
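
Optionally, a third panel for container restarts rounds out the picture, since kube-prometheus-stack also ships kube-state-metrics. A sketch of such a query (adjust the time window to taste):

sum(increase(kube_pod_container_status_restarts_total[1h])) by (namespace)

Set the Legend format to {{namespace}} here as well.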

Saving the Dashboard

  1. Click Save Dashboard.
  2. Enter a name like Kubernetes Resource Monitoring.
  3. Click Save.

Step 4: Viewing the Dashboard

Once saved, the dashboard should display real-time CPU and memory usage graphs broken down by namespace.

  • SREs can track high CPU-consuming namespaces to optimize resource allocation.
  • Developers can monitor application memory usage to debug performance issues.
  • Managers can get an overview of cluster health at a glance.

By creating custom visualizations, we make Kubernetes monitoring more actionable and role-specific.

Conclusion

In this post, we explored how to create a custom Kubernetes monitoring dashboard in Grafana. By leveraging Prometheus metrics, we designed role-specific panels for CPU and memory usage, making monitoring more insightful and efficient.

Stay tuned for more Kubernetes insights! If you found this helpful, share your thoughts in the comments.👇

Building a Comprehensive Logging Stack with Loki and Grafana

Logs are the backbone of observability in microservices. But traditional logging systems can be complex, expensive, and inefficient at handling high-volume logs. This is where Grafana Loki comes in!

Loki is a lightweight, cost-effective logging solution designed to work seamlessly with Grafana. Unlike Elasticsearch-based solutions, Loki indexes only a small set of labels (metadata) rather than the full log content, which keeps it cheap to run and easy to scale in Kubernetes environments.

What we will achieve in this guide:

✅ Deploy Loki for log aggregation
✅ Install Promtail for log collection
✅ Visualize logs in Grafana
✅ Enable log queries for efficient debugging 

Let’s get started! 

Deploying Loki with Helm

The easiest way to install Loki in Kubernetes is via Helm, which automates resource creation and configuration.

Step 1: Add the Grafana Helm Repository

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

Step 2: Install Loki in Kubernetes

helm install loki grafana/loki-stack -n logging --create-namespace

This command deploys:
✅ Loki (log aggregator)
✅ Promtail (log forwarder)

By default the loki-stack chart does not enable its bundled Grafana; we install Grafana separately below (or you can pass --set grafana.enabled=true to include it here).

Verify that the pods are running:

kubectl get pods -n logging

Configuring Promtail for Log Collection

Promtail collects logs from Kubernetes nodes and sends them to Loki. Let’s configure it properly.

promtail-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail-config
  namespace: logging
data:
  promtail.yaml: |
    server:
      http_listen_port: 3101
      grpc_listen_port: 9095
    positions:
      filename: /var/log/positions.yaml
    clients:
      - url: http://loki:3100/loki/api/v1/push
    scrape_configs:
      - job_name: kubernetes-pods
        pipeline_stages:
          - cri: {}
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          # Only scrape pods that carry an "app" label
          - source_labels: [__meta_kubernetes_pod_label_app]
            action: keep
            regex: .+
          # Expose useful Kubernetes metadata as Loki labels
          - source_labels: [__meta_kubernetes_namespace]
            target_label: namespace
          - source_labels: [__meta_kubernetes_pod_name]
            target_label: pod
          - source_labels: [__meta_kubernetes_pod_label_app]
            target_label: app
          # Tell Promtail which log files on the node belong to each pod
          - source_labels: [__meta_kubernetes_pod_uid, __meta_kubernetes_pod_container_name]
            separator: /
            target_label: __path__
            replacement: /var/log/pods/*$1/*.log

Apply the Promtail Configuration

kubectl apply -f promtail-config.yaml

This config scrapes logs from Kubernetes pods that carry an app label, attaches namespace, pod, and app labels, and ships the lines to Loki for indexing. Note that the loki-stack chart already deploys Promtail with a working default configuration; a standalone ConfigMap like this only takes effect if the Promtail DaemonSet is pointed at it (for example through the chart's promtail values), so treat it as a starting point for customization.
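
To confirm Promtail is actually tailing pod logs, check the logs of the Promtail DaemonSet itself (when installed via the loki-stack chart it is typically named loki-promtail; adjust if yours differs):

kubectl logs -n logging daemonset/loki-promtail --tail=20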

Deploying Grafana for Log Visualization

Grafana provides a user-friendly dashboard to analyze logs efficiently.

Step 1: Install Grafana via Helm

helm install grafana grafana/grafana -n logging

Step 2: Access Grafana

kubectl port-forward svc/grafana -n logging 3000:80

Now, open http://localhost:3000 in your browser.

  • Username: admin
  • Password: Retrieve using:
kubectl get secret -n logging grafana -o jsonpath="{.data.admin-password}" | base64 --decode

Connecting Loki as a Data Source in Grafana

Once inside Grafana:

  1. Navigate to Configuration → Data Sources
  2. Click Add Data Source
  3. Select Loki
  4. Set the URL to http://loki:3100
  5. Click Save & Test

Now, Grafana can query logs directly from Loki! 
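
If you would rather not click through the UI, the grafana Helm chart can provision the Loki data source for you through its values. A minimal sketch (the exact key layout may differ between chart versions):

# grafana-values.yaml (example file name)
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        access: proxy
        url: http://loki:3100

helm upgrade grafana grafana/grafana -n logging -f grafana-values.yaml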

Querying and Analyzing Logs

Grafana allows you to filter logs with powerful queries. Here are some common ones:

View all logs for a specific namespace

{namespace="myapp"}

Filter logs from a specific pod

{pod="myapp-56c8d9df6d-p7tkg"}

Search logs for errors

{app="myapp"} |= "error"

LogQL (Loki Query Language) enables efficient log analysis, making debugging easier.
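
LogQL can also turn log lines into metrics, which is handy for spotting error spikes. For example, to chart the per-pod rate of error lines over the last five minutes (reusing the app label from the examples above):

sum(rate({app="myapp"} |= "error" [5m])) by (pod)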

Verifying the Setup

Check the status of your Loki stack:

kubectl get pods -n logging
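
You can also probe Loki directly. Port-forward the service (named loki for the release above) and, in a second terminal, hit its readiness and labels endpoints:

kubectl port-forward svc/loki 3100:3100 -n logging
curl http://localhost:3100/ready
curl http://localhost:3100/loki/api/v1/labels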

If everything is running, you successfully deployed a scalable logging system for Kubernetes! 

Conclusion: Why Use Loki for Logging?

By implementing Loki with Grafana, we achieved:
✅ Centralized logging for Kubernetes workloads
✅ Lightweight and cost-effective log storage
✅ Powerful query capabilities with LogQL
✅ Seamless integration with Grafana dashboards

Unlike traditional logging stacks (such as ELK), Loki avoids full-text indexing of log content, which sharply reduces storage and indexing overhead while keeping label-based queries fast.

Let me know if you have any questions in the comments!👇