Setting Up and Troubleshooting Rook-Ceph on Minikube

Introduction

Storage is a critical component in Kubernetes environments, and Rook-Ceph is a powerful storage orchestrator that brings Ceph into Kubernetes clusters. In this post, we’ll go through deploying Rook-Ceph on Minikube, setting up CephFS, and resolving common issues encountered during the process.

Prerequisites

Before we begin, ensure you have the following installed on your machine:

  • Minikube (running Kubernetes v1.32.0)
  • Kubectl
  • Helm

If you don’t have Minikube installed, you can start a new cluster with:

minikube start --memory=4096 --cpus=2 --disk-size=40g

Enable required Minikube features:

minikube addons enable default-storageclass
minikube addons enable storage-provisioner

Step 1: Install Rook-Ceph with Helm

Add the Rook Helm repository:

helm repo add rook-release https://charts.rook.io/release
helm repo update

Install Rook-Ceph Operator

helm install rook-ceph rook-release/rook-ceph --namespace rook-ceph --create-namespace

Verify the operator is running:

kubectl -n rook-ceph get pods

Expected output:

NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-59dcf6d55b-xxxxx   1/1     Running   0          1m

Step 2: Deploy the Ceph Cluster

Now, create a CephCluster resource to deploy Ceph in Kubernetes. Keep in mind that Rook only creates OSDs on raw, unformatted disks or partitions; on Minikube you may need to start the VM with an extra disk (for example, minikube start --extra-disks=1 with the kvm2 or hyperkit driver) so that useAllDevices: true actually finds a device to use.

Create ceph-cluster.yaml

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: my-ceph-cluster
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18.2.0
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: true
  storage:
    useAllNodes: true
    useAllDevices: true

Apply the Ceph Cluster Configuration

kubectl apply -f ceph-cluster.yaml

Check the CephCluster status:

kubectl -n rook-ceph get cephcluster

Expected output (once ready):

NAME              DATADIRHOSTPATH   MONCOUNT   AGE   PHASE     MESSAGE      HEALTH
my-ceph-cluster   /var/lib/rook     3          5m    Ready     Cluster OK   HEALTH_OK
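
For a closer look at cluster health, you can also deploy the Rook toolbox and query Ceph directly (the manifest path below matches recent Rook releases; adjust it for the version you installed):

kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status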

Step 3: Set Up CephFS (Ceph Filesystem)

Create cephfs.yaml

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: my-cephfs
  namespace: rook-ceph
spec:
  metadataServer:
    activeCount: 1
    activeStandby: true
  dataPools:
    - replicated:
        size: 3
  metadataPool:
    replicated:
      size: 3

Apply CephFS Configuration

kubectl apply -f cephfs.yaml

Verify the filesystem:

kubectl -n rook-ceph get cephfilesystem

Expected output:

NAME        ACTIVEMDS   AGE   PHASE
my-cephfs   1           2m    Ready

Step 4: Create a StorageClass for CephFS

Create cephfs-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: my-cephfs
  pool: my-cephfs-data0
  # The CSI driver needs these secrets (created automatically by Rook) to authenticate against the cluster
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  mounter: fuse
reclaimPolicy: Delete

Apply the StorageClass Configuration

kubectl apply -f cephfs-storageclass.yaml

List available StorageClasses:

kubectl get storageclass

Step 5: Create a PersistentVolumeClaim (PVC)

Create pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs

Apply the PVC Configuration

kubectl apply -f pvc.yaml

Check if the PVC is bound:

kubectl get pvc

Expected output:

NAME            STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-cephfs-pvc   Bound    pvc-xyz  1Gi        RWX            rook-cephfs    1m

Step 6: Deploy a Test Pod Using CephFS Storage

Create cephfs-test-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: cephfs-app
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: "/mnt"
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: my-cephfs-pvc

Deploy the Test Pod

kubectl apply -f cephfs-test-pod.yaml

Check if the pod is running:

kubectl get pods
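
Once the pod is Running, a quick way to confirm the CephFS volume is writable is to drop a file into the mount:

kubectl exec -it cephfs-app -- sh -c 'echo "hello from cephfs" > /mnt/hello.txt && cat /mnt/hello.txt'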

Troubleshooting Issues

Issue: CephCluster Stuck in “Progressing” State

Run:

kubectl -n rook-ceph describe cephcluster my-ceph-cluster

If you see an error like:

failed the ceph version check: the version does not meet the minimum version "18.2.0-0 reef"

Solution:

  1. Ensure you are using Ceph v18.2.0 in ceph-cluster.yaml.
  2. Restart the Ceph operator: kubectl -n rook-ceph delete pod -l app=rook-ceph-operator

Issue: CephCluster Doesn’t Delete

If kubectl delete cephcluster hangs, remove finalizers:

kubectl -n rook-ceph patch cephcluster my-ceph-cluster --type='merge' -p '{"metadata":{"finalizers":null}}'

Then force delete:

kubectl -n rook-ceph delete cephcluster my-ceph-cluster --wait=false
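
If you plan to reinstall the cluster afterwards, keep in mind that Rook leaves its state under dataDirHostPath on the node; a leftover /var/lib/rook directory is a common reason a fresh CephCluster refuses to come up. On Minikube you can clear it like this:

minikube ssh -- sudo rm -rf /var/lib/rook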

Conclusion

Setting up Rook-Ceph on Minikube provides a robust storage solution within Kubernetes. However, issues such as Ceph version mismatches and stuck deletions are common. By following this guide, you can successfully deploy CephFS, configure a persistent storage layer, and troubleshoot issues effectively.

Would you like a deep dive into RBD or Object Storage in Rook-Ceph next? Let me know in the comments!👇

Setting Up MongoDB Replication in Kubernetes with StatefulSets

Introduction

Running databases in Kubernetes comes with challenges such as maintaining persistent state, stable network identities, and automated recovery. For MongoDB, high availability and data consistency are critical, making replication a fundamental requirement.

In this guide, we’ll deploy a MongoDB Replica Set in Kubernetes using StatefulSets, ensuring that each MongoDB instance maintains a stable identity, persistent storage, and seamless failover.

Why StatefulSets for MongoDB?

Unlike Deployments, which assign dynamic pod names, StatefulSets ensure stable, ordered identities, making them ideal for running database clusters.

✅ Stable Hostnames → Essential for MongoDB replica set communication
✅ Persistent Storage → Ensures data consistency across restarts
✅ Automatic Scaling → Easily add more replica nodes
✅ Pod Ordering & Control → Ensures correct initialization sequence

Prerequisites

Before proceeding, ensure:

  • A running Kubernetes cluster (Minikube, RKE2, or any self-managed setup)
  • kubectl installed and configured
  • A StorageClass for persistent volumes

Step 1: Create a ConfigMap for MongoDB Initialization

A ConfigMap helps configure MongoDB startup settings.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-config
  namespace: mongo
data:
  mongod.conf: |
    storage:
      dbPath: /data/db
    net:
      bindIp: 0.0.0.0
    replication:
      replSetName: rs0

Apply the ConfigMap:

kubectl apply -f mongodb-configmap.yaml

Step 2: Define a Headless Service

A headless service ensures stable DNS resolution for MongoDB pods.

apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: mongo
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
    - name: mongo
      port: 27017

Apply the Service:

kubectl apply -f mongodb-service.yaml

Step 3: Deploy MongoDB with a StatefulSet

We define a StatefulSet with three MongoDB replicas.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
  namespace: mongo
spec:
  serviceName: mongodb
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongo
          image: mongo:6.0
          command:
            - "mongod"
            - "--config"
            - "/config/mongod.conf"
          volumeMounts:
            - name: config-volume
              mountPath: /config
            - name: mongo-data
              mountPath: /data/db
          ports:
            - containerPort: 27017
      volumes:
        - name: config-volume
          configMap:
            name: mongodb-config
  volumeClaimTemplates:
    - metadata:
        name: mongo-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi

Apply the StatefulSet:

kubectl apply -f mongodb-statefulset.yaml

Step 4: Initialize the MongoDB Replica Set

Once all pods are running, initialize the replica set from the first MongoDB pod:

kubectl exec -it mongodb-0 -n mongo -- mongosh

Inside the MongoDB shell, run:

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb-0.mongodb.mongo.svc.cluster.local:27017" },
    { _id: 1, host: "mongodb-1.mongodb.mongo.svc.cluster.local:27017" },
    { _id: 2, host: "mongodb-2.mongodb.mongo.svc.cluster.local:27017" }
  ]
})

Check the replica set status:

rs.status()
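
Applications should connect with a replica-set-aware connection string that lists all members, so the driver can follow failovers automatically. With the names used in this guide it looks like this:

mongodb://mongodb-0.mongodb.mongo.svc.cluster.local:27017,mongodb-1.mongodb.mongo.svc.cluster.local:27017,mongodb-2.mongodb.mongo.svc.cluster.local:27017/?replicaSet=rs0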

Step 5: Verify Replication

To verify the secondary nodes, log into any of the replica pods:

kubectl exec -it mongodb-1 -n mongo -- mongosh

Run the following to check the node’s role (isMaster is deprecated in recent MongoDB releases; hello is its replacement):

db.hello()

Conclusion

With StatefulSets, MongoDB can maintain stable identities, ensuring smooth replication and high availability in Kubernetes.

✅ Automatic failover → If a primary node fails, a secondary is promoted.
✅ Stable DNS-based discovery → Ensures seamless communication between replicas.
✅ Persistent storage → Data remains intact across pod restarts.

Want to scale your replica set? Update the replicas count and Kubernetes creates the new pods; the new members then just need to be added to the replica set, as sketched below.
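
For example, to go from three to five members (the new pods still have to be registered with the replica set from the primary):

kubectl scale statefulset mongodb -n mongo --replicas=5

# Then, from the primary (mongodb-0 by default):
kubectl exec -it mongodb-0 -n mongo -- mongosh --eval 'rs.add("mongodb-3.mongodb.mongo.svc.cluster.local:27017"); rs.add("mongodb-4.mongodb.mongo.svc.cluster.local:27017")'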

Have you deployed databases in Kubernetes? Share your experience below!👇

Dynamic Volume Provisioning in Kubernetes: A Practical Guide

Introduction

Storage management in Kubernetes can be challenging, especially when handling stateful applications. Manually provisioning PersistentVolumes (PVs) adds operational overhead and limits scalability. Dynamic Volume Provisioning allows Kubernetes to create storage resources on demand, automating storage allocation for workloads.

In this post, we’ll set up dynamic provisioning using Local Path Provisioner, enabling seamless storage for Kubernetes applications.

Step 1: Deploy Local Path Provisioner

To enable dynamic provisioning, we need a StorageClass that defines how PVs are created. The Local Path Provisioner is a simple and efficient solution for clusters without cloud storage.

Apply Local Path Provisioner (For Minikube or Local Clusters)

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Verify the provisioner is running:

kubectl get pods -n local-path-storage

Expected output:

NAME                           READY   STATUS    RESTARTS   AGE
local-path-provisioner-xx      1/1     Running   0          1m

Step 2: Create a StorageClass

We need a StorageClass to define the storage provisioning behavior. Note that the upstream manifest applied in Step 1 already ships a StorageClass named local-path; applying the one below simply makes its settings explicit and lets you control the reclaim policy.

storage-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

Apply the StorageClass:

kubectl apply -f storage-class.yaml

Verify:

kubectl get storageclass

Expected output:

NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer

Step 3: Create a PersistentVolumeClaim (PVC)

A PersistentVolumeClaim (PVC) requests storage dynamically from the StorageClass.

pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-dynamic-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-path

Apply the PVC:

kubectl apply -f pvc.yaml

Check the PVC status:

kubectl get pvc

Because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the PVC stays Pending until a pod that uses it is scheduled (Step 4). Once that pod is created, the expected output is:

NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS
my-dynamic-pvc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   1Gi        RWO            local-path

Step 4: Deploy a Pod Using the Dynamic Volume

Now, we’ll create a Pod that mounts the dynamically provisioned volume.

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: storage-test
spec:
  containers:
  - name: test-container
    image: busybox
    command: [ "sleep", "3600" ]
    volumeMounts:
    - mountPath: "/data"
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: my-dynamic-pvc

Apply the pod:

kubectl apply -f pod.yaml

Verify the pod:

kubectl get pods

Expected output:

NAME             READY   STATUS    RESTARTS   AGE
storage-test     1/1     Running   0          1m

Step 5: Verify Data Persistence

Enter the pod and create a test file inside the volume:

kubectl exec -it storage-test -- sh

Inside the container:

echo "Hello, Kubernetes!" > /data/testfile.txt
exit

Now, delete the pod:

kubectl delete pod storage-test

Recreate the pod:

kubectl apply -f pod.yaml

Enter the pod again and check if the file persists:

kubectl exec -it storage-test -- cat /data/testfile.txt

Expected output:

Hello, Kubernetes!

This confirms that the PersistentVolume (PV) remains intact even after pod deletion.
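
If you are curious where the data actually lives, the Local Path Provisioner stores volumes under /opt/local-path-provisioner on the node by default; on Minikube you can peek at it with:

minikube ssh -- ls /opt/local-path-provisioner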

Conclusion

Dynamic Volume Provisioning in Kubernetes eliminates manual storage management by allowing on-demand PV creation. This ensures:
✅ Scalability – Storage is provisioned dynamically as needed.
✅ Automation – No manual intervention required.
✅ Persistence – Data is retained across pod restarts.

By implementing Local Path Provisioner and StorageClasses, Kubernetes clusters can achieve seamless, automated storage provisioning, optimizing performance and resource utilization.

Adopt Dynamic Volume Provisioning today and simplify storage management in your Kubernetes environment!

How do you currently handle persistent storage in Kubernetes? Have you faced challenges with dynamic storage provisioning? Drop your thoughts in the comments! 👇

Implementing Backup and Restore for Kubernetes Applications with Velero

Introduction

In modern cloud-native environments, ensuring data protection and disaster recovery is crucial. Kubernetes does not natively offer a comprehensive backup and restore solution, making it necessary to integrate third-party tools. Velero is an open-source tool that enables backup, restoration, and migration of Kubernetes applications and persistent volumes. In this blog post, we will explore setting up Velero on a Kubernetes cluster with MinIO as the storage backend, automating backups, and restoring applications when needed.

Prerequisites

  • A running Kubernetes cluster (Minikube, RKE2, or self-managed cluster)
  • kubectl installed and configured
  • Helm installed for package management
  • MinIO deployed as an object storage backend
  • Velero CLI installed

Step 1: Deploy MinIO as the Backup Storage

MinIO is a high-performance, S3-compatible object storage server, ideal for Kubernetes environments. We will deploy MinIO in the velero namespace.

Deploy MinIO with Persistent Storage

apiVersion: v1
kind: Namespace
metadata:
  name: velero
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
  namespace: velero
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio
          args:
            - server
            - /data
            - --console-address=:9001
          env:
            - name: MINIO_ROOT_USER
              value: "minioadmin"
            - name: MINIO_ROOT_PASSWORD
              value: "minioadmin"
          ports:
            - containerPort: 9000
            - containerPort: 9001
          volumeMounts:
            - name: minio-storage
              mountPath: /data
      volumes:
        - name: minio-storage
          persistentVolumeClaim:
            claimName: minio-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: velero
spec:
  ports:
    - port: 9000
      targetPort: 9000
      name: api
    - port: 9001
      targetPort: 9001
      name: console
  selector:
    app: minio

Create Persistent Volume for MinIO

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
  namespace: velero
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Apply the configurations:

kubectl apply -f minio.yaml

Step 2: Install Velero

We will install Velero using Helm and configure it to use MinIO as a storage backend.

Add Helm Repository and Install Velero

helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
helm install velero vmware-tanzu/velero --namespace velero \
  --set configuration.provider=aws \
  --set configuration.backupStorageLocation.name=default \
  --set configuration.backupStorageLocation.bucket=velero-backup \
  --set configuration.backupStorageLocation.config.s3Url=http://minio.velero.svc.cluster.local:9000 \
  --set configuration.volumeSnapshotLocation.name=default
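
Two caveats, depending on the chart version you get: the AWS object-store plugin (velero-plugin-for-aws) must be registered, and the velero-backup bucket must already exist in MinIO (you can create it through the MinIO console on port 9001). If the Helm values for your chart version differ, an equivalent setup with the Velero CLI looks roughly like this (the plugin tag is illustrative):

velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0 \
  --bucket velero-backup \
  --secret-file ./credentials-velero \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc.cluster.local:9000 \
  --use-volume-snapshots=false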

Step 3: Configure Credentials for Velero

Velero needs credentials to interact with MinIO. We create a Kubernetes secret for this purpose.

apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
  namespace: velero
type: Opaque
stringData:
  # stringData keeps the credentials file readable; Kubernetes base64-encodes it on creation
  credentials-velero: |
    [default]
    aws_access_key_id=minioadmin
    aws_secret_access_key=minioadmin

Apply the secret:

kubectl apply -f credentials.yaml

Restart Velero for the changes to take effect:

kubectl delete pod -n velero -l app.kubernetes.io/name=velero

Step 4: Create a Backup

We now create a backup of a sample namespace.

velero backup create my-backup --include-namespaces=default

To check the backup status:

velero backup get

Step 5: Restore from Backup

In case of failure, we can restore our applications using:

velero restore create --from-backup my-backup

To check the restore status:

velero restore get

Step 6: Automate Backups with a Schedule

To automate backups every 12 hours:

velero schedule create backup-every-12h --schedule "0 */12 * * *"

To list scheduled backups:

velero schedule get

Conclusion

By implementing Velero with MinIO, we have built a complete backup and disaster recovery solution for Kubernetes applications. This setup allows us to automate backups, perform point-in-time recovery, and ensure data protection. In real-world scenarios, it is recommended to:

  • Secure MinIO with external authentication
  • Store backups in an off-cluster storage location
  • Regularly test restoration procedures

By integrating Velero into your Kubernetes environment, you enhance resilience and minimize data loss risks. Start implementing backups today to safeguard your critical applications!

Stay tuned for more Kubernetes insights! If you have any issues? Let’s discuss in the comments!👇

Running Stateful Applications in Kubernetes: MySQL Cluster Tutorial

Introduction

Running databases in Kubernetes introduces unique challenges, such as state persistence, leader election, and automatic failover. Unlike stateless applications, databases require stable storage, unique identities, and careful management of scaling and replication.

In this tutorial, we will set up a highly available MySQL cluster using a StatefulSet, which ensures:
✅ Unique identities for MySQL instances (mysql-0, mysql-1, mysql-2)
✅ Persistent storage for database files
✅ Automatic failover for database resilience
✅ MySQL GTID-based replication for data consistency

Step 1: Create a ConfigMap for MySQL Configuration

We first define a ConfigMap to store MySQL settings. The server-id must differ for each pod; the ${MYSQL_SERVER_ID} placeholder below is rendered per pod by an init container (mysqld itself does not expand environment variables in my.cnf), as sketched after the StatefulSet in Step 3.

Create a file mysql-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
  namespace: default
data:
  my.cnf: |
    [mysqld]
    server-id=${MYSQL_SERVER_ID}
    log_bin=mysql-bin
    binlog_format=ROW
    enforce-gtid-consistency=ON
    gtid-mode=ON
    master-info-repository=TABLE
    relay-log-info-repository=TABLE
    log_slave_updates=ON
    read-only=ON
    skip-name-resolve

Apply the ConfigMap

kubectl apply -f mysql-config.yaml

Step 2: Create a Persistent Volume Claim (PVC)

Databases require persistent storage to avoid losing data when pods restart.

Create a file mysql-storage.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Apply the PVC

kubectl apply -f mysql-storage.yaml

Step 3: Deploy MySQL Cluster with StatefulSet

Now, we deploy a StatefulSet to ensure stable pod names and persistent storage.

Create a file mysql-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: default
spec:
  serviceName: "mysql"
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "yourpassword"
        - name: MYSQL_REPLICATION_USER
          value: "replica"
        - name: MYSQL_REPLICATION_PASSWORD
          value: "replicapassword"
        - name: MYSQL_SERVER_ID
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
        - name: config-volume
          mountPath: /etc/mysql/conf.d
      volumes:
      - name: config-volume
        configMap:
          name: mysql-config
  volumeClaimTemplates:
  - metadata:
      name: mysql-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Apply the StatefulSet

kubectl apply -f mysql-statefulset.yaml
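
One caveat before relying on the server-id placeholder: mysqld does not expand environment variables inside my.cnf, so server-id=${MYSQL_SERVER_ID} will not resolve as written. A common workaround (a sketch with illustrative names, not part of the manifest above) is an initContainer that renders the config per pod from its ordinal:

      initContainers:
      - name: init-mysql-config
        image: busybox
        command:
        - sh
        - -c
        - |
          # The pod name ends in its ordinal, e.g. mysql-2 -> server-id=3 (server-id must be > 0)
          ordinal=${HOSTNAME##*-}
          cp /mnt/config-map/my.cnf /mnt/conf.d/my.cnf
          sed -i "s/\${MYSQL_SERVER_ID}/$((ordinal + 1))/" /mnt/conf.d/my.cnf
        volumeMounts:
        - name: config-volume
          mountPath: /mnt/config-map
        - name: conf-d
          mountPath: /mnt/conf.d

Here conf-d would be an extra emptyDir volume in the pod spec, mounted at /etc/mysql/conf.d in the mysql container in place of config-volume (which moves to /mnt/config-map).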

Step 4: Create a Headless Service for MySQL

A headless service is required for MySQL nodes to discover each other.

Create a file mysql-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: default
spec:
  selector:
    app: mysql
  clusterIP: None
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306

Apply the Service

kubectl apply -f mysql-service.yaml

Step 5: Verify the Cluster Setup

Check MySQL Pods

kubectl get pods -l app=mysql

Expected Output:

NAME      READY   STATUS    RESTARTS   AGE
mysql-0   1/1     Running   0          1m
mysql-1   1/1     Running   0          1m
mysql-2   1/1     Running   0          1m

Check MySQL Configuration in Each Pod

kubectl exec -it mysql-0 -- sh -c 'grep server-id /etc/mysql/conf.d/my.cnf'

Expected output (once the placeholder has been rendered per pod, e.g. by the init container sketched in Step 3):

server-id=1

Step 6: Set Up MySQL Replication

Configure the Primary (mysql-0)

Log in to mysql-0:

kubectl exec -it mysql-0 -- mysql -u root -p

Run:

CREATE USER 'replica'@'%' IDENTIFIED WITH mysql_native_password BY 'replicapassword';
GRANT REPLICATION SLAVE ON *.* TO 'replica'@'%';
FLUSH PRIVILEGES;
SHOW MASTER STATUS;

Note down the File and Position values.

Configure the Replicas (mysql-1, mysql-2)

Log in to mysql-1 and mysql-2:

kubectl exec -it mysql-1 -- mysql -u root -p

Run:

CHANGE MASTER TO
MASTER_HOST='mysql-0.mysql.default.svc.cluster.local',
MASTER_USER='replica',
MASTER_PASSWORD='replicapassword',
MASTER_LOG_FILE='mysql-bin.000001', -- Replace with actual file
MASTER_LOG_POS=12345; -- Replace with actual position
START SLAVE;
SHOW SLAVE STATUS\G

Since GTID mode is enabled in the ConfigMap, you can alternatively replace the MASTER_LOG_FILE/MASTER_LOG_POS pair with MASTER_AUTO_POSITION=1 and skip noting the file and position.

Expected Output:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Step 7: Test Failover

To simulate a primary failure, delete the MySQL leader:

kubectl delete pod mysql-0

Expected behavior:

  • Kubernetes restarts mysql-0.
  • Replicas continue serving reads.
  • A new primary can be promoted manually (a minimal sketch follows).
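
A minimal manual promotion of mysql-1 might look like this (a sketch only; a production setup would delegate this to an operator or orchestrator):

kubectl exec -it mysql-1 -- mysql -u root -p

Then, inside the MySQL shell on mysql-1:

STOP SLAVE;
RESET SLAVE ALL;
SET GLOBAL read_only = OFF;

The remaining replica then has to be re-pointed at the new primary with another CHANGE MASTER TO statement.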

Conclusion

In this tutorial, we built a highly available MySQL cluster in Kubernetes using:
✅ StatefulSet for stable identities
✅ Persistent Volumes for data durability
✅ ConfigMaps for centralized configuration
✅ Replication for fault tolerance

This setup ensures automatic recovery and high availability while maintaining data consistency in a Kubernetes environment. What do you think? Let’s discuss in the comments!👇

ExternalDNS: Automating DNS Management for Kubernetes Services

Introduction

Managing DNS records manually in Kubernetes can be time-consuming and error-prone. As services scale and change dynamically, updating DNS records manually becomes inefficient. ExternalDNS automates DNS record management by dynamically syncing records with Kubernetes objects.

In this blog, we will cover:
✅ What is ExternalDNS?
✅ How it works with Kubernetes
✅ Steps to deploy and configure it
✅ Best practices for seamless automation

What is ExternalDNS?

ExternalDNS is a Kubernetes add-on that automatically manages DNS records for services and ingress resources. It eliminates manual updates by dynamically syncing DNS records with Kubernetes objects.

Key Benefits:

  • Automated DNS Updates – No manual intervention required.
  • Multi-Cloud Support – Works with AWS Route 53, Cloudflare, Google Cloud DNS, etc.
  • Scalability – Adapts to dynamic changes in Kubernetes services.
  • Improved Reliability – Reduces misconfiguration and ensures consistency.

Deploying ExternalDNS in Kubernetes

Install ExternalDNS using Helm

helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update

For AWS Route 53 (this assumes the ExternalDNS pod has access to Route 53 credentials, for example through an IAM role on the nodes or IRSA):

helm install external-dns external-dns/external-dns \
  --namespace kube-system \
  --set provider=aws \
  --set txtOwnerId="my-cluster"

For Cloudflare:

helm install external-dns external-dns/external-dns \
  --namespace kube-system \
  --set provider=cloudflare \
  --set txtOwnerId="my-cluster"

With the kubernetes-sigs chart the Cloudflare API token is not a regular chart value; it is expected as the CF_API_TOKEN environment variable on the ExternalDNS container, typically injected from a Kubernetes Secret through the chart's env values. Check the values file for your chart version before installing.

Verify Installation

kubectl get pods -n kube-system -l app.kubernetes.io/name=external-dns

Configuring ExternalDNS for Kubernetes Services

Service Example (LoadBalancer Type)

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080

Apply the service:

kubectl apply -f service.yaml

Configuring ExternalDNS for Ingress Resources

Ingress Example

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80

Apply the Ingress resource:

kubectl apply -f ingress.yaml

Verifying DNS Records

Check ExternalDNS Logs

kubectl logs -l app.kubernetes.io/name=external-dns -n kube-system

Validate DNS Resolution

dig myapp.example.com

Expected output should contain the correct A record pointing to your service.

Conclusion

ExternalDNS simplifies DNS management in Kubernetes by automating record updates, reducing manual errors, and ensuring service availability.

Key Takeaways:

✅ Automates DNS record creation and updates
✅ Works with multiple cloud DNS providers
✅ Integrates seamlessly with Kubernetes services and ingress

By integrating ExternalDNS, Kubernetes administrators can enhance scalability, automation, and reliability in their infrastructure.

Have you used ExternalDNS in your Kubernetes setup? Share your experience!👇

Implementing mTLS in Kubernetes with Cert-Manager

Introduction

Securing internal communication between services in Kubernetes is a critical security practice. Mutual TLS (mTLS) ensures encrypted traffic while also verifying the identity of both the client and server. In this guide, we will configure mTLS between two microservices using Cert-Manager for automated certificate issuance and renewal.

Problem Statement

By default, Kubernetes services communicate in plaintext, making them vulnerable to man-in-the-middle attacks. We need a solution that:

  • Encrypts communication between services.
  • Ensures only trusted services can talk to each other.
  • Automates certificate management to avoid manual rotation.

Solution: mTLS with Cert-Manager

We will deploy:
✅ A Certificate Authority (CA) to issue certificates.
✅ A Kubernetes Issuer to generate TLS certificates.
✅ Two microservices (App One and App Two) configured with mTLS.
✅ A test pod to verify secure service-to-service communication.

Step 1: Install Cert-Manager

Cert-Manager automates TLS certificate lifecycle management. If you haven’t installed it yet, deploy it using Helm:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true

Verify installation:

kubectl get pods -n cert-manager

Step 2: Configure a Certificate Authority (CA)

First, we need a CA to issue certificates for our services.

Create a self-signed root CA:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: root-ca
  namespace: default   # the CA secret must live in the same namespace as the Issuer created in Step 3
spec:
  secretName: root-ca-secret
  isCA: true
  duration: 8760h    # 1 year; cert-manager durations use Go syntax (h/m/s), not "d"
  renewBefore: 720h  # 30 days
  subject:
    organizations:
      - MyOrg
  commonName: root-ca
  privateKey:
    algorithm: RSA
    size: 2048
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer

Apply it:

kubectl apply -f ca.yaml

Step 3: Issue TLS Certificates for Services

Now, let’s create an Issuer that will generate certificates signed by our CA:

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: ca-issuer
  namespace: default
spec:
  ca:
    secretName: root-ca-secret

Apply it:

kubectl apply -f issuer.yaml

Now, request certificates for App One and App Two:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-one-tls
  namespace: default
spec:
  secretName: app-one-tls-secret
  duration: 2160h   # 90 days
  renewBefore: 360h # renew 15 days before expiry (must be shorter than duration)
  issuerRef:
    name: ca-issuer
    kind: Issuer
  dnsNames:
    - app-one.default.svc.cluster.local
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-two-tls
  namespace: default
spec:
  secretName: app-two-tls-secret
  duration: 2160h   # 90 days
  renewBefore: 360h # renew 15 days before expiry (must be shorter than duration)
  issuerRef:
    name: ca-issuer
    kind: Issuer
  dnsNames:
    - app-two.default.svc.cluster.local

Apply it:

kubectl apply -f app-certs.yaml

Step 4: Deploy the Services with TLS

Now, let’s deploy App One and App Two, mounting the certificates.

App One Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
  labels:
    app: app-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-one
  template:
    metadata:
      labels:
        app: app-one
    spec:
      containers:
      - name: app-one
        image: nginx
        ports:
        - containerPort: 443
        volumeMounts:
        - name: tls
          mountPath: "/etc/tls"
          readOnly: true
      volumes:
      - name: tls
        secret:
          secretName: app-one-tls-secret

Apply it:

kubectl apply -f app-one.yaml

App Two Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-two
  labels:
    app: app-two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-two
  template:
    metadata:
      labels:
        app: app-two
    spec:
      containers:
      - name: app-two
        image: nginx
        ports:
        - containerPort: 443
        volumeMounts:
        - name: tls
          mountPath: "/etc/tls"
          readOnly: true
      volumes:
      - name: tls
        secret:
          secretName: app-two-tls-secret

Apply it:

kubectl apply -f app-two.yaml
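
Note that the stock nginx image does not terminate TLS by itself, and no Service has been created for the apps yet, so a little extra wiring is needed before the test in Step 5 works end to end. Below is a minimal sketch (the ConfigMap name and response text are illustrative, not part of the manifests above): an nginx config for App Two that serves HTTPS with its cert-manager certificate and requires a client certificate signed by our CA, plus a ClusterIP Service. The ConfigMap would be mounted into the app-two container at /etc/nginx/conf.d, alongside the existing /etc/tls mount.

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-two-nginx-conf
  namespace: default
data:
  default.conf: |
    server {
      listen 443 ssl;
      # Server certificate and key issued by cert-manager
      ssl_certificate        /etc/tls/tls.crt;
      ssl_certificate_key    /etc/tls/tls.key;
      # Require client certificates signed by our root CA (the "mutual" part of mTLS)
      ssl_client_certificate /etc/tls/ca.crt;
      ssl_verify_client      on;
      location / {
        return 200 "Hello from App Two\n";
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: app-two
  namespace: default
spec:
  selector:
    app: app-two
  ports:
  - port: 443
    targetPort: 443

The same pattern applies to app-one if it should also serve TLS.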

Step 5: Test mTLS Communication

We will now test service-to-service communication using mTLS.

Run a test pod with curl. Note that this pod has no certificates of its own, so for the command below to work you need to mount the client certificate and CA from one of the issued secrets (for example app-one-tls-secret) into the pod at /etc/tls:

kubectl run curl-test --rm -it --image=curlimages/curl -- /bin/sh

Inside the pod, run:

curl --cacert /etc/tls/ca.crt --cert /etc/tls/tls.crt --key /etc/tls/tls.key https://app-two.default.svc.cluster.local:443

If App Two's nginx is configured for TLS with client-certificate verification (see the sketch in Step 4), the expected output is:

Hello from App Two

If the request fails, check the nginx configuration, the mounted secrets, and that the app-two Service and ports are in place.

Conclusion

With this setup, we’ve successfully implemented Mutual TLS (mTLS) in Kubernetes using Cert-Manager.

✅ Encrypted Communication – All traffic is secured via TLS.
✅ Mutual Authentication – Both services verify each other.
✅ Automated Certificate Lifecycle – Cert-Manager handles issuance & renewal.

Have you implemented mTLS in your Kubernetes clusters? Share your experiences in the comments! 👇

Setting Up an Ingress Controller with Advanced Routing in Kubernetes

Introduction

In a Kubernetes environment, managing external access to multiple services efficiently is crucial. This post walks through setting up the NGINX Ingress Controller in minikube with advanced routing rules, including authentication and rate limiting. By the end of this guide, you’ll have a working setup where different services are exposed through a single Ingress with sophisticated HTTP routing.

Step 1: Deploying the NGINX Ingress Controller

Minikube does not include an Ingress controller by default, so we need to enable it:

minikube addons enable ingress

Verify the Ingress controller is running (recent Minikube versions deploy the addon into the ingress-nginx namespace; on older releases it lives in kube-system):

kubectl get pods -n ingress-nginx

Expected output:

NAME                            READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-xx     1/1     Running   0          2m

Step 2: Deploying Sample Applications

We’ll create two sample deployments with simple HTTP responses.

app-one.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
  labels:
    app: app-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-one
  template:
    metadata:
      labels:
        app: app-one
    spec:
      containers:
      - name: app-one
        image: hashicorp/http-echo
        args:
        - "-text=Hello from App One"
        ports:
        - containerPort: 5678   # http-echo listens on 5678 by default
---
apiVersion: v1
kind: Service
metadata:
  name: app-one
spec:
  selector:
    app: app-one
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678

app-two.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-two
  labels:
    app: app-two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-two
  template:
    metadata:
      labels:
        app: app-two
    spec:
      containers:
      - name: app-two
        image: hashicorp/http-echo
        args:
        - "-text=Hello from App Two"
        ports:
        - containerPort: 5678   # http-echo listens on 5678 by default
---
apiVersion: v1
kind: Service
metadata:
  name: app-two
spec:
  selector:
    app: app-two
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678

Apply these configurations:

kubectl apply -f app-one.yaml
kubectl apply -f app-two.yaml

Step 3: Creating the Advanced Ingress

Now, we’ll create an Ingress resource to route traffic based on the request path.

advanced-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: advanced-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/limit-rps: "5"  # Rate limiting: max 5 requests per second
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /app-one
        pathType: Prefix
        backend:
          service:
            name: app-one
            port:
              number: 80
      - path: /app-two
        pathType: Prefix
        backend:
          service:
            name: app-two
            port:
              number: 80

Apply the Ingress:

kubectl apply -f advanced-ingress.yaml

Check if the Ingress is created:

kubectl get ingress

Expected output:

NAME               CLASS   HOSTS         ADDRESS        PORTS   AGE
advanced-ingress   nginx   myapp.local   192.168.49.2   80      5s

Step 4: Testing the Ingress

First, update your /etc/hosts file to map myapp.local to Minikube’s IP:

echo "$(minikube ip) myapp.local" | sudo tee -a /etc/hosts

Test the routes:

curl -H "Host: myapp.local" http://myapp.local/app-one
curl -H "Host: myapp.local" http://myapp.local/app-two

Expected responses:

Hello from App One
Hello from App Two
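
To see the rate limit in action, send a burst of requests to one of the paths; by default ingress-nginx answers requests that exceed the limit with HTTP 503:

for i in $(seq 1 50); do
  curl -s -o /dev/null -w "%{http_code}\n" -H "Host: myapp.local" http://myapp.local/app-one
done

Depending on how fast the loop runs and on the burst multiplier, you should start seeing 503 responses mixed in with the 200s.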

Step 5: Enforcing Basic Authentication

To secure access, we add Basic Authentication for app-one.

First, create a username-password pair with htpasswd and store it in a Secret (the key inside the Secret must be named auth for the NGINX Ingress auth annotations to pick it up):

htpasswd -c auth admin
kubectl create secret generic my-auth-secret --from-file=auth -n default

Modify advanced-ingress.yaml to enforce authentication:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: advanced-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/limit-rps: "5"
    nginx.ingress.kubernetes.io/auth-type: "basic"
    nginx.ingress.kubernetes.io/auth-secret: "my-auth-secret"
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /app-one
        pathType: Prefix
        backend:
          service:
            name: app-one
            port:
              number: 80
      - path: /app-two
        pathType: Prefix
        backend:
          service:
            name: app-two
            port:
              number: 80

Reapply the Ingress:

kubectl apply -f advanced-ingress.yaml

Test authentication:

curl -u admin:your-password -H "Host: myapp.local" http://myapp.local/app-one

If correct, it will return:

Hello from App One

Without credentials, it returns:

401 Unauthorized

Conclusion

By following this guide, we have:

✅ Deployed an NGINX Ingress Controller in Minikube.
✅ Configured multiple applications behind a single Ingress resource.
✅ Implemented rate limiting to control excessive requests.
✅ Secured an endpoint using Basic Authentication.

These techniques are essential when deploying microservices in production environments. You can further extend this setup with TLS termination, JWT authentication, or OAuth integration.

Let me know in the comments if you have any questions!👇

Practical Traffic Splitting and Canary Deployments with Istio

Introduction

As applications evolve, releasing new versions safely is crucial. Traditional deployment methods often risk downtime or entire system failures if a new release is faulty. Canary deployments allow gradual rollout of new versions while monitoring performance.

With Istio, we can implement traffic splitting to control how much traffic goes to each version, ensuring a smooth transition without disruptions.

In this post, we’ll walk through:
✅ Setting up Istio in Minikube
✅ Deploying two versions of an app
✅ Using Istio’s VirtualService and DestinationRule for canary rollout

Step 1: Install and Configure Istio in Minikube

Since we are working in a local Minikube cluster, first enable Istio:

minikube start
istioctl install --set profile=demo -y
kubectl label namespace default istio-injection=enabled

The istio-injection=enabled label ensures that Istio automatically injects sidecar proxies into our pods.

Step 2: Deploy Application Versions

We’ll deploy two versions of our application (v1 and v2). The myapp:v1 and myapp:v2 images are assumed to be built locally into Minikube’s Docker daemon (for example after running eval $(minikube docker-env)), so no registry is needed.

Create myapp:v1 Deployment

Save the following YAML as deployment-v1.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
        - name: myapp
          image: myapp:v1  # Using locally built image
          ports:
            - containerPort: 80

Create myapp:v2 Deployment

Save the following YAML as deployment-v2.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
        - name: myapp
          image: myapp:v2  # Using locally built image
          ports:
            - containerPort: 80

Apply both deployments:

kubectl apply -f deployment-v1.yaml
kubectl apply -f deployment-v2.yaml

Step 3: Define an Istio Service

We need a service to route traffic to both versions.

Save the following as service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Apply the service:

kubectl apply -f service.yaml

Step 4: Create Istio VirtualService for Traffic Splitting

Now, let’s configure Istio to split traffic between v1 and v2.

Save the following YAML as virtual-service.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp
  http:
    - route:
        - destination:
            host: myapp
            subset: v1
          weight: 80
        - destination:
            host: myapp
            subset: v2
          weight: 20

This configuration sends 80% of traffic to v1 and 20% to v2.
Apply the VirtualService:

kubectl apply -f virtual-service.yaml

Step 5: Define an Istio DestinationRule

To allow version-based routing, we need a DestinationRule.

Save the following YAML as destination-rule.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

Apply the DestinationRule:

kubectl apply -f destination-rule.yaml

Step 6: Test Traffic Splitting

Now, let’s check the traffic distribution:

kubectl run -it --rm --image=curlimages/curl test -- curl http://myapp

Run this multiple times; you should see roughly 80% of the responses coming from v1 and 20% from v2 (a quick way to count them is sketched below).
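
A quick way to eyeball the split (assuming the locally built myapp:v1 and myapp:v2 images return responses that identify their version) is to fire a batch of requests from a single test pod and count the answers:

kubectl run curl-test --rm -i --restart=Never --image=curlimages/curl --command -- \
  sh -c 'for i in $(seq 1 50); do curl -s http://myapp; echo; done | sort | uniq -c'

Since the default namespace has sidecar injection enabled, the injected proxy may keep the pod around after the loop finishes; delete it with kubectl delete pod curl-test if so.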

Conclusion

By implementing Istio’s VirtualService and DestinationRule, we successfully built a canary deployment that gradually rolls out a new version without impacting all users at once.

Key Takeaways:
✅ Istio simplifies traffic control for Kubernetes applications.
✅ Canary deployments allow safe testing of new versions.
✅ Traffic splitting can be adjusted dynamically as confidence in v2 increases.

This approach ensures zero downtime deployments, improving stability and user experience. 

What’s your experience with Istio? Drop a comment below!👇

Implementing Istio: A Step-by-Step Service Mesh Tutorial

Introduction

Modern applications rely on microservices, making service-to-service communication complex. Managing traffic routing, security, and observability becomes crucial.

Istio is a powerful service mesh that provides:
✅ Traffic Management – Fine-grained control over requests.
✅ Security – Mutual TLS (mTLS) for encrypted communication.
✅ Observability – Insights into service interactions and performance.

This step-by-step guide covers:

  • Installing Istio on a Kubernetes cluster.
  • Deploying microservices with Istio sidecars.
  • Configuring traffic routing and security.
  • Enabling monitoring with Grafana, Kiali, and Jaeger.

Step 1: Install Istio in Kubernetes

1.1 Download and Install Istio CLI

curl -L https://istio.io/downloadIstio | sh -
cd istio-*
export PATH=$PWD/bin:$PATH

1.2 Install Istio with the Demo Profile

istioctl install --set profile=demo -y

1.3 Enable Istio Injection

Enable automatic sidecar injection in the default namespace:

kubectl label namespace default istio-injection=enabled

Step 2: Deploy Microservices with Istio

We will deploy two microservices:
web – Calls the api service.
api – Responds with “Hello from API”.

2.1 Deploy web Service

Create web-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80

Create web-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Apply the deployment:

kubectl apply -f web-deployment.yaml
kubectl apply -f web-service.yaml

2.2 Deploy api Service

Create api-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: hashicorp/http-echo
        args: ["-text=Hello from API"]
        ports:
        - containerPort: 5678

Create api-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678

Apply the deployment:

kubectl apply -f api-deployment.yaml
kubectl apply -f api-service.yaml
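
Before adding any routing rules, you can check that the api Service answers from inside the mesh with a throwaway curl pod:

kubectl run curl-test --rm -i --restart=Never --image=curlimages/curl -- curl -s http://api

Expected output:

Hello from API

If the injected sidecar keeps the pod alive after curl exits, remove it with kubectl delete pod curl-test.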

Step 3: Configure Istio Traffic Routing

3.1 Create a VirtualService for Traffic Control

Create api-virtualservice.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api
spec:
  hosts:
  - api
  http:
  - route:
    - destination:
        host: api

Routing to a named subset (for example subset: v1) would additionally require a version label on the api pods and a matching DestinationRule, as shown in the canary deployment post above.

Apply the rule:

kubectl apply -f api-virtualservice.yaml

Step 4: Enable Observability & Monitoring

4.1 Install Kiali, Jaeger, Prometheus, and Grafana

kubectl apply -f samples/addons

4.2 Access the Monitoring Dashboards

kubectl port-forward svc/kiali 20001 -n istio-system

Open http://localhost:20001 to view the Kiali dashboard.
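
The other dashboards from samples/addons are exposed the same way; for example, Grafana listens on port 3000:

kubectl port-forward svc/grafana 3000:3000 -n istio-system

Open http://localhost:3000 to view the Grafana dashboards.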

Step 5: Secure Service-to-Service Communication

5.1 Enable mTLS Between Services

Create peerauthentication.yaml:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
spec:
  mtls:
    mode: STRICT

Apply the policy (with no namespace set, kubectl creates it in the default namespace, which enforces STRICT mTLS for the workloads there; creating the same resource in istio-system instead would apply it mesh-wide):

kubectl apply -f peerauthentication.yaml

Conclusion

We have successfully:
✅ Installed Istio and enabled sidecar injection.
✅ Deployed microservices inside the service mesh.
✅ Configured traffic routing using VirtualServices.
✅ Enabled observability tools like Grafana, Jaeger, and Kiali.
✅ Secured communication using mTLS encryption.

Istio simplifies microservices networking while enhancing security and visibility. Start using it today!

Are you using Istio in production? Share your experiences below!👇