Setting Up and Troubleshooting Rook-Ceph on Minikube

Introduction

Storage is a critical component in Kubernetes environments, and Rook-Ceph is a powerful storage orchestrator that brings Ceph into Kubernetes clusters. In this post, we’ll go through deploying Rook-Ceph on Minikube, setting up CephFS, and resolving common issues encountered during the process.

Prerequisites

Before we begin, ensure you have the following installed on your machine:

  • Minikube (running Kubernetes v1.32.0)
  • Kubectl
  • Helm

If you don’t already have a cluster running, start one with:

minikube start --memory=4096 --cpus=2 --disk-size=40g
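
Note: Rook’s OSDs need an unused (raw) disk or partition to consume; if the Minikube VM only has its formatted boot disk, no OSDs will be created. With some drivers you can attach a spare disk at start time (a hedged example; the --extra-disks flag is only implemented for drivers such as kvm2, hyperkit, and qemu2):

minikube start --memory=4096 --cpus=2 --disk-size=40g --extra-disks=1 --driver=kvm2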

Enable required Minikube features:

minikube addons enable default-storageclass
minikube addons enable storage-provisioner

Step 1: Install Rook-Ceph with Helm

Add the Rook Helm repository:

helm repo add rook-release https://charts.rook.io/release
helm repo update

Install Rook-Ceph Operator

helm install rook-ceph rook-release/rook-ceph --namespace rook-ceph --create-namespace

Verify the operator is running:

kubectl -n rook-ceph get pods

Expected output:

NAME                            READY   STATUS    RESTARTS   AGE
rook-ceph-operator-59dcf6d55b   1/1     Running   0          1m

Step 2: Deploy the Ceph Cluster

Now, create a CephCluster resource to deploy Ceph in Kubernetes.

Create ceph-cluster.yaml

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: my-ceph-cluster
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18.2.0
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: true
  storage:
    useAllNodes: true
    useAllDevices: true

Apply the Ceph Cluster Configuration

kubectl apply -f ceph-cluster.yaml

Check the CephCluster status:

kubectl -n rook-ceph get cephcluster

Expected output (once ready):

NAME              DATADIRHOSTPATH   MONCOUNT   AGE   PHASE     MESSAGE      HEALTH
my-ceph-cluster   /var/lib/rook     3          5m    Ready     Cluster OK   HEALTH_OK
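
To poke at Ceph directly, you can deploy the Rook toolbox pod and run the ceph CLI from it. A quick sketch (the manifest URL assumes the standard layout of the Rook repository; pin it to the release matching your operator):

kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd tree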

Step 3: Set Up CephFS (Ceph Filesystem)

Create cephfs.yaml

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: my-cephfs
  namespace: rook-ceph
spec:
  metadataServer:
    activeCount: 1
    activeStandby: true
  dataPools:
    - replicated:
        size: 3
  metadataPool:
    replicated:
      size: 3

Apply CephFS Configuration

kubectl apply -f cephfs.yaml

Verify the filesystem:

kubectl -n rook-ceph get cephfilesystem

Expected output:

NAME        ACTIVEMDS   AGE   PHASE
my-cephfs   1           2m    Ready

Step 4: Create a StorageClass for CephFS

Create cephfs-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: my-cephfs
  pool: my-cephfs-data0
  mounter: fuse
  # CSI secrets created by the Rook operator; required for provisioning and mounting
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete

Apply the StorageClass Configuration

kubectl apply -f cephfs-storageclass.yaml

List available StorageClasses:

kubectl get storageclass

Step 5: Create a PersistentVolumeClaim (PVC)

Create pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs

Apply the PVC Configuration

kubectl apply -f pvc.yaml

Check if the PVC is bound:

kubectl get pvc

Expected output:

NAME            STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-cephfs-pvc   Bound    pvc-xyz  1Gi        RWX            rook-cephfs    1m

Step 6: Deploy a Test Pod Using CephFS Storage

Create cephfs-test-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: cephfs-app
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: "/mnt"
      name: mypvc
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: my-cephfs-pvc

Deploy the Test Pod

kubectl apply -f cephfs-test-pod.yaml

Check if the pod is running:

kubectl get pods
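
Once the pod reports Running, a quick write/read inside the mount confirms the CephFS volume actually works:

kubectl exec -it cephfs-app -- sh -c 'echo "hello from cephfs" > /mnt/hello.txt && cat /mnt/hello.txt'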

Troubleshooting Issues

Issue: CephCluster Stuck in “Progressing” State

Run:

kubectl -n rook-ceph describe cephcluster my-ceph-cluster

If you see an error like:

failed the ceph version check: the version does not meet the minimum version "18.2.0-0 reef"

Solution:

  1. Ensure the cephVersion image in ceph-cluster.yaml is v18.2.0 (Reef) or newer, as required by this Rook release.
  2. Restart the Rook operator so it re-runs the version check: kubectl -n rook-ceph delete pod -l app=rook-ceph-operator

Issue: CephCluster Doesn’t Delete

If kubectl delete cephcluster hangs, remove the finalizers (note that this bypasses Rook’s cleanup logic, so leftover state may remain on the node):

kubectl -n rook-ceph patch cephcluster my-ceph-cluster --type='merge' -p '{"metadata":{"finalizers":null}}'

Then force delete:

kubectl -n rook-ceph delete cephcluster my-ceph-cluster --wait=false
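
If you intend to redeploy the cluster afterwards, also clear Rook’s state directory inside the Minikube VM; the operator will not reuse a dataDirHostPath that still contains data from a previous cluster:

minikube ssh "sudo rm -rf /var/lib/rook"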

Conclusion

Setting up Rook-Ceph on Minikube provides a robust storage solution within Kubernetes. However, issues such as Ceph version mismatches and stuck deletions are common. By following this guide, you can successfully deploy CephFS, configure a persistent storage layer, and troubleshoot issues effectively.

Would you like a deep dive into RBD or Object Storage in Rook-Ceph next? Let me know in the comments!👇

Setting Up MongoDB Replication in Kubernetes with StatefulSets

Introduction

Running databases in Kubernetes comes with challenges such as maintaining persistent state, stable network identities, and automated recovery. For MongoDB, high availability and data consistency are critical, making replication a fundamental requirement.

In this guide, we’ll deploy a MongoDB Replica Set in Kubernetes using StatefulSets, ensuring that each MongoDB instance maintains a stable identity, persistent storage, and seamless failover.

Why StatefulSets for MongoDB?

Unlike Deployments, which assign dynamic pod names, StatefulSets ensure stable, ordered identities, making them ideal for running database clusters.

✅ Stable Hostnames → Essential for MongoDB replica set communication
✅ Persistent Storage → Ensures data consistency across restarts
✅ Automatic Scaling → Easily add more replica nodes
✅ Pod Ordering & Control → Ensures correct initialization sequence

Prerequisites

Before proceeding, ensure:

  • A running Kubernetes cluster (Minikube, RKE2, or any self-managed setup)
  • kubectl installed and configured
  • A StorageClass for persistent volumes

Step 1: Create a ConfigMap for MongoDB Initialization

A ConfigMap helps configure MongoDB startup settings.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-config
  namespace: mongo
data:
  mongod.conf: |
    storage:
      dbPath: /data/db
    net:
      bindIp: 0.0.0.0
    replication:
      replSetName: rs0

Create the mongo namespace (if it does not already exist) and apply the ConfigMap:

kubectl create namespace mongo
kubectl apply -f mongodb-configmap.yaml

Step 2: Define a Headless Service

A headless service ensures stable DNS resolution for MongoDB pods.

apiVersion: v1
kind: Service
metadata:
  name: mongodb
  namespace: mongo
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
    - name: mongo
      port: 27017

Apply the Service:

kubectl apply -f mongodb-service.yaml

Step 3: Deploy MongoDB with a StatefulSet

We define a StatefulSet with three MongoDB replicas.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
  namespace: mongo
spec:
  serviceName: mongodb
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongo
          image: mongo:6.0
          command:
            - "mongod"
            - "--config"
            - "/config/mongod.conf"
          volumeMounts:
            - name: config-volume
              mountPath: /config
            - name: mongo-data
              mountPath: /data/db
          ports:
            - containerPort: 27017
      volumes:
        - name: config-volume
          configMap:
            name: mongodb-config
  volumeClaimTemplates:
    - metadata:
        name: mongo-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi

Apply the StatefulSet:

kubectl apply -f mongodb-statefulset.yaml

Step 4: Initialize the MongoDB Replica Set

Once all pods are running, initialize the replica set from the first MongoDB pod:

kubectl exec -it mongodb-0 -n mongo -- mongosh

Inside the MongoDB shell, run:

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb-0.mongodb.mongo.svc.cluster.local:27017" },
    { _id: 1, host: "mongodb-1.mongodb.mongo.svc.cluster.local:27017" },
    { _id: 2, host: "mongodb-2.mongodb.mongo.svc.cluster.local:27017" }
  ]
})

Check the replica set status:

rs.status()

Step 5: Verify Replication

To verify the secondary nodes, log into any of the replica pods:

kubectl exec -it mongodb-1 -n mongo -- mongosh

Run the following to check the node’s role:

rs.isMaster()

Conclusion

With StatefulSets, MongoDB can maintain stable identities, ensuring smooth replication and high availability in Kubernetes.

✅ Automatic failover → If a primary node fails, a secondary is promoted.
✅ Stable DNS-based discovery → Ensures seamless communication between replicas.
✅ Persistent storage → Data remains intact across pod restarts.

Want to scale your replica set? Increase the replicas count and Kubernetes creates the new pods with stable names and storage; you then register them with the replica set, as shown below.
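
For example (a sketch, assuming mongodb-0 is still the primary):

kubectl scale statefulset mongodb -n mongo --replicas=5

kubectl exec -it mongodb-0 -n mongo -- mongosh --eval '
  rs.add("mongodb-3.mongodb.mongo.svc.cluster.local:27017");
  rs.add("mongodb-4.mongodb.mongo.svc.cluster.local:27017");
'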

Have you deployed databases in Kubernetes? Share your experience below!👇

Dynamic Volume Provisioning in Kubernetes: A Practical Guide

Introduction

Storage management in Kubernetes can be challenging, especially when handling stateful applications. Manually provisioning PersistentVolumes (PVs) adds operational overhead and limits scalability. Dynamic Volume Provisioning allows Kubernetes to create storage resources on demand, automating storage allocation for workloads.

In this post, we’ll set up dynamic provisioning using Local Path Provisioner, enabling seamless storage for Kubernetes applications.

Step 1: Deploy Local Path Provisioner

To enable dynamic provisioning, we need a StorageClass that defines how PVs are created. The Local Path Provisioner is a simple and efficient solution for clusters without cloud storage.

Apply Local Path Provisioner (For Minikube or Local Clusters)

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Verify the provisioner is running:

kubectl get pods -n local-path-storage

Expected output:

NAME                           READY   STATUS    RESTARTS   AGE
local-path-provisioner-xx      1/1     Running   0          1m

Step 2: Create a StorageClass

We need a StorageClass to define the storage provisioning behavior.

storage-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

Apply the StorageClass:

kubectl apply -f storage-class.yaml

Verify:

kubectl get storageclass

Expected output:

NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer

Step 3: Create a PersistentVolumeClaim (PVC)

A PersistentVolumeClaim (PVC) requests storage dynamically from the StorageClass.

pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-dynamic-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-path

Apply the PVC:

kubectl apply -f pvc.yaml

Check the PVC status:

kubectl get pvc

Expected output:

NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS
my-dynamic-pvc   Bound    pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   1Gi       RWO            local-path

Step 4: Deploy a Pod Using the Dynamic Volume

Now, we’ll create a Pod that mounts the dynamically provisioned volume.

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: storage-test
spec:
  containers:
  - name: test-container
    image: busybox
    command: [ "sleep", "3600" ]
    volumeMounts:
    - mountPath: "/data"
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: my-dynamic-pvc

Apply the pod:

kubectl apply -f pod.yaml

Verify the pod:

kubectl get pods

Expected output:

NAME             READY   STATUS    RESTARTS   AGE
storage-test     1/1     Running   0          1m

Step 5: Verify Data Persistence

Enter the pod and create a test file inside the volume:

kubectl exec -it storage-test -- sh

Inside the container:

echo "Hello, Kubernetes!" > /data/testfile.txt
exit

Now, delete the pod:

kubectl delete pod storage-test

Recreate the pod:

kubectl apply -f pod.yaml

Enter the pod again and check if the file persists:

kubectl exec -it storage-test -- cat /data/testfile.txt

Expected output:

Hello, Kubernetes!

This confirms that the PersistentVolume (PV) remains intact even after pod deletion.
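
Note that this persistence covers pod restarts, not claim deletion: the StorageClass above uses reclaimPolicy: Delete, so removing the PVC also removes the underlying volume. You can verify this on a throwaway cluster:

kubectl delete pod storage-test
kubectl delete pvc my-dynamic-pvc
kubectl get pv

The dynamically created PV disappears (or shows Terminating) once the claim is gone.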

Conclusion

Dynamic Volume Provisioning in Kubernetes eliminates manual storage management by allowing on-demand PV creation. This ensures:
✅ Scalability – Storage is provisioned dynamically as needed.
✅ Automation – No manual intervention required.
✅ Persistence – Data is retained across pod restarts.

By implementing Local Path Provisioner and StorageClasses, Kubernetes clusters can achieve seamless, automated storage provisioning, optimizing performance and resource utilization.

Adopt Dynamic Volume Provisioning today and simplify storage management in your Kubernetes environment!

How do you currently handle persistent storage in Kubernetes? Have you faced challenges with dynamic storage provisioning? Drop your thoughts in the comments! 👇

Implementing Backup and Restore for Kubernetes Applications with Velero

Introduction

In modern cloud-native environments, ensuring data protection and disaster recovery is crucial. Kubernetes does not natively offer a comprehensive backup and restore solution, making it necessary to integrate third-party tools. Velero is an open-source tool that enables backup, restoration, and migration of Kubernetes applications and persistent volumes. In this blog post, we will explore setting up Velero on a Kubernetes cluster with MinIO as the storage backend, automating backups, and restoring applications when needed.

Prerequisites

  • A running Kubernetes cluster (Minikube, RKE2, or self-managed cluster)
  • kubectl installed and configured
  • Helm installed for package management
  • MinIO deployed as an object storage backend
  • Velero CLI installed

Step 1: Deploy MinIO as the Backup Storage

MinIO is a high-performance, S3-compatible object storage server, ideal for Kubernetes environments. We will deploy MinIO in the velero namespace.

Deploy MinIO with Persistent Storage

apiVersion: v1
kind: Namespace
metadata:
  name: velero
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
  namespace: velero
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio
          args:
            - server
            - /data
            - --console-address=:9001
          env:
            - name: MINIO_ROOT_USER
              value: "minioadmin"
            - name: MINIO_ROOT_PASSWORD
              value: "minioadmin"
          ports:
            - containerPort: 9000
            - containerPort: 9001
          volumeMounts:
            - name: minio-storage
              mountPath: /data
      volumes:
        - name: minio-storage
          persistentVolumeClaim:
            claimName: minio-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: velero
spec:
  ports:
    - port: 9000
      targetPort: 9000
      name: api
    - port: 9001
      targetPort: 9001
      name: console
  selector:
    app: minio

Create Persistent Volume for MinIO

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
  namespace: velero
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Apply the configurations:

kubectl apply -f minio.yaml
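
Velero expects the velero-backup bucket referenced in the next step to already exist. Once the MinIO pod is Running, one way to create it is with the MinIO client in a throwaway pod (a sketch; the credentials match the Deployment above):

kubectl -n velero run minio-client --rm -it --restart=Never --image=minio/mc --command -- \
  /bin/sh -c "mc alias set minio http://minio.velero.svc.cluster.local:9000 minioadmin minioadmin && mc mb minio/velero-backup"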

Step 2: Install Velero

We will install Velero using Helm and configure it to use MinIO as a storage backend.

Add Helm Repository and Install Velero

helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
helm install velero vmware-tanzu/velero --namespace velero \
  --set configuration.provider=aws \
  --set configuration.backupStorageLocation.name=default \
  --set configuration.backupStorageLocation.bucket=velero-backup \
  --set configuration.backupStorageLocation.config.s3Url=http://minio.velero.svc.cluster.local:9000 \
  --set configuration.volumeSnapshotLocation.name=default
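
Two things are easy to miss here. First, Velero talks to S3-compatible storage through the AWS object-store plugin, which the chart can add as an init container. Second, MinIO needs path-style addressing and a (dummy) region, and the chart should be pointed at the credentials secret created in the next step. The exact value names vary between chart versions, so treat the following as a hedged sketch and cross-check it against your chart’s values:

helm upgrade velero vmware-tanzu/velero --namespace velero --reuse-values \
  --set configuration.backupStorageLocation.config.region=minio \
  --set configuration.backupStorageLocation.config.s3ForcePathStyle=true \
  --set credentials.useSecret=true \
  --set credentials.existingSecret=cloud-credentials \
  --set "initContainers[0].name=velero-plugin-for-aws" \
  --set "initContainers[0].image=velero/velero-plugin-for-aws:v1.8.0" \
  --set "initContainers[0].volumeMounts[0].mountPath=/target" \
  --set "initContainers[0].volumeMounts[0].name=plugins"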

Step 3: Configure Credentials for Velero

Velero needs credentials to interact with MinIO. We create a Kubernetes secret for this purpose.

apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
  namespace: velero
type: Opaque
stringData:
  # Plaintext for readability; Kubernetes base64-encodes stringData on admission.
  credentials-velero: |
    [default]
    aws_access_key_id=minioadmin
    aws_secret_access_key=minioadmin

Apply the secret:

kubectl apply -f credentials.yaml

Restart Velero for the changes to take effect:

kubectl delete pod -n velero -l app.kubernetes.io/name=velero

Step 4: Create a Backup

We now create a backup of a sample namespace.

velero backup create my-backup --include-namespaces=default

To check the backup status:

velero backup get
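
If a backup ends up PartiallyFailed or you want to see exactly what was captured, dig into it with:

velero backup describe my-backup --details
velero backup logs my-backup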

Step 5: Restore from Backup

In case of failure, we can restore our applications using:

velero restore create --from-backup my-backup

To check the restore status:

velero restore get

Step 6: Automate Backups with a Schedule

To automate backups every 12 hours:

velero schedule create backup-every-12h --schedule "0 */12 * * *"

To list scheduled backups:

velero schedule get

Conclusion

By implementing Velero with MinIO, we have built a complete backup and disaster recovery solution for Kubernetes applications. This setup allows us to automate backups, perform point-in-time recovery, and ensure data protection. In real-world scenarios, it is recommended to:

  • Secure MinIO with external authentication
  • Store backups in an off-cluster storage location
  • Regularly test restoration procedures

By integrating Velero into your Kubernetes environment, you enhance resilience and minimize data loss risks. Start implementing backups today to safeguard your critical applications!

Stay tuned for more Kubernetes insights! If you have any issues? Let’s discuss in the comments!👇

Running Stateful Applications in Kubernetes: MySQL Cluster Tutorial

Introduction

Running databases in Kubernetes introduces unique challenges, such as state persistence, leader election, and automatic failover. Unlike stateless applications, databases require stable storage, unique identities, and careful management of scaling and replication.

In this tutorial, we will set up a highly available MySQL cluster using a StatefulSet, which ensures:
✅ Unique identities for MySQL instances (mysql-0, mysql-1, mysql-2)
✅ Persistent storage for database files
✅ Automatic failover for database resilience
✅ MySQL GTID-based replication for data consistency

Step 1: Create a ConfigMap for MySQL Configuration

We first define a ConfigMap to store MySQL settings. Each replica needs a unique server-id; note that mysqld does not expand environment variables inside my.cnf, so the ${MYSQL_SERVER_ID} placeholder below has to be rendered per pod at startup (see the initContainer sketch after the StatefulSet in Step 3).

Create a file mysql-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
  namespace: default
data:
  my.cnf: |
    [mysqld]
    server-id=${MYSQL_SERVER_ID}
    log_bin=mysql-bin
    binlog_format=ROW
    enforce-gtid-consistency=ON
    gtid-mode=ON
    master-info-repository=TABLE
    relay-log-info-repository=TABLE
    log_slave_updates=ON
    read-only=ON
    skip-name-resolve

Apply the ConfigMap

kubectl apply -f mysql-config.yaml

Step 2: Create a Persistent Volume Claim (PVC)

Databases require persistent storage to avoid losing data when pods restart. Note that the StatefulSet in Step 3 provisions its own per-pod volumes through volumeClaimTemplates, so this standalone PVC is optional; it is shown here mainly to illustrate how a claim is defined.

Create a file mysql-storage.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Apply the PVC

kubectl apply -f mysql-storage.yaml

Step 3: Deploy MySQL Cluster with StatefulSet

Now, we deploy a StatefulSet to ensure stable pod names and persistent storage.

Create a file mysql-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: default
spec:
  serviceName: "mysql"
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "yourpassword"
        - name: MYSQL_REPLICATION_USER
          value: "replica"
        - name: MYSQL_REPLICATION_PASSWORD
          value: "replicapassword"
        - name: MYSQL_SERVER_ID
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
        - name: config-volume
          mountPath: /etc/mysql/conf.d
      volumes:
      - name: config-volume
        configMap:
          name: mysql-config
  volumeClaimTemplates:
  - metadata:
      name: mysql-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Apply the StatefulSet

kubectl apply -f mysql-statefulset.yaml
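
One caveat with the manifest above: mysqld does not expand environment variables inside my.cnf, and the MYSQL_SERVER_ID value taken from metadata.name is the string mysql-0, not a number, so the config would not be rendered correctly as-is. A common workaround is to render the file in an initContainer using the pod’s ordinal. The following is a minimal sketch, to be added under spec.template.spec in mysql-statefulset.yaml, assuming the ConfigMap is mounted read-only at /config and a shared emptyDir volume named rendered-config replaces it at /etc/mysql/conf.d:

      initContainers:
        - name: render-mysql-config
          image: busybox:1.36
          command:
            - sh
            - -c
            - |
              # Pod hostname is mysql-<ordinal>; use ordinal + 1 as a unique numeric server-id
              ordinal="${HOSTNAME##*-}"
              sed "s/\${MYSQL_SERVER_ID}/$((ordinal + 1))/" /config/my.cnf > /etc/mysql/conf.d/my.cnf
          volumeMounts:
            - name: config-volume       # the mysql-config ConfigMap
              mountPath: /config
            - name: rendered-config     # emptyDir shared with the mysql container
              mountPath: /etc/mysql/conf.d

With this approach, add rendered-config as an emptyDir under volumes, mount it (instead of config-volume) at /etc/mysql/conf.d in the mysql container, and the MYSQL_SERVER_ID environment variable in the StatefulSet is no longer needed.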

Step 4: Create a Headless Service for MySQL

A headless service is required for MySQL nodes to discover each other.

Create a file mysql-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: default
spec:
  selector:
    app: mysql
  clusterIP: None
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306

Apply the Service

kubectl apply -f mysql-service.yaml

Step 5: Verify the Cluster Setup

Check MySQL Pods

kubectl get pods -l app=mysql

Expected Output:

NAME      READY   STATUS    RESTARTS   AGE
mysql-0   1/1     Running   0          1m
mysql-1   1/1     Running   0          1m
mysql-2   1/1     Running   0          1m

Check MySQL Configuration in Each Pod

kubectl exec -it mysql-0 -- cat /etc/mysql/conf.d/my.cnf | grep server-id

Expected Output (on mysql-0, assuming the ${MYSQL_SERVER_ID} placeholder is rendered per pod as sketched in Step 3):

server-id=1

Each pod should report a different value (mysql-1 → 2, mysql-2 → 3). If the output still shows the literal ${MYSQL_SERVER_ID}, the placeholder was never substituted.

Step 6: Set Up MySQL Replication

Configure the Primary (mysql-0)

Log in to mysql-0:

kubectl exec -it mysql-0 -- mysql -u root -p

Run:

CREATE USER 'replica'@'%' IDENTIFIED WITH mysql_native_password BY 'replicapassword';
GRANT REPLICATION SLAVE ON *.* TO 'replica'@'%';
FLUSH PRIVILEGES;
SHOW MASTER STATUS;

Note down the File and Position values.

Configure the Replicas (mysql-1, mysql-2)

Log in to mysql-1 and mysql-2:

kubectl exec -it mysql-1 -- mysql -u root -p

Run:

CHANGE MASTER TO 
MASTER_HOST='mysql-0.mysql.default.svc.cluster.local',
MASTER_USER='replica',
MASTER_PASSWORD='replicapassword',
MASTER_LOG_FILE='mysql-bin.000001', -- Replace with actual file
MASTER_LOG_POS=12345; -- Replace with actual position
START SLAVE;
SHOW SLAVE STATUS\G

Expected Output:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes
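
Since GTID mode is enabled in the ConfigMap, an alternative is to skip copying the binlog file and position entirely and let GTIDs drive replication (a sketch, run on each replica):

CHANGE MASTER TO
  MASTER_HOST='mysql-0.mysql.default.svc.cluster.local',
  MASTER_USER='replica',
  MASTER_PASSWORD='replicapassword',
  MASTER_AUTO_POSITION=1;
START SLAVE;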

Step 7: Test Failover

To simulate a primary failure, delete the MySQL leader:

kubectl delete pod mysql-0

Expected behavior:

  • Kubernetes restarts mysql-0.
  • Replicas continue serving reads.
  • A new primary can be promoted manually (one approach is sketched below).
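
One way to promote mysql-1 manually (a sketch; production setups usually delegate this to an operator or a tool such as Orchestrator):

-- On mysql-1: stop replicating and make it writable
STOP SLAVE;
RESET SLAVE ALL;
SET GLOBAL read_only = OFF;

-- On mysql-2: repoint replication at the new primary
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST='mysql-1.mysql.default.svc.cluster.local',
  MASTER_USER='replica',
  MASTER_PASSWORD='replicapassword',
  MASTER_AUTO_POSITION=1;
START SLAVE;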

Conclusion

In this tutorial, we built a highly available MySQL cluster in Kubernetes using:
✅ StatefulSet for stable identities
✅ Persistent Volumes for data durability
✅ ConfigMaps for centralized configuration
✅ Replication for fault tolerance

This setup ensures automatic recovery and high availability while maintaining data consistency in a Kubernetes environment. What do you think? Let’s discuss in the comments!👇