Introduction
Storage is a critical component in Kubernetes environments, and Rook-Ceph is a powerful storage orchestrator that brings Ceph into Kubernetes clusters. In this post, we’ll go through deploying Rook-Ceph on Minikube, setting up CephFS, and resolving common issues encountered during the process.
Prerequisites
Before we begin, ensure you have the following installed on your machine:
- Minikube (running Kubernetes v1.32.0)
- kubectl
- Helm
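To confirm each tool is installed and on your PATH, check its version:

minikube version
kubectl version --client
helm version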
If you don’t already have a cluster running, start one with:
minikube start --memory=4096 --cpus=2 --disk-size=40g
Enable required Minikube features:
minikube addons enable default-storageclass
minikube addons enable storage-provisioner
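You can verify both addons are active before continuing:

minikube addons list | grep -E 'default-storageclass|storage-provisioner'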
Step 1: Install Rook-Ceph with Helm
Add the Rook Helm repository:
helm repo add rook-release https://charts.rook.io/release
helm repo update
Install the Rook-Ceph operator:
helm install rook-ceph rook-release/rook-ceph --namespace rook-ceph --create-namespace
Verify the operator is running:
kubectl -n rook-ceph get pods
Expected output:
NAME READY STATUS RESTARTS AGE
rook-ceph-operator-59dcf6d55b 1/1 Running 0 1m
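The pod name suffix will differ on your machine. If the operator pod is stuck in Pending or CrashLoopBackOff, its logs usually point to the cause:

kubectl -n rook-ceph logs deploy/rook-ceph-operator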
Step 2: Deploy the Ceph Cluster
Now, create a CephCluster resource to deploy Ceph in Kubernetes.
Create ceph-cluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: my-ceph-cluster
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18.2.0
  # Host path where Rook persists cluster state
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    # Needed on single-node Minikube: all three mons land on the same node
    allowMultiplePerNode: true
  storage:
    useAllNodes: true
    useAllDevices: true
Apply the Ceph Cluster Configuration
kubectl apply -f ceph-cluster.yaml
Check the CephCluster status:
kubectl -n rook-ceph get cephcluster
Expected output (once ready):
NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH
my-ceph-cluster /var/lib/rook 3 5m Ready Cluster OK HEALTH_OK
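Reaching HEALTH_OK can take several minutes while the mons and OSDs come up. For a closer look, you can deploy the Rook toolbox pod and run Ceph CLI commands from inside it (the manifest path below assumes the current layout of the Rook repository):

kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status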
Step 3: Set Up CephFS (Ceph Filesystem)
Create cephfs.yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: my-cephfs
  namespace: rook-ceph
spec:
  metadataServer:
    # One active MDS plus a hot standby
    activeCount: 1
    activeStandby: true
  dataPools:
    - replicated:
        size: 3
  metadataPool:
    replicated:
      size: 3
Apply CephFS Configuration
kubectl apply -f cephfs.yaml
Verify the filesystem:
kubectl -n rook-ceph get cephfilesystem
Expected output:
NAME ACTIVEMDS AGE PHASE
my-cephfs 1 2m Ready
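You can also confirm the metadata servers are up; Rook labels the MDS pods with app=rook-ceph-mds:

kubectl -n rook-ceph get pods -l app=rook-ceph-mds

With activeStandby: true you should see two pods, one active and one standby.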
Step 4: Create a StorageClass for CephFS
Create cephfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
  fsName: my-cephfs
  pool: my-cephfs-data0
  # CSI secrets created by the Rook operator; without these the provisioner
  # cannot authenticate and PVCs stay Pending
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  mounter: fuse
reclaimPolicy: Delete
Apply the StorageClass Configuration
kubectl apply -f cephfs-storageclass.yaml
List available StorageClasses:
kubectl get storageclass
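If you want PVCs to land on CephFS by default, you can mark the class as the cluster default with the standard Kubernetes annotation:

kubectl patch storageclass rook-cephfs -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'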
Step 5: Create a PersistentVolumeClaim (PVC)
Create pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-cephfs-pvc
spec:
  accessModes:
    # CephFS volumes can be mounted read-write by many pods at once
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
Apply the PVC Configuration
kubectl apply -f pvc.yaml
Check if the PVC is bound:
kubectl get pvc
Expected output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-cephfs-pvc Bound pvc-xyz 1Gi RWX rook-cephfs 1m
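If the PVC stays Pending instead, describe it and check the CSI provisioner pods; provisioning errors (for example, a missing secret referenced by the StorageClass) show up in the events. The label below matches current Rook releases:

kubectl describe pvc my-cephfs-pvc
kubectl -n rook-ceph get pods -l app=csi-cephfsplugin-provisioner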
Step 6: Deploy a Test Pod Using CephFS Storage
Create cephfs-test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-app
spec:
  containers:
    - name: app
      image: busybox
      # Keep the container alive so we can exec into it
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: "/mnt"
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: my-cephfs-pvc
Deploy the Test Pod
kubectl apply -f cephfs-test-pod.yaml
Check if the pod is running:
kubectl get pods
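Once the pod is Running, a quick write and read through the mount confirms CephFS is actually serving the volume:

kubectl exec cephfs-app -- sh -c 'echo hello-cephfs > /mnt/test.txt && cat /mnt/test.txt'

Because the PVC is ReadWriteMany, a second pod mounting the same claim would see the same file.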
Troubleshooting Issues
Issue: CephCluster Stuck in “Progressing” State
Run:
kubectl -n rook-ceph describe cephcluster my-ceph-cluster
If you see an error like:
failed the ceph version check: the version does not meet the minimum version "18.2.0-0 reef"
Solution:
- Ensure cephVersion.image in ceph-cluster.yaml points at quay.io/ceph/ceph:v18.2.0 or newer.
- Restart the Ceph operator: kubectl -n rook-ceph delete pod -l app=rook-ceph-operator
Issue: CephCluster Doesn’t Delete
If kubectl delete cephcluster hangs, remove finalizers:
kubectl -n rook-ceph patch cephcluster my-ceph-cluster --type='merge' -p '{"metadata":{"finalizers":null}}'
Then force delete:
kubectl -n rook-ceph delete cephcluster my-ceph-cluster --wait=false
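Removing finalizers skips Rook’s normal cleanup, so the data directory on the host survives. Before reinstalling, wipe it from inside the Minikube VM (the path matches the dataDirHostPath set earlier):

minikube ssh -- sudo rm -rf /var/lib/rook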
Conclusion
Setting up Rook-Ceph on Minikube gives you a complete Ceph-backed storage layer for local Kubernetes development and testing. The most common stumbling blocks are Ceph version mismatches and deletions stuck on finalizers. By following this guide, you can deploy CephFS, configure dynamic persistent storage, and troubleshoot both issues effectively.
Would you like a deep dive into RBD or Object Storage in Rook-Ceph next? Let me know in the comments!