Running Stateful Applications in Kubernetes: MySQL Cluster Tutorial

Introduction

Running databases in Kubernetes introduces unique challenges, such as state persistence, leader election, and automatic failover. Unlike stateless applications, databases require stable storage, unique identities, and careful management of scaling and replication.

In this tutorial, we will set up a highly available MySQL cluster using a StatefulSet, which ensures:
✅ Unique identities for MySQL instances (mysql-0, mysql-1, mysql-2)
✅ Persistent storage for database files
✅ Automatic pod recovery for database resilience (primary promotion remains a manual step)
✅ MySQL GTID-based replication for data consistency
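
The stable identities matter because, together with the headless Service from Step 4, every pod gets a predictable DNS name that replication can target. A quick bash sketch of the names this cluster will expose (default namespace assumed):

```shell
# Each StatefulSet pod is reachable at <pod>.<service>.<namespace>.svc.cluster.local
service=mysql
namespace=default
for ordinal in 0 1 2; do
  echo "mysql-${ordinal}.${service}.${namespace}.svc.cluster.local"
done
```

The first of these, mysql-0.mysql.default.svc.cluster.local, is the address the replicas point at in Step 6.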

Step 1: Create a ConfigMap for MySQL Configuration

We first define a ConfigMap to store the MySQL settings shared by every node. Note that neither Kubernetes ConfigMaps nor mysqld expand environment variables inside my.cnf, so the unique per-pod server-id cannot be set here; it is added for each pod in Step 3.

Create a file mysql-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
  namespace: default
data:
  my.cnf: |
    [mysqld]
    # server-id must be unique per node; it is appended per pod in Step 3
    log_bin=mysql-bin
    binlog_format=ROW
    enforce-gtid-consistency=ON
    gtid-mode=ON
    master-info-repository=TABLE
    relay-log-info-repository=TABLE
    log_slave_updates=ON
    read-only=ON
    skip-name-resolve

Apply the ConfigMap

kubectl apply -f mysql-config.yaml

Step 2: Understand Persistent Storage (PVCs)

Databases require persistent storage to avoid losing data when pods restart. With a StatefulSet, however, you do not create a PersistentVolumeClaim by hand: the volumeClaimTemplates section in Step 3 generates one claim per pod, each bound to its own volume. (A single ReadWriteOnce claim could not be shared by three pods in any case.) Each generated claim is equivalent to the following:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-storage-mysql-0   # generated as <template-name>-<pod-name>
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Verify the generated claims after deploying the StatefulSet in Step 3:

kubectl get pvc
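
The claims produced by volumeClaimTemplates follow the naming pattern <template-name>-<pod-name>. A bash sketch of the names to expect with three replicas (the names mirror the manifests in this tutorial):

```shell
# PVCs generated by a volumeClaimTemplate named "mysql-storage"
# for a StatefulSet named "mysql"
template=mysql-storage
statefulset=mysql
for ordinal in 0 1 2; do
  echo "${template}-${statefulset}-${ordinal}"
done
```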

Step 3: Deploy MySQL Cluster with StatefulSet

Now, we deploy a StatefulSet to ensure stable pod names and persistent storage.

Create a file mysql-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: default
spec:
  serviceName: "mysql"
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:8.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Derive a unique numeric server-id from the pod ordinal:
          # mysql-0 -> 1, mysql-1 -> 2, mysql-2 -> 3
          [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          cp /mnt/config-map/my.cnf /mnt/conf.d/my.cnf
          echo "server-id=$((ordinal + 1))" >> /mnt/conf.d/my.cnf
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-volume
          mountPath: /mnt/config-map
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "yourpassword"
        - name: MYSQL_REPLICATION_USER
          value: "replica"
        - name: MYSQL_REPLICATION_PASSWORD
          value: "replicapassword"
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-volume
        configMap:
          name: mysql-config
  volumeClaimTemplates:
  - metadata:
      name: mysql-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Apply the StatefulSet

kubectl apply -f mysql-statefulset.yaml
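
Each replication node needs a distinct numeric server-id, and deriving it from the pod ordinal keeps it stable across restarts. The derivation can be checked with plain bash (the hostnames are just examples):

```shell
# mysql-0 -> server-id=1, mysql-1 -> server-id=2, mysql-2 -> server-id=3
for host in mysql-0 mysql-1 mysql-2; do
  ordinal=${host##*-}   # strip everything up to the last "-"
  echo "${host} -> server-id=$((ordinal + 1))"
done
```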

Step 4: Create a Headless Service for MySQL

A headless service is required for MySQL nodes to discover each other.

Create a file mysql-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: default
spec:
  selector:
    app: mysql
  clusterIP: None
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306

Apply the Service

kubectl apply -f mysql-service.yaml

Step 5: Verify the Cluster Setup

Check MySQL Pods

kubectl get pods -l app=mysql

Expected Output:

NAME      READY   STATUS    RESTARTS   AGE
mysql-0   1/1     Running   0          1m
mysql-1   1/1     Running   0          1m
mysql-2   1/1     Running   0          1m

Check MySQL Configuration in Each Pod

kubectl exec mysql-0 -- grep server-id /etc/mysql/conf.d/my.cnf

Expected Output:

server-id=1

Step 6: Set Up MySQL Replication

Configure the Primary (mysql-0)

Log in to mysql-0:

kubectl exec -it mysql-0 -- mysql -u root -p

Run:

-- The shared my.cnf sets read-only=ON; lift it so the primary accepts writes
SET GLOBAL read_only = OFF;
CREATE USER 'replica'@'%' IDENTIFIED WITH mysql_native_password BY 'replicapassword';
GRANT REPLICATION SLAVE ON *.* TO 'replica'@'%';
FLUSH PRIVILEGES;

Because gtid-mode=ON, the replicas will use GTID auto-positioning; there is no need to run SHOW MASTER STATUS or record a binary log file and position.

Configure the Replicas (mysql-1, mysql-2)

Log in to mysql-1 and mysql-2:

kubectl exec -it mysql-1 -- mysql -u root -p

Run:

CHANGE MASTER TO
  MASTER_HOST='mysql-0.mysql.default.svc.cluster.local',
  MASTER_USER='replica',
  MASTER_PASSWORD='replicapassword',
  MASTER_AUTO_POSITION=1; -- GTID auto-positioning; no file/position needed
START SLAVE;
SHOW SLAVE STATUS\G

Expected Output:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes
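
When scripting health checks, you can test the status output for the two Running flags. A local sketch against an illustrative sample (in a real cluster, substitute the output of the kubectl exec command above):

```shell
# Sample fragment of SHOW SLAVE STATUS output (illustrative text, not live data)
status="Slave_IO_Running: Yes
Slave_SQL_Running: Yes"

# Replication is healthy only when both threads report Yes
yes_count=$(printf '%s\n' "$status" | grep -c ': Yes')
if [ "$yes_count" -eq 2 ]; then
  echo "replication healthy"
else
  echo "replication broken"
fi
```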

Step 7: Test Failover

To simulate a primary failure, delete the MySQL leader:

kubectl delete pod mysql-0

Expected behavior:

  • Kubernetes recreates mysql-0 with the same name and reattaches its persistent volume.
  • While mysql-0 is down, the replicas continue serving reads.
  • A new primary is not promoted automatically in this setup; promotion remains a manual step.
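
Because promotion is manual, it helps to know the sequence. A minimal sketch under GTID replication, assuming mysql-0 is permanently lost and mysql-1 is the most up-to-date replica (run the statements in a mysql shell on each pod):

On mysql-1 (the new primary):

STOP SLAVE;
RESET SLAVE ALL;
SET GLOBAL read_only = OFF;

On mysql-2, re-point replication at the new primary:

STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST='mysql-1.mysql.default.svc.cluster.local',
  MASTER_USER='replica',
  MASTER_PASSWORD='replicapassword',
  MASTER_AUTO_POSITION=1;
START SLAVE;

In production, an external operator can automate this promotion.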

Conclusion

In this tutorial, we built a highly available MySQL cluster in Kubernetes using:
✅ StatefulSet for stable identities
✅ Persistent Volumes for data durability
✅ ConfigMaps for centralized configuration
✅ Replication for fault tolerance

This setup provides automatic pod recovery, durable storage, and read availability while maintaining data consistency in a Kubernetes environment; fully automatic primary failover would require an external operator on top. What do you think? Let’s discuss in the comments!👇
