Implementing Backup and Restore for Kubernetes Applications with Velero

Introduction

In modern cloud-native environments, ensuring data protection and disaster recovery is crucial. Kubernetes does not natively offer a comprehensive backup and restore solution, making it necessary to integrate third-party tools. Velero is an open-source tool that enables backup, restoration, and migration of Kubernetes applications and persistent volumes. In this blog post, we will explore setting up Velero on a Kubernetes cluster with MinIO as the storage backend, automating backups, and restoring applications when needed.

Prerequisites

  • A running Kubernetes cluster (Minikube, RKE2, or self-managed cluster)
  • kubectl installed and configured
  • Helm installed for package management
  • MinIO deployed as an object storage backend
  • Velero CLI installed

Step 1: Deploy MinIO as the Backup Storage

MinIO is a high-performance, S3-compatible object storage server, ideal for Kubernetes environments. We will deploy MinIO in the velero namespace.

Deploy MinIO with Persistent Storage

apiVersion: v1
kind: Namespace
metadata:
  name: velero
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
  namespace: velero
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio
          args:
            - server
            - /data
            - --console-address=:9001
          env:
            - name: MINIO_ROOT_USER
              value: "minioadmin"
            - name: MINIO_ROOT_PASSWORD
              value: "minioadmin"
          ports:
            - containerPort: 9000
            - containerPort: 9001
          volumeMounts:
            - name: minio-storage
              mountPath: /data
      volumes:
        - name: minio-storage
          persistentVolumeClaim:
            claimName: minio-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: velero
spec:
  ports:
    - port: 9000
      targetPort: 9000
      name: api
    - port: 9001
      targetPort: 9001
      name: console
  selector:
    app: minio

Create Persistent Volume for MinIO

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pvc
  namespace: velero
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Apply the configurations:

kubectl apply -f minio.yaml

Step 2: Install Velero

We will install Velero using Helm and configure it to use MinIO as a storage backend.

Add Helm Repository and Install Velero

helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
helm install velero vmware-tanzu/velero --namespace velero \
  --set configuration.provider=aws \
  --set configuration.backupStorageLocation.name=default \
  --set configuration.backupStorageLocation.bucket=velero-backup \
  --set configuration.backupStorageLocation.config.region=minio \
  --set-string configuration.backupStorageLocation.config.s3ForcePathStyle=true \
  --set configuration.backupStorageLocation.config.s3Url=http://minio.velero.svc.cluster.local:9000 \
  --set configuration.volumeSnapshotLocation.name=default \
  --set credentials.existingSecret=cloud-credentials

The region value is a placeholder (MinIO ignores it), path-style addressing is required for S3-compatible endpoints like MinIO, and credentials.existingSecret points Velero at the cloud-credentials secret we create in the next step.
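Note that this configuration assumes a bucket named velero-backup already exists in MinIO. One way to create it from inside the cluster is a one-shot Job running the MinIO client; this is a minimal sketch that reuses the endpoint and credentials from the manifests above:

apiVersion: batch/v1
kind: Job
metadata:
  name: minio-create-bucket
  namespace: velero
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: mc
          image: minio/mc
          command: ["/bin/sh", "-c"]
          args:
            - >
              mc alias set velero http://minio.velero.svc.cluster.local:9000 minioadmin minioadmin &&
              mc mb --ignore-existing velero/velero-backup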

Step 3: Configure Credentials for Velero

Velero needs credentials to interact with MinIO. We create a Kubernetes secret for this purpose.

apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
  namespace: velero
type: Opaque
data:
  # Base64-encoded AWS-style credentials file:
  #   [default]
  #   aws_access_key_id=minioadmin
  #   aws_secret_access_key=minioadmin
  credentials-velero: W2RlZmF1bHRdCmF3c19hY2Nlc3Nfa2V5X2lkPW1pbmlvYWRtaW4KYXdzX3NlY3JldF9hY2Nlc3Nfa2V5PW1pbmlvYWRtaW4K

Apply the secret:

kubectl apply -f credentials.yaml

Restart Velero for the changes to take effect:

kubectl delete pod -n velero -l app.kubernetes.io/name=velero

Step 4: Create a Backup

We now create a backup of a sample namespace.

velero backup create my-backup --include-namespaces=default

To check the backup status:

velero backup get
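Under the hood, the CLI simply creates a Backup custom resource in the velero namespace. If you prefer a declarative, GitOps-friendly approach, a roughly equivalent manifest (a sketch using the velero.io/v1 API) looks like this:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: my-backup
  namespace: velero
spec:
  includedNamespaces:
    - default
  ttl: 720h0m0s   # keep the backup for 30 days, Velero's default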

Step 5: Restore from Backup

In case of failure, we can restore our applications using:

velero restore create --from-backup my-backup

To check the restore status:

velero restore get

Step 6: Automate Backups with a Schedule

To automate backups every 12 hours:

velero schedule create daily-backup --schedule "0 */12 * * *"

To list scheduled backups:

velero schedule get
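As with one-off backups, the schedule is backed by a Schedule custom resource, so it can also be managed declaratively. A sketch:

apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup
  namespace: velero
spec:
  schedule: "0 */12 * * *"   # cron expression: every 12 hours
  template:
    includedNamespaces:
      - default
    ttl: 720h0m0s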

Conclusion

By implementing Velero with MinIO, we have built a complete backup and disaster recovery solution for Kubernetes applications. This setup allows us to automate backups, perform point-in-time recovery, and ensure data protection. In real-world scenarios, it is recommended to:

  • Secure MinIO with external authentication
  • Store backups in an off-cluster storage location
  • Regularly test restoration procedures

By integrating Velero into your Kubernetes environment, you enhance resilience and minimize data loss risks. Start implementing backups today to safeguard your critical applications!

Stay tuned for more Kubernetes insights! If you run into any issues, let’s discuss in the comments!👇

Dynamic Configuration Management in Kubernetes with ConfigMaps and Secrets

Introduction

In Kubernetes, managing application configurations dynamically is essential for scalability, security, and zero-downtime deployments. Traditionally, configurations were hardcoded inside applications, requiring a restart to apply changes. However, Kubernetes ConfigMaps and Secrets allow us to separate configurations from application code, enabling real-time updates without affecting running pods.

In this guide, we will:
✅ Use ConfigMaps for non-sensitive configuration data.
✅ Use Secrets for storing sensitive credentials securely.
✅ Mount configurations as volumes for dynamic updates without pod restarts.

Step 1: Creating a ConfigMap

A ConfigMap stores application settings like environment variables, log levels, and external API URLs.

ConfigMap YAML (configmap.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_ENV: "production"
  LOG_LEVEL: "info"

Apply the ConfigMap:

kubectl apply -f configmap.yaml

Step 2: Creating a Secret

A Secret securely stores sensitive data, such as database credentials and API keys.

Secret YAML (secret.yaml)

apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: Ym9zcw==

To encode a password in base64:

echo -n "boss" | base64

Apply the Secret:

kubectl apply -f secret.yaml

Step 3: Injecting ConfigMap and Secret into a Deployment

Now, let’s inject these values into a Kubernetes Deployment.

Deployment YAML (deployment.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:latest
        env:
        - name: APP_ENV
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: APP_ENV
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: DB_PASSWORD

Apply the Deployment:

kubectl apply -f deployment.yaml
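If you want every key from the ConfigMap and Secret without listing them one by one, envFrom is a more compact option. A sketch of the relevant container fields:

        envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secret

Each key becomes an environment variable of the same name, so APP_ENV, LOG_LEVEL, and DB_PASSWORD are all injected automatically.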

Step 4: Enabling Automatic Updates for ConfigMaps

By default, environment variables injected from a ConfigMap do not update inside a running pod when the ConfigMap changes. To pick up changes dynamically, we can mount the ConfigMap as a volume instead.

Modify Deployment to Use ConfigMap as a Volume

# This fragment belongs under the Deployment's pod template (.spec.template.spec):
spec:
  volumes:
    - name: config-volume
      configMap:
        name: app-config
  containers:
    - name: my-app
      image: nginx:latest
      volumeMounts:
        - name: config-volume
          mountPath: "/etc/config"
          readOnly: true

Apply the Updated Deployment:

kubectl apply -f deployment.yaml

Now, whenever the ConfigMap is updated, the files mounted inside the pod are refreshed automatically (within the kubelet sync period, typically up to a minute or so) without requiring a pod restart, as long as the application re-reads them. Environment variables populated from a ConfigMap are not updated this way and still require a restart.
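Secrets can be consumed the same way if you prefer file-based credentials over environment variables. A sketch of the equivalent volume and mount (each key becomes a file under the mount path):

spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: app-secret
  containers:
    - name: my-app
      image: nginx:latest
      volumeMounts:
        - name: secret-volume
          mountPath: "/etc/secrets"
          readOnly: true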

Step 5: Verifying ConfigMap and Secret Usage

Check ConfigMap Values in a Running Pod

kubectl exec -it <pod-name> -- printenv | grep APP_ENV

Verify Secret Values (Base64 Encoded)

kubectl get secret app-secret -o yaml

Important: Secret values are only base64-encoded, not encrypted. Anyone who can read the Secret can decode them, so restrict access with RBAC and consider enabling encryption at rest.

Conclusion

With ConfigMaps and Secrets, we have achieved:
✅ Separation of application and configuration for better maintainability.
✅ Dynamic configuration updates without pod restarts.
✅ Secure handling of sensitive data with Secrets.

This approach ensures zero downtime, scalable deployments, and strong security for your Kubernetes applications.

Have you implemented ConfigMaps and Secrets in your Kubernetes projects? Share your experiences in the comments!👇

Helm Deep Dive: Creating Production-Ready Charts

Introduction

In Kubernetes environments, managing application deployments efficiently is crucial. Helm, the Kubernetes package manager, simplifies this process by providing a standardized way to define, install, and upgrade applications.

In this guide, we will build a production-ready Helm chart, ensuring reusability, parameterization, and best practices.

Step 1: Setting Up a Helm Chart

To begin, we create a new Helm chart:

helm create myapp
cd myapp

This command generates a default Helm chart structure, including templates/ for Kubernetes manifests and values.yaml for configuration management.

Step 2: Defining values.yaml

A well-structured values.yaml enables customization without modifying templates. Here’s an optimized configuration:

replicaCount: 2

image:
  repository: nginx
  tag: latest
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  hosts:
    - host: myapp.local
      paths:
        - path: /
          pathType: ImplementationSpecific

resources:
  limits:
    cpu: 500m
    memory: 256Mi
  requests:
    cpu: 250m
    memory: 128Mi

autoscaling:
  enabled: false

serviceAccount:
  create: true

This structure allows flexibility in modifying configurations at runtime.

Step 3: Customizing Kubernetes Manifests

Next, we modify templates/deployment.yaml to dynamically use the values from values.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: nginx
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 80
          resources:
            {{- toYaml .Values.resources | nindent 12 }}

This template ensures a scalable and parameterized deployment.
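A matching templates/service.yaml keeps the service block of values.yaml in play. A minimal sketch that reuses the same release-based labels:

apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
spec:
  type: {{ .Values.service.type }}
  selector:
    app: {{ .Release.Name }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: 80
      protocol: TCP

Before installing, helm lint ./myapp and helm template ./myapp are quick ways to catch templating mistakes.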

Step 4: Installing and Managing the Helm Chart

To install the Helm chart:

helm install myapp ./myapp

To update the deployment after modifying values.yaml:

helm upgrade myapp ./myapp

To uninstall and remove the release:

helm uninstall myapp

Conclusion

Using Helm for Kubernetes deployments provides several advantages:

✅ Modularity & Reusability: Define once, deploy multiple times with different values
✅ Scalability: Easily manage replicas, resources, and autoscaling configurations
✅ Simplified Upgrades & Rollbacks: Helm makes it easy to manage application lifecycles

By following best practices, we can ensure efficient, scalable, and production-ready deployments with Helm.

What’s your experience with Helm? Drop your thoughts in the comments!👇

Infrastructure as Code: Building Your Kubernetes Environment with Terraform

Introduction

Managing Kubernetes clusters manually can lead to configuration drift, inconsistencies, and operational overhead. By using Terraform, we can define Kubernetes resources declaratively, ensuring a version-controlled, reproducible, and scalable infrastructure.

In this post, we’ll walk through setting up a Kubernetes deployment using Terraform with a real-world example.

Why Terraform for Kubernetes?

✅ Declarative Approach – Define your infrastructure as code
✅ Version Control – Track changes using Git
✅ Reproducibility – Deploy identical environments
✅ Automation – Reduce manual configurations

Step 1: Install Terraform

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install terraform

Verify the installation:

terraform -version

Step 2: Set Up Kubernetes Provider in Terraform

Now, let’s define our Terraform configuration for Kubernetes.

Create a new directory and files

mkdir terraform-k8s && cd terraform-k8s
touch main.tf

Terraform Configuration (main.tf)

provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_deployment" "nginx" {
  metadata {
    name = "nginx-deployment"
    labels = {
      app = "nginx"
    }
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:latest"
        }
      }
    }
  }
}

resource "kubernetes_service" "nginx_service" {
  metadata {
    name = "nginx-service"
  }

  spec {
    selector = {
      app = "nginx"
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "NodePort"
  }
}

Step 3: Initialize Terraform

Run the following command to initialize Terraform:

terraform init

This will download the necessary Kubernetes provider.

Step 4: Validate the Configuration

Ensure there are no syntax errors:

terraform validate

Expected output:

Success! The configuration is valid.

Step 5: Deploy Kubernetes Resources

Run:

terraform apply -auto-approve

Terraform will create:

  • Deployment: Two replicas of an nginx pod
  • Service: A NodePort service exposing the pods

Step 6: Verify the Deployment

Check if the deployment is running:

kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2          1m

Check the service:

kubectl get svc

Access the service:

minikube service nginx-service --url

Step 7: Cleaning Up

To delete the deployment and service:

terraform destroy -auto-approve

Conclusion

Using Terraform to define Kubernetes resources brings consistency, automation, and version control to your infrastructure. By leveraging Infrastructure as Code (IaC), you eliminate manual errors and ensure smooth deployments across environments.

What’s your experience with Terraform for Kubernetes? Let’s discuss in the comments!👇

Creating Custom Prometheus Exporters for Your Applications

Introduction

In modern cloud-native environments, monitoring is a critical aspect of maintaining application reliability and performance. Prometheus is a popular monitoring system, but its built-in exporters may not cover custom business logic or application-specific metrics.

In this guide, we will build a custom Prometheus exporter for a sample application, package it into a Docker container, and deploy it in Kubernetes. By the end of this tutorial, you’ll have a fully functional custom monitoring setup for your application.

Why Custom Prometheus Exporters?

Prometheus exporters are essential for collecting and exposing application-specific metrics. While standard exporters cover databases, queues, and system metrics, custom exporters allow you to:

✅ Track business-specific metrics (e.g., user activity, sales data)
✅ Gain real-time insights into application performance
✅ Enable custom alerting based on key performance indicators

Building a Custom Prometheus Exporter

We will create a simple Python-based Prometheus exporter that exposes custom application metrics over an HTTP endpoint.

Step 1: Writing the Python Exporter

First, let’s create a simple Python script using the prometheus_client library.

Create exporter.py with the following content:

from prometheus_client import start_http_server, Counter
import time
import random

# Define a custom metric
REQUEST_COUNT = Counter("custom_app_requests_total", "Total number of processed requests")

def process_request():
    """Simulate request processing"""
    time.sleep(random.uniform(0.5, 2.0))  # Simulate latency
    REQUEST_COUNT.inc()  # Increment counter

if __name__ == "__main__":
    start_http_server(8000)  # Expose metrics on port 8000
    print("Custom Prometheus Exporter running on port 8000...")

    while True:
        process_request()

This script exposes a custom counter metric custom_app_requests_total, which simulates incoming application requests.

Step 2: Building and Pushing the Docker Image

Now, let’s containerize our exporter for easy deployment.

Create a Dockerfile:

FROM python:3.9-slim
WORKDIR /app
COPY exporter.py /app/
RUN pip install prometheus_client
CMD ["python", "exporter.py"]

Build and push the image:

docker build -t myrepo/myapp-prometheus-exporter:latest .
docker push myrepo/myapp-prometheus-exporter:latest

Deploying in Kubernetes

Step 3: Kubernetes Deployment

To deploy our custom exporter in Kubernetes, we create a Deployment and Service.

Create exporter-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-prometheus-exporter
  labels:
    app: myapp-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp-exporter
  template:
    metadata:
      labels:
        app: myapp-exporter
    spec:
      containers:
        - name: myapp-exporter
          image: myrepo/myapp-prometheus-exporter:latest
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-exporter-service
  labels:
    app: myapp-exporter   # the ServiceMonitor selects the Service by this label
spec:
  selector:
    app: myapp-exporter
  ports:
    - name: metrics       # named port, referenced by the ServiceMonitor
      protocol: TCP
      port: 8000
      targetPort: 8000

Apply the deployment:

kubectl apply -f exporter-deployment.yaml

Step 4: Configuring Prometheus to Scrape Custom Metrics

Next, we need to tell Prometheus to collect metrics from our exporter.

Create service-monitor.yaml:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-exporter-monitor
  labels:
    release: prometheus
spec:
  selector:
    matchLabels:
      app: myapp-exporter   # must match the labels on the Service above, not the pod
  endpoints:
    - port: metrics         # the named Service port
      interval: 10s

Apply the ServiceMonitor:

kubectl apply -f service-monitor.yaml

Verifying the Setup

Step 5: Checking Metrics Collection

Check if the exporter is running:

kubectl get pods -l app=myapp-exporter

Port forward and test the metrics endpoint:

kubectl port-forward svc/myapp-exporter-service 8000:8000
curl http://localhost:8000/metrics

Check if Prometheus is scraping the exporter:

kubectl port-forward svc/prometheus-service 9090:9090

Now, open http://localhost:9090 and search for custom_app_requests_total.
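Once the metric is visible in Prometheus, you can alert on it. A hedged sketch of a PrometheusRule, assuming the Prometheus Operator / kube-prometheus-stack with the same release: prometheus label used by the ServiceMonitor above:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: myapp-exporter-rules
  labels:
    release: prometheus
spec:
  groups:
    - name: myapp-exporter.rules
      rules:
        - alert: NoRequestsProcessed
          expr: rate(custom_app_requests_total[5m]) == 0
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "The custom exporter has processed no requests for 10 minutes"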

Conclusion

Building a custom Prometheus exporter enables deep observability for your application. By following these steps, we have:

✅ Created a Python-based Prometheus exporter
✅ Containerized it using Docker
✅ Deployed it in Kubernetes
✅ Integrated it with Prometheus using ServiceMonitor

This setup ensures that we collect meaningful application metrics, which can be visualized in Grafana dashboards and used for proactive monitoring and alerting.

Are you using custom Prometheus exporters in your projects? Let’s discuss in the comments!👇

Using Sealed Secrets for Secure GitOps Deployments

Overview

Managing secrets securely in a GitOps workflow is a critical challenge. Storing plain Kubernetes secrets in a Git repository is risky because anyone with access to the repository can view them. Sealed Secrets, an open-source project by Bitnami, provides a way to encrypt secrets before storing them in Git, ensuring they remain secure.

Key Takeaways:

  • Securely store Kubernetes secrets in a Git repository.
  • Automate secret management using GitOps principles.
  • Ensure secrets can only be decrypted by the Kubernetes cluster.

Step 1: Install Sealed Secrets Controller

The Sealed Secrets Controller runs in the Kubernetes cluster and is responsible for decrypting Sealed Secrets into regular Kubernetes secrets.

Install via Helm

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install sealed-secrets bitnami/sealed-secrets --namespace kube-system

Verify installation:

kubectl get pods -n kube-system | grep sealed-secrets
kubectl get svc -n kube-system | grep sealed-secrets

Step 2: Install Kubeseal CLI

To encrypt secrets locally before committing them to Git, install the kubeseal CLI:

Linux Installation

wget https://github.com/bitnami-labs/sealed-secrets/releases/latest/download/kubeseal-linux-amd64 -O kubeseal
chmod +x kubeseal
sudo mv kubeseal /usr/local/bin/

Verify installation:

kubeseal --version

Step 3: Create a Kubernetes Secret

Let’s create a secret for a database password.

Create a file named secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: default
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=   # Base64 encoded "password"

Apply the secret (optional: kubeseal only needs the YAML file, so you can seal it without ever creating the plain Secret in the cluster):

kubectl apply -f secret.yaml

Step 4: Encrypt the Secret Using Kubeseal

Use the kubeseal CLI to encrypt the secret so it can be safely stored in Git.

kubeseal --controller-name=sealed-secrets \
  --controller-namespace=kube-system \
  --format=yaml < secret.yaml > sealed-secret.yaml

The output sealed-secret.yaml will contain an encrypted version of the secret.

Example:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-secret
  namespace: default
spec:
  encryptedData:
    DB_PASSWORD: AgA+...long_encrypted_value...==

Now, delete the original secret from the cluster:

kubectl delete secret my-secret

Step 5: Apply Sealed Secret to Kubernetes

Deploy the sealed secret to Kubernetes:

kubectl apply -f sealed-secret.yaml

The Sealed Secrets Controller will automatically decrypt it and create a regular Kubernetes secret.

Verify the secret is recreated:

kubectl get secrets

Step 6: Push to GitHub for GitOps

Now, let’s commit the sealed secret to GitHub so it can be managed in a GitOps workflow.

Initialize a Git Repository (If Not Already Done)

git init
git remote add origin https://github.com/ArvindRaja45/deploy.git

Add and Commit Sealed Secret

git add sealed-secret.yaml
git commit -m "Added sealed secret for GitOps"
git push origin main

Step 7: Automate Deployment with GitHub Actions

To deploy the sealed secret automatically, create a GitHub Actions workflow.

Create a new file: .github/workflows/deployment.yaml

name: Deploy Sealed Secret

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'latest'

      - name: Configure Kubeconfig
        env:
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG_DATA }}   # base64-encoded kubeconfig stored as a repository secret
        run: echo "$KUBECONFIG_DATA" | base64 --decode > kubeconfig.yaml

      - name: Deploy Sealed Secret
        run: kubectl apply -f sealed-secret.yaml --kubeconfig=kubeconfig.yaml

Push the Workflow to GitHub

git add .github/workflows/deployment.yaml
git commit -m "Added GitHub Actions deployment workflow"
git push origin main

Conclusion

Using Sealed Secrets, we achieved:

  • Secure secret management in GitOps workflows.
  • Automated deployments with GitHub Actions.
  • No plaintext secrets in Git repositories.

This setup ensures that secrets remain encrypted at rest, providing a secure and automated way to manage secrets in a Kubernetes environment.

Have you used Sealed Secrets in your GitOps workflow? Share your experience!👇

Automating Kubernetes Deployments with GitHub Actions

In today’s DevOps world, automation is key to faster and more reliable deployments. Instead of manually applying Kubernetes manifests, we can use GitHub Actions to trigger deployments automatically whenever we push code.

What We Built Today

  • A complete GitHub Actions pipeline for Kubernetes deployments
  • End-to-end automation from code commit to deployment
  • Secure & efficient setup using GitHub Secrets

Key Challenges We Solved:

  • How to integrate GitHub Actions with Kubernetes?
  • Ensuring deployments are non-root and secure
  • Handling GitHub Secrets for secure kubeconfig access

Kubernetes Deployment YAML

Here’s the Kubernetes deployment we used today:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      securityContext:
        runAsNonRoot: true
      containers:
        - name: myapp
          image: nginxinc/nginx-unprivileged:latest
          ports:
            - containerPort: 8080   # nginx-unprivileged listens on 8080 by default
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
          volumeMounts:
            - mountPath: /var/cache/nginx
              name: cache-volume
            - mountPath: /tmp
              name: tmp-volume
      volumes:
        - name: cache-volume
          emptyDir: {}
        - name: tmp-volume
          emptyDir: {}
  • Runs as a non-root user
  • Read-only root filesystem for security
  • Uses nginx-unprivileged for better compliance

Setting Up GitHub Actions for Kubernetes

To automate deployment, we used this GitHub Actions workflow:

name: Deploy to Kubernetes

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: latest

      - name: Configure Kubernetes Cluster
        run: |
          echo "${{ secrets.KUBECONFIG }}" | base64 --decode > kubeconfig
          # export does not persist between steps, so publish KUBECONFIG via GITHUB_ENV
          echo "KUBECONFIG=$PWD/kubeconfig" >> "$GITHUB_ENV"

      - name: Deploy to Kubernetes
        run: kubectl apply -f deploy.yaml
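To make the pipeline fail fast when a deployment goes wrong instead of silently applying manifests, you can append a verification step. A sketch, assuming the kubeconfig from the previous step and the myapp Deployment defined above:

      - name: Verify Rollout
        run: kubectl rollout status deployment/myapp --timeout=120s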

What It Does

  • Triggers on every git push to main
  • Sets up kubectl to interact with the cluster
  • Uses GitHub Secrets (KUBECONFIG) for secure authentication
  • Deploys the latest changes to Kubernetes automatically

Why This Matters

  • No more manual deployments
  • Instant updates on every push
  • Security-first approach with GitHub Secrets

Do you automate your Kubernetes deployments? Let’s discuss best practices in the comments! 👇

Setting Up Tekton Pipelines for Kubernetes-Native CI/CD

Why Tekton?

In modern cloud environments, traditional CI/CD tools can introduce complexity and infrastructure overhead. Tekton, a Kubernetes-native CI/CD framework, provides:

  • Declarative Pipelines with Kubernetes CRDs
  • Event-Driven Automation through triggers
  • Seamless GitHub & DockerHub Integration
  • Scalability & Portability across Kubernetes clusters

With Tekton, CI/CD becomes a native Kubernetes workload, reducing external dependencies and enhancing automation.

Real-World Use Case

Imagine a microservices-based application where developers frequently push updates to GitHub. A robust pipeline is required to:

  • Detect changes in the repository
  • Build & test the application
  • Push the container image to a registry
  • Deploy the latest version to Kubernetes automatically

Tekton enables this entire process within Kubernetes—without relying on external CI/CD systems.

Step 1: Install Tekton in Kubernetes

1.1 Install Tekton Pipelines

kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

1.2 Install Tekton Triggers

kubectl apply --filename https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml

1.3 Verify Installation

kubectl get pods -n tekton-pipelines

Step 2: Define Tekton Pipeline Components

2.1 Create a Tekton Task (task-build.yaml)

This task clones a GitHub repository and builds a container image using Kaniko.

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-task
spec:
  steps:
    - name: clone-repo
      image: alpine/git
      script: |
        #!/bin/sh
        git clone https://github.com/ArvindRaja45/rep.git /workspace/source
    - name: build-image
      image: gcr.io/kaniko-project/executor:latest
      args:
        - "--context=/workspace/source"
        - "--destination=myrepo/my-app:latest"

2.2 Apply the Task

kubectl apply -f task-build.yaml

Step 3: Define the Pipeline (pipeline.yaml)

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ci-pipeline
spec:
  tasks:
    - name: build
      taskRef:
        name: build-task

Apply the pipeline:

kubectl apply -f pipeline.yaml

Step 4: Configure PipelineRun (pipelinerun.yaml)

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: ci-pipeline-run
spec:
  pipelineRef:
    name: ci-pipeline

Apply the PipelineRun:

kubectl apply -f pipelinerun.yaml

Step 5: Automate Triggering with Tekton Triggers

5.1 Define an EventListener

apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-listener
spec:
  serviceAccountName: tekton-triggers-sa
  triggers:
    - name: github-push
      bindings:
        - ref: github-trigger-binding
      template:
        ref: github-trigger-template

Apply the EventListener:

kubectl apply -f eventlistener.yaml
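The EventListener above references a github-trigger-binding, a github-trigger-template, and a tekton-triggers-sa service account that are not shown. A minimal sketch of the binding and template (parameter names here are illustrative assumptions; the service account still needs to exist with the Tekton Triggers RBAC roles):

apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-trigger-binding
spec:
  params:
    - name: git-revision
      value: $(body.head_commit.id)   # commit SHA from the GitHub push payload
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: github-trigger-template
spec:
  params:
    - name: git-revision
      description: Commit SHA received from the webhook
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: ci-pipeline-run-
      spec:
        pipelineRef:
          name: ci-pipeline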

5.2 Expose the Listener

# the el-<name> Service is created in the namespace where the EventListener was applied (default here)
kubectl port-forward service/el-github-listener 8080:8080

Step 6: Connect to GitHub Webhooks

  • Go to GitHub → Repository → Settings → Webhooks
  • Add http://EXTERNAL_IP:8080
  • Select application/json and push event

Step 7: Monitor the Pipeline Execution

tkn pipeline list
tkn pipelinerun list
tkn pipelinerun describe ci-pipeline-run

Key Takeaways

  • Kubernetes-native automation simplifies CI/CD workflows
  • Event-driven pipelines improve efficiency and response time
  • GitOps integration ensures seamless deployment processes
  • Scalability—Tekton adapts to both small and large-scale applications

Conclusion

Now you have a fully Kubernetes-native CI/CD pipeline using Tekton, with automated GitHub-triggered builds and deployments.

Want to go deeper? Let’s explore multi-stage pipelines, security scans, and GitOps integrations! Drop a comment👇

Mastering Kubernetes Network Security with NetworkPolicies

Introduction

Did you know? By default, every pod in Kubernetes can talk to any other pod—leading to unrestricted internal communication and potential security risks. This is a major concern in production environments where microservices demand strict access controls.

So, how do we lock down communication while ensuring seamless service interactions? NetworkPolicies provide the answer!

The Challenge: Unrestricted Communication = Security Risk

  • Pods can freely communicate across namespaces
  • Sensitive data exposure due to open networking
  • No control over egress traffic to external services
  • Lateral movement risk if an attacker compromises a pod

In short, without proper security, a single breach can compromise the entire cluster.

The Solution: Layered NetworkPolicies for Progressive Security

Step 1: Deploy the Application Pods

Create a Namespace for Isolation

Organize your application by creating a dedicated namespace.

kubectl create namespace secure-app

Effect:

  • All application resources will be deployed in this namespace
  • NetworkPolicies will only affect this namespace, avoiding interference with other workloads

Deploy the Frontend Pod

The frontend should be publicly accessible and interact with the backend.

apiVersion: v1
kind: Pod
metadata:
  name: frontend
  namespace: secure-app
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: nginx

Effect:

  • Creates a frontend pod that can serve requests
  • No restrictions yet—open network connectivity

Deploy the Backend Pod

The backend should only communicate with the frontend and the database.

apiVersion: v1
kind: Pod
metadata:
  name: backend
  namespace: secure-app
  labels:
    app: backend
spec:
  containers:
    - name: backend
      image: python:3.9
      # the bare python image exits immediately; run a simple HTTP server so the pod stays up for the tests below
      command: ["python", "-m", "http.server", "80"]

Effect:

  • Creates a backend pod to process logic
  • Currently accessible by any pod in the cluster

Deploy the Database Pod

The database should only be accessible to the backend.

apiVersion: v1
kind: Pod
metadata:
  name: database
  namespace: secure-app
  labels:
    app: database
spec:
  containers:
    - name: database
      image: postgres
      env:
        - name: POSTGRES_PASSWORD   # the postgres image refuses to start without this; use a Secret in real setups
          value: "changeme"

Effect:

  • Creates a database pod with unrestricted access
  • A potential security risk—frontend or any pod could connect

Step 2: Implement NetworkPolicies for Security

By default, Kubernetes allows all pod-to-pod communication. To enforce security, we will apply four key NetworkPolicies step by step.

Enforce a Default Deny-All Policy

Restrict all ingress and egress traffic by default in the secure-app namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: secure-app
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

Effect:

  • No pod can send or receive traffic until explicitly allowed
  • Zero-trust security model enforced at the namespace level

Allow Frontend to Backend Communication

The frontend should be allowed to send requests to the backend, but not directly to the database.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend

Effect:

  • Frontend can talk to backend
  • Backend cannot talk to frontend or database yet

Allow Backend to Access Database

The backend should be the only service that can communicate with the database.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-database
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend

Effect:

  • Backend can talk to database
  • Frontend is blocked from accessing the database

Restrict Backend’s Outbound Traffic

To prevent data exfiltration, restrict backend’s egress traffic to only a specific external API.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-backend-egress
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24  # Allowed external API

Effect:

  • Backend can only connect to authorized external APIs
  • Prevents accidental or malicious data exfiltration
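One practical caveat: the deny-all policy blocks egress for every pod, so the frontend (and backend) also need egress rules, including DNS on port 53, before the connectivity tests in the next step can succeed. A hedged sketch for the frontend:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-egress
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: backend
    - ports:                 # allow DNS lookups
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53

A similar egress rule from the backend to the database (plus DNS) completes the chain.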

Step 3: Verify NetworkPolicies

After applying the policies (including the egress rules sketched above), test network access between the pods. Note that bare pod names are not DNS names, so either front each pod with a Service or substitute the pod IPs from kubectl get pods -n secure-app -o wide in the commands below.

Check if frontend can access backend:

kubectl exec frontend -n secure-app -- curl backend:80

Expected: Success

Check if frontend can access database:

kubectl exec frontend -n secure-app -- curl database:5432

Expected: Blocked (the connection times out or is refused, depending on the CNI)

Check if backend can access database:

kubectl exec backend -n secure-app -- curl database:5432

Expected: Success

Conclusion

We implemented a four-layer security model to gradually enforce pod-to-pod communication rules:

  • Default Deny-All Policy – Establish a zero-trust baseline by blocking all ingress and egress traffic. No pod can talk to another unless explicitly allowed.
  • Allow Frontend-to-Backend Traffic – Define strict ingress rules so only frontend pods can reach backend services.
  • Restrict Backend-to-Database Access – Grant database access only to backend pods, preventing unauthorized services from connecting.
  • Control Outbound Traffic – Limit backend egress access only to trusted external APIs while blocking all other outbound requests.

The Impact: Stronger Kubernetes Security

  • Strict pod-to-pod communication controls
  • Zero-trust networking within the cluster
  • Granular access control without breaking service dependencies
  • Minimal attack surface, reducing lateral movement risks

This layered approach ensures network isolation, data security, and regulated API access, transforming an open network into a highly secure Kubernetes environment.

Are you using NetworkPolicies in your Kubernetes setup? Let’s discuss how we can enhance cluster security together! Drop your thoughts in the comments.👇

Automating Container Security Scans with Trivy in GitHub Actions

Introduction

Ensuring security in containerized applications is a critical aspect of modern DevOps workflows. To enhance security and streamline vulnerability detection, I integrated Trivy into my GitHub repository, enabling automated security scanning within the CI/CD pipeline.

Objective

To automate vulnerability scanning for container images using Trivy within GitHub Actions, ensuring secure deployments with minimal manual intervention.

Step 1: Install Trivy v0.18.3

Run the following commands to download and install Trivy v0.18.3:

# Update package lists
sudo apt update

# Download Trivy v0.18.3 .deb package
wget https://github.com/aquasecurity/trivy/releases/download/v0.18.3/trivy_0.18.3_Linux-64bit.deb

# Install Trivy using dpkg
sudo dpkg -i trivy_0.18.3_Linux-64bit.deb

# Verify installation
trivy --version

Step 2: Create a GitHub Actions Workflow for Automated Scanning

To integrate Trivy into your GitHub repository (trivy-security-scan), create a workflow file.

Create the Workflow Directory and File

mkdir -p .github/workflows
nano .github/workflows/trivy-scan.yml

Add the Following Content

name: Trivy Security Scan

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  trivy-scan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4

      - name: Install Trivy v0.18.3
        run: |
          sudo apt update
          wget https://github.com/aquasecurity/trivy/releases/download/v0.18.3/trivy_0.18.3_Linux-64bit.deb
          sudo dpkg -i trivy_0.18.3_Linux-64bit.deb

      - name: Run Trivy Image Scan
        run: |
          trivy image alpine:latest > trivy-report.txt
          cat trivy-report.txt

      - name: Upload Scan Report
        uses: actions/upload-artifact@v4
        with:
          name: security-report
          path: trivy-report.txt
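To turn the scan into a real quality gate rather than just a report, you can add a step that fails the job when serious vulnerabilities are found. A sketch using Trivy's --severity and --exit-code flags (swap alpine:latest for the image you actually build):

      - name: Fail on HIGH/CRITICAL Vulnerabilities
        run: trivy image --exit-code 1 --severity HIGH,CRITICAL alpine:latest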

Step 3: Commit and Push the Workflow

git add .github/workflows/trivy-scan.yml
git commit -m "Added Trivy v0.18.3 security scan workflow"
git push origin main

Step 4: Verify GitHub Actions Workflow

  1. Open your GitHub repository: https://github.com/ArvindRaja45/trivy-security-scan.
  2. Click on the “Actions” tab.
  3. Ensure the “Trivy Security Scan” workflow runs successfully.
  4. Check the trivy-report.txt under Artifacts in GitHub Actions.

Final Outcome

  • Trivy v0.18.3 is installed using .deb package.
  • GitHub Actions will run Trivy security scans on Docker images.
  • Vulnerability reports are uploaded as artifacts for review.

Why This Matters

By integrating security checks early in the CI/CD pipeline, we reduce risks and avoid last-minute surprises in production!

Security isn’t a one-time process—it’s a culture! How are you integrating security in your DevOps workflow? Let’s discuss in the comments!👇