Implementing the Prometheus Operator: A Complete Guide to Kubernetes Monitoring

Monitoring is the backbone of any reliable Kubernetes cluster. It ensures visibility into resource usage, application health, and potential failures. Instead of manually deploying and managing Prometheus, the Prometheus Operator simplifies and automates monitoring with custom resource definitions (CRDs).

In this guide, we will:
✅ Deploy the Prometheus Operator in Kubernetes
✅ Configure Prometheus, Alertmanager, and Grafana
✅ Set up automated service discovery using ServiceMonitor
✅ Enable alerts and notifications

Let’s get started!

Installing the Prometheus Operator Using Helm

The fastest and most efficient way to deploy the Prometheus stack is through Helm.

Step 1: Add the Prometheus Helm Repository

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

Step 2: Install the Prometheus Operator

helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace

This command installs:
✅ Prometheus
✅ Alertmanager
✅ Grafana
✅ Node Exporters

Verify the installation:

kubectl get pods -n monitoring

Deploying a Prometheus Instance

Now, we will define a Prometheus Custom Resource to manage our Prometheus deployment.

prometheus.yaml

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus-instance
  namespace: monitoring
spec:
  replicas: 2
  serviceMonitorSelector: {}
  resources:
    requests:
      memory: 400Mi
      cpu: 200m

Apply the Prometheus Instance

kubectl apply -f prometheus.yaml

The Operator picks up this resource and creates a two-replica Prometheus instance (run as a StatefulSet) in the monitoring namespace.

Configuring Service Discovery with ServiceMonitor

Prometheus requires service discovery to scrape metrics from your applications. The ServiceMonitor CRD makes this process seamless.

servicemonitor.yaml

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-monitor
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
    - port: metrics
      interval: 30s

Apply the ServiceMonitor Configuration

kubectl apply -f servicemonitor.yaml

This ensures that Prometheus automatically discovers and scrapes metrics from services with the label app: myapp.
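For this selector to match anything, the application must be exposed through a Service that carries the app: myapp label and a port named metrics. Below is a minimal sketch of such a Service; the name, namespace, and port number 8080 are assumptions for illustration, and since the ServiceMonitor sets no namespaceSelector, the Service is placed in the same monitoring namespace.

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: monitoring
  labels:
    app: myapp            # matched by the ServiceMonitor selector
spec:
  selector:
    app: myapp            # pods backing this Service
  ports:
    - name: metrics       # the ServiceMonitor scrapes this named port
      port: 8080
      targetPort: 8080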

Setting Up Alerting with Alertmanager

Alertmanager handles alerts generated by Prometheus and routes them to email, Slack, PagerDuty, etc.

alertmanager.yaml

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: alertmanager-instance
  namespace: monitoring
spec:
  replicas: 2

Apply the Alertmanager Configuration

kubectl apply -f alertmanager.yaml

This gives us a highly available Alertmanager cluster; routing alerts to email, Slack, or PagerDuty is then configured through the Alertmanager configuration.
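Alertmanager only routes alerts; the alerts themselves come from PrometheusRule resources that Prometheus evaluates. As a hedged example, the rule below fires when a scrape target stops responding — the rule name, labels, and threshold are illustrative, and it assumes the Prometheus instance's ruleSelector is configured to pick it up.

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: instance-down
  namespace: monitoring
spec:
  groups:
    - name: availability
      rules:
        - alert: InstanceDown
          expr: up == 0              # target failed its most recent scrape
          for: 5m                    # condition must hold for 5 minutes before firing
          labels:
            severity: critical
          annotations:
            summary: "{{ $labels.instance }} has been unreachable for 5 minutes"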

Accessing Grafana Dashboards

Grafana provides real-time visualization for Prometheus metrics. It is already included in the Prometheus Operator stack.

Access Grafana using port-forwarding

kubectl port-forward svc/prometheus-grafana -n monitoring 3000:80

Open http://localhost:3000 in your browser.
Username: admin
Password: Retrieve using:

kubectl get secret -n monitoring prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode

Now, you can import dashboards and start visualizing Kubernetes metrics!

Verifying the Monitoring Stack

To ensure everything is running smoothly, check the pods:

kubectl get pods -n monitoring

If you see Prometheus, Alertmanager, and Grafana running, congratulations! You now have a fully automated Kubernetes monitoring stack.

Conclusion: Why Use the Prometheus Operator?

By using the Prometheus Operator, we achieved:
✅ Simplified monitoring stack deployment
✅ Automated service discovery for metrics collection
✅ Centralized alerting with Alertmanager
✅ Interactive dashboards with Grafana 

With this setup, you can scale, extend, and customize monitoring based on your infrastructure needs.

Let me know if you have any questions in the comments! 👇

Using Sealed Secrets for Secure GitOps Deployments

Overview

Managing secrets securely in a GitOps workflow is a critical challenge. Storing plain Kubernetes secrets in a Git repository is risky because anyone with access to the repository can view them. Sealed Secrets, an open-source project by Bitnami, provides a way to encrypt secrets before storing them in Git, ensuring they remain secure.

Key Takeaways:

  • Securely store Kubernetes secrets in a Git repository.
  • Automate secret management using GitOps principles.
  • Ensure secrets can only be decrypted by the Kubernetes cluster.

Step 1: Install Sealed Secrets Controller

The Sealed Secrets Controller runs in the Kubernetes cluster and is responsible for decrypting Sealed Secrets into regular Kubernetes secrets.

Install via Helm

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install sealed-secrets bitnami/sealed-secrets --namespace kube-system

Verify installation:

kubectl get pods -n kube-system | grep sealed-secrets
kubectl get svc -n kube-system | grep sealed-secrets

Step 2: Install Kubeseal CLI

To encrypt secrets locally before committing them to Git, install the kubeseal CLI:

Linux Installation

wget https://github.com/bitnami-labs/sealed-secrets/releases/latest/download/kubeseal-linux-amd64 -O kubeseal
chmod +x kubeseal
sudo mv kubeseal /usr/local/bin/

Verify installation:

kubeseal --version

Step 3: Create a Kubernetes Secret

Let’s create a secret for a database password.

Create a file named secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: default
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=   # Base64 encoded "password"
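If you prefer not to base64-encode values by hand, kubectl can generate an equivalent manifest for you (shown with the same example password):

kubectl create secret generic my-secret \
  --namespace default \
  --from-literal=DB_PASSWORD=password \
  --dry-run=client -o yaml > secret.yaml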

Apply the secret:

kubectl apply -f secret.yaml

Step 4: Encrypt the Secret Using Kubeseal

Use the kubeseal CLI to encrypt the secret so it can be safely stored in Git.

kubeseal --controller-name=sealed-secrets \
  --controller-namespace=kube-system \
  --format=yaml < secret.yaml > sealed-secret.yaml

The output sealed-secret.yaml will contain an encrypted version of the secret.

Example:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-secret
  namespace: default
spec:
  encryptedData:
    DB_PASSWORD: AgA+...long_encrypted_value...==

Now, delete the original secret from the cluster:

kubectl delete secret my-secret

Step 5: Apply Sealed Secret to Kubernetes

Deploy the sealed secret to Kubernetes:

kubectl apply -f sealed-secret.yaml

The Sealed Secrets Controller will automatically decrypt it and create a regular Kubernetes secret.

Verify the secret is recreated:

kubectl get secrets

Step 6: Push to GitHub for GitOps

Now, let’s commit the sealed secret to GitHub so it can be managed in a GitOps workflow.

Initialize a Git Repository (If Not Already Done)

git init
git remote add origin https://github.com/ArvindRaja45/deploy.git

Add and Commit Sealed Secret

git add sealed-secret.yaml
git commit -m "Added sealed secret for GitOps"
git push origin main

Step 7: Automate Deployment with GitHub Actions

To deploy the sealed secret automatically, create a GitHub Actions workflow.

Create a new file: .github/workflows/deployment.yaml

name: Deploy Sealed Secret

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'latest'

      - name: Configure Kubeconfig
        run: echo "$KUBECONFIG_DATA" | base64 --decode > kubeconfig.yaml

      - name: Deploy Sealed Secret
        run: kubectl apply -f sealed-secret.yaml --kubeconfig=kubeconfig.yaml

Push the Workflow to GitHub

git add .github/workflows/deployment.yaml
git commit -m "Added GitHub Actions deployment workflow"
git push origin main

Conclusion

Using Sealed Secrets, we achieved:

  • Secure secret management in GitOps workflows.
  • Automated deployments with GitHub Actions.
  • No plaintext secrets in Git repositories.

This setup ensures that secrets remain encrypted at rest, providing a secure and automated way to manage secrets in a Kubernetes environment.

Have you used Sealed Secrets in your GitOps workflow? Share your experience!👇

Automating Kubernetes Deployments with GitHub Actions

In today’s DevOps world, automation is key to faster and more reliable deployments. Instead of manually applying Kubernetes manifests, we can use GitHub Actions to trigger deployments automatically whenever we push code.

What We Built Today?

  • A complete GitHub Actions pipeline for Kubernetes deployments
  • End-to-end automation from code commit to deployment
  • Secure & efficient setup using GitHub Secrets

Key Challenges We Solved:

  • How to integrate GitHub Actions with Kubernetes?
  • Ensuring deployments are non-root and secure
  • Handling GitHub Secrets for secure kubeconfig access

Kubernetes Deployment YAML

Here’s the Kubernetes deployment we used today:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      securityContext:
        runAsNonRoot: true
      containers:
        - name: myapp
          image: nginxinc/nginx-unprivileged:latest
          ports:
            - containerPort: 8080   # nginx-unprivileged listens on 8080 by default
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
          volumeMounts:
            - mountPath: /var/cache/nginx
              name: cache-volume
            - mountPath: /tmp
              name: tmp-volume
      volumes:
        - name: cache-volume
          emptyDir: {}
        - name: tmp-volume
          emptyDir: {}
  • Runs as a non-root user
  • Read-only root filesystem for security
  • Uses nginx-unprivileged for better compliance

Setting Up GitHub Actions for Kubernetes

To automate deployment, we used this GitHub Actions workflow:

name: Deploy to Kubernetes

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: latest

      - name: Configure Kubernetes Cluster
        run: echo "${{ secrets.KUBECONFIG }}" | base64 --decode > kubeconfig

      - name: Deploy to Kubernetes
        run: kubectl apply -f deploy.yaml --kubeconfig=kubeconfig

What It Does?

  • Triggers on every git push to main
  • Sets up kubectl to interact with the cluster
  • Uses GitHub Secrets (KUBECONFIG) for secure authentication
  • Deploys the latest changes to Kubernetes automatically
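The workflow assumes a repository secret named KUBECONFIG containing your kubeconfig, base64-encoded. One way to prepare that value locally (assuming your kubeconfig is at ~/.kube/config) before adding it under Settings → Secrets and variables → Actions:

# produce a single-line base64 string suitable for pasting into a GitHub Actions secret
base64 -w0 ~/.kube/config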

Why This Matters?

  • No more manual deployments
  • Instant updates on every push
  • Security-first approach with GitHub Secrets

Do you automate your Kubernetes deployments? Let’s discuss best practices in the comments! 👇

Implementing GitOps with Flux CD: A Deep Dive into Automated Kubernetes Deployments

The Challenge:

Ensuring a declarative and automated synchronization between Git and Kubernetes clusters is crucial for consistency, version control, and automated rollback capabilities. Traditional CI/CD pipelines often introduce inconsistencies due to manual triggers and state drift.

The Solution: Flux CD

Flux CD follows the GitOps model, ensuring that the desired state defined in a Git repository is continuously reconciled with the actual state in Kubernetes.

Example Scenario:

We are deploying a secure Nginx application using Flux CD. The goal is to:

  • Automate deployments from a private Git repository
  • Ensure rollback capabilities with Git versioning
  • Enforce a secure and immutable infrastructure

Step 1: Install Flux CLI

Ensure you have kubectl and helm installed, then install the Flux CLI:

curl -s https://fluxcd.io/install.sh | sudo bash
export PATH=$PATH:/usr/local/bin
flux --version

Step 2: Bootstrap Flux in Your Kubernetes Cluster

Flux needs to be bootstrapped with a Git repository that will act as the single source of truth for deployments.

flux bootstrap github \
  --owner=<your-github-username> \
  --repository=flux-gitops \
  --branch=main \
  --path=clusters/my-cluster \
  --personal

This command:

  • Sets up Flux in the cluster
  • Connects it to the specified GitHub repository
  • Deploys Flux components

Step 3: Define a GitOps Repository Structure

Structure your repository as follows:

flux-gitops/
├── clusters/
│   └── my-cluster/
│       └── kustomization.yaml
└── apps/
    └── nginx/
        ├── deployment.yaml
        ├── service.yaml
        └── kustomization.yaml

Step 4: Deploy a Secure Nginx Application

apps/nginx/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-secure
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-secure
  template:
    metadata:
      labels:
        app: nginx-secure
    spec:
      containers:
      - name: nginx
        # unprivileged image runs as a non-root user and listens on 8080
        image: nginxinc/nginx-unprivileged:1.21.6
        ports:
        - containerPort: 8080
        securityContext:
          runAsNonRoot: true
          readOnlyRootFilesystem: true
        volumeMounts:
        - name: tmp
          mountPath: /tmp          # writable scratch space needed with a read-only root filesystem
      volumes:
      - name: tmp
        emptyDir: {}

apps/nginx/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
spec:
  selector:
    app: nginx-secure
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

Step 5: Register the Nginx Application with Flux

Flux requires a Kustomization resource to track and deploy applications.

apps/nginx/kustomization.yaml

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: nginx
  namespace: flux-system
spec:
  targetNamespace: default
  interval: 1m
  sourceRef:
    kind: GitRepository
    name: flux-gitops
  path: "./apps/nginx"
  prune: true
  wait: true

Apply the Kustomization to the cluster:

kubectl apply -f apps/nginx/kustomization.yaml

Flux will now continuously monitor and apply the Nginx manifests from the Git repository.
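Note that the sourceRef above expects a GitRepository named flux-gitops. The bootstrap command creates a source called flux-system, so either point the sourceRef at that name or define the source explicitly. A rough sketch of such a GitRepository (the URL and branch are placeholders):

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-gitops
  namespace: flux-system
spec:
  interval: 1m                      # how often Flux fetches the repository
  url: https://github.com/<your-github-username>/flux-gitops
  ref:
    branch: main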

Step 6: Verify Deployment

Check the status of the Nginx deployment:

kubectl get pods -l app=nginx-secure
kubectl get svc nginx-service

Flux’s reconciliation logs can be checked with:

flux get kustomizations
flux get sources git

Step 7: Automatic Rollback

Flux tracks all changes via Git. If an incorrect version of Nginx is deployed, revert using:

git revert <commit-id>
git push origin main

Flux will detect this change and automatically restore the last stable state.

Flux CD ensures:

  • Continuous deployment from Git
  • Automated rollbacks via Git versioning
  • Enhanced security with read-only containers
  • Immutable and declarative infrastructure

This is how GitOps transforms Kubernetes deployments into a fully automated and scalable process.

Conclusion

GitOps with Flux CD revolutionizes Kubernetes deployments by making them declarative, automated, and highly secure. With Git as the single source of truth, deployments become version-controlled, reproducible, and rollback-friendly.

Are you using GitOps in production? Drop a comment!👇

Setting Up Tekton Pipelines for Kubernetes-Native CI/CD

Why Tekton?

In modern cloud environments, traditional CI/CD tools can introduce complexity and infrastructure overhead. Tekton, a Kubernetes-native CI/CD framework, provides:

  • Declarative Pipelines with Kubernetes CRDs
  • Event-Driven Automation through triggers
  • Seamless GitHub & DockerHub Integration
  • Scalability & Portability across Kubernetes clusters

With Tekton, CI/CD becomes a native Kubernetes workload, reducing external dependencies and enhancing automation.

Real-World Use Case

Imagine a microservices-based application where developers frequently push updates to GitHub. A robust pipeline is required to:

  • Detect changes in the repository
  • Build & test the application
  • Push the container image to a registry
  • Deploy the latest version to Kubernetes automatically

Tekton enables this entire process within Kubernetes—without relying on external CI/CD systems.

Step 1: Install Tekton in Kubernetes

1.1 Install Tekton Pipelines

kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

1.2 Install Tekton Triggers

kubectl apply --filename https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml

1.3 Verify Installation

kubectl get pods -n tekton-pipelines

Step 2: Define Tekton Pipeline Components

2.1 Create a Tekton Task (task-build.yaml)

This task clones a GitHub repository and builds a container image using Kaniko.

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-task
spec:
  steps:
    - name: clone-repo
      image: alpine/git
      script: |
        #!/bin/sh
        git clone https://github.com/ArvindRaja45/rep.git /workspace/source
    - name: build-image
      image: gcr.io/kaniko-project/executor:latest
      args:
        - "--context=/workspace/source"
        - "--destination=myrepo/my-app:latest"

2.2 Apply the Task

kubectl apply -f task-build.yaml

Step 3: Define the Pipeline (pipeline.yaml)

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ci-pipeline
spec:
  tasks:
    - name: build
      taskRef:
        name: build-task
kubectl apply -f pipeline.yaml

Step 4: Configure PipelineRun (pipelinerun.yaml)

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: ci-pipeline-run
spec:
  pipelineRef:
    name: ci-pipeline
kubectl apply -f pipelinerun.yaml

Step 5: Automate Triggering with Tekton Triggers

5.1 Define an EventListener

apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-listener
spec:
  serviceAccountName: tekton-triggers-sa
  triggers:
    - name: github-push
      bindings:
        - ref: github-trigger-binding
      template:
        ref: github-trigger-template
kubectl apply -f eventlistener.yaml
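The EventListener references a service account, a TriggerBinding, and a TriggerTemplate that are not shown in this guide, so treat the following as an assumed sketch of what github-trigger-binding and github-trigger-template could look like: the binding pulls the repository URL out of the GitHub push payload, and the template stamps out a PipelineRun per event.

apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-trigger-binding
spec:
  params:
    - name: git-repo-url
      value: $(body.repository.clone_url)   # field from the GitHub push payload
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: github-trigger-template
spec:
  params:
    - name: git-repo-url                 # available to the resource templates below
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: ci-pipeline-run-   # a new PipelineRun per push event
      spec:
        pipelineRef:
          name: ci-pipeline

In a fuller setup, the git-repo-url parameter would be passed into the pipeline instead of hard-coding the repository in the build task.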

5.2 Expose the Listener

kubectl port-forward service/el-github-listener 8080:8080 -n tekton-pipelines

Step 6: Connect to GitHub Webhooks

  • Go to GitHub → Repository → Settings → Webhooks
  • Add http://EXTERNAL_IP:8080
  • Select application/json and push event

Step 7: Monitor the Pipeline Execution

tkn pipeline list
tkn pipelinerun list
tkn pipelinerun describe ci-pipeline-run

Key Takeaways

  • Kubernetes-native automation simplifies CI/CD workflows
  • Event-driven pipelines improve efficiency and response time
  • GitOps integration ensures seamless deployment processes
  • Scalability—Tekton adapts to both small and large-scale applications

Conclusion

Now you have a fully Kubernetes-native CI/CD pipeline using Tekton, with automated GitHub-triggered builds and deployments.

Want to go deeper? Let’s explore multi-stage pipelines, security scans, and GitOps integrations! Drop a comment👇

Zero-Downtime Deployments with ArgoCD in Minikube – A Game Changer!

The Challenge

Ever pushed an update to your Kubernetes app and suddenly… Downtime!
Users frustrated, services unavailable, and rollback becomes a nightmare. Sounds familiar?

The Solution: GitOps + ArgoCD

With ArgoCD, we eliminate downtime, automate rollbacks, and ensure seamless deployments directly from Git. Here’s how:

  • Push your code → ArgoCD auto-syncs and deploys!
  • Rolling updates ensure no downtime – users always get a running instance
  • Health checks prevent broken deployments – if something fails, ArgoCD rolls back instantly
  • Version-controlled deployments – every change is trackable and reversible

Step 1: Start Minikube and Install ArgoCD

Ensure you have Minikube and kubectl installed.

# Start Minikube
minikube start

# Create the ArgoCD namespace
kubectl create namespace argocd

# Install ArgoCD
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Wait for the ArgoCD pods to be ready:

kubectl get pods -n argocd

Step 2: Expose and Access ArgoCD UI

Expose the ArgoCD API server:

kubectl port-forward svc/argocd-server -n argocd 8080:443

Retrieve the admin password:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 --decode; echo

Login to ArgoCD:

argocd login localhost:8080 --username admin --password <your-password>

Step 3: Connect ArgoCD to Your GitHub Repo

Use SSH authentication since GitHub removed password-based authentication.

argocd repo add git@github.com:ArvindRaja45/rep.git --ssh-private-key-path ~/.ssh/id_rsa

Step 4: Define Kubernetes Manifests

Deployment.yaml (Zero-Downtime Strategy)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myrepo/my-app:latest
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10

Service.yaml (Expose the Application)

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP

Ingress.yaml (External Access)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
    - host: my-app.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80

Step 5: Deploy Application with ArgoCD

argocd app create my-app \
    --repo git@github.com:ArvindRaja45/rep.git \
    --path minikube \
    --dest-server https://kubernetes.default.svc \
    --dest-namespace default

Sync the application:

argocd app sync my-app

Monitor the deployment:

kubectl get pods -n default
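The “push your code → ArgoCD auto-syncs” behaviour requires automated sync to be enabled; by default an application is only synced manually. One way to switch it on with the argocd CLI:

# enable automated sync, prune deleted resources, and self-heal drift from Git
argocd app set my-app --sync-policy automated --auto-prune --self-heal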

Step 6: Enable Auto-Rollback on Failure

Edit Deployment.yaml to add readiness probe:

readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10

Apply the changes:

kubectl apply -f deployment.yaml

If a new version fails its readiness checks, Kubernetes pauses the rollout so the existing replicas keep serving traffic, and ArgoCD marks the application as Degraded. Reverting the change in Git then lets ArgoCD sync the cluster back to the last known-good state.

Why This is a Game-Changer

  • No more downtime – deployments are seamless and controlled
  • Fully automated – Git becomes your single source of truth
  • Faster rollbacks – mistakes are fixed in seconds, not hours
  • Scalable & production-ready – ArgoCD grows with your infrastructure

Conclusion

By integrating ArgoCD into your Kubernetes workflow, you can achieve zero-downtime deployments, automated rollbacks, and seamless application updates—all while ensuring stability and scalability. With Git as the single source of truth, your deployments become more reliable, repeatable, and transparent.

In today’s fast-paced DevOps world, automation is key to maintaining high availability and minimizing risk. Whether you’re running Minikube for development or scaling across multi-cluster production environments, ArgoCD is a game-changer for Kubernetes deployments.

How are you handling deployments in Kubernetes? Have you implemented ArgoCD yet? Share your thoughts!👇

Mastering Kubernetes Network Security with NetworkPolicies

Introduction

Did you know? By default, every pod in Kubernetes can talk to any other pod—leading to unrestricted internal communication and potential security risks. This is a major concern in production environments where microservices demand strict access controls.

So, how do we lock down communication while ensuring seamless service interactions? NetworkPolicies provide the answer!

The Challenge: Unrestricted Communication = Security Risk

  • Pods can freely communicate across namespaces
  • Sensitive data exposure due to open networking
  • No control over egress traffic to external services
  • Lateral movement risk if an attacker compromises a pod

In short, without proper security, a single breach can compromise the entire cluster.

The Solution: Layered NetworkPolicies for Progressive Security

Step 1: Deploy the Application Pods

Create a Namespace for Isolation

Organize your application by creating a dedicated namespace.

kubectl create namespace secure-app

Effect:

  • All application resources will be deployed in this namespace
  • NetworkPolicies will only affect this namespace, avoiding interference with other workloads

Deploy the Frontend Pod

The frontend should be publicly accessible and interact with the backend.

apiVersion: v1
kind: Pod
metadata:
  name: frontend
  namespace: secure-app
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: nginx

Effect:

  • Creates a frontend pod that can serve requests
  • No restrictions yet—open network connectivity

Deploy the Backend Pod

The backend should only communicate with the frontend and the database.

apiVersion: v1
kind: Pod
metadata:
  name: backend
  namespace: secure-app
  labels:
    app: backend
spec:
  containers:
    - name: backend
      image: python:3.9

Effect:

  • Creates a backend pod to process logic
  • Currently accessible by any pod in the cluster

Deploy the Database Pod

The database should only be accessible to the backend.

apiVersion: v1
kind: Pod
metadata:
  name: database
  namespace: secure-app
  labels:
    app: database
spec:
  containers:
    - name: database
      image: postgres

Effect:

  • Creates a database pod with unrestricted access
  • A potential security risk—frontend or any pod could connect

Step 2: Implement NetworkPolicies for Security

By default, Kubernetes allows all pod-to-pod communication. To enforce security, we will apply four key NetworkPolicies step by step.

Enforce a Default Deny-All Policy

Restrict all ingress and egress traffic by default in the secure-app namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: secure-app
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

Effect:

  • No pod can send or receive traffic until explicitly allowed
  • Zero-trust security model enforced at the namespace level
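One side effect of the deny-all policy is that pods in secure-app can no longer resolve DNS names, because egress to kube-dns is blocked as well. A common companion policy (a sketch; it assumes CoreDNS runs in kube-system, which carries the automatic kubernetes.io/metadata.name label) re-allows DNS for every pod in the namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: secure-app
spec:
  podSelector: {}                    # applies to every pod in secure-app
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53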

Allow Frontend to Backend Communication

The frontend should be allowed to send requests to the backend, but not directly to the database.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend

Effect:

  • Frontend can talk to backend
  • Backend cannot talk to frontend or database yet

Allow Backend to Access Database

The backend should be the only service that can communicate with the database.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-database
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend

Effect:

  • Backend can talk to database
  • Frontend is blocked from accessing the database

Restrict Backend’s Outbound Traffic

To prevent data exfiltration, restrict backend’s egress traffic to only a specific external API.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-backend-egress
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24  # Allowed external API

Effect:

  • Backend can only connect to authorized external APIs
  • Prevents accidental or malicious data exfiltration

Step 3: Verify NetworkPolicies

After applying the policies, test network access between services.
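Two caveats before running the checks: the hostnames backend and database only resolve if matching Services exist in the namespace (otherwise test against the pod IPs), and because the deny-all policy also blocks egress, the calling pods need egress rules in addition to the ingress rules above. A hedged sketch that lets the frontend initiate connections to the backend (a similar backend-to-database egress rule is needed for the last check):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-egress
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: backend           # frontend may open connections to backend only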

Check if frontend can access backend:

kubectl exec frontend -n secure-app -- curl backend:80

Expected: Success

Check if frontend can access database:

kubectl exec frontend -n secure-app -- curl database:5432

Expected: Blocked (the request times out, since the policy silently drops the traffic)

Check if backend can access database:

kubectl exec backend -n secure-app -- curl database:5432

Expected: Success

Conclusion

We implemented a four-layer security model to gradually enforce pod-to-pod communication rules:

  • Default Deny-All Policy – Establish a zero-trust baseline by blocking all ingress and egress traffic. No pod can talk to another unless explicitly allowed.
  • Allow Frontend-to-Backend Traffic – Define strict ingress rules so only frontend pods can reach backend services.
  • Restrict Backend-to-Database Access – Grant database access only to backend pods, preventing unauthorized services from connecting.
  • Control Outbound Traffic – Limit backend egress access only to trusted external APIs while blocking all other outbound requests.

The Impact: Stronger Kubernetes Security

  • Strict pod-to-pod communication controls
  • Zero-trust networking within the cluster
  • Granular access control without breaking service dependencies
  • Minimal attack surface, reducing lateral movement risks

This layered approach ensures network isolation, data security, and regulated API access, transforming an open network into a highly secure Kubernetes environment.

Are you using NetworkPolicies in your Kubernetes setup? Let’s discuss how we can enhance cluster security together! Drop your thoughts in the comments.👇

Automating Container Security Scans with Trivy in GitHub Actions

Introduction

Ensuring security in containerized applications is a critical aspect of modern DevOps workflows. To enhance security and streamline vulnerability detection, I integrated Trivy into my GitHub repository, enabling automated security scanning within the CI/CD pipeline.

Objective

To automate vulnerability scanning for container images using Trivy within GitHub Actions, ensuring secure deployments with minimal manual intervention.

Step 1: Install Trivy v0.18.3

Run the following commands to download and install Trivy v0.18.3:

# Update package lists
sudo apt update

# Download Trivy v0.18.3 .deb package
wget https://github.com/aquasecurity/trivy/releases/download/v0.18.3/trivy_0.18.3_Linux-64bit.deb

# Install Trivy using dpkg
sudo dpkg -i trivy_0.18.3_Linux-64bit.deb

# Verify installation
trivy --version

Step 2: Create a GitHub Actions Workflow for Automated Scanning

To integrate Trivy into your GitHub repository (trivy-security-scan), create a workflow file.

Create the Workflow Directory and File

mkdir -p .github/workflows
nano .github/workflows/trivy-scan.yml

Add the Following Content

name: Trivy Security Scan

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  trivy-scan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4

      - name: Install Trivy v0.18.3
        run: |
          sudo apt update
          wget https://github.com/aquasecurity/trivy/releases/download/v0.18.3/trivy_0.18.3_Linux-64bit.deb
          sudo dpkg -i trivy_0.18.3_Linux-64bit.deb

      - name: Run Trivy Image Scan
        run: |
          trivy image alpine:latest > trivy-report.txt
          cat trivy-report.txt

      - name: Upload Scan Report
        uses: actions/upload-artifact@v4
        with:
          name: security-report
          path: trivy-report.txt
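As written, the scan is informational only; the job succeeds even when critical issues are found. If you want the pipeline to fail on serious findings, a possible variation of the scan command uses Trivy's severity filter and exit code:

# exit non-zero (failing the job) when HIGH or CRITICAL vulnerabilities are detected
trivy image --exit-code 1 --severity HIGH,CRITICAL alpine:latest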

Step 3: Commit and Push the Workflow

git add .github/workflows/trivy-scan.yml
git commit -m "Added Trivy v0.18.3 security scan workflow"
git push origin main

Step 4: Verify GitHub Actions Workflow

  1. Open your GitHub repository: https://github.com/ArvindRaja45/trivy-security-scan.
  2. Click on the “Actions” tab.
  3. Ensure the “Trivy Security Scan” workflow runs successfully.
  4. Check the trivy-report.txt under Artifacts in GitHub Actions.

Final Outcome

  • Trivy v0.18.3 is installed from the official .deb package.
  • GitHub Actions will run Trivy security scans on Docker images.
  • Vulnerability reports are uploaded as artifacts for review.

Why This Matters?

By integrating security checks early in the CI/CD pipeline, we reduce risks and avoid last-minute surprises in production!

Security isn’t a one-time process—it’s a culture! How are you integrating security in your DevOps workflow? Let’s discuss in the comments!👇

Implementing Pod Security Standards in Kubernetes: A Practical Guide

Introduction

Securing Kubernetes workloads is critical to prevent security breaches and container escapes. Kubernetes Pod Security Standards (PSS) provide a framework for defining and enforcing security settings for Pods at different levels—Privileged, Baseline, and Restricted.

In this guide, you’ll learn how to implement Pod Security Standards in a Kubernetes cluster while ensuring your applications run smoothly.

Understanding Pod Security Standards (PSS)

Kubernetes defines three security levels for Pods:

  1. Privileged – No restrictions; full host access (Not recommended).
  2. Baseline – Reasonable defaults for running common applications.
  3. Restricted – Strictest policies for maximum security.

Goal: Implement Restricted policies where possible while ensuring apps run without breaking.

Step 1: Enabling Pod Security Admission (PSA)

Starting with Kubernetes v1.23, Pod Security Admission (PSA) replaces the deprecated PodSecurityPolicy (PSP) mechanism for enforcing PSS; PSA became stable in v1.25, the release that also removed PSP.

Check if PSA is Enabled

kubectl get ns --show-labels

If namespaces are not labeled with PSS, you must label them manually.

Step 2: Apply Pod Security Labels to Namespaces

Namespaces must be labeled to enforce a Pod Security Standard.

Baseline Policy (For Standard Applications)

kubectl label namespace default pod-security.kubernetes.io/enforce=baseline

Restricted Policy (For Maximum Security)

kubectl create namespace secure-apps
kubectl label namespace secure-apps pod-security.kubernetes.io/enforce=restricted

Verify Labels

kubectl get ns --show-labels

Step 3: Deploy Applications with Pod Security Standards

Example 1: Non-Root Container (Restricted Mode)

A properly secured Pod must not run as root. Here’s a compliant example:

apiVersion: v1
kind: Pod
metadata:
  name: secure-app
  namespace: secure-apps
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
    fsGroup: 1001
    seccompProfile:
      type: RuntimeDefault          # required by the Restricted profile
  containers:
    - name: app
      # unprivileged nginx image can run as a non-root user and listens on 8080
      image: nginxinc/nginx-unprivileged:latest
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
      volumeMounts:
        - mountPath: /data
          name: app-storage
        - mountPath: /tmp            # writable scratch space needed with a read-only root filesystem
          name: tmp
  volumes:
    - name: app-storage
      emptyDir: {}
    - name: tmp
      emptyDir: {}

Why This Pod Is Secure?

  • Runs as a non-root user (runAsNonRoot: true)
  • No privilege escalation (allowPrivilegeEscalation: false)
  • All unnecessary Linux capabilities are dropped (capabilities.drop: ALL)
  • Uses a read-only root filesystem (readOnlyRootFilesystem: true)
  • Sets seccompProfile: RuntimeDefault, which the Restricted level requires

Step 4: Prevent Non-Compliant Pods from Running

Test deploying a privileged Pod in the secure-apps namespace:

apiVersion: v1
kind: Pod
metadata:
  name: privileged-app
  namespace: secure-apps
spec:
  containers:
    - name: app
      image: nginx:latest
      securityContext:
        privileged: true

Expected Output

Error: pods "privileged-app" is forbidden: violates PodSecurity "restricted:latest"

Kubernetes blocks the Pod due to the privileged: true setting.

Step 5: Audit and Warn Non-Compliant Pods (Optional)

Instead of enforcing policies immediately, you can audit violations first.

Audit Only:

kubectl label namespace dev-team pod-security.kubernetes.io/audit=restricted

Warn Before Deployment:

kubectl label namespace dev-team pod-security.kubernetes.io/warn=restricted

Now, users get warnings instead of immediate rejections.

Step 6: Verify Security Policies

Check PSA Enforcement Logs

kubectl describe pod secure-app -n secure-apps

Test Pod Security Admission

kubectl run test-pod --image=nginx --namespace=secure-apps

If the Pod violates security rules, Kubernetes will block it.

Conclusion

✅ You implemented Kubernetes Pod Security Standards
✅ Pods now run with minimal privileges
✅ Security policies are enforced while maintaining functionality

How do you implement PSS? Let’s discuss best practices in the comments!👇

Setting Up a Secure Multi-tenant Kubernetes Cluster in Minikube

Introduction

In Kubernetes, multi-tenancy enables multiple teams or projects to share the same cluster while maintaining isolation and security. However, ensuring proper access control and preventing resource conflicts is a challenge. This guide walks you through setting up a secure multi-tenant environment using Minikube, Namespaces, and RBAC (Role-Based Access Control).

Why Multi-tenancy in Kubernetes?

✅ Isolates workloads for different teams
✅ Ensures least-privilege access
✅ Prevents unintentional interference between teams
✅ Helps organizations optimize resource usage

Step 1: Start Minikube

Before setting up multi-tenancy, ensure Minikube is running:

minikube start --memory=4096 --cpus=2

Step 2: Create Isolated Namespaces

Each team or project should have its own namespace.

kubectl create namespace dev-team  
kubectl create namespace qa-team  
kubectl create namespace prod-team

You can verify:

kubectl get namespaces

Step 3: Implement Role-Based Access Control (RBAC)

Create a Role for Developers

Developers should only be able to manage resources within their namespace.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev-team
  name: developer-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["create", "get", "list", "delete"]

Apply it:

kubectl apply -f developer-role.yaml

Bind the Role to a User

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev-team
  name: developer-binding
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-role
  apiGroup: rbac.authorization.k8s.io

Apply it:

kubectl apply -f developer-binding.yaml

Now, the user alice has access only to the dev-team namespace.
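RBAC controls who can act in a namespace; to also keep one team from consuming the cluster's resources (the resource-conflict concern from the introduction), you can attach a ResourceQuota per namespace. A sketch with illustrative limits:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-team-quota
  namespace: dev-team
spec:
  hard:
    pods: "20"                 # at most 20 pods in this namespace
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi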

Step 4: Enforce Network Isolation (Optional but Recommended)

To ensure teams cannot access resources outside their namespace, create a NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-access
  namespace: dev-team
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: dev-team   # label set automatically on every namespace

Apply it:

kubectl apply -f restrict-access.yaml

This ensures that pods in dev-team can only communicate within their namespace.

Step 5: Verify Multi-tenancy

  • Try creating resources from a different namespace with a restricted user.
  • Check access control using kubectl auth can-i.

Example:

kubectl auth can-i create pods --as=alice --namespace=dev-team  # Allowed  
kubectl auth can-i delete pods --as=alice --namespace=prod-team  # Denied  

Conclusion

By setting up Namespaces, RBAC, and NetworkPolicies, you have successfully created a secure multi-tenant Kubernetes cluster in Minikube. This setup ensures each team has isolated access to their resources without interference.

Stay tuned for more Kubernetes security insights! 🚀