Implementing Istio: A Step-by-Step Service Mesh Tutorial

Introduction

Modern applications rely on microservices, making service-to-service communication complex. Managing traffic routing, security, and observability becomes crucial.

Istio is a powerful service mesh that provides:
✅ Traffic Management – Fine-grained control over requests.
✅ Security – Mutual TLS (mTLS) for encrypted communication.
✅ Observability – Insights into service interactions and performance.

This step-by-step guide covers:

  • Installing Istio on a Kubernetes cluster.
  • Deploying microservices with Istio sidecars.
  • Configuring traffic routing and security.
  • Enabling monitoring with Grafana, Kiali, and Jaeger.

Step 1: Install Istio in Kubernetes

1.1 Download and Install Istio CLI

curl -L https://istio.io/downloadIstio | sh -
cd istio-*
export PATH=$PWD/bin:$PATH

1.2 Install Istio with the Demo Profile

istioctl install --set profile=demo -y

1.3 Enable Istio Injection

Enable automatic sidecar injection in the default namespace:

kubectl label namespace default istio-injection=enabled

Step 2: Deploy Microservices with Istio

We will deploy two microservices:
  • web – Calls the api service.
  • api – Responds with “Hello from API”.

2.1 Deploy web Service

Create web-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80

Create web-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Apply the deployment:

kubectl apply -f web-deployment.yaml
kubectl apply -f web-service.yaml

2.2 Deploy api Service

Create api-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: hashicorp/http-echo
        args: ["-text=Hello from API"]
        ports:
        - containerPort: 5678

Create api-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678

Apply the deployment:

kubectl apply -f api-deployment.yaml
kubectl apply -f api-service.yaml
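
With injection enabled, every new pod in the default namespace should start with two containers: the application and the istio-proxy sidecar. A quick check:

kubectl get pods

Each pod should report 2/2 in the READY column.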

Step 3: Configure Istio Traffic Routing

3.1 Create a VirtualService for Traffic Control

Create api-virtualservice.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api
spec:
  hosts:
  - api
  http:
  - route:
    - destination:
        host: api
        subset: v1

Apply the rule:

kubectl apply -f api-virtualservice.yaml
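
Note that routing to subset: v1 only works if the api pods carry a matching version: v1 label (added to the Deployment's pod template labels) and a DestinationRule defines that subset. A minimal sketch of the DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api
spec:
  host: api
  subsets:
  - name: v1
    labels:
      version: v1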

Step 4: Enable Observability & Monitoring

4.1 Install Kiali, Jaeger, Prometheus, and Grafana

# Run this from the Istio release directory downloaded in Step 1
kubectl apply -f samples/addons

4.2 Access the Monitoring Dashboards

kubectl port-forward svc/kiali 20001 -n istio-system

Open http://localhost:20001 to view the Kiali dashboard.
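
The other addons can be reached the same way by port-forwarding their services in istio-system, or more conveniently through istioctl, which sets up the tunnel and opens the dashboard for you:

istioctl dashboard grafana
istioctl dashboard jaeger
istioctl dashboard prometheus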

Step 5: Secure Service-to-Service Communication

5.1 Enable mTLS Between Services

Create peerauthentication.yaml:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
spec:
  mtls:
    mode: STRICT

Apply the policy:

kubectl apply -f peerauthentication.yaml
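
The policy's scope depends on where it lives: created in a workload namespace it covers only that namespace, while a policy named default in the Istio root namespace applies mesh-wide.

# Namespace-wide: mTLS enforced only for workloads in the default namespace
kubectl apply -f peerauthentication.yaml -n default

# Mesh-wide: place the same policy in the Istio root namespace instead
kubectl apply -f peerauthentication.yaml -n istio-system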

Conclusion

We have successfully:
✅ Installed Istio and enabled sidecar injection.
✅ Deployed microservices inside the service mesh.
✅ Configured traffic routing using VirtualServices.
✅ Enabled observability tools like Grafana, Jaeger, and Kiali.
✅ Secured communication using mTLS encryption.

Istio simplifies microservices networking while enhancing security and visibility. Start using it today!

Are you using Istio in production? Share your experiences below!👇

Dynamic Configuration Management in Kubernetes with ConfigMaps and Secrets

Introduction

In Kubernetes, managing application configurations dynamically is essential for scalability, security, and zero-downtime deployments. Traditionally, configurations were hardcoded inside applications, requiring a restart to apply changes. However, Kubernetes ConfigMaps and Secrets allow us to separate configurations from application code, enabling real-time updates without affecting running pods.

In this guide, we will:
✅ Use ConfigMaps for non-sensitive configuration data.
✅ Use Secrets for storing sensitive credentials securely.
✅ Mount configurations as volumes for dynamic updates without pod restarts.

Step 1: Creating a ConfigMap

A ConfigMap stores application settings like environment variables, log levels, and external API URLs.

ConfigMap YAML (configmap.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_ENV: "production"
  LOG_LEVEL: "info"

Apply the ConfigMap:

kubectl apply -f configmap.yaml

Step 2: Creating a Secret

A Secret securely stores sensitive data, such as database credentials and API keys.

Secret YAML (secret.yaml)

apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: Ym9zcw==

To encode a password in base64:

echo -n "boss" | base64

Apply the Secret:

kubectl apply -f secret.yaml

Step 3: Injecting ConfigMap and Secret into a Deployment

Now, let’s inject these values into a Kubernetes Deployment.

Deployment YAML (deployment.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:latest
        env:
        - name: APP_ENV
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: APP_ENV
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: DB_PASSWORD

Apply the Deployment:

kubectl apply -f deployment.yaml

Step 4: Enabling Automatic Updates for ConfigMaps

By default, ConfigMap values injected as environment variables are fixed when the container starts; changing the ConfigMap has no effect on a running pod. To pick up changes dynamically, mount the ConfigMap as a volume instead.

Modify the Deployment to Mount the ConfigMap as a Volume

Add the following under the pod template (spec.template.spec):

spec:
  volumes:
    - name: config-volume
      configMap:
        name: app-config
  containers:
    - name: my-app
      image: nginx:latest
      volumeMounts:
        - name: config-volume
          mountPath: "/etc/config"
          readOnly: true

Apply the Updated Deployment:

kubectl apply -f deployment.yaml

Now, whenever the ConfigMap is updated, the files under /etc/config are refreshed automatically without a pod restart (propagation typically takes up to a minute, and it does not apply to subPath mounts). Values injected as environment variables still require a restart to change.
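
To see this in action, change a value in the ConfigMap and read the mounted file again after the kubelet's next sync (allow up to a minute):

kubectl patch configmap app-config --type merge -p '{"data":{"LOG_LEVEL":"debug"}}'
kubectl exec -it <pod-name> -- cat /etc/config/LOG_LEVEL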

Step 5: Verifying ConfigMap and Secret Usage

Check ConfigMap Values in a Running Pod

kubectl exec -it <pod-name> -- printenv | grep APP_ENV

Verify Secret Values (Base64 Encoded)

kubectl get secret app-secret -o yaml

Important: Secret values are only base64-encoded, not encrypted; anyone with read access to the Secret can decode them. Protect Secrets with RBAC and consider enabling encryption at rest.
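
For example, anyone with permission to read the Secret can recover the original value:

kubectl get secret app-secret -o jsonpath='{.data.DB_PASSWORD}' | base64 --decode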

Conclusion

With ConfigMaps and Secrets, we have achieved:
✅ Separation of application and configuration for better maintainability.
✅ Dynamic configuration updates without pod restarts.
✅ Secure handling of sensitive data with Secrets.

This approach ensures zero downtime, scalable deployments, and strong security for your Kubernetes applications.

Have you implemented ConfigMaps and Secrets in your Kubernetes projects? Share your experiences in the comments!👇

Using Kustomize for Environment-Specific Kubernetes Configurations

Managing multiple environments (development, staging, production) in Kubernetes can be complex. Different environments require different configurations, such as replica counts, image versions, and resource limits. Kustomize provides a clean, native Kubernetes solution to manage these variations while keeping a single source of truth.

Why Use Kustomize?

  • Declarative approach – No need for external templating.
  • Layered configuration – Maintain a base config with environment-specific overlays.
  • Native Kubernetes integration – Directly used with kubectl apply -k.

Setting Up Kustomize Directory Structure

Kustomize uses a base-and-overlay pattern. We will create a base configuration that applies to all environments and overlays for dev, staging, and prod to customize them as needed.

Run the following to set up the directory structure:

mkdir -p kustomize/base kustomize/overlays/dev kustomize/overlays/staging kustomize/overlays/prod

Creating the Base Configuration

The base configuration includes the core Deployment and Service YAMLs.

Deployment (kustomize/base/deployment.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80

Service (kustomize/base/service.yaml)

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Base kustomization.yaml (kustomize/base/kustomization.yaml)

resources:
  - deployment.yaml
  - service.yaml

Creating Environment-Specific Overlays

Now, let’s define overlays for dev, staging, and prod.

Dev Patch (kustomize/overlays/dev/deployment-patch.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: nginx
          image: nginx:1.19

Staging Patch (kustomize/overlays/staging/deployment-patch.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: nginx
          image: nginx:1.21

Prod Patch (kustomize/overlays/prod/deployment-patch.yaml)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 5
  template:
    spec:
      containers:
        - name: nginx
          image: nginx:1.25

Defining Overlays

Each overlay references the base and applies environment-specific patches.

Dev kustomization.yaml (kustomize/overlays/dev/kustomization.yaml)

resources:
  - ../../base
patches:
  - path: deployment-patch.yaml

Staging kustomization.yaml (kustomize/overlays/staging/kustomization.yaml)

resources:
  - ../../base
patches:
  - path: deployment-patch.yaml

Prod kustomization.yaml (kustomize/overlays/prod/kustomization.yaml)

resources:
  - ../../base
patches:
  - path: deployment-patch.yaml
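
Before applying an overlay, you can preview the fully rendered manifests it produces:

kubectl kustomize kustomize/overlays/dev/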

Applying the Configurations

To deploy the environment-specific configurations, use:

kubectl apply -k kustomize/overlays/dev/
kubectl apply -k kustomize/overlays/staging/
kubectl apply -k kustomize/overlays/prod/

Verify the deployments:

kubectl get deployments
kubectl get services

Conclusion

Kustomize simplifies Kubernetes configuration management by allowing environment-specific modifications while maintaining a single source of truth. With its patch-based approach, it avoids duplication and makes configurations easier to manage.

Have you used Kustomize before? How do you manage multiple Kubernetes environments? Let’s discuss in the comments!👇

Managing Kubernetes Resources with Pulumi: A Hands-on Guide

Introduction

Infrastructure as Code (IaC) is revolutionizing how we manage cloud-native applications, and Pulumi brings a unique advantage by allowing developers to define and deploy Kubernetes resources using familiar programming languages. Unlike YAML-heavy configurations, Pulumi enables us to programmatically create, manage, and version Kubernetes resources with Python, TypeScript, Go, and more.

In this guide, we’ll walk through deploying an Nginx application on Minikube using Pulumi with Python, ensuring a scalable, maintainable, and declarative approach to Kubernetes management.

Step 1: Installing Pulumi & Setting Up the Environment

Before we start, install Pulumi and its dependencies:

Install Pulumi

curl -fsSL https://get.pulumi.com | sh

Install Kubernetes CLI (kubectl) & Minikube

sudo apt install -y kubectl
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

Start Minikube

minikube start

Step 2: Initialize a Pulumi Project

pulumi new kubernetes-python

During setup:

  • Project name: pulumi-k8s
  • Stack name: dev
  • Dependencies: Pulumi will install pulumi_kubernetes

Verify installation:

pulumi version
kubectl version --client

Step 3: Defining Kubernetes Resources in Python

Now, modify __main__.py to define an Nginx deployment and service using Pulumi:

__main__.py

import pulumi
import pulumi_kubernetes as k8s

# Define labels for the application
app_labels = {"app": "nginx"}

# Define the Deployment
deployment = k8s.apps.v1.Deployment(
    "nginx-deployment",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        namespace="default",
        name="nginx-deployment",
        labels=app_labels,
    ),
    spec=k8s.apps.v1.DeploymentSpecArgs(
        replicas=2,
        selector=k8s.meta.v1.LabelSelectorArgs(
            match_labels=app_labels,
        ),
        template=k8s.core.v1.PodTemplateSpecArgs(
            metadata=k8s.meta.v1.ObjectMetaArgs(
                labels=app_labels,
            ),
            spec=k8s.core.v1.PodSpecArgs(
                containers=[
                    k8s.core.v1.ContainerArgs(
                        name="nginx",
                        image="nginx:latest",
                        ports=[k8s.core.v1.ContainerPortArgs(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# Define a ClusterIP Service
service = k8s.core.v1.Service(
    "nginx-service",
    metadata=k8s.meta.v1.ObjectMetaArgs(
        namespace="default",
        name="nginx-service",
    ),
    spec=k8s.core.v1.ServiceSpecArgs(
        selector=app_labels,
        ports=[k8s.core.v1.ServicePortArgs(port=80, target_port=80)],
        type="ClusterIP",  # Ensures it is NOT a LoadBalancer
    ),
)

# Export the service name
pulumi.export("service_name", service.metadata.name)

Step 4: Deploying the Kubernetes Resources

After defining the resources, apply them using:

pulumi up

This will:
✅ Create an Nginx Deployment with 2 replicas
✅ Create a ClusterIP Service
✅ Deploy everything to Minikube

To verify the deployment:

kubectl get all -n default
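
When you are done experimenting, Pulumi can tear the stack down just as declaratively:

pulumi destroy        # removes the Deployment and Service from the cluster
pulumi stack rm dev   # optionally deletes the stack and its state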

Step 5: Accessing the Application

Since we’re using ClusterIP, we need to port-forward to access the application:

kubectl port-forward svc/nginx-service 8080:80 -n default

Now, open http://localhost:8080 in your browser, and you should see the Nginx welcome page! 

Why Choose Pulumi Over YAML?

  • Programmatic Infrastructure – Use Python, TypeScript, Go, etc., instead of complex YAML.
  • Reusability & Automation – Write functions, use loops, and manage dependencies efficiently.
  • Version Control & CI/CD – Easily integrate with Git workflows and CI/CD systems such as GitHub Actions.

Conclusion

With Pulumi, managing Kubernetes infrastructure becomes developer-friendly and scalable. Unlike traditional YAML-based approaches, Pulumi enables real programming constructs, making it easier to define, update, and manage Kubernetes workloads efficiently.

What do you think? Have you used Pulumi before for Kubernetes? Let’s discuss in the comments!👇

Practical Kubernetes Tracing with Jaeger

Introduction

In modern microservices architectures, debugging performance issues can be challenging. Requests often travel across multiple services, making it difficult to identify bottlenecks. Jaeger, an open-source distributed tracing system, helps solve this problem by providing end-to-end request tracing across services.

In this blog post, we will explore how to:
✅ Deploy Jaeger in Kubernetes
✅ Set up distributed tracing without building custom images
✅ Use an OpenTelemetry-enabled NGINX for tracing

Step 1: Deploying Jaeger in Kubernetes

The easiest way to deploy Jaeger in Kubernetes is by using Helm.

Installing Jaeger Using Helm

To install Jaeger in the observability namespace, run:

helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
helm repo update
helm install jaeger jaegertracing/jaeger \
  --namespace observability --create-namespace \
  --set query.service.httpPort=16686

Verify the Deployment

Check if Jaeger is running:

kubectl get pods -n observability
kubectl get svc -n observability

You should see services like jaeger-collector and jaeger-query.

Step 2: Deploying an NGINX-Based Application with OpenTelemetry

Instead of building a custom image, we point a standard NGINX Deployment at the Jaeger collector through the OTEL_EXPORTER_OTLP_ENDPOINT variable. Note that the stock nginx image does not emit traces on its own; for spans to actually reach Jaeger you need NGINX built with the OpenTelemetry module (or any OTel-instrumented application image) behind the same configuration.

Creating the Deployment

Here's the YAML that wires the Deployment and Service to the Jaeger collector:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-tracing
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-tracing
  template:
    metadata:
      labels:
        app: nginx-tracing
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          env:
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://jaeger-collector:4317"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-tracing
  namespace: observability
spec:
  selector:
    app: nginx-tracing
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Deploying the Application

Apply the deployment:

kubectl apply -f nginx-tracing.yaml

Step 3: Accessing the Application

To expose the NGINX service locally, run:

kubectl port-forward svc/nginx-tracing 8080:80 -n observability

Now, visit http://localhost:8080 in your browser.
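
Generate some traffic so there is something to trace (this assumes the container is actually instrumented, as noted above):

for i in $(seq 1 20); do curl -s http://localhost:8080 > /dev/null; done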

Step 4: Viewing Traces in Jaeger

To access the Jaeger UI, forward the query service port:

kubectl port-forward svc/jaeger-query 16686:16686 -n observability

Now, open http://localhost:16686 and search for traces from NGINX.

Conclusion

In this guide, we:
✅ Deployed Jaeger using Helm for distributed tracing.
✅ Used an OpenTelemetry-enabled NGINX image to send traces without building custom images.
✅ Accessed the Jaeger UI to visualize trace data.

Why is tracing important in your Kubernetes setup? Share your thoughts below!👇

Using Sealed Secrets for Secure GitOps Deployments

Overview

Managing secrets securely in a GitOps workflow is a critical challenge. Storing plain Kubernetes secrets in a Git repository is risky because anyone with access to the repository can view them. Sealed Secrets, an open-source project by Bitnami, provides a way to encrypt secrets before storing them in Git, ensuring they remain secure.

Key Takeaways:

  • Securely store Kubernetes secrets in a Git repository.
  • Automate secret management using GitOps principles.
  • Ensure secrets can only be decrypted by the Kubernetes cluster.

Step 1: Install Sealed Secrets Controller

The Sealed Secrets Controller runs in the Kubernetes cluster and is responsible for decrypting Sealed Secrets into regular Kubernetes secrets.

Install via Helm

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install sealed-secrets bitnami/sealed-secrets --namespace kube-system

Verify installation:

kubectl get pods -n kube-system | grep sealed-secrets
kubectl get svc -n kube-system | grep sealed-secrets

Step 2: Install Kubeseal CLI

To encrypt secrets locally before committing them to Git, install the kubeseal CLI:

Linux Installation

wget https://github.com/bitnami-labs/sealed-secrets/releases/latest/download/kubeseal-linux-amd64 -O kubeseal
chmod +x kubeseal
sudo mv kubeseal /usr/local/bin/kubeseal

Verify installation:

kubeseal --version

Step 3: Create a Kubernetes Secret

Let’s create a secret for a database password.

Create a file named secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: default
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=   # Base64 encoded "password"

Applying the plain Secret to the cluster is optional (kubeseal encrypts the manifest file directly), but you can apply it first to confirm it is valid:

kubectl apply -f secret.yaml

Step 4: Encrypt the Secret Using Kubeseal

Use the kubeseal CLI to encrypt the secret so it can be safely stored in Git.

kubeseal --controller-name=sealed-secrets \
  --controller-namespace=kube-system \
  --format=yaml < secret.yaml > sealed-secret.yaml

The output sealed-secret.yaml will contain an encrypted version of the secret.

Example:

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-secret
  namespace: default
spec:
  encryptedData:
    DB_PASSWORD: AgA+...long_encrypted_value...==

Now, delete the original secret from the cluster:

kubectl delete secret my-secret

Step 5: Apply Sealed Secret to Kubernetes

Deploy the sealed secret to Kubernetes:

kubectl apply -f sealed-secret.yaml

The Sealed Secrets Controller will automatically decrypt it and create a regular Kubernetes secret.

Verify the secret is recreated:

kubectl get secrets
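
If you need to seal secrets on a machine or CI runner without direct cluster access, export the controller's public certificate once and seal offline against it:

kubeseal --controller-name=sealed-secrets --controller-namespace=kube-system \
  --fetch-cert > pub-cert.pem
kubeseal --cert pub-cert.pem --format=yaml < secret.yaml > sealed-secret.yaml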

Step 6: Push to GitHub for GitOps

Now, let’s commit the sealed secret to GitHub so it can be managed in a GitOps workflow.

Initialize a Git Repository (If Not Already Done)

git init
git remote add origin https://github.com/ArvindRaja45/deploy.git

Add and Commit Sealed Secret

git add sealed-secret.yaml
git commit -m "Added sealed secret for GitOps"
git push origin main

Step 7: Automate Deployment with GitHub Actions

To deploy the sealed secret automatically, create a GitHub Actions workflow.

Create a new file: .github/workflows/deployment.yaml

name: Deploy Sealed Secret

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3
        with:
          version: 'latest'

      - name: Configure Kubeconfig
        env:
          # Store the base64-encoded kubeconfig as a repository secret named KUBECONFIG_DATA
          KUBECONFIG_DATA: ${{ secrets.KUBECONFIG_DATA }}
        run: echo "$KUBECONFIG_DATA" | base64 --decode > kubeconfig.yaml

      - name: Deploy Sealed Secret
        run: kubectl apply -f sealed-secret.yaml --kubeconfig=kubeconfig.yaml

Push the Workflow to GitHub

git add .github/workflows/deployment.yaml
git commit -m "Added GitHub Actions deployment workflow"
git push origin main

Conclusion

Using Sealed Secrets, we achieved:

  • Secure secret management in GitOps workflows.
  • Automated deployments with GitHub Actions.
  • No plaintext secrets in Git repositories.

This setup ensures that secrets remain encrypted at rest, providing a secure and automated way to manage secrets in a Kubernetes environment.

Have you used Sealed Secrets in your GitOps workflow? Share your experience!👇

Implementing GitOps with Flux CD: A Deep Dive into Automated Kubernetes Deployments

The Challenge:

Ensuring a declarative and automated synchronization between Git and Kubernetes clusters is crucial for consistency, version control, and automated rollback capabilities. Traditional CI/CD pipelines often introduce inconsistencies due to manual triggers and state drift.

The Solution: Flux CD

Flux CD follows the GitOps model, ensuring that the desired state defined in a Git repository is continuously reconciled with the actual state in Kubernetes.

Example Scenario:

We are deploying a secure Nginx application using Flux CD. The goal is to:

  • Automate deployments from a private Git repository
  • Ensure rollback capabilities with Git versioning
  • Enforce a secure and immutable infrastructure

Step 1: Install Flux CLI

Ensure you have kubectl and helm installed, then install the Flux CLI:

curl -s https://fluxcd.io/install.sh | sudo bash
export PATH=$PATH:/usr/local/bin
flux --version

Step 2: Bootstrap Flux in Your Kubernetes Cluster

Flux needs to be bootstrapped with a Git repository that will act as the single source of truth for deployments.

flux bootstrap github \
  --owner=<your-github-username> \
  --repository=flux-gitops \
  --branch=main \
  --path=clusters/my-cluster \
  --personal

This command:

  • Sets up Flux in the cluster
  • Connects it to the specified GitHub repository
  • Deploys Flux components

Step 3: Define a GitOps Repository Structure

Structure your repository as follows:

flux-gitops/
│── clusters/
│   ├── my-cluster/
│   │   ├── kustomization.yaml
│   │   ├── nginx.yaml
│── apps/
│   ├── nginx/
│   │   ├── deployment.yaml
│   │   ├── service.yaml
│   │   ├── kustomization.yaml

Step 4: Deploy a Secure Nginx Application

apps/nginx/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-secure
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-secure
  template:
    metadata:
      labels:
        app: nginx-secure
    spec:
      containers:
      - name: nginx
        # Unprivileged image: runs as non-root and listens on 8080 (stock nginx runs as root)
        image: nginxinc/nginx-unprivileged:1.21.6
        ports:
        - containerPort: 8080
        securityContext:
          runAsNonRoot: true
          readOnlyRootFilesystem: true
        volumeMounts:
        # nginx-unprivileged writes PID and temp files under /tmp; mount a writable emptyDir
        - name: tmp
          mountPath: /tmp
      volumes:
      - name: tmp
        emptyDir: {}

apps/nginx/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
spec:
  selector:
    app: nginx-secure
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080  # the unprivileged nginx image listens on 8080

Step 5: Register the Nginx Application with Flux

Flux tracks and deploys applications through its own Kustomization resource (API group kustomize.toolkit.fluxcd.io), which points at a path inside the Git source. Keep apps/nginx/kustomization.yaml as a plain kustomize file listing the application manifests, and put the Flux Kustomization in a file under the path Flux already syncs, for example clusters/my-cluster/nginx.yaml.
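
The plain kustomize file for the app directory only needs to list its resources:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml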

clusters/my-cluster/nginx.yaml (the Flux Kustomization)

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: nginx
  namespace: flux-system
spec:
  targetNamespace: default
  interval: 1m
  sourceRef:
    kind: GitRepository
    name: flux-system  # the GitRepository created by flux bootstrap
  path: "./apps/nginx"
  prune: true
  wait: true

Commit and push these files to the flux-gitops repository. Because the Flux Kustomization lives under clusters/my-cluster/, the bootstrap configuration applies it automatically; to apply it immediately instead of waiting for the next sync, run:

kubectl apply -f clusters/my-cluster/nginx.yaml

Flux will now continuously monitor and apply the Nginx manifests from the Git repository.

Step 6: Verify Deployment

Check the status of the Nginx deployment:

kubectl get pods -l app=nginx-secure
kubectl get svc nginx-service

Flux’s reconciliation logs can be checked with:

flux get kustomizations
flux get sources git
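
You can also trigger an immediate reconciliation instead of waiting for the next interval:

flux reconcile kustomization nginx --with-source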

Step 7: Automatic Rollback

Flux tracks all changes via Git. If an incorrect version of Nginx is deployed, revert using:

git revert <commit-id>
git push origin main

Flux will detect this change and automatically restore the last stable state.

Flux CD ensures:

  • Continuous deployment from Git
  • Automated rollbacks via Git versioning
  • Enhanced security with read-only containers
  • Immutable and declarative infrastructure

This is how GitOps transforms Kubernetes deployments into a fully automated and scalable process.

Conclusion

GitOps with Flux CD revolutionizes Kubernetes deployments by making them declarative, automated, and highly secure. With Git as the single source of truth, deployments become version-controlled, reproducible, and rollback-friendly.

Are you using GitOps in production? Drop a comment!👇

Zero-Downtime Deployments with ArgoCD in Minikube – A Game Changer!

The Challenge

Ever pushed an update to your Kubernetes app and suddenly… Downtime!
Users get frustrated, services go unavailable, and rollback becomes a nightmare. Sound familiar?

The Solution: GitOps + ArgoCD

With ArgoCD, we eliminate downtime, automate rollbacks, and ensure seamless deployments directly from Git. Here’s how:

  • Push your code → ArgoCD auto-syncs and deploys!
  • Rolling updates ensure no downtime – users always get a running instance
  • Health checks catch broken deployments – a failed rollout never replaces the healthy replicas and can be reverted straight from Git
  • Version-controlled deployments – every change is trackable and reversible

Step 1: Start Minikube and Install ArgoCD

Ensure you have Minikube and kubectl installed.

# Start Minikube
minikube start

# Create the ArgoCD namespace
kubectl create namespace argocd

# Install ArgoCD
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Wait for the ArgoCD pods to be ready:

kubectl get pods -n argocd

Step 2: Expose and Access ArgoCD UI

Expose the ArgoCD API server:

kubectl port-forward svc/argocd-server -n argocd 8080:443

Retrieve the admin password:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 --decode; echo

Login to ArgoCD:

argocd login localhost:8080 --username admin --password <your-password>

Step 3: Connect ArgoCD to Your GitHub Repo

Use SSH authentication since GitHub removed password-based authentication.

argocd repo add git@github.com:ArvindRaja45/rep.git --ssh-private-key-path ~/.ssh/id_rsa

Step 4: Define Kubernetes Manifests

Deployment.yaml (Zero-Downtime Strategy)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: myrepo/my-app:latest
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10

Service.yaml (Expose the Application)

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP

Ingress.yaml (External Access)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
    - host: my-app.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80

Step 5: Deploy Application with ArgoCD

argocd app create my-app \
    --repo git@github.com:ArvindRaja45/rep.git \
    --path minikube \
    --dest-server https://kubernetes.default.svc \
    --dest-namespace default

Sync the application:

argocd app sync my-app

Monitor the deployment:

kubectl get pods -n default
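
To make the workflow fully hands-off, enable automated sync with pruning and self-healing so ArgoCD keeps the cluster reconciled with Git on its own:

argocd app set my-app --sync-policy automated --auto-prune --self-heal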

Step 6: Enable Auto-Rollback on Failure

Edit Deployment.yaml to add a readiness probe under the container spec (alongside the livenessProbe):

readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10

Commit and push the updated Deployment.yaml so ArgoCD syncs the change, or apply it directly for a quick test:

kubectl apply -f Deployment.yaml

If a new version fails its readiness checks, the rolling update pauses and traffic keeps flowing to the healthy replicas that are already running. To roll back, revert the change in Git and ArgoCD will sync the previous state back onto the cluster.
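
To roll back explicitly from the CLI instead:

argocd app history my-app          # list previously synced revisions
argocd app rollback my-app <ID>    # roll back to the revision with that ID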

Why This is a Game-Changer

  • No more downtime – deployments are seamless and controlled
  • Fully automated – Git becomes your single source of truth
  • Faster rollbacks – mistakes are fixed in seconds, not hours
  • Scalable & production-ready – ArgoCD grows with your infrastructure

Conclusion

By integrating ArgoCD into your Kubernetes workflow, you can achieve zero-downtime deployments, automated rollbacks, and seamless application updates—all while ensuring stability and scalability. With Git as the single source of truth, your deployments become more reliable, repeatable, and transparent.

In today’s fast-paced DevOps world, automation is key to maintaining high availability and minimizing risk. Whether you’re running Minikube for development or scaling across multi-cluster production environments, ArgoCD is a game-changer for Kubernetes deployments.

How are you handling deployments in Kubernetes? Have you implemented ArgoCD yet? Share your thoughts!👇

Mastering Kubernetes Network Security with NetworkPolicies

Introduction

Did you know? By default, every pod in Kubernetes can talk to any other pod—leading to unrestricted internal communication and potential security risks. This is a major concern in production environments where microservices demand strict access controls.

So, how do we lock down communication while ensuring seamless service interactions? NetworkPolicies provide the answer!

The Challenge: Unrestricted Communication = Security Risk

  • Pods can freely communicate across namespaces
  • Sensitive data exposure due to open networking
  • No control over egress traffic to external services
  • Lateral movement risk if an attacker compromises a pod

In short, without proper security, a single breach can compromise the entire cluster.

The Solution: Layered NetworkPolicies for Progressive Security

Step 1: Deploy the Application Pods

Create a Namespace for Isolation

Organize your application by creating a dedicated namespace.

kubectl create namespace secure-app

Effect:

  • All application resources will be deployed in this namespace
  • NetworkPolicies will only affect this namespace, avoiding interference with other workloads

Deploy the Frontend Pod

The frontend should be publicly accessible and interact with the backend.

apiVersion: v1
kind: Pod
metadata:
  name: frontend
  namespace: secure-app
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: nginx

Effect:

  • Creates a frontend pod that can serve requests
  • No restrictions yet—open network connectivity

Deploy the Backend Pod

The backend should only communicate with the frontend and the database.

apiVersion: v1
kind: Pod
metadata:
  name: backend
  namespace: secure-app
  labels:
    app: backend
spec:
  containers:
    - name: backend
      image: python:3.9
      # Keep the container running and listening on port 80 so connectivity tests have a target
      command: ["python", "-m", "http.server", "80"]

Effect:

  • Creates a backend pod to process logic
  • Currently accessible by any pod in the cluster

Deploy the Database Pod

The database should only be accessible to the backend.

apiVersion: v1
kind: Pod
metadata:
  name: database
  namespace: secure-app
  labels:
    app: database
spec:
  containers:
    - name: database
      image: postgres
      env:
        # The official postgres image refuses to start without a password being set
        - name: POSTGRES_PASSWORD
          value: "example"

Effect:

  • Creates a database pod with unrestricted access
  • A potential security risk—frontend or any pod could connect

Step 2: Implement NetworkPolicies for Security

By default, Kubernetes allows all pod-to-pod communication. To enforce security, we will apply four key NetworkPolicies step by step.

Enforce a Default Deny-All Policy

Restrict all ingress and egress traffic by default in the secure-app namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: secure-app
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

Effect:

  • No pod can send or receive traffic until explicitly allowed
  • Zero-trust security model enforced at the namespace level

Allow Frontend to Backend Communication

The frontend should be allowed to send requests to the backend, but not directly to the database.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend

Effect:

  • Frontend can talk to backend
  • Backend cannot talk to frontend or database yet

Allow Backend to Access Database

The backend should be the only service that can communicate with the database.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-database
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend

Effect:

  • Backend can talk to database
  • Frontend is blocked from accessing the database

Restrict Backend’s Outbound Traffic

To prevent data exfiltration, restrict backend’s egress traffic to only a specific external API.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-backend-egress
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24  # Allowed external API

Effect:

  • Backend can only connect to authorized external APIs
  • Prevents accidental or malicious data exfiltration
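
One practical note before testing: the deny-all policy blocks egress as well, and the backend egress policy above only whitelists the external CIDR. For the connectivity checks in the next step to pass, the frontend needs an egress rule toward the backend, and restrict-backend-egress needs an additional rule toward the database pods. A minimal sketch for the frontend side:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-egress
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
    - Egress
  egress:
    # Allow the frontend to reach backend pods in this namespace
    - to:
        - podSelector:
            matchLabels:
              app: backend

If you later put Services in front of these pods and call them by name, also allow egress to kube-dns in kube-system so DNS lookups work.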

Step 3: Verify NetworkPolicies

After applying the policies (including the egress additions sketched above), test network access between the pods. Because these examples use bare pods with no Services in front of them, names such as backend do not resolve through cluster DNS; look up the pod IPs first and test against those:

BACKEND_IP=$(kubectl get pod backend -n secure-app -o jsonpath='{.status.podIP}')
DATABASE_IP=$(kubectl get pod database -n secure-app -o jsonpath='{.status.podIP}')

Check if frontend can access backend:

kubectl exec frontend -n secure-app -- curl -s --max-time 5 http://$BACKEND_IP

Expected: Success

Check if frontend can access database:

kubectl exec frontend -n secure-app -- curl -s --max-time 5 http://$DATABASE_IP:5432

Expected: Blocked (the request times out)

Check if backend can access database:

kubectl exec backend -n secure-app -- curl -s --max-time 5 http://$DATABASE_IP:5432

Expected: The TCP connection succeeds (curl may complain about the non-HTTP reply from Postgres, which is fine)

Keep in mind that your cluster's CNI plugin must support NetworkPolicies (for example Calico or Cilium) for any of these rules to be enforced.

Conclusion

We implemented a four-layer security model to gradually enforce pod-to-pod communication rules:

  • Default Deny-All Policy – Establish a zero-trust baseline by blocking all ingress and egress traffic. No pod can talk to another unless explicitly allowed.
  • Allow Frontend-to-Backend Traffic – Define strict ingress rules so only frontend pods can reach backend services.
  • Restrict Backend-to-Database Access – Grant database access only to backend pods, preventing unauthorized services from connecting.
  • Control Outbound Traffic – Limit backend egress access only to trusted external APIs while blocking all other outbound requests.

The Impact: Stronger Kubernetes Security

  • Strict pod-to-pod communication controls
  • Zero-trust networking within the cluster
  • Granular access control without breaking service dependencies
  • Minimal attack surface, reducing lateral movement risks

This layered approach ensures network isolation, data security, and regulated API access, transforming an open network into a highly secure Kubernetes environment.

Are you using NetworkPolicies in your Kubernetes setup? Let’s discuss how we can enhance cluster security together! Drop your thoughts in the comments.👇

Setting Up a Secure Multi-tenant Kubernetes Cluster in Minikube

Introduction

In Kubernetes, multi-tenancy enables multiple teams or projects to share the same cluster while maintaining isolation and security. However, ensuring proper access control and preventing resource conflicts is a challenge. This guide walks you through setting up a secure multi-tenant environment using Minikube, Namespaces, and RBAC (Role-Based Access Control).

Why Multi-tenancy in Kubernetes?

✅ Isolates workloads for different teams
✅ Ensures least-privilege access
✅ Prevents unintentional interference between teams
✅ Helps organizations optimize resource usage

Step 1: Start Minikube

Before setting up multi-tenancy, ensure Minikube is running:

minikube start --memory=4096 --cpus=2

Step 2: Create Isolated Namespaces

Each team or project should have its own namespace.

kubectl create namespace dev-team  
kubectl create namespace qa-team  
kubectl create namespace prod-team

You can verify:

kubectl get namespaces

Step 3: Implement Role-Based Access Control (RBAC)

Create a Role for Developers

Developers should only be able to manage resources within their namespace.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev-team
  name: developer-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["create", "get", "list", "delete"]

Apply it:

kubectl apply -f developer-role.yaml

Bind the Role to a User

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev-team
  name: developer-binding
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-role
  apiGroup: rbac.authorization.k8s.io

Apply it:

kubectl apply -f developer-binding.yaml

Now, user Alice has access only to dev-team namespace.

Step 4: Enforce Network Isolation (Optional but Recommended)

To ensure teams cannot access resources outside their namespace, create a NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-access
  namespace: dev-team
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: dev-team  # this label is set automatically on every namespace

Apply it:

kubectl apply -f restrict-access.yaml

This ensures that pods in dev-team can only receive traffic from and send traffic to their own namespace. Keep in mind that it also blocks DNS lookups, since CoreDNS runs in kube-system; see the sketch below if your workloads need name resolution.
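
A sketch of the extra rule to append under egress in restrict-access, assuming CoreDNS carries the standard k8s-app: kube-dns label:

    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53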

Step 5: Verify Multi-tenancy

  • Try creating resources from a different namespace with a restricted user.
  • Check access control using kubectl auth can-i.

Example:

kubectl auth can-i create pods --as=alice --namespace=dev-team  # Allowed  
kubectl auth can-i delete pods --as=alice --namespace=prod-team  # Denied  

Conclusion

By setting up Namespaces, RBAC, and NetworkPolicies, you have successfully created a secure multi-tenant Kubernetes cluster in Minikube. This setup ensures each team has isolated access to their resources without interference.

Stay tuned for more Kubernetes security insights! 🚀