Mastering Kubernetes Network Security with NetworkPolicies

Introduction

Did you know? By default, every pod in Kubernetes can talk to any other pod—leading to unrestricted internal communication and potential security risks. This is a major concern in production environments where microservices demand strict access controls.

So, how do we lock down communication while ensuring seamless service interactions? NetworkPolicies provide the answer!

The Challenge: Unrestricted Communication = Security Risk

  • Pods can freely communicate across namespaces
  • Sensitive data exposure due to open networking
  • No control over egress traffic to external services
  • Lateral movement risk if an attacker compromises a pod

In short, without proper security, a single breach can compromise the entire cluster.

The Solution: Layered NetworkPolicies for Progressive Security

Step 1: Deploy the Application Pods

Create a Namespace for Isolation

Organize your application by creating a dedicated namespace.

kubectl create namespace secure-app

Effect:

  • All application resources will be deployed in this namespace
  • NetworkPolicies will only affect this namespace, avoiding interference with other workloads

Deploy the Frontend Pod

The frontend should be publicly accessible and interact with the backend.

apiVersion: v1
kind: Pod
metadata:
  name: frontend
  namespace: secure-app
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: nginx

Effect:

  • Creates a frontend pod that can serve requests
  • No restrictions yet—open network connectivity

Deploy the Backend Pod

The backend should only communicate with the frontend and the database.

apiVersion: v1
kind: Pod
metadata:
  name: backend
  namespace: secure-app
  labels:
    app: backend
spec:
  containers:
    - name: backend
      image: python:3.9

Effect:

  • Creates a backend pod to process logic
  • Currently accessible by any pod in the cluster

Deploy the Database Pod

The database should only be accessible to the backend.

apiVersion: v1
kind: Pod
metadata:
  name: database
  namespace: secure-app
  labels:
    app: database
spec:
  containers:
    - name: database
      image: postgres

Effect:

  • Creates a database pod with unrestricted access
  • A potential security risk—frontend or any pod could connect

Step 2: Implement NetworkPolicies for Security

By default, Kubernetes allows all pod-to-pod communication. To enforce security, we will apply four key NetworkPolicies step by step.

Enforce a Default Deny-All Policy

Restrict all ingress and egress traffic by default in the secure-app namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: secure-app
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress

Effect:

  • No pod can send or receive traffic until explicitly allowed
  • Zero-trust security model enforced at the namespace level
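One caveat: this policy also blocks DNS lookups, so pods can no longer resolve Service names. A common companion policy restores DNS egress for every pod in the namespace (a sketch; port 53 assumes a standard cluster DNS setup such as CoreDNS):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: secure-app
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```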

Allow Frontend to Backend Communication

The frontend should be allowed to send requests to the backend, but not directly to the database.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend

Effect:

  • Frontend can talk to backend
  • Backend cannot talk to frontend or database yet
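Because the deny-all policy blocks egress as well as ingress, the frontend also needs an egress rule before its requests actually reach the backend. A minimal counterpart policy (a sketch):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-egress
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: backend
```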

Allow Backend to Access Database

The backend should be the only service that can communicate with the database.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-database
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend

Effect:

  • Backend can talk to database
  • Frontend is blocked from accessing the database

Restrict Backend’s Outbound Traffic

To prevent data exfiltration, restrict backend’s egress traffic to only a specific external API.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-backend-egress
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24  # Allowed external API

Effect:

  • Backend egress is limited to the authorized external CIDR
  • Prevents accidental or malicious data exfiltration
  • Caution: egress rules are additive, so this policy alone also cuts the backend off from the database and DNS unless separate egress rules allow them
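Because egress rules are additive across policies, the policy above leaves the backend unable to reach the database. A companion policy restores that path (a sketch):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-egress-to-database
  namespace: secure-app
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: database
```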

Step 3: Verify NetworkPolicies

After applying the policies, test network access between services. These checks assume ClusterIP Services named backend and database expose the pods; with bare Pods, substitute the pod IPs from kubectl get pods -o wide.

Check if frontend can access backend:

kubectl exec frontend -n secure-app -- curl backend:80

Expected: Success (the frontend must also have an egress rule to the backend, since the deny-all policy blocks egress too)

Check if frontend can access database:

kubectl exec frontend -n secure-app -- curl database:5432

Expected: Blocked (the connection times out; NetworkPolicies drop packets rather than actively refuse them)

Check if backend can access database:

kubectl exec backend -n secure-app -- curl database:5432

Expected: Success, provided the backend has an egress rule to the database in addition to the external CIDR (curl will report a protocol error against PostgreSQL, but the TCP connection is established)

Conclusion

We implemented a four-layer security model to gradually enforce pod-to-pod communication rules:

  • Default Deny-All Policy – Establish a zero-trust baseline by blocking all ingress and egress traffic. No pod can talk to another unless explicitly allowed.
  • Allow Frontend-to-Backend Traffic – Define strict ingress rules so only frontend pods can reach backend services.
  • Restrict Backend-to-Database Access – Grant database access only to backend pods, preventing unauthorized services from connecting.
  • Control Outbound Traffic – Limit backend egress access only to trusted external APIs while blocking all other outbound requests.

The Impact: Stronger Kubernetes Security

  • Strict pod-to-pod communication controls
  • Zero-trust networking within the cluster
  • Granular access control without breaking service dependencies
  • Minimal attack surface, reducing lateral movement risks

This layered approach ensures network isolation, data security, and regulated API access, transforming an open network into a highly secure Kubernetes environment.

Are you using NetworkPolicies in your Kubernetes setup? Let’s discuss how we can enhance cluster security together! Drop your thoughts in the comments.👇

Automating Container Security Scans with Trivy in GitHub Actions

Introduction

Ensuring security in containerized applications is a critical aspect of modern DevOps workflows. To enhance security and streamline vulnerability detection, I integrated Trivy into my GitHub repository, enabling automated security scanning within the CI/CD pipeline.

Objective

To automate vulnerability scanning for container images using Trivy within GitHub Actions, ensuring secure deployments with minimal manual intervention.

Step 1: Install Trivy v0.18.3

Run the following commands to download and install Trivy v0.18.3:

# Update package lists
sudo apt update

# Download Trivy v0.18.3 .deb package
wget https://github.com/aquasecurity/trivy/releases/download/v0.18.3/trivy_0.18.3_Linux-64bit.deb

# Install Trivy using dpkg
sudo dpkg -i trivy_0.18.3_Linux-64bit.deb

# Verify installation
trivy --version

Step 2: Create a GitHub Actions Workflow for Automated Scanning

To integrate Trivy into your GitHub repository (trivy-security-scan), create a workflow file.

Create the Workflow Directory and File

mkdir -p .github/workflows
nano .github/workflows/trivy-scan.yml

Add the Following Content

name: Trivy Security Scan

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  trivy-scan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v4

      - name: Install Trivy v0.18.3
        run: |
          sudo apt update
          wget https://github.com/aquasecurity/trivy/releases/download/v0.18.3/trivy_0.18.3_Linux-64bit.deb
          sudo dpkg -i trivy_0.18.3_Linux-64bit.deb

      - name: Run Trivy Image Scan
        run: |
          trivy image alpine:latest > trivy-report.txt
          cat trivy-report.txt

      - name: Upload Scan Report
        uses: actions/upload-artifact@v4
        with:
          name: security-report
          path: trivy-report.txt
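To make the pipeline fail on serious findings rather than merely report them, the scan step can use Trivy's --severity and --exit-code flags; a sketch of an additional gating step:

```yaml
      - name: Fail on HIGH/CRITICAL Vulnerabilities
        run: |
          # A non-zero exit code fails the job when matching vulnerabilities are found
          trivy image --severity HIGH,CRITICAL --exit-code 1 alpine:latest
```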

Step 3: Commit and Push the Workflow

git add .github/workflows/trivy-scan.yml
git commit -m "Added Trivy v0.18.3 security scan workflow"
git push origin main

Step 4: Verify GitHub Actions Workflow

  1. Open your GitHub repository: https://github.com/ArvindRaja45/trivy-security-scan.
  2. Click on the “Actions” tab.
  3. Ensure the “Trivy Security Scan” workflow runs successfully.
  4. Check the trivy-report.txt under Artifacts in GitHub Actions.

Final Outcome

  • Trivy v0.18.3 is installed from a .deb package
  • GitHub Actions runs a Trivy security scan on the container image
  • Vulnerability reports are uploaded as artifacts for review

Why This Matters

By integrating security checks early in the CI/CD pipeline, we reduce risks and avoid last-minute surprises in production!

Security isn’t a one-time process—it’s a culture! How are you integrating security in your DevOps workflow? Let’s discuss in the comments!👇

Implementing Pod Security Standards in Kubernetes: A Practical Guide

Introduction

Securing Kubernetes workloads is critical to prevent security breaches and container escapes. Kubernetes Pod Security Standards (PSS) provide a framework for defining and enforcing security settings for Pods at different levels—Privileged, Baseline, and Restricted.

In this guide, you’ll learn how to implement Pod Security Standards in a Kubernetes cluster while ensuring your applications run smoothly.

Understanding Pod Security Standards (PSS)

Kubernetes defines three security levels for Pods:

  1. Privileged – No restrictions; full host access (Not recommended).
  2. Baseline – Reasonable defaults for running common applications.
  3. Restricted – Strictest policies for maximum security.

Goal: Implement Restricted policies where possible while ensuring apps run without breaking.

Step 1: Enabling Pod Security Admission (PSA)

Since Kubernetes v1.23, Pod Security Admission (PSA) is the built-in mechanism for enforcing PSS, replacing PodSecurityPolicies (PSP), which were removed in v1.25.

Check if PSA is Enabled

kubectl get ns --show-labels

If namespaces carry no pod-security.kubernetes.io labels, you must add them manually.

Step 2: Apply Pod Security Labels to Namespaces

Namespaces must be labeled to enforce a Pod Security Standard.

Baseline Policy (For Standard Applications)

kubectl label namespace default pod-security.kubernetes.io/enforce=baseline

Restricted Policy (For Maximum Security)

kubectl label namespace secure-apps pod-security.kubernetes.io/enforce=restricted

Verify Labels

kubectl get ns --show-labels
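The same labels can also be set declaratively in the Namespace manifest; a sketch that enforces the restricted level while also auditing and warning against it:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: secure-apps
  labels:
    pod-security.kubernetes.io/enforce: restricted
    # Audit and warn surface violations without additional blocking
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```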

Step 3: Deploy Applications with Pod Security Standards

Example 1: Non-Root Container (Restricted Mode)

A properly secured Pod must not run as root. Here’s a compliant example:

apiVersion: v1
kind: Pod
metadata:
  name: secure-app
  namespace: secure-apps
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
    fsGroup: 1001
  containers:
    - name: app
      image: nginx:latest
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
      volumeMounts:
        - mountPath: /data
          name: app-storage
  volumes:
    - name: app-storage
      emptyDir: {}

Why This Pod Is Secure

  • Runs as a non-root user (runAsNonRoot: true)
  • No privilege escalation (allowPrivilegeEscalation: false)
  • All unnecessary Linux capabilities are dropped (capabilities.drop: ALL)
  • Uses a read-only root filesystem (readOnlyRootFilesystem: true)

Step 4: Prevent Non-Compliant Pods from Running

Test deploying a privileged Pod in the secure-apps namespace:

apiVersion: v1
kind: Pod
metadata:
  name: privileged-app
  namespace: secure-apps
spec:
  containers:
    - name: app
      image: nginx:latest
      securityContext:
        privileged: true

Expected Output

Error: pods "privileged-app" is forbidden: violates PodSecurity "restricted:latest"

Kubernetes blocks the Pod due to the privileged: true setting.

Step 5: Audit and Warn Non-Compliant Pods (Optional)

Instead of enforcing policies immediately, you can audit violations first.

Audit Only:

kubectl label namespace dev-team pod-security.kubernetes.io/audit=restricted

Warn Before Deployment:

kubectl label namespace dev-team pod-security.kubernetes.io/warn=restricted

Now, users get warnings instead of immediate rejections.

Step 6: Verify Security Policies

Check PSA Enforcement Logs

kubectl describe pod secure-app -n secure-apps

Test Pod Security Admission

kubectl run test-pod --image=nginx --namespace=secure-apps

If the Pod violates security rules, Kubernetes will block it.

Conclusion

✅ You implemented Kubernetes Pod Security Standards
✅ Pods now run with minimal privileges
✅ Security policies are enforced while maintaining functionality

How do you implement PSS? Let’s discuss best practices in the comments!👇

Setting Up a Secure Multi-tenant Kubernetes Cluster in Minikube

Introduction

In Kubernetes, multi-tenancy enables multiple teams or projects to share the same cluster while maintaining isolation and security. However, ensuring proper access control and preventing resource conflicts is a challenge. This guide walks you through setting up a secure multi-tenant environment using Minikube, Namespaces, and RBAC (Role-Based Access Control).

Why Multi-tenancy in Kubernetes?

✅ Isolates workloads for different teams
✅ Ensures least-privilege access
✅ Prevents unintentional interference between teams
✅ Helps organizations optimize resource usage

Step 1: Start Minikube

Before setting up multi-tenancy, ensure Minikube is running:

minikube start --memory=4096 --cpus=2

Step 2: Create Isolated Namespaces

Each team or project should have its own namespace.

kubectl create namespace dev-team  
kubectl create namespace qa-team  
kubectl create namespace prod-team

You can verify:

kubectl get namespaces
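Namespace isolation alone does not stop one team from consuming the whole cluster. A per-namespace ResourceQuota caps each tenant's usage; a sketch with illustrative limits:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-team-quota
  namespace: dev-team
spec:
  hard:
    pods: "20"             # illustrative limits; tune per team
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
```

Apply it with kubectl apply -f dev-team-quota.yaml and repeat per namespace.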

Step 3: Implement Role-Based Access Control (RBAC)

Create a Role for Developers

Developers should only be able to manage resources within their namespace.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev-team
  name: developer-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["create", "get", "list", "delete"]

Apply it:

kubectl apply -f developer-role.yaml

Bind the Role to a User

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev-team
  name: developer-binding
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-role
  apiGroup: rbac.authorization.k8s.io

Apply it:

kubectl apply -f developer-binding.yaml

Now, user Alice has access only to the dev-team namespace.

Step 4: Enforce Network Isolation (Optional but Recommended)

To ensure pods cannot send traffic to other namespaces, create a NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-access
  namespace: dev-team
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: dev-team

Apply it:

kubectl apply -f restrict-access.yaml

This ensures that pods in dev-team can only communicate within their namespace. Note that the egress rule matches namespaces labeled name: dev-team, so label the namespace first (kubectl label namespace dev-team name=dev-team), or match the built-in kubernetes.io/metadata.name label instead.

Step 5: Verify Multi-tenancy

  • Try creating resources from a different namespace with a restricted user.
  • Check access control using kubectl auth can-i.

Example:

kubectl auth can-i create pods --as=alice --namespace=dev-team  # Allowed  
kubectl auth can-i delete pods --as=alice --namespace=prod-team  # Denied  

Conclusion

By setting up Namespaces, RBAC, and NetworkPolicies, you have successfully created a secure multi-tenant Kubernetes cluster in Minikube. This setup ensures each team has isolated access to their resources without interference.

Stay tuned for more Kubernetes security insights! 🚀