Implementing Pod Security Standards in Kubernetes: A Practical Guide

Introduction

Securing Kubernetes workloads is critical to prevent security breaches and container escapes. Kubernetes Pod Security Standards (PSS) provide a framework for defining and enforcing security settings for Pods at different levels—Privileged, Baseline, and Restricted.

In this guide, you’ll learn how to implement Pod Security Standards in a Kubernetes cluster while ensuring your applications run smoothly.

Understanding Pod Security Standards (PSS)

Kubernetes defines three security levels for Pods:

  1. Privileged – No restrictions; full host access (Not recommended).
  2. Baseline – Reasonable defaults for running common applications.
  3. Restricted – Strictest policies for maximum security.

Goal: Apply the Restricted profile wherever possible while keeping applications running smoothly.

Step 1: Enabling Pod Security Admission (PSA)

Pod Security Admission (PSA) is the built-in replacement for the deprecated PodSecurityPolicy (PSP) admission controller. PSA is enabled by default starting with Kubernetes v1.23 and became stable in v1.25, the same release in which PSP was removed. It enforces PSS at the namespace level.

Check if PSA is Enabled

kubectl get ns --show-labels

If namespaces are not labeled with PSS, you must label them manually.

Step 2: Apply Pod Security Labels to Namespaces

Namespaces must be labeled to enforce a Pod Security Standard.

Baseline Policy (For Standard Applications)

kubectl label namespace default pod-security.kubernetes.io/enforce=baseline

Restricted Policy (For Maximum Security)

kubectl label namespace secure-apps pod-security.kubernetes.io/enforce=restricted
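
The secure-apps namespace used throughout this guide is assumed to already exist; if it does not, create it before labeling it:

kubectl create namespace secure-apps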

Verify Labels

kubectl get ns --show-labels

Step 3: Deploy Applications with Pod Security Standards

Example 1: Non-Root Container (Restricted Mode)

A properly secured Pod must not run as root. Here’s a compliant example:

apiVersion: v1
kind: Pod
metadata:
  name: secure-app
  namespace: secure-apps
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1001
    fsGroup: 1001
    seccompProfile:
      type: RuntimeDefault   # required by the restricted profile
  containers:
    - name: app
      # Note: the stock nginx image expects to run as root; an unprivileged
      # variant such as nginxinc/nginx-unprivileged may be needed in practice.
      image: nginx:latest
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
      volumeMounts:
        - mountPath: /data
          name: app-storage
  volumes:
    - name: app-storage
      emptyDir: {}

Why Is This Pod Secure?

  • Runs as a non-root user (runAsNonRoot: true)
  • No privilege escalation (allowPrivilegeEscalation: false)
  • All unnecessary Linux capabilities are dropped (capabilities.drop: ALL)
  • Uses a read-only root filesystem (readOnlyRootFilesystem: true)
  • Uses the runtime’s default seccomp profile (seccompProfile.type: RuntimeDefault)
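
To try it out, a quick sketch (assuming the manifest above is saved as secure-app.yaml). The Pod should be admitted by PSA; whether the container starts cleanly depends on the image (see the note in the manifest):

kubectl apply -f secure-app.yaml
kubectl get pod secure-app -n secure-apps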

Step 4: Prevent Non-Compliant Pods from Running

Test deploying a privileged Pod in the secure-apps namespace:

apiVersion: v1
kind: Pod
metadata:
  name: privileged-app
  namespace: secure-apps
spec:
  containers:
    - name: app
      image: nginx:latest
      securityContext:
        privileged: true

Expected Output

Error: pods "privileged-app" is forbidden: violates PodSecurity "restricted:latest"

Kubernetes blocks the Pod due to the privileged: true setting.

Step 5: Audit and Warn Non-Compliant Pods (Optional)

Instead of enforcing policies immediately, you can audit violations first.

Audit Only:

kubectl label namespace dev-team pod-security.kubernetes.io/audit=restricted

Warn Before Deployment:

kubectl label namespace dev-team pod-security.kubernetes.io/warn=restricted
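
The modes can also be combined on a single namespace. For example, a minimal sketch that enforces baseline while surfacing restricted-level violations as warnings and audit annotations:

kubectl label --overwrite namespace dev-team \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted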

Now, users get warnings instead of immediate rejections.

Step 6: Verify Security Policies

Inspect the Admitted Pod

kubectl describe pod secure-app -n secure-apps

Test Pod Security Admission

kubectl run test-pod --image=nginx --namespace=secure-apps

If the Pod violates security rules, Kubernetes will block it.
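
You can also evaluate an entire namespace before (or after) enforcing a profile. A server-side dry run reports which existing Pods would violate the restricted profile without actually changing the label:

kubectl label --dry-run=server --overwrite namespace secure-apps pod-security.kubernetes.io/enforce=restricted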

Conclusion

✅ You implemented Kubernetes Pod Security Standards
✅ Pods now run with minimal privileges
✅ Security policies are enforced while maintaining functionality

How do you implement PSS? Let’s discuss best practices in the comments!👇

Setting Up a Secure Multi-tenant Kubernetes Cluster in Minikube

Introduction

In Kubernetes, multi-tenancy enables multiple teams or projects to share the same cluster while maintaining isolation and security. However, ensuring proper access control and preventing resource conflicts is a challenge. This guide walks you through setting up a secure multi-tenant environment using Minikube, Namespaces, and RBAC (Role-Based Access Control).

Why Multi-tenancy in Kubernetes?

✅ Isolates workloads for different teams
✅ Ensures least-privilege access
✅ Prevents unintentional interference between teams
✅ Helps organizations optimize resource usage

Step 1: Start Minikube

Before setting up multi-tenancy, ensure Minikube is running:

minikube start --memory=4096 --cpus=2

Step 2: Create Isolated Namespaces

Each team or project should have its own namespace.

kubectl create namespace dev-team  
kubectl create namespace qa-team  
kubectl create namespace prod-team

You can verify:

kubectl get namespaces

Step 3: Implement Role-Based Access Control (RBAC)

Create a Role for Developers

Developers should only be able to manage resources within their namespace.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev-team
  name: developer-role
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["create", "get", "list", "delete"]

Apply it:

kubectl apply -f developer-role.yaml

Bind the Role to a User

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev-team
  name: developer-binding
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-role
  apiGroup: rbac.authorization.k8s.io

Apply it:

kubectl apply -f developer-binding.yaml

Now, the user alice has access only to the dev-team namespace.
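
To double-check the binding, you can inspect it directly:

kubectl describe rolebinding developer-binding -n dev-team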

Step 4: Enforce Network Isolation (Optional but Recommended)

To ensure teams cannot access resources outside their namespace, create a NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-access
  namespace: dev-team
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              # kubernetes.io/metadata.name is set automatically on every namespace
              kubernetes.io/metadata.name: dev-team

Apply it:

kubectl apply -f restrict-access.yaml

This ensures that Pods in dev-team can only communicate within their own namespace. Note that cluster DNS runs in kube-system, so workloads that resolve service names will need an additional egress rule allowing DNS traffic.
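
Also keep in mind that NetworkPolicies are only enforced when the cluster’s CNI plugin supports them. With Minikube, one option (adjust to your setup) is to start the cluster with a policy-capable CNI such as Calico:

minikube start --cni=calico --memory=4096 --cpus=2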

Step 5: Verify Multi-tenancy

  • Try creating resources from a different namespace with a restricted user.
  • Check access control using kubectl auth can-i.

Example:

kubectl auth can-i create pods --as=alice --namespace=dev-team  # Allowed  
kubectl auth can-i delete pods --as=alice --namespace=prod-team  # Denied  

Conclusion

By setting up Namespaces, RBAC, and NetworkPolicies, you have successfully created a secure multi-tenant Kubernetes cluster in Minikube. This setup ensures each team has isolated access to their resources without interference.

Stay tuned for more Kubernetes security insights! 🚀

Kubernetes: The Cornerstone of Modern Container Orchestration

Introduction

In today’s fast-paced world of cloud-native technologies, Kubernetes has become synonymous with container orchestration. Initially developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF), Kubernetes is an open-source platform designed to automate the deployment, scaling, and management of containerized applications.

As enterprises increasingly migrate to microservices-based architectures and containerized environments, Kubernetes provides the scalability, reliability, and operational efficiency required to manage applications at scale. This blog explores the essential concepts of Kubernetes, its architecture, and the advantages it offers for modern application deployment and management.

What is Kubernetes?

Kubernetes is a container orchestration platform that enables the management of containerized applications in a distributed environment. By abstracting the underlying infrastructure, Kubernetes simplifies many of the complexities associated with deploying, scaling, and maintaining applications in a production environment.

Kubernetes is designed to automate the following key tasks:

  • Deployment: Kubernetes ensures that the desired number of application instances are running at all times.
  • Scaling: It can automatically scale applications based on resource usage or custom metrics.
  • Self-healing: Kubernetes monitors the health of containers and automatically replaces failed ones to ensure continuous service availability.
  • Load balancing and networking: Kubernetes provides mechanisms for distributing traffic across application instances, ensuring efficient use of resources.

Through declarative configuration, Kubernetes allows organizations to define the desired state of their applications, and the platform autonomously works to maintain that state.
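
For example, a minimal (hypothetical) Deployment manifest declares a desired state of three replicas of an nginx container; Kubernetes continuously reconciles the cluster toward that state:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:latest
          ports:
            - containerPort: 80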

Key Components of Kubernetes

The architecture of Kubernetes is composed of several interdependent components that work together to manage containerized applications. Below are the core components of a Kubernetes environment:

  1. Pod
    A Pod is the smallest deployable unit in Kubernetes, representing a single instance of a running process in the cluster. A Pod can contain one or more containers, which share the same network namespace and can share storage volumes. Pods enable co-located containers to communicate efficiently and are the foundation for scaling applications in Kubernetes.
  2. Node
    A Node is a physical or virtual machine that hosts the components necessary to run Pods. Kubernetes clusters consist of multiple Nodes that collectively handle application workloads. Nodes run the Kubelet (which ensures containers are running), the Kube Proxy (which handles networking), and the container runtime (such as Docker or containerd).
  3. Deployment
    A Deployment is a Kubernetes resource that manages the lifecycle of Pods. It defines the desired state of an application, including the number of replicas, and ensures that the desired number of Pods are running and available. Deployments also facilitate rolling updates and rollback capabilities, ensuring applications can be updated with minimal disruption.
  4. Service
    A Service provides a stable interface for accessing Pods, regardless of changes to the Pods themselves (e.g., scaling, replacements). Services enable load balancing and direct network traffic to the correct Pods, keeping applications highly available and reliable (see the example manifest after this list).
  5. Namespace
    Namespaces allow users to divide a Kubernetes cluster into multiple virtual clusters, providing a way to organize resources and isolate workloads. This is particularly useful in multi-tenant environments or when managing applications across different stages of the software development lifecycle (e.g., development, testing, and production).
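
As a concrete illustration of a Service, here is a sketch that exposes the hypothetical web Deployment shown earlier and load-balances traffic across its Pods:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80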

Why Kubernetes?

Kubernetes offers several compelling advantages that have driven its widespread adoption across industries. Some of the key benefits include:

  1. Scalability
    Kubernetes is designed to scale applications seamlessly. By leveraging auto-scaling capabilities, it can adjust the number of replicas of a service based on resource utilization or custom metrics. This dynamic scaling ensures that applications can handle varying levels of traffic without manual intervention (a brief example follows this list).
  2. High Availability
    Kubernetes provides built-in features for maintaining high availability. It automatically reschedules containers across nodes when a node fails, ensuring that applications remain operational. By replicating Pods across different nodes, Kubernetes helps ensure that services are always available, even in the event of hardware failures.
  3. Efficient Resource Utilization
    Kubernetes optimizes resource usage by ensuring that workloads are scheduled efficiently across the cluster. It allows developers and operators to define resource requests and limits for containers, helping prevent resource contention and ensuring that workloads are evenly distributed across available nodes.
  4. Declarative Configuration
    Kubernetes employs a declarative approach to configuration management. Rather than specifying step-by-step instructions for deployment, users describe the desired end state of an application (e.g., the number of replicas, resource allocations). Kubernetes continuously works to maintain this state, ensuring consistency across the cluster.
  5. Portability
    Kubernetes supports multi-cloud and hybrid-cloud environments, allowing applications to run seamlessly across different infrastructure providers. This level of abstraction means that organizations can migrate applications between different clouds or on-premises environments with minimal reconfiguration.
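
To make the scalability and resource-utilization points concrete, a brief sketch against the hypothetical web Deployment from earlier: set per-container resource requests and limits, then attach a HorizontalPodAutoscaler imperatively:

kubectl set resources deployment web --requests=cpu=100m,memory=128Mi --limits=cpu=500m,memory=256Mi
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80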

Kubernetes Architecture: An Overview

The architecture of Kubernetes consists of two main layers: the Control Plane and the Worker Nodes.

Control Plane

The Control Plane is responsible for managing the overall state of the Kubernetes cluster. Key components of the Control Plane include:

  • API Server: The API Server exposes the Kubernetes API, acting as the central point of communication between the cluster’s components. It serves as the interface for users and applications to interact with the Kubernetes cluster.
  • Scheduler: The Scheduler is responsible for assigning Pods to nodes based on resource availability and other constraints. It ensures that Pods are distributed across the cluster in an optimal manner.
  • Controller Manager: The Controller Manager ensures that the desired state of the cluster is maintained by continuously monitoring and adjusting the state of Pods, Nodes, and other resources. It handles tasks like scaling, replicating Pods, and managing deployments.
  • etcd: etcd is a consistent, highly available key-value store that holds all cluster state and configuration data used by the other Control Plane components.

Worker Nodes

The Worker Nodes are where the application workloads are executed. Each Worker Node contains the following components:

  • Kubelet: The Kubelet is an agent that runs on each node. It ensures that containers are running in Pods as expected and reports the status back to the Control Plane.
  • Kube Proxy: The Kube Proxy manages networking and load balancing across Pods within the cluster. It ensures that network traffic is properly directed to the correct Pods based on defined Services.
  • Container Runtime: The container runtime (e.g., Docker or containerd) is responsible for running and managing containers on the node.
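
On a running cluster, you can observe these components directly, for example:

kubectl get nodes -o wide
kubectl get pods -n kube-system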

Real-World Applications of Kubernetes

Kubernetes is a versatile platform used in a variety of real-world scenarios. Some of the most common use cases include:

  • Microservices Architecture: Kubernetes is ideal for managing microservices-based applications. It allows each microservice to run in its own container, while ensuring that they can scale independently and communicate reliably.
  • CI/CD Pipelines: Kubernetes simplifies the continuous integration and deployment process by automating the deployment, scaling, and monitoring of application components. It ensures that applications can be updated with minimal downtime and rollbacks are performed automatically if issues arise.
  • Hybrid Cloud Deployments: Kubernetes enables seamless workload management across multiple environments, including on-premises data centers and public clouds. It supports hybrid cloud strategies by abstracting the underlying infrastructure.

Conclusion

Kubernetes has fundamentally changed how organizations deploy and manage applications. Its ability to automate complex tasks such as scaling, load balancing, and self-healing has made it an indispensable tool for modern software development. As businesses continue to adopt containerized architectures, Kubernetes will remain at the forefront of container orchestration, driving efficiency, reliability, and scalability in application management.

By embracing Kubernetes, organizations can ensure they are equipped to manage and scale their applications in an increasingly dynamic and distributed environment.