Building Event-Driven Architectures with Kubernetes and NATS

Introduction

Modern cloud-native applications demand scalability, flexibility, and resilience. Traditional request-response communication patterns often lead to tight coupling between services, making them hard to scale independently. Event-driven architectures (EDA) solve this by enabling asynchronous, loosely coupled communication between microservices.

In this article, we will explore how to build an event-driven system using NATS (a lightweight, high-performance messaging system) on Kubernetes. We will:

  • Deploy a NATS messaging broker
  • Create a publisher service that emits events
  • Develop a subscriber service that listens and processes events
  • Demonstrate event-driven communication with Kubernetes

Prerequisites

Before we begin, ensure you have the following:

  • A running Kubernetes cluster (Minikube, k3s, or a self-managed cluster)
  • kubectl installed and configured
  • Helm installed for deploying NATS
  • Docker installed for building container images

Step 1: Deploy NATS on Kubernetes

We will use Helm to deploy NATS.

Install NATS using Helm

helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm repo update
helm install nats nats/nats --namespace default

Verify that the NATS pods are running:

kubectl get pods -l app.kubernetes.io/name=nats

Step 2: Create a Publisher Service

Our publisher will send messages to NATS on a specific subject.

Publisher Code (publisher.py)

import asyncio
import os

import nats

async def main():
    # NATS_SERVER is injected by the Deployment; fall back to the in-cluster DNS name.
    nats_url = os.environ.get("NATS_SERVER", "nats://nats.default.svc.cluster.local:4222")
    nc = await nats.connect(nats_url)
    await nc.publish("events.data", b"Hello, this is an event message!")
    print("Message sent!")
    await nc.close()

if __name__ == "__main__":
    asyncio.run(main())

Dockerfile for Publisher

FROM python:3.8
WORKDIR /app
COPY publisher.py .
RUN pip install nats-py
CMD ["python", "publisher.py"]

Build the Image

docker build -t publisher:latest .

If you are using Minikube, load the image into the cluster (for example, minikube image load publisher:latest) or set imagePullPolicy: Never in the Deployment so the node uses the local image.

Publisher Deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: publisher
spec:
  replicas: 1
  selector:
    matchLabels:
      app: publisher
  template:
    metadata:
      labels:
        app: publisher
    spec:
      containers:
        - name: publisher
          image: publisher:latest
          env:
            - name: NATS_SERVER
              value: "nats://nats.default.svc.cluster.local:4222"

Deploy the publisher:

kubectl apply -f publisher-deployment.yaml

Step 3: Create a Subscriber Service

Our subscriber listens to the events.data subject and processes messages.

Subscriber Code (subscriber.py)

import asyncio
import os

import nats

async def message_handler(msg):
    subject = msg.subject
    data = msg.data.decode()
    print(f"Received message on {subject}: {data}")

async def main():
    # NATS_SERVER is injected by the Deployment; fall back to the in-cluster DNS name.
    nats_url = os.environ.get("NATS_SERVER", "nats://nats.default.svc.cluster.local:4222")
    nc = await nats.connect(nats_url)
    await nc.subscribe("events.data", cb=message_handler)
    print("Listening for events...")
    # Keep the process alive so the subscription continues receiving messages.
    while True:
        await asyncio.sleep(1)

if __name__ == "__main__":
    asyncio.run(main())

Dockerfile for Subscriber

FROM python:3.8
WORKDIR /app
COPY subscriber.py .
RUN pip install nats-py
CMD ["python", "subscriber.py"]

Build the Image

docker build -t subscriber:latest .

Subscriber Deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: subscriber
spec:
  replicas: 1
  selector:
    matchLabels:
      app: subscriber
  template:
    metadata:
      labels:
        app: subscriber
    spec:
      containers:
        - name: subscriber
          image: subscriber:latest
          env:
            - name: NATS_SERVER
              value: "nats://nats.default.svc.cluster.local:4222"

Deploy the subscriber:

kubectl apply -f subscriber-deployment.yaml

Step 4: Test the Event-Driven Architecture

Once all components are deployed, check logs for event propagation.

  1. Check the subscriber logs:
kubectl logs -l app=subscriber
  2. Trigger the publisher manually. Deleting the publisher pod makes the Deployment recreate it, and the script publishes again on startup (see the looping variant below for a continuous stream):
kubectl delete pod -l app=publisher
  3. Observe the subscriber receiving events. If everything is set up correctly, the subscriber should print:
Received message on events.data: Hello, this is an event message!
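If you prefer a steady stream of events instead of a one-shot publisher, a looping variant (a sketch using the same nats-py client) could look like this:

import asyncio
import os

import nats

async def main():
    nats_url = os.environ.get("NATS_SERVER", "nats://nats.default.svc.cluster.local:4222")
    nc = await nats.connect(nats_url)
    try:
        while True:
            # Publish an event every five seconds so the subscriber sees a continuous flow.
            await nc.publish("events.data", b"Hello, this is an event message!")
            print("Message sent!")
            await asyncio.sleep(5)
    finally:
        await nc.close()

if __name__ == "__main__":
    asyncio.run(main())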

Conclusion

We successfully built an event-driven system using Kubernetes and NATS. This architecture allows microservices to communicate asynchronously, improving scalability, resilience, and maintainability.

Key takeaways:

  • NATS simplifies pub-sub messaging in Kubernetes.
  • Event-driven patterns decouple services and improve scalability.
  • Kubernetes provides a flexible infrastructure to deploy and manage such systems.

This architecture can be extended with multiple subscribers, durable streams, and event filtering for more advanced use cases.
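For example, NATS JetStream adds durable, replayable streams on top of core NATS. The sketch below assumes JetStream has been enabled in the NATS Helm deployment (check the chart's JetStream values for your release); it persists an event to a stream and reads it back with a durable consumer:

import asyncio

import nats

async def main():
    nc = await nats.connect("nats://nats.default.svc.cluster.local:4222")
    js = nc.jetstream()

    # Create (or reuse) a stream that captures every subject under "events.".
    await js.add_stream(name="EVENTS", subjects=["events.>"])

    # Publish a persisted message; JetStream acknowledges the write.
    ack = await js.publish("events.data", b"durable event")
    print(f"Stored in stream {ack.stream} at sequence {ack.seq}")

    # A durable consumer survives subscriber restarts and resumes where it left off.
    sub = await js.subscribe("events.data", durable="worker")
    msg = await sub.next_msg(timeout=5)
    print(f"Replayed: {msg.data.decode()}")
    await msg.ack()

    await nc.close()

if __name__ == "__main__":
    asyncio.run(main())

If you have any questions, let me know in the comments!👇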

Practical gRPC Communication Between Kubernetes Services

Introduction

Microservices architectures require efficient communication. While REST APIs are widely used, gRPC is a better alternative when high performance, streaming capabilities, and strict API contracts are required.

In this guide, we’ll set up gRPC communication between two services in Kubernetes:

  1. grpc-server → A gRPC server that provides a simple API.
  2. grpc-client → A client that interacts with the gRPC server.

This tutorial covers everything from scratch, including .proto files, Docker images, Kubernetes manifests, and testing.

Prerequisites

  • Kubernetes cluster (Minikube, Kind, or self-hosted)
  • kubectl installed
  • Docker installed
  • Basic knowledge of gRPC and Protobuf

Step 1: Define the gRPC Service Using Protocol Buffers

Create a file called service.proto:

syntax = "proto3";

package grpcservice;

// Define the gRPC Service
service Greeter {
    rpc SayHello (HelloRequest) returns (HelloReply);
}

// Define Request message
message HelloRequest {
    string name = 1;
}

// Define Response message
message HelloReply {
    string message = 1;
}
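The server and client below import service_pb2 and service_pb2_grpc, which are generated from service.proto. Generate them locally with grpcio-tools, for example:

pip install grpcio grpcio-tools
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. service.proto

This produces service_pb2.py and service_pb2_grpc.py in the current directory; both files are copied into the Docker images in Step 3.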

Step 2: Implement the gRPC Server and Client

gRPC Server (Python)

Create a file called server.py:

import grpc
from concurrent import futures
import service_pb2
import service_pb2_grpc

class GreeterServicer(service_pb2_grpc.GreeterServicer):
    def SayHello(self, request, context):
        return service_pb2.HelloReply(message=f"Hello, {request.name}!")

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    service_pb2_grpc.add_GreeterServicer_to_server(GreeterServicer(), server)
    server.add_insecure_port('[::]:50051')
    server.start()
    print("gRPC Server is running on port 50051...")
    server.wait_for_termination()

if __name__ == "__main__":
    serve()

gRPC Client (Python)

Create a file called client.py:

import grpc
import service_pb2
import service_pb2_grpc

def run():
    channel = grpc.insecure_channel('grpc-server:50051')
    stub = service_pb2_grpc.GreeterStub(channel)
    response = stub.SayHello(service_pb2.HelloRequest(name="Kubernetes"))
    print("Server response:", response.message)

if __name__ == "__main__":
    run()
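Note that in Kubernetes the client pod may start before the grpc-server Service is reachable, in which case the first call fails and the pod restarts. One option (a sketch, not required for the basic demo) is to wait for the channel to become ready before calling the stub:

import grpc

import service_pb2
import service_pb2_grpc

def run():
    channel = grpc.insecure_channel('grpc-server:50051')
    # Block until the connection to the server is established (or give up after 30 seconds).
    grpc.channel_ready_future(channel).result(timeout=30)
    stub = service_pb2_grpc.GreeterStub(channel)
    response = stub.SayHello(service_pb2.HelloRequest(name="Kubernetes"))
    print("Server response:", response.message)

if __name__ == "__main__":
    run()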

Step 3: Create Docker Images

Create a separate Dockerfile for the server and the client (saved below as Dockerfile.server and Dockerfile.client).

Dockerfile for gRPC Server

FROM python:3.8
WORKDIR /app
COPY server.py service_pb2.py service_pb2_grpc.py ./
RUN pip install grpcio grpcio-tools
CMD ["python", "server.py"]

Dockerfile for gRPC Client

FROM python:3.8
WORKDIR /app
COPY client.py service_pb2.py service_pb2_grpc.py ./
RUN pip install grpcio grpcio-tools
CMD ["python", "client.py"]

Build the Images

Build an image for each service, then make them available to your cluster (push them to a registry, or on Minikube use minikube image load):

docker build -t mygrpc-server:latest -f Dockerfile.server .
docker build -t mygrpc-client:latest -f Dockerfile.client .

Step 4: Deploy gRPC Services in Kubernetes

Deployment and Service for gRPC Server

Create grpc-server-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-server
  template:
    metadata:
      labels:
        app: grpc-server
    spec:
      containers:
        - name: grpc-server
          image: mygrpc-server:latest
          ports:
            - containerPort: 50051
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-server
spec:
  selector:
    app: grpc-server
  ports:
    - protocol: TCP
      port: 50051
      targetPort: 50051

Deployment for gRPC Client

Create grpc-client-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-client
  template:
    metadata:
      labels:
        app: grpc-client
    spec:
      containers:
        - name: grpc-client
          image: mygrpc-client:latest

Step 5: Apply the Manifests

Deploy everything to Kubernetes:

kubectl apply -f grpc-server-deployment.yaml
kubectl apply -f grpc-client-deployment.yaml

Check the status:

kubectl get pods

Step 6: Testing gRPC Communication

To see if the client successfully communicates with the server:

kubectl logs -l app=grpc-client

If everything works, you should see an output like:

Server response: Hello, Kubernetes!

Step 7: Exposing gRPC Service Externally (Optional)

If you want to expose the gRPC service externally using an Ingress, create grpc-ingress.yaml. Keep in mind that gRPC runs over HTTP/2, so the ingress controller must be configured to proxy gRPC to the backend (the annotation below assumes the NGINX Ingress Controller; most controllers also require TLS on the listener for gRPC):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    # Assumes the NGINX Ingress Controller; routes gRPC (HTTP/2) traffic to the backend service.
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  rules:
  - host: grpc.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grpc-server
            port:
              number: 50051

Apply the ingress:

kubectl apply -f grpc-ingress.yaml

Conclusion

gRPC on Kubernetes ensures fast, efficient, and scalable communication.
We built a gRPC server and client, deployed them on Kubernetes, and established seamless service-to-service communication.
This setup is ideal for high-performance microservices architectures.

Are you using gRPC in Kubernetes? Share your experience in the comments!👇

Serverless on Kubernetes: Setting Up Knative Serving

Introduction

In modern cloud-native environments, developers seek the best of both worlds: the flexibility of Kubernetes and the simplicity of serverless computing. Knative Serving brings serverless capabilities to Kubernetes, enabling auto-scaling, scale-to-zero, and request-driven execution.

Why Knative?

  • Auto-scaling – Scale pods based on incoming traffic.
  • Scale-to-zero – When no traffic exists, Knative frees up resources.
  • Traffic Splitting – Deploy multiple versions and roll out updates safely.
  • Event-driven – Respond dynamically to requests without managing infra manually.

In this guide, we’ll install Knative Serving on Kubernetes and deploy a simple serverless application.

Prerequisites

Ensure you have:
 ✅ A running Kubernetes cluster (Minikube, K3s, or any managed Kubernetes).
 ✅ kubectl installed and configured.
 ✅ A valid domain name (or use a local DNS setup like sslip.io).

Step 1: Installing Knative Serving

Install the Required CRDs

Knative requires custom resources for managing serving components. Apply them to your cluster.

kubectl apply -f https://github.com/knative/serving/releases/latest/download/serving-crds.yaml

Install the Knative Core Components

kubectl apply -f https://github.com/knative/serving/releases/latest/download/serving-core.yaml

Install a Networking Layer

Knative requires an ingress to route traffic. We’ll use Kourier (a lightweight option).

kubectl apply -f https://github.com/knative/net-kourier/releases/latest/download/kourier.yaml
kubectl patch configmap/config-network --namespace knative-serving --type merge --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'

Verify Installation

kubectl get pods -n knative-serving

All Knative components should be in Running state.

Step 2: Deploying a Serverless Application

We’ll deploy a simple Hello World application using Knative Serving.

Create a Knative Service

Define a Knative Service resource for our application and save it as hello-world.yaml.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-world
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative on Kubernetes"

Apply the manifest:

kubectl apply -f hello-world.yaml

Step 3: Testing the Serverless Application

Get the External URL

Run the following command to get the app URL:

kubectl get ksvc hello-world

You should see an output like:

NAME          URL                                      LATESTCREATED   LATESTREADY
hello-world   http://hello-world.default.example.com   hello-world-00001   hello-world-00001

To test it:

curl http://hello-world.default.example.com

You should see “Hello Knative on Kubernetes!”

Step 4: Auto-Scaling & Scale-to-Zero in Action

Knative automatically scales up when there’s traffic and scales down to zero when idle.

Send multiple requests to trigger scaling (for example, with the hey load-testing tool):

hey -n 100 -c 10 http://hello-world.default.example.com

Watch the pods scaling:

kubectl get pods -w

Wait a few minutes and check again:

kubectl get pods

If no traffic exists, the app scales to zero, freeing up cluster resources!

Conclusion

With Knative Serving, we have transformed Kubernetes into a serverless powerhouse!

  • Deployed request-driven applications
  • Enabled automatic scaling and scale-to-zero
  • Simplified service deployment with the Knative Service resource

Knative gives us the best of Kubernetes and serverless—scalability, flexibility, and resource efficiency. Now, you can deploy event-driven applications without worrying about infrastructure overhead.

Follow for more Kubernetes and cloud-native insights! Drop your thoughts in the comments!👇

Implementing the Circuit Breaker Pattern in Kubernetes

Introduction

In a microservices architecture, services communicate with each other over the network, which introduces latency, failures, and timeouts. If one service fails, it can cause cascading failures, leading to a complete system outage. The Circuit Breaker Pattern helps prevent these failures from propagating, ensuring system resilience.

In this blog, we’ll set up circuit breaking in Kubernetes using Istio, implement failure handling, and demonstrate how to recover from failures gracefully.

Why Circuit Breakers?

The Problem

  • Unstable services: If a dependent service is slow or failing, all requests pile up, increasing resource consumption.
  • Cascading failures: A single failing service can bring down the entire system.
  • Poor user experience: Without intelligent request handling, users experience timeouts and failures.

The Solution: Circuit Breaker Pattern

  • Detects when a service is slow or failing and temporarily stops sending requests.
  • Prevents resource exhaustion by limiting concurrent requests.
  • Automatically recovers once the service is stable (see the sketch below).
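Conceptually, a circuit breaker is a small state machine wrapped around calls to a dependency. Istio enforces this at the mesh level (as configured below), but a minimal application-level sketch in Python helps illustrate the closed/open/half-open cycle:

import time

class CircuitBreaker:
    # Fail fast while the breaker is open, then allow a probe after a cool-down period.
    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Open: reject immediately instead of waiting on a failing dependency.
                raise RuntimeError("circuit open - failing fast")
            self.opened_at = None  # cool-down elapsed: allow one half-open probe
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result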

Step 1: Setting Up Circuit Breaking in Kubernetes

Prerequisites

  • A running Kubernetes cluster (Minikube, kind, or any managed K8s).
  • Istio Service Mesh installed.
  • Istio sidecar injection enabled for the namespace where the services run (for example: kubectl label namespace default istio-injection=enabled).

Step 2: Deploying Sample Microservices

We will deploy two microservices:

  • Product Service (simulates a reliable service).
  • Order Service (calls the Product Service, sometimes failing).

Deploy the Product Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-service
  template:
    metadata:
      labels:
        app: product-service
    spec:
      containers:
      - name: product-service
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  selector:
    app: product-service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Deploy the Order Service

apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
      - name: order-service
        image: httpd
        ports:
        - containerPort: 80
        env:
        - name: PRODUCT_SERVICE_URL
          value: "http://product-service"
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Step 3: Enabling Circuit Breaking with Istio

Now, let’s limit requests to the Product Service using Istio’s DestinationRule.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: product-service-circuit-breaker
spec:
  host: product-service
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutive5xxErrors: 2
      interval: 5s
      baseEjectionTime: 30s

What’s Happening Here?

  • Limits max requests per connection to prevent overload.
  • If Istio detects 2 consecutive 5xx errors, it ejects the service for 30 seconds.
  • Prevents an unhealthy service from taking down dependent services.

Step 4: Testing the Circuit Breaker

To test the circuit breaker, send requests to the Product Service from inside the Order Service pod and observe how Istio reacts when errors occur:

kubectl exec -it $(kubectl get pod -l app=order-service -o jsonpath='{.items[0].metadata.name}') -- curl -X GET http://product-service
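A single request will not trip the breaker; Istio's outlier detection needs to observe consecutive 5xx responses. To generate repeated requests (a sketch, assuming curl is available in the order-service image as in the command above), run a small loop inside the pod:

kubectl exec -it $(kubectl get pod -l app=order-service -o jsonpath='{.items[0].metadata.name}') -- sh -c 'for i in $(seq 1 20); do curl -s -o /dev/null -w "%{http_code}\n" http://product-service; done'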

Now, if the Product Service fails multiple times, Istio stops sending requests temporarily, preventing further failures.

Conclusion

Circuit Breakers are essential for building fault-tolerant microservices:

  • Prevent cascading failures by intelligently rejecting requests.
  • Enhance system resilience by allowing only healthy services to process traffic.
  • Automatically recover when services come back online.

Using Istio’s built-in circuit breaking, Kubernetes workloads can self-heal and prevent system-wide outages!

Would you use Circuit Breakers in your production environment? Let’s discuss! 👇

Building a Microservices Architecture with Kubernetes: A Complete Example

Introduction

Modern applications demand scalability, flexibility, and resilience. Microservices architecture allows teams to break down monolithic applications into smaller, independent services that can be deployed, scaled, and managed separately.

In this blog, we’ll build a complete microservices-based application on Kubernetes, covering:

  • Defining multiple microservices
  • Exposing them via Kubernetes Services
  • Managing inter-service communication
  • Deploying and scaling them efficiently

Step 1: Define Our Microservices

For this example, we’ll create two services:

  • Product Service: Handles product details.
  • Order Service: Manages order placements and communicates with the Product service.

Deployment for Product Service

apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-service
  labels:
    app: product-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-service
  template:
    metadata:
      labels:
        app: product-service
    spec:
      containers:
      - name: product-service
        image: nginx:latest
        ports:
        - containerPort: 80

Service for Product Service

apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  selector:
    app: product-service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

Deployment for Order Service

apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
  labels:
    app: order-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
      - name: order-service
        image: httpd:latest
        env:
        - name: PRODUCT_SERVICE_URL
          value: "http://product-service"
        ports:
        - containerPort: 80

Service for Order Service

apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
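The manifests above use stock nginx and httpd images as placeholders. In a real implementation, the Order Service would use the PRODUCT_SERVICE_URL environment variable to call the Product Service over the cluster network; a hypothetical Python sketch (names and endpoints are illustrative, not part of the manifests above):

import os

import requests  # assumes the requests library is installed in the image

# The Service name resolves via cluster DNS; the Deployment injects this variable.
PRODUCT_SERVICE_URL = os.environ.get("PRODUCT_SERVICE_URL", "http://product-service")

def place_order(product_id: str) -> dict:
    # Look up product details from the Product Service before creating the order.
    resp = requests.get(f"{PRODUCT_SERVICE_URL}/products/{product_id}", timeout=2)
    resp.raise_for_status()
    return {"product": resp.json(), "status": "created"}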

Step 2: Deploy and Verify

Apply all YAML files:

kubectl apply -f product-service.yaml
kubectl apply -f order-service.yaml

Check if pods are running:

kubectl get pods

Verify services:

kubectl get svc

Step 3: Expose Services to External Users

To make the services accessible externally, use an Ingress resource.

Ingress Configuration

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices-ingress
spec:
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /products
        pathType: Prefix
        backend:
          service:
            name: product-service
            port:
              number: 80
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: order-service
            port:
              number: 80

Apply it:

kubectl apply -f ingress.yaml

Step 4: Scaling Microservices

Need to scale services? Just increase the replica count!

kubectl scale deployment product-service --replicas=5
kubectl scale deployment order-service --replicas=5

Verify scaling:

kubectl get deployments

Step 5: Observability and Logging

To monitor microservices performance, use Prometheus and Grafana for metrics and ELK Stack for centralized logging.

Example: Enable logs for a pod

kubectl logs -f <pod-name>

Conclusion

Microservices architecture, combined with Kubernetes, enables scalable, resilient, and manageable applications. By breaking monoliths into independent services, we:
✅ Improve scalability and fault tolerance
✅ Enable faster deployments and updates
✅ Simplify inter-service communication with Kubernetes Services

Start deploying microservices today and scale your applications like a pro! 

What challenges have you faced while working with microservices on Kubernetes? Let’s discuss in the comments!👇