Introduction
Microservices architectures require efficient communication. While REST APIs are widely used, gRPC is a better alternative when high performance, streaming capabilities, and strict API contracts are required.
In this guide, we’ll set up gRPC communication between two services in Kubernetes:
- grpc-server → A gRPC server that provides a simple API.
- grpc-client → A client that interacts with the gRPC server.
This tutorial covers everything from scratch, including .proto files, Docker images, Kubernetes manifests, and testing.
Prerequisites
- Kubernetes cluster (Minikube, Kind, or self-hosted)
- kubectl installed
- Docker installed
- Basic knowledge of gRPC and Protobuf
Step 1: Define the gRPC Service Using Protocol Buffers
Create a file called service.proto:
syntax = "proto3";

package grpcservice;

// Define the gRPC Service
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

// Define Request message
message HelloRequest {
  string name = 1;
}

// Define Response message
message HelloReply {
  string message = 1;
}
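The server and client below import service_pb2 and service_pb2_grpc, which are generated from service.proto. One way to generate them is with grpcio-tools (run this in the same directory as service.proto):
pip install grpcio grpcio-tools
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. service.proto
This produces service_pb2.py (the message classes) and service_pb2_grpc.py (the client stub and server base class), which the Dockerfiles in Step 3 copy into the images.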
Step 2: Implement the gRPC Server and Client
gRPC Server (Python)
Create a file called server.py:
import grpc
from concurrent import futures

import service_pb2
import service_pb2_grpc


class GreeterServicer(service_pb2_grpc.GreeterServicer):
    def SayHello(self, request, context):
        return service_pb2.HelloReply(message=f"Hello, {request.name}!")


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    service_pb2_grpc.add_GreeterServicer_to_server(GreeterServicer(), server)
    server.add_insecure_port('[::]:50051')
    server.start()
    print("gRPC Server is running on port 50051...")
    server.wait_for_termination()


if __name__ == "__main__":
    serve()
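If you want to sanity-check the server before containerizing it, start it locally with python server.py and call it from a Python shell. This is a minimal check, assuming the generated stub files are in the current directory:
import grpc
import service_pb2
import service_pb2_grpc

# Connect to the locally running server and make one call
channel = grpc.insecure_channel('localhost:50051')
stub = service_pb2_grpc.GreeterStub(channel)
print(stub.SayHello(service_pb2.HelloRequest(name="local")).message)  # Hello, local!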
gRPC Client (Python)
Create a file called client.py:
import grpc

import service_pb2
import service_pb2_grpc


def run():
    channel = grpc.insecure_channel('grpc-server:50051')
    stub = service_pb2_grpc.GreeterStub(channel)
    response = stub.SayHello(service_pb2.HelloRequest(name="Kubernetes"))
    print("Server response:", response.message)


if __name__ == "__main__":
    run()
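Because this client exits after a single request, a Deployment will restart its container over and over. A small variation that keeps the pod alive and retries periodically is easier to observe in the logs; this is just a sketch, and the 30-second interval is an arbitrary choice:
import time

import grpc

import service_pb2
import service_pb2_grpc


def run_forever():
    # Reuse one channel; 'grpc-server' resolves to the Kubernetes Service.
    channel = grpc.insecure_channel('grpc-server:50051')
    stub = service_pb2_grpc.GreeterStub(channel)
    while True:
        try:
            response = stub.SayHello(service_pb2.HelloRequest(name="Kubernetes"))
            print("Server response:", response.message, flush=True)
        except grpc.RpcError as err:
            # The server may not be ready yet; log the status code and retry.
            print("RPC failed:", err.code(), flush=True)
        time.sleep(30)


if __name__ == "__main__":
    run_forever()
The rest of the tutorial works with either version of the client.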
Step 3: Create Docker Images
Create a separate Dockerfile for the server and the client. They are named Dockerfile.server and Dockerfile.client here so both can live in the same directory.
Dockerfile for gRPC Server (Dockerfile.server)
FROM python:3.8
WORKDIR /app
RUN pip install grpcio grpcio-tools
COPY server.py service_pb2.py service_pb2_grpc.py ./
CMD ["python", "server.py"]
Dockerfile for gRPC Client (Dockerfile.client)
FROM python:3.8
WORKDIR /app
RUN pip install grpcio grpcio-tools
COPY client.py service_pb2.py service_pb2_grpc.py ./
CMD ["python", "client.py"]
Build and Push Images
Run the following commands from the project directory:
docker build -t mygrpc-server:latest -f Dockerfile.server .
docker build -t mygrpc-client:latest -f Dockerfile.client .
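The image tags above have no registry prefix, so the cluster has to find them locally. With Minikube or Kind you can load them straight into the cluster; for any other cluster, tag and push them to a registry it can pull from (the registry prefix below is a placeholder). The manifests in Step 4 set imagePullPolicy: IfNotPresent so Kubernetes uses the loaded image instead of trying to pull :latest from a remote registry.
# Minikube
minikube image load mygrpc-server:latest
minikube image load mygrpc-client:latest

# Kind
kind load docker-image mygrpc-server:latest
kind load docker-image mygrpc-client:latest

# Any other cluster: push to a registry it can reach
docker tag mygrpc-server:latest <your-registry>/mygrpc-server:latest
docker push <your-registry>/mygrpc-server:latest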
Step 4: Deploy gRPC Services in Kubernetes
Deployment and Service for gRPC Server
Create grpc-server-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-server
  template:
    metadata:
      labels:
        app: grpc-server
    spec:
      containers:
        - name: grpc-server
          image: mygrpc-server:latest
          # Use the locally built/loaded image; remove if pulling from a registry
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 50051
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-server
spec:
  selector:
    app: grpc-server
  ports:
    - protocol: TCP
      port: 50051
      targetPort: 50051
Deployment for gRPC Client
Create grpc-client-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-client
  template:
    metadata:
      labels:
        app: grpc-client
    spec:
      containers:
        - name: grpc-client
          image: mygrpc-client:latest
          # Use the locally built/loaded image; remove if pulling from a registry
          imagePullPolicy: IfNotPresent
Step 5: Apply the Manifests
Deploy everything to Kubernetes:
kubectl apply -f grpc-server-deployment.yaml
kubectl apply -f grpc-client-deployment.yaml
Check the status:
kubectl get pods
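If the client pod is not logging anything, it can help to confirm that the Service DNS name used in client.py (grpc-server) actually resolves inside the cluster. A throwaway pod is enough for that; busybox is just a convenient image here:
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup grpc-server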
Step 6: Testing gRPC Communication
To see whether the client successfully communicates with the server, check its logs:
kubectl logs -l app=grpc-client
If everything works, you should see output like:
Server response: Hello, Kubernetes!
With the one-shot client from Step 2, the container exits after each request and the Deployment restarts it, so the greeting appears once per restart; the long-running variant instead prints it every 30 seconds from a single pod.
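You can also call the server directly, without the client, using grpcurl (a third-party gRPC CLI). Since the server does not enable gRPC reflection, point grpcurl at service.proto and forward the Service port to your machine first:
# Terminal 1: forward the Service port to localhost
kubectl port-forward svc/grpc-server 50051:50051

# Terminal 2: call SayHello, using the proto file for the schema
grpcurl -plaintext -proto service.proto -d '{"name": "Kubernetes"}' \
  localhost:50051 grpcservice.Greeter/SayHello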
Step 7: Exposing gRPC Service Externally (Optional)
If you want to expose the gRPC service externally using an Ingress, keep in mind that gRPC runs over HTTP/2, so a plain HTTP rule is usually not enough: most ingress controllers need gRPC-specific configuration. The example below assumes the NGINX Ingress Controller, which uses the backend-protocol annotation (and, in most setups, TLS terminated at the ingress). Create grpc-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    # Tell the NGINX Ingress Controller to proxy this backend as gRPC (HTTP/2)
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
    - host: grpc.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grpc-server
                port:
                  number: 50051
Apply the ingress:
kubectl apply -f grpc-ingress.yaml
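Once DNS for grpc.example.com points at the ingress controller and TLS is configured on the ingress, you can test the external endpoint with grpcurl as well (add -insecure if you are using a self-signed certificate):
grpcurl -proto service.proto -d '{"name": "Ingress"}' \
  grpc.example.com:443 grpcservice.Greeter/SayHello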
Conclusion
gRPC on Kubernetes enables fast, efficient, and scalable service-to-service communication.
We built a gRPC server and client, deployed them on Kubernetes, and established seamless service-to-service communication.
This setup is ideal for high-performance microservices architectures.
Are you using gRPC in Kubernetes? Share your experience in the comments!