Practical Kubernetes Tracing with Jaeger

Introduction

In modern microservices architectures, debugging performance issues can be challenging. Requests often travel across multiple services, making it difficult to identify bottlenecks. Jaeger, an open-source distributed tracing system, helps solve this problem by providing end-to-end request tracing across services.

In this blog post, we will explore how to:
✅ Deploy Jaeger in Kubernetes
✅ Set up distributed tracing without building custom images
✅ Use an OpenTelemetry-enabled NGINX for tracing

Step 1: Deploying Jaeger in Kubernetes

The easiest way to deploy Jaeger in Kubernetes is by using Helm.

Installing Jaeger Using Helm

To install Jaeger in the observability namespace, run:

helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
helm repo update
helm install jaeger jaegertracing/jaeger \
  --namespace observability --create-namespace \
  --set query.service.httpPort=16686
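Note that by default this chart provisions Cassandra as the trace storage backend, which is heavyweight for a demo. For throwaway setups, the chart can instead run Jaeger's all-in-one image with in-memory storage. A values sketch (key names reflect the jaegertracing/jaeger chart at the time of writing, so verify them against the chart's values.yaml for your chart version):

```yaml
# values-demo.yaml (illustrative; confirm keys against the chart version in use)
provisionDataStore:
  cassandra: false   # skip the Cassandra dependency
allInOne:
  enabled: true      # one pod: collector, query UI, and in-memory storage
storage:
  type: memory
agent:
  enabled: false     # the split components are not needed in this mode
collector:
  enabled: false
query:
  enabled: false
```

Install with `helm install jaeger jaegertracing/jaeger -n observability --create-namespace -f values-demo.yaml`. Service names can differ slightly in this mode, so check `kubectl get svc -n observability` before port-forwarding.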

Verify the Deployment

Check if Jaeger is running:

kubectl get pods -n observability
kubectl get svc -n observability

You should see services like jaeger-collector and jaeger-query.

Step 2: Deploying an NGINX-Based Application with OpenTelemetry

Instead of building a custom image, we use the OpenTelemetry-enabled NGINX image variant, which ships NGINX's native OpenTelemetry module (ngx_otel_module) and can export traces to Jaeger over OTLP.

Creating the Deployment

Here’s the YAML configuration for an NGINX service that integrates with Jaeger:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-tracing
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-tracing
  template:
    metadata:
      labels:
        app: nginx-tracing
    spec:
      containers:
        - name: nginx
          image: nginx:otel # otel tag variants bundle ngx_otel_module; pin a version in production
          ports:
            - containerPort: 80
          env:
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://jaeger-collector:4317"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-tracing
  namespace: observability
spec:
  selector:
    app: nginx-tracing
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
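One caveat: ngx_otel_module reads its exporter endpoint from nginx.conf, not from environment variables (the OTEL_EXPORTER_OTLP_ENDPOINT variable above is the SDK convention and may not be picked up by the module). A minimal sketch of the module configuration, assuming the nginx:otel image variant and a ConfigMap mounted over /etc/nginx/nginx.conf (the volume and volumeMount wiring on the Deployment is left as an exercise):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-otel-conf
  namespace: observability
data:
  nginx.conf: |
    # Load the OpenTelemetry module bundled with the otel image variant.
    load_module modules/ngx_otel_module.so;
    events {}
    http {
      otel_exporter {
        # Jaeger's collector accepts OTLP over gRPC on port 4317.
        endpoint jaeger-collector:4317;
      }
      otel_service_name nginx-tracing;   # how spans are labeled in the Jaeger UI
      server {
        listen 80;
        location / {
          otel_trace on;                 # emit a span for every request
          otel_trace_context propagate;  # honor and forward W3C trace headers
          root /usr/share/nginx/html;
        }
      }
    }
```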

Deploying the Application

Apply the deployment:

kubectl apply -f nginx-tracing.yaml

Step 3: Accessing the Application

To expose the NGINX service locally, run:

kubectl port-forward svc/nginx-tracing 8080:80 -n observability

Now, visit http://localhost:8080 in your browser.
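Traces only appear once requests actually flow through NGINX. A small hypothetical helper for generating traffic (the URL and request count are illustrative defaults targeting the port-forward above):

```shell
# Hypothetical helper: send N requests to a URL and report each status code.
# curl prints "HTTP 000" when the connection fails, so you can tell at a
# glance whether NGINX is reachable through the port-forward.
generate_traffic() {
  url="${1:-http://localhost:8080/}"
  count="${2:-20}"
  i=1
  while [ "$i" -le "$count" ]; do
    curl -s -o /dev/null -w "request $i -> HTTP %{http_code}\n" "$url"
    i=$((i + 1))
  done
}

generate_traffic
```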

Step 4: Viewing Traces in Jaeger

To access the Jaeger UI, forward the query service port:

kubectl port-forward svc/jaeger-query 16686:16686 -n observability

Now, open http://localhost:16686 and search for traces from NGINX.

Conclusion

In this guide, we:
✅ Deployed Jaeger using Helm for distributed tracing.
✅ Used an OpenTelemetry-enabled NGINX image to send traces without building custom images.
✅ Accessed the Jaeger UI to visualize trace data.

Why is tracing important in your Kubernetes setup? Share your thoughts below!👇
