ExternalDNS: Automating DNS Management for Kubernetes Services

Introduction

Managing DNS records by hand in Kubernetes is time-consuming and error-prone: as services scale and change dynamically, keeping records up to date manually quickly becomes unworkable. ExternalDNS automates DNS record management by dynamically syncing records with Kubernetes objects.

In this blog, we will cover:
✅ What is ExternalDNS?
✅ How it works with Kubernetes
✅ Steps to deploy and configure it
✅ Best practices for seamless automation

What is ExternalDNS?

ExternalDNS is a Kubernetes add-on that automatically manages DNS records for services and ingress resources. It eliminates manual updates by dynamically syncing DNS records with Kubernetes objects.

Key Benefits:

  • Automated DNS Updates – No manual intervention required.
  • Multi-Cloud Support – Works with AWS Route 53, Cloudflare, Google Cloud DNS, etc.
  • Scalability – Adapts to dynamic changes in Kubernetes services.
  • Improved Reliability – Reduces misconfiguration and ensures consistency.

Deploying ExternalDNS in Kubernetes

Install ExternalDNS using Helm

helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update

For AWS Route 53 (the ExternalDNS pod also needs IAM permissions to modify Route 53 records, typically granted via IRSA):

helm install external-dns external-dns/external-dns \
  --namespace kube-system \
  --set provider=aws \
  --set txtOwnerId="my-cluster"

For Cloudflare, note that the cloudflare.apiToken value belongs to the Bitnami chart; the kubernetes-sigs chart added above expects the API token in the CF_API_TOKEN environment variable. First store the token in a Secret (the name cloudflare-api-token here is just an example):
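
kubectl create secret generic cloudflare-api-token \
  --namespace kube-system \
  --from-literal=token="YOUR_CLOUDFLARE_API_TOKEN"

Then install ExternalDNS, wiring the token into the container environment (the exact values layout can vary between chart versions, so double-check your chart's values):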

helm install external-dns external-dns/external-dns \
  --namespace kube-system \
  --set provider=cloudflare \
  --set txtOwnerId="my-cluster" \
  --set "env[0].name=CF_API_TOKEN" \
  --set "env[0].valueFrom.secretKeyRef.name=cloudflare-api-token" \
  --set "env[0].valueFrom.secretKeyRef.key=token"

Verify Installation

kubectl get pods -n kube-system -l app.kubernetes.io/name=external-dns

Configuring ExternalDNS for Kubernetes Services

Service Example (LoadBalancer Type)

apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080

Apply the service:

kubectl apply -f service.yaml
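
ExternalDNS also honors an optional TTL annotation (value in seconds) if you want to control the record's TTL; for example:

metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com
    external-dns.alpha.kubernetes.io/ttl: "60"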

Configuring ExternalDNS for Ingress Resources

Ingress Example

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80

Apply the Ingress resource:

kubectl apply -f ingress.yaml

Verifying DNS Records

Check ExternalDNS Logs

kubectl logs -l app.kubernetes.io/name=external-dns -n kube-system

Validate DNS Resolution

dig myapp.example.com

Expected output should contain the correct A record pointing to your service.

Conclusion

ExternalDNS simplifies DNS management in Kubernetes by automating record updates, reducing manual errors, and ensuring service availability.

Key Takeaways:

✅ Automates DNS record creation and updates
✅ Works with multiple cloud DNS providers
✅ Integrates seamlessly with Kubernetes services and ingress

By integrating ExternalDNS, Kubernetes administrators can enhance scalability, automation, and reliability in their infrastructure.

Have you used ExternalDNS in your Kubernetes setup? Share your experience!👇

Implementing mTLS in Kubernetes with Cert-Manager

Introduction

Securing internal communication between services in Kubernetes is a critical security practice. Mutual TLS (mTLS) ensures encrypted traffic while also verifying the identity of both the client and server. In this guide, we will configure mTLS between two microservices using Cert-Manager for automated certificate issuance and renewal.

Problem Statement

By default, Kubernetes services communicate in plaintext, making them vulnerable to man-in-the-middle attacks. We need a solution that:

  • Encrypts communication between services.
  • Ensures only trusted services can talk to each other.
  • Automates certificate management to avoid manual rotation.

Solution: mTLS with Cert-Manager

We will deploy:
✅ A Certificate Authority (CA) to issue certificates.
✅ A Kubernetes Issuer to generate TLS certificates.
✅ Two microservices (App One and App Two) configured with mTLS.
✅ A test pod to verify secure service-to-service communication.

Step 1: Install Cert-Manager

Cert-Manager automates TLS certificate lifecycle management. If you haven’t installed it yet, deploy it using Helm:

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true

Verify installation:

kubectl get pods -n cert-manager

Step 2: Configure a Certificate Authority (CA)

First, we need a CA to issue certificates for our services.

Create a self-signed root CA:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: root-ca
  namespace: cert-manager
spec:
  secretName: root-ca-secret
  isCA: true
  duration: 8760h   # 1 year (cert-manager uses Go duration syntax, so hours rather than "365d")
  renewBefore: 720h # 30 days
  subject:
    organizations:
      - MyOrg
  commonName: root-ca
  privateKey:
    algorithm: RSA
    size: 2048
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer

Apply it:

kubectl apply -f ca.yaml

Step 3: Issue TLS Certificates for Services

Now, let’s create a ClusterIssuer that signs certificates with our CA. (A namespaced Issuer in default would not be able to read root-ca-secret, which lives in the cert-manager namespace; a ClusterIssuer reads its secret from the cert-manager namespace by default.)

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca-issuer
spec:
  ca:
    secretName: root-ca-secret

Apply it:

kubectl apply -f issuer.yaml

Now, request certificates for App One and App Two:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-one-tls
  namespace: default
spec:
  secretName: app-one-tls-secret
  duration: 2160h   # 90 days
  renewBefore: 360h # renew 15 days before expiry (renewBefore must be shorter than duration)
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
  dnsNames:
    - app-one.default.svc.cluster.local
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-two-tls
  namespace: default
spec:
  secretName: app-two-tls-secret
  duration: 2160h   # 90 days
  renewBefore: 360h # renew 15 days before expiry
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
  dnsNames:
    - app-two.default.svc.cluster.local

Apply it:

kubectl apply -f app-certs.yaml
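
Cert-manager should report both certificates as Ready shortly afterwards:

kubectl get certificate -n default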

Step 4: Deploy the Services with TLS

Now, let’s deploy App One and App Two, mounting the certificates.

App One Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
  labels:
    app: app-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-one
  template:
    metadata:
      labels:
        app: app-one
    spec:
      containers:
      - name: app-one
        image: nginx
        ports:
        - containerPort: 443
        volumeMounts:
        - name: tls
          mountPath: "/etc/tls"
          readOnly: true
      volumes:
      - name: tls
        secret:
          secretName: app-one-tls-secret

Apply it:

kubectl apply -f app-one.yaml

App Two Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-two
  labels:
    app: app-two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-two
  template:
    metadata:
      labels:
        app: app-two
    spec:
      containers:
      - name: app-two
        image: nginx
        ports:
        - containerPort: 443
        volumeMounts:
        - name: tls
          mountPath: "/etc/tls"
          readOnly: true
      volumes:
      - name: tls
        secret:
          secretName: app-two-tls-secret

Apply it:

kubectl apply -f app-two.yaml
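
The certificates above are issued for app-one.default.svc.cluster.local and app-two.default.svc.cluster.local, so each Deployment also needs a matching Service; a minimal sketch (not shown in the original manifests):

apiVersion: v1
kind: Service
metadata:
  name: app-one
  namespace: default
spec:
  selector:
    app: app-one
  ports:
    - port: 443
      targetPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: app-two
  namespace: default
spec:
  selector:
    app: app-two
  ports:
    - port: 443
      targetPort: 443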

Step 5: Test mTLS Communication

We will now test service-to-service communication using mTLS.

Run a test pod with curl, mounting App One's certificate secret so the client certificate and CA bundle are available inside the pod (the plain curlimages/curl image has nothing at /etc/tls):

kubectl run curl-test --rm -it --image=curlimages/curl \
  --overrides='{"spec":{"volumes":[{"name":"tls","secret":{"secretName":"app-one-tls-secret"}}],"containers":[{"name":"curl-test","image":"curlimages/curl","stdin":true,"tty":true,"command":["sh"],"volumeMounts":[{"name":"tls","mountPath":"/etc/tls"}]}]}}'

Inside the pod, run:

curl --cacert /etc/tls/ca.crt --cert /etc/tls/tls.crt --key /etc/tls/tls.key https://app-two.default.svc.cluster.local:443

Expected Output:

Hello from App Two

If the request fails, check the pod logs and confirm that App Two actually terminates TLS on port 443 using the mounted certificates; the stock nginx image serves plain HTTP on port 80 until you give it a TLS server block, as sketched below.
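
A minimal nginx server block for App Two that enforces mTLS, assuming the cert-manager secret is mounted at /etc/tls (the file names tls.crt, tls.key, and ca.crt are what cert-manager writes into the secret); mount this config into the container via a ConfigMap under /etc/nginx/conf.d/:

server {
    listen 443 ssl;
    ssl_certificate        /etc/tls/tls.crt;   # server certificate
    ssl_certificate_key    /etc/tls/tls.key;   # server private key
    ssl_client_certificate /etc/tls/ca.crt;    # trust our CA for client certificates
    ssl_verify_client      on;                 # require a valid client certificate (mutual TLS)

    location / {
        return 200 "Hello from App Two\n";
    }
}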

Conclusion

With this setup, we’ve successfully implemented Mutual TLS (mTLS) in Kubernetes using Cert-Manager.

✅ Encrypted Communication – All traffic is secured via TLS.
✅ Mutual Authentication – Both services verify each other.
✅ Automated Certificate Lifecycle – Cert-Manager handles issuance & renewal.

Have you implemented mTLS in your Kubernetes clusters? Share your experiences in the comments! 👇

Setting Up an Ingress Controller with Advanced Routing in Kubernetes

Introduction

In a Kubernetes environment, managing external access to multiple services efficiently is crucial. This post walks through setting up the NGINX Ingress Controller in Minikube with advanced routing rules, including authentication and rate limiting. By the end of this guide, you’ll have a working setup where different services are exposed through a single Ingress with sophisticated HTTP routing.

Step 1: Deploying the NGINX Ingress Controller

Minikube does not include an Ingress controller by default, so we need to enable it:

minikube addons enable ingress

Verify the Ingress controller is running (recent Minikube versions deploy the addon into the ingress-nginx namespace; older releases used kube-system):

kubectl get pods -n ingress-nginx | grep controller

Expected output:

NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-xxxxxxxxxx-xxxxx   1/1     Running   0          2m

Step 2: Deploying Sample Applications

We’ll create two sample deployments with simple HTTP responses.

app-one.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
  labels:
    app: app-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-one
  template:
    metadata:
      labels:
        app: app-one
    spec:
      containers:
      - name: app-one
        image: hashicorp/http-echo
        args:
        - "-text=Hello from App One"
        ports:
        - containerPort: 5678  # http-echo listens on 5678 by default
---
apiVersion: v1
kind: Service
metadata:
  name: app-one
spec:
  selector:
    app: app-one
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678

app-two.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-two
  labels:
    app: app-two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-two
  template:
    metadata:
      labels:
        app: app-two
    spec:
      containers:
      - name: app-two
        image: hashicorp/http-echo
        args:
        - "-text=Hello from App Two"
        ports:
        - containerPort: 5678  # http-echo listens on 5678 by default
---
apiVersion: v1
kind: Service
metadata:
  name: app-two
spec:
  selector:
    app: app-two
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678

Apply these configurations:

kubectl apply -f app-one.yaml
kubectl apply -f app-two.yaml

Step 3: Creating the Advanced Ingress

Now, we’ll create an Ingress resource to route traffic based on the request path.

advanced-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: advanced-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/limit-rps: "5"  # Rate limiting: max 5 requests per second
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /app-one
        pathType: Prefix
        backend:
          service:
            name: app-one
            port:
              number: 80
      - path: /app-two
        pathType: Prefix
        backend:
          service:
            name: app-two
            port:
              number: 80

Apply the Ingress:

kubectl apply -f advanced-ingress.yaml

Check if the Ingress is created:

kubectl get ingress

Expected output:

NAME               CLASS   HOSTS         ADDRESS        PORTS   AGE
advanced-ingress   nginx   myapp.local   192.168.49.2   80      5s

Step 4: Testing the Ingress

First, update your /etc/hosts file to map myapp.local to Minikube’s IP:

echo "$(minikube ip) myapp.local" | sudo tee -a /etc/hosts

Test the routes:

curl -H "Host: myapp.local" http://myapp.local/app-one
curl -H "Host: myapp.local" http://myapp.local/app-two

Expected responses:

Hello from App One
Hello from App Two
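
You can also exercise the rate limit; anything beyond roughly 5 requests per second (plus the small burst NGINX allows on top) should start returning 503:

for i in $(seq 1 30); do
  curl -s -o /dev/null -w "%{http_code}\n" http://myapp.local/app-one
done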

Step 5: Enforcing Basic Authentication

To secure access, we add Basic Authentication for app-one.

First, create an htpasswd file named auth (the ingress-nginx basic-auth integration expects the key inside the secret to be called auth) and store it in a secret; htpasswd ships with apache2-utils:

htpasswd -c auth admin
kubectl create secret generic my-auth-secret --from-file=auth -n default

Modify advanced-ingress.yaml to enforce authentication:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: advanced-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/limit-rps: "5"
    nginx.ingress.kubernetes.io/auth-type: "basic"
    nginx.ingress.kubernetes.io/auth-secret: "my-auth-secret"
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /app-one
        pathType: Prefix
        backend:
          service:
            name: app-one
            port:
              number: 80
      - path: /app-two
        pathType: Prefix
        backend:
          service:
            name: app-two
            port:
              number: 80

Reapply the Ingress:

kubectl apply -f advanced-ingress.yaml

Test authentication:

curl -u admin:your-password -H "Host: myapp.local" http://myapp.local/app-one

If correct, it will return:

Hello from App One

Without credentials, it returns:

401 Unauthorized

Conclusion

By following this guide, we have:

✅ Deployed an NGINX Ingress Controller in Minikube.
✅ Configured multiple applications behind a single Ingress resource.
✅ Implemented rate limiting to control excessive requests.
✅ Secured an endpoint using Basic Authentication.

These techniques are essential when deploying microservices in production environments. You can further extend this setup with TLS termination, JWT authentication, or OAuth integration.

Let me know in the comments if you have any questions!👇

Practical Traffic Splitting and Canary Deployments with Istio

Introduction

As applications evolve, releasing new versions safely is crucial. Traditional deployment methods often risk downtime or entire system failures if a new release is faulty. Canary deployments allow gradual rollout of new versions while monitoring performance.

With Istio, we can implement traffic splitting to control how much traffic goes to each version, ensuring a smooth transition without disruptions.

In this post, we’ll walk through:
✅ Setting up Istio in Minikube
✅ Deploying two versions of an app
✅ Using Istio’s VirtualService and DestinationRule for canary rollout

Step 1: Install and Configure Istio in Minikube

Since we are working in a local Minikube cluster, first enable Istio:

minikube start
istioctl install --set profile=demo -y
kubectl label namespace default istio-injection=enabled

The istio-injection=enabled label ensures that Istio automatically injects sidecar proxies into our pods.

Step 2: Deploy Application Versions

We’ll deploy two versions of our application (v1 and v2).

Create myapp:v1 Deployment

Save the following YAML as deployment-v1.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
        - name: myapp
          image: myapp:v1  # Using locally built image
          ports:
            - containerPort: 80

Create myapp:v2 Deployment

Save the following YAML as deployment-v2.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
        - name: myapp
          image: myapp:v2  # Using locally built image
          ports:
            - containerPort: 80

Apply both deployments:

kubectl apply -f deployment-v1.yaml
kubectl apply -f deployment-v2.yaml
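
Note that myapp:v1 and myapp:v2 are locally built images (placeholder names here), so make sure Minikube can see them before the pods start, for example:

minikube image load myapp:v1
minikube image load myapp:v2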

Step 3: Define an Istio Service

We need a service to route traffic to both versions.

Save the following as service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Apply the service:

kubectl apply -f service.yaml

Step 4: Create Istio VirtualService for Traffic Splitting

Now, let’s configure Istio to split traffic between v1 and v2.

Save the following YAML as virtual-service.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
    - myapp
  http:
    - route:
        - destination:
            host: myapp
            subset: v1
          weight: 80
        - destination:
            host: myapp
            subset: v2
          weight: 20

This configuration sends 80% of traffic to v1 and 20% to v2.

Apply the VirtualService:

kubectl apply -f virtual-service.yaml

Step 5: Define an Istio DestinationRule

To allow version-based routing, we need a DestinationRule.

Save the following YAML as destination-rule.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

Apply the DestinationRule:

kubectl apply -f destination-rule.yaml

Step 6: Test Traffic Splitting

Now, let’s check the traffic distribution:

kubectl run -it --rm --image=curlimages/curl test -- curl http://myapp

Run this several times; roughly 80% of the responses should come from v1 and 20% from v2 (assuming each version returns an identifying string).
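
To see the split more clearly, loop from inside an interactive pod and tally the responses (again assuming v1 and v2 return distinguishable bodies):

kubectl run -it --rm --image=curlimages/curl test -- sh
# inside the pod:
for i in $(seq 1 100); do curl -s http://myapp; done | sort | uniq -c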

Conclusion

By implementing Istio’s VirtualService and DestinationRule, we successfully built a canary deployment that gradually rolls out a new version without impacting all users at once.

Key Takeaways:
✅ Istio simplifies traffic control for Kubernetes applications.
✅ Canary deployments allow safe testing of new versions.
✅ Traffic splitting can be adjusted dynamically as confidence in v2 increases.

This approach ensures zero downtime deployments, improving stability and user experience. 

What’s your experience with Istio? Drop a comment below!👇

Implementing Istio: A Step-by-Step Service Mesh Tutorial

Introduction

Modern applications rely on microservices, making service-to-service communication complex. Managing traffic routing, security, and observability becomes crucial.

Istio is a powerful service mesh that provides:
✅ Traffic Management – Fine-grained control over requests.
✅ Security – Mutual TLS (mTLS) for encrypted communication.
✅ Observability – Insights into service interactions and performance.

This step-by-step guide covers:

  • Installing Istio on a Kubernetes cluster.
  • Deploying microservices with Istio sidecars.
  • Configuring traffic routing and security.
  • Enabling monitoring with Grafana, Kiali, and Jaeger.

Step 1: Install Istio in Kubernetes

1.1 Download and Install Istio CLI

curl -L https://istio.io/downloadIstio | sh -
cd istio-*
export PATH=$PWD/bin:$PATH

1.2 Install Istio with the Demo Profile

istioctl install --set profile=demo -y

1.3 Enable Istio Injection

Enable automatic sidecar injection in the default namespace:

kubectl label namespace default istio-injection=enabled

Step 2: Deploy Microservices with Istio

We will deploy two microservices:

  • web – calls the api service.
  • api – responds with “Hello from API”.

2.1 Deploy web Service

Create web-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80

Create web-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Apply the deployment:

kubectl apply -f web-deployment.yaml
kubectl apply -f web-service.yaml

2.2 Deploy api Service

Create api-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
        version: v1  # lets the DestinationRule subset below select these pods
    spec:
      containers:
      - name: api
        image: hashicorp/http-echo
        args: ["-text=Hello from API"]
        ports:
        - containerPort: 5678

Create api-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678

Apply the deployment:

kubectl apply -f api-deployment.yaml
kubectl apply -f api-service.yaml

Step 3: Configure Istio Traffic Routing

3.1 Create a VirtualService for Traffic Control

Create api-virtualservice.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api
spec:
  hosts:
  - api
  http:
  - route:
    - destination:
        host: api
        subset: v1

Apply the rule:

kubectl apply -f api-virtualservice.yaml
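
The subset: v1 reference only resolves if a DestinationRule defines it (and the api pods carry the matching version: v1 label added to the Deployment above). Create api-destinationrule.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api
spec:
  host: api
  subsets:
  - name: v1
    labels:
      version: v1

Apply it:

kubectl apply -f api-destinationrule.yaml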

Step 4: Enable Observability & Monitoring

4.1 Install Kiali, Jaeger, Prometheus, and Grafana

From the root of the Istio release directory downloaded in Step 1:

kubectl apply -f samples/addons

4.2 Access the Monitoring Dashboards

kubectl port-forward svc/kiali 20001 -n istio-system

Open http://localhost:20001 to view the Kiali dashboard.
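
Grafana and Jaeger can be opened the same way, or more conveniently via istioctl:

istioctl dashboard grafana
istioctl dashboard jaeger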

Step 5: Secure Service-to-Service Communication

5.1 Enable mTLS Between Services

Create peerauthentication.yaml:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
spec:
  mtls:
    mode: STRICT

Apply the policy:

kubectl apply -f peerauthentication.yaml
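
To confirm STRICT mode is enforced, try reaching api in plaintext from a pod outside the mesh (a namespace without sidecar injection); the connection should be rejected, while injected pods keep working:

kubectl create namespace legacy
kubectl run curl-legacy -n legacy --rm -it --image=curlimages/curl -- \
  curl -sv http://api.default.svc.cluster.local
# expect a connection failure/reset, since the server side now requires mTLS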

Conclusion

We have successfully:
✅ Installed Istio and enabled sidecar injection.
✅ Deployed microservices inside the service mesh.
✅ Configured traffic routing using VirtualServices.
✅ Enabled observability tools like Grafana, Jaeger, and Kiali.
✅ Secured communication using mTLS encryption.

Istio simplifies microservices networking while enhancing security and visibility. Start using it today!

Are you using Istio in production? Share your experiences below!👇