Practical Knative: Building Serverless Functions on Kubernetes

Introduction

Serverless computing has revolutionized the way we build and deploy applications, allowing developers to focus on writing code rather than managing infrastructure. Kubernetes, the powerful container orchestration platform, provides several serverless frameworks, and Knative is one of the most popular solutions.

In this guide, we will explore Knative, its architecture, and how to build serverless functions that react to various events in a Kubernetes cluster.

The Problem: Managing Event-Based Processing

Traditional applications require developers to set up and manage servers, configure scaling policies, and handle infrastructure complexities. This becomes challenging when dealing with event-driven architectures, such as:

  • Processing messages from Kafka or NATS
  • Responding to HTTP requests
  • Triggering functions on a schedule (cron)
  • Automating workflows inside Kubernetes

Manually setting up and scaling these workloads is inefficient. Knative solves this by providing a robust, event-driven serverless solution on Kubernetes.

What is Knative?

Knative is a Kubernetes-native serverless framework that enables developers to deploy and scale serverless applications efficiently. It eliminates the need for external FaaS (Function-as-a-Service) platforms and integrates seamlessly with Kubernetes events, message queues, and HTTP triggers.

Why Use Knative?

  • Built on Kubernetes with strong community support
  • Supports multiple runtimes: Python, Node.js, Go, Java, and more
  • Works with event sources like Kafka, NATS, HTTP, and cron schedules
  • Scales down to zero when no requests are received
  • Backed by Google, Red Hat, VMware, and other major vendors

Installing Knative on Kubernetes

To start using Knative, first install its core components on your Kubernetes cluster.

Step 1: Install Knative Serving

Knative Serving is responsible for running serverless workloads. Install it using:

kubectl apply -f https://github.com/knative/serving/releases/latest/download/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/latest/download/serving-core.yaml

Step 2: Install a Networking Layer

Knative requires a networking layer like Istio, Kourier, or Contour. To install Kourier:

kubectl apply -f https://github.com/knative/net-kourier/releases/latest/download/kourier.yaml
kubectl patch configmap/config-network --namespace knative-serving --type merge --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'

Verify that Knative is installed:

kubectl get pods -n knative-serving

Deploying a Serverless Function

Step 1: Writing a Function

Let’s create a simple Python function that responds to HTTP requests.

Create a file hello.py with the following content:

import os
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello, Serverless World!"

if __name__ == "__main__":
    # Knative tells the container which port to listen on via the PORT env var (default 8080)
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))

Step 2: Creating a Container Image
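
The docker build command below expects a Dockerfile next to hello.py, which the original steps don't show. A minimal sketch, assuming Flask is the only dependency and the app is started with python hello.py:

FROM python:3.11-slim
WORKDIR /app
RUN pip install flask
COPY hello.py .
CMD ["python", "hello.py"]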

Build and push the image to a container registry:

docker build -t <your-dockerhub-username>/hello-knative .
docker push <your-dockerhub-username>/hello-knative

Step 3: Deploying the Function with Knative

Create a Knative service:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-knative
spec:
  template:
    spec:
      containers:
      - image: <your-dockerhub-username>/hello-knative

Apply the YAML file:

kubectl apply -f hello-knative.yaml
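
The service above relies on Knative's defaults, including scale-to-zero. If you want to tune the scaling behavior, you can set autoscaling annotations on the revision template; a minimal sketch (the min-scale/max-scale annotation keys follow recent Knative documentation, so verify them against your installed version):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-knative
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # allow scaling down to zero pods
        autoscaling.knative.dev/max-scale: "3"   # cap the number of pods
    spec:
      containers:
      - image: <your-dockerhub-username>/hello-knative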

Step 4: Testing the Function

Retrieve the function URL:

kubectl get ksvc hello-knative

Invoke the function:

curl http://<SERVICE-URL>

You should see the output:

Hello, Serverless World!

Using Event Triggers

Kafka Trigger

Knative Eventing lets you trigger functions from Kafka topics. Install the Knative Eventing core components first (for Kafka you will also need the Knative Kafka broker or KafkaSource add-on):

kubectl apply -f https://github.com/knative/eventing/releases/latest/download/eventing-crds.yaml
kubectl apply -f https://github.com/knative/eventing/releases/latest/download/eventing-core.yaml
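
The Trigger below subscribes our service to a Broker named default. If no Broker exists yet in the namespace, create one; a minimal sketch (this assumes a broker implementation is installed on top of the eventing core, for example the MT channel-based broker for testing or the Knative Kafka broker for a real Kafka setup):

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: default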

Create a Trigger that routes events from the broker to our function:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: kafka-trigger
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: hello-knative

Apply the trigger:

kubectl apply -f kafka-trigger.yaml

Publish a message to Kafka:

echo '{"message": "Hello Kafka!"}' | kubectl -n knative-eventing exec -i kafka-producer -- kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9092 --topic my-topic

CronJob Trigger

Run a function every 5 minutes using a PingSource, Knative's cron-style event source:

apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: cron-trigger
spec:
  schedule: "*/5 * * * *"
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: hello-knative

Apply the trigger:

kubectl apply -f cron-trigger.yaml
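
To confirm the source is firing, check its status and watch the function's logs. The label selector and container name below follow Knative Serving's defaults and may differ in your setup:

kubectl get pingsources
kubectl logs -l serving.knative.dev/service=hello-knative -c user-container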

Conclusion

Knative provides a powerful and scalable serverless framework for Kubernetes. By integrating with HTTP, Kafka, and CronJob triggers, it enables truly event-driven serverless architectures without managing infrastructure.

What’s your experience with Knative? Let’s discuss in the comments! 👇

Using Kubernetes Jobs and CronJobs for Batch Processing

Introduction

Kubernetes provides powerful constructs for running batch workloads efficiently. Jobs and CronJobs enable reliable, scheduled, and parallel execution of tasks, making them perfect for data processing, scheduled reports, and maintenance tasks.

In this blog, we’ll explore:
✅ What Jobs and CronJobs are
✅ How to create and manage them
✅ Real-world use cases

Step 1: Understanding Kubernetes Jobs

A Job ensures that a task runs to completion. It can run a single pod, multiple pods in parallel, or restart failed ones until successful. Jobs are useful when you need to process a batch of data once (e.g., database migrations, log processing).

Creating a Kubernetes Job

Let’s create a Job that runs a simple batch script inside a pod.

YAML for a Kubernetes Job

apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  template:
    spec:
      containers:
      - name: batch-job
        image: busybox
        command: ["sh", "-c", "echo 'Processing data...'; sleep 10; echo 'Job completed.'"]
      restartPolicy: Never

Apply the Job:

kubectl apply -f batch-job.yaml

Check Job status:

kubectl get jobs
kubectl logs job/batch-job

Cleanup:

kubectl delete job batch-job
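
The example above runs a single pod to completion. The parallel execution and retry behavior mentioned earlier are controlled by the completions, parallelism, and backoffLimit fields; a minimal sketch:

apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-batch-job
spec:
  completions: 5      # five pods must finish successfully
  parallelism: 2      # run at most two pods at a time
  backoffLimit: 3     # retry failed pods up to three times
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo 'Processing chunk...'; sleep 5"]
      restartPolicy: Never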

Step 2: Using Kubernetes CronJobs for Scheduled Tasks

A CronJob runs Jobs at scheduled intervals, just like a traditional Linux cron job. It’s perfect for recurring data processing, backups, and report generation.

Creating a Kubernetes CronJob

Let’s schedule a Job that runs every minute and prints a timestamp.

YAML for a Kubernetes CronJob

apiVersion: batch/v1
kind: CronJob
metadata:
  name: scheduled-job
spec:
  schedule: "* * * * *"  # Runs every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: scheduled-job
            image: busybox
            command: ["sh", "-c", "echo 'Scheduled job running at: $(date)'"]
          restartPolicy: OnFailure

Apply the CronJob:

kubectl apply -f cronjob.yaml

Check the CronJob execution:

kubectl get cronjobs
kubectl get jobs

View logs from the latest Job:

kubectl logs job/<job-name>

Delete the CronJob:

kubectl delete cronjob scheduled-job
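
For recurring production tasks you will usually also want to control overlapping runs and history retention. A minimal sketch of the relevant CronJob spec fields (the name and schedule here are just illustrative):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"              # every day at 02:00
  concurrencyPolicy: Forbid          # skip a run if the previous one is still going
  startingDeadlineSeconds: 300       # give up on a run if it can't start within 5 minutes
  successfulJobsHistoryLimit: 3      # keep the last 3 successful Jobs
  failedJobsHistoryLimit: 1          # keep the last failed Job for debugging
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: busybox
            command: ["sh", "-c", "echo 'Running backup...'"]
          restartPolicy: OnFailure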

Step 3: Use Cases of Jobs and CronJobs

  • Data Processing: Running batch scripts to clean and analyze data
  • Database Backups: Taking periodic snapshots of databases
  • Report Generation: Automating daily/monthly analytics reports
  • File Transfers: Scheduled uploads/downloads of files
  • System Maintenance: Automating cleanup of logs, cache, and unused resources

Conclusion

Kubernetes Jobs and CronJobs simplify batch processing and scheduled tasks, ensuring reliable execution even in distributed environments. By leveraging them, you automate workflows, optimize resources, and enhance reliability.

Are you using Kubernetes Jobs and CronJobs in your projects? Share your experiences in the comments!👇

Implementing Admission Controllers: Enforcing Organizational Policies in Kubernetes

Introduction

In a Kubernetes environment, ensuring compliance with security and operational policies is critical. Admission controllers provide a mechanism to enforce organizational policies at the API level before resources are created, modified, or deleted.

In this post, we will build a simple admission controller in Minikube. Instead of using a custom image, we will leverage a lightweight existing image (busybox) to demonstrate the webhook concept.

Why Use Admission Controllers?

Admission controllers help organizations enforce policies such as:
✅ Blocking privileged containers
✅ Enforcing resource limits
✅ Validating labels and annotations
✅ Restricting image sources

By implementing an admission webhook, we can inspect and validate incoming requests before they are persisted in the Kubernetes cluster.

Step 1: Create the Webhook Deployment

We will use busybox as the container image instead of a custom-built admission webhook image.

Create webhook-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: admission-webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: admission-webhook
  template:
    metadata:
      labels:
        app: admission-webhook
    spec:
      containers:
      - name: webhook
        image: busybox
        command: ["/bin/sh", "-c", "echo Webhook Running; sleep 3600"]
        ports:
        - containerPort: 443
        volumeMounts:
        - name: certs
          mountPath: "/certs"
          readOnly: true
      volumes:
      - name: certs
        secret:
          secretName: admission-webhook-secret

Key Changes:

  • Using busybox instead of a custom image
  • The container prints “Webhook Running” and sleeps for 1 hour
  • Mounting a secret to hold TLS certificates

Step 2: Generate TLS Certificates

Kubernetes requires admission webhooks to communicate securely. We need to generate TLS certificates for our webhook server.

Run the following commands in Minikube:

openssl req -x509 -newkey rsa:4096 -keyout tls.key -out tls.crt -days 365 -nodes -subj "/CN=admission-webhook.default.svc" -addext "subjectAltName=DNS:admission-webhook.default.svc"
kubectl create secret tls admission-webhook-secret --cert=tls.crt --key=tls.key

This creates a self-signed certificate whose subject alternative name matches the webhook Service's in-cluster DNS name (recent Kubernetes versions reject serving certificates that only set a Common Name) and stores it in a Kubernetes Secret.

Step 3: Define the MutatingWebhookConfiguration

Now, let’s create a Kubernetes webhook configuration that tells the API server when to invoke our webhook.

Create webhook-configuration.yaml:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: admission-webhook
webhooks:
  - name: webhook.default.svc
    clientConfig:
      service:
        name: admission-webhook
        namespace: default
        path: "/mutate"
      caBundle: ""
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None

Key Details:

  • The webhook applies to all Pods created in the cluster
  • The webhook will be called whenever a Pod is created
  • The caBundle field must be filled with the base64-encoded CA certificate (see the note below)
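
The clientConfig above points at a Service named admission-webhook in the default namespace, which the steps so far have not created. A minimal sketch of that Service (save it as, say, webhook-service.yaml and apply it before the webhook configuration; the filename is just an example):

apiVersion: v1
kind: Service
metadata:
  name: admission-webhook
  namespace: default
spec:
  selector:
    app: admission-webhook
  ports:
  - port: 443
    targetPort: 443

To fill caBundle, base64-encode the certificate from Step 2 (on Linux: base64 -w0 tls.crt) and paste the output into the field.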

Apply the webhook configuration:

kubectl apply -f webhook-configuration.yaml

Step 4: Test the Webhook

Let’s check if the webhook is being triggered when a new Pod is created.

Create a test Pod:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]

Apply the Pod:

kubectl apply -f test-pod.yaml

Because busybox does not actually serve HTTPS on /mutate, the API server's call to the webhook will fail. With the default failurePolicy (Fail), the Pod creation is rejected with a webhook error, which at least confirms that the API server is routing Pod creation through your admission configuration. A real webhook server would inspect the AdmissionReview request and allow, mutate, or reject it based on your policies.

Step 5: Debugging and Logs

To check the logs of the webhook, run:

kubectl logs -l app=admission-webhook

If the webhook is not working as expected, ensure:

  • The webhook deployment is running (kubectl get pods)
  • The secret exists (kubectl get secret admission-webhook-secret)
  • The webhook configuration is applied (kubectl get mutatingwebhookconfigurations)

Conclusion

We set up the scaffolding of a custom Kubernetes admission controller. Instead of a custom-built webhook image, we used a minimal container (busybox) to stand in for the webhook server and focused on the surrounding wiring: TLS certificates, the Service, and the webhook configuration.

Key Takeaways:

  • Admission controllers enforce security policies before resources are created
  • Webhooks provide dynamic validation and policy enforcement
  • Minikube can be used to test webhooks without pushing images to remote registries

What’s your experience with admission controllers? Let’s discuss!👇

Building Kubernetes Operators: Automating Application Management

Introduction

Managing stateful applications in Kubernetes manually can be complex. Automating application lifecycle tasks—like deployment, scaling, and failover—requires encoding operational knowledge into software. This is where Kubernetes Operators come in.

Operators extend Kubernetes by using Custom Resource Definitions (CRDs) and controllers to manage applications just like native Kubernetes resources. In this guide, we’ll build an Operator for a PostgreSQL database, automating its lifecycle management.

Step 1: Setting Up the Operator SDK

To create a Kubernetes Operator, we use the Operator SDK, which simplifies scaffolding and controller development.

Install Operator SDK

If you haven’t installed Operator SDK, follow these steps:

export ARCH=$(case $(uname -m) in x86_64) echo -n amd64 ;; aarch64) echo -n arm64 ;; *) echo -n $(uname -m) ;; esac)
curl -LO "https://github.com/operator-framework/operator-sdk/releases/latest/download/operator-sdk_linux_${ARCH}"
chmod +x operator-sdk_linux_${ARCH}
sudo mv operator-sdk_linux_${ARCH} /usr/local/bin/operator-sdk

Verify installation:

operator-sdk version

Step 2: Initializing the Operator Project

We start by initializing our Operator project:

operator-sdk init --domain mycompany.com --repo github.com/mycompany/postgres-operator --skip-go-version-check

This command:
✅ Sets up the project structure
✅ Configures Go modules
✅ Generates required manifests

Step 3: Creating the PostgreSQL API and Controller

Now, let’s create a Custom Resource Definition (CRD) and a controller:

operator-sdk create api --group database --version v1alpha1 --kind PostgreSQL --resource --controller

This generates:
  • api/v1alpha1/postgresql_types.go → Defines the PostgreSQL resource structure
  • controllers/postgresql_controller.go → Implements the logic to manage PostgreSQL instances

Step 4: Defining the Custom Resource (CRD)

Edit api/v1alpha1/postgresql_types.go to define the PostgreSQLSpec and PostgreSQLStatus:

package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PostgreSQLSpec defines the desired state
type PostgreSQLSpec struct {
	Replicas int    `json:"replicas"`
	Image    string `json:"image"`
	Storage  string `json:"storage"`
}

// PostgreSQLStatus defines the observed state
type PostgreSQLStatus struct {
	ReadyReplicas int `json:"readyReplicas"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// PostgreSQL is the Schema for the PostgreSQL API
type PostgreSQL struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   PostgreSQLSpec   `json:"spec,omitempty"`
	Status PostgreSQLStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// PostgreSQLList contains a list of PostgreSQL resources
type PostgreSQLList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []PostgreSQL `json:"items"`
}

func init() {
	// Register both types with the SchemeBuilder generated by the Operator SDK scaffold
	SchemeBuilder.Register(&PostgreSQL{}, &PostgreSQLList{})
}

Register this CRD:

make manifests
make install
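
You can confirm the CRD was installed; with --group database and --domain mycompany.com, the CRD name follows the plural.group pattern:

kubectl get crd postgresqls.database.mycompany.com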

Step 5: Implementing the Controller Logic

Edit controllers/postgresql_controller.go to define how the Operator manages PostgreSQL:

package controllers

import (
	"context"

	databasev1alpha1 "github.com/mycompany/postgres-operator/api/v1alpha1"
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type PostgreSQLReconciler struct {
	client.Client
}

func (r *PostgreSQLReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the PostgreSQL custom resource that triggered this reconcile
	var postgres databasev1alpha1.PostgreSQL
	if err := r.Get(ctx, req.NamespacedName, &postgres); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Look for an existing Deployment with the same name as the custom resource
	deployment := &appsv1.Deployment{}
	err := r.Get(ctx, req.NamespacedName, deployment)
	if err != nil && errors.IsNotFound(err) {
		// Define a new Deployment for this PostgreSQL instance
		labels := map[string]string{"app": postgres.Name}
		deployment = &appsv1.Deployment{
			ObjectMeta: metav1.ObjectMeta{
				Name:      postgres.Name,
				Namespace: postgres.Namespace,
			},
			Spec: appsv1.DeploymentSpec{
				Replicas: int32Ptr(int32(postgres.Spec.Replicas)),
				Selector: &metav1.LabelSelector{MatchLabels: labels},
				Template: corev1.PodTemplateSpec{
					ObjectMeta: metav1.ObjectMeta{Labels: labels},
					Spec: corev1.PodSpec{
						Containers: []corev1.Container{{
							Name:  "postgres",
							Image: postgres.Spec.Image,
						}},
					},
				},
			},
		}
		if err := r.Create(ctx, deployment); err != nil {
			return ctrl.Result{}, err
		}
	} else if err != nil {
		return ctrl.Result{}, err
	}

	return ctrl.Result{}, nil
}

func int32Ptr(i int32) *int32 {
	return &i
}

func (r *PostgreSQLReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&databasev1alpha1.PostgreSQL{}).
		Complete(r)
}

Step 6: Deploying the Operator

Build and push the Operator container:

make docker-build docker-push IMG=mycompany/postgres-operator:latest

Apply the Operator to the cluster:

make deploy IMG=mycompany/postgres-operator:latest

Step 7: Creating a PostgreSQL Custom Resource

Once the Operator is deployed, create a PostgreSQL instance:

apiVersion: database.mycompany.com/v1alpha1
kind: PostgreSQL
metadata:
  name: my-db
spec:
  replicas: 2
  image: postgres:13
  storage: 10Gi

Apply it:

kubectl apply -f postgresql-cr.yaml

Verify the Operator has created a Deployment:

kubectl get deployments

Step 8: Testing the Operator

Check if the PostgreSQL pods are running:

kubectl get pods

Describe the Custom Resource:

kubectl describe postgresql my-db

Delete the PostgreSQL instance:

kubectl delete postgresql my-db

Conclusion

We successfully built a Kubernetes Operator to manage PostgreSQL instances automatically. By encoding operational knowledge into software, Operators:
✅ Simplify complex application management
✅ Enable self-healing and auto-scaling
✅ Enhance Kubernetes-native automation

Operators are essential for managing stateful applications efficiently in Kubernetes.

What application would you like to automate with an Operator? Drop your thoughts in the comments!👇

Custom Resource Definitions: Extending Kubernetes the Right Way

Introduction

Kubernetes is powerful, but what if its built-in objects like Pods, Services, and Deployments aren’t enough for your application’s needs? That’s where Custom Resource Definitions (CRDs) come in!

In this post, I’ll walk you through:
✅ Why CRDs are needed
✅ How to create a CRD from scratch
✅ Implementing a custom controller
✅ Deploying and managing custom resources

Why Extend Kubernetes?

Kubernetes comes with a standard set of APIs (like apps/v1 for Deployments), but many applications require domain-specific concepts that Kubernetes doesn’t provide natively.

For example:
  • A database team might want a Database object instead of manually managing StatefulSets.
  • A security team might want a FirewallRule object to enforce policies at the cluster level.

With CRDs, you can define custom objects tailored to your use case and make them first-class citizens in Kubernetes!

Step 1: Creating a Custom Resource Definition (CRD)

A CRD allows Kubernetes to recognize new object types. Let’s create a CRD for a PostgreSQL database instance.

Save the following YAML as postgresql-crd.yaml:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: postgresqls.mycompany.com
spec:
  group: mycompany.com
  names:
    kind: PostgreSQL
    plural: postgresqls
    singular: postgresql
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                databaseName:
                  type: string
                storageSize:
                  type: string
                replicas:
                  type: integer

Apply the CRD to Kubernetes

kubectl apply -f postgresql-crd.yaml

Now, Kubernetes knows about the PostgreSQL resource!
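
You can confirm the new API group is being served:

kubectl api-resources --api-group=mycompany.com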

Step 2: Creating a Custom Resource Instance

Let’s create an actual PostgreSQL instance using our CRD.

Save the following YAML as postgresql-instance.yaml:

apiVersion: mycompany.com/v1alpha1
kind: PostgreSQL
metadata:
  name: my-database
spec:
  databaseName: mydb
  storageSize: "10Gi"
  replicas: 2

Apply the Custom Resource

kubectl apply -f postgresql-instance.yaml

Kubernetes now understands PostgreSQL objects, but it won’t do anything with them yet. That’s where controllers come in!

Step 3: Building a Kubernetes Controller

A controller watches for changes in custom resources and performs necessary actions.

Here’s a basic Go-based controller using controller-runtime:

package controllers

import (
	"context"
	"fmt"

	mycompanyv1alpha1 "github.com/mycompany/postgres-operator/api/v1alpha1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type PostgreSQLReconciler struct {
	client.Client
}

func (r *PostgreSQLReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	fmt.Println("Reconciling PostgreSQL instance:", req.NamespacedName)

	// Fetch the PostgreSQL instance from the generated API package
	var pgInstance mycompanyv1alpha1.PostgreSQL
	if err := r.Get(ctx, req.NamespacedName, &pgInstance); err != nil {
		// Ignore not-found errors: the resource may have been deleted after the reconcile was queued
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Implement database provisioning logic here

	return ctrl.Result{}, nil
}

func (r *PostgreSQLReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&mycompanyv1alpha1.PostgreSQL{}).
		Complete(r)
}

Deploying the Controller

To deploy this, we scaffold the project with Kubebuilder and the Operator SDK. Note that the SDK derives the API group as <group>.<domain>, so the commands below generate a CRD with group mycompany.mycompany.com rather than the handwritten mycompany.com group above; adjust one or the other if you want them to match:

operator-sdk init --domain mycompany.com --repo github.com/mycompany/postgres-operator
operator-sdk create api --group mycompany --version v1alpha1 --kind PostgreSQL --resource --controller
make manifests
make install
make run

Your Kubernetes Operator is now watching for PostgreSQL objects and taking action!

Step 4: Deploying and Testing the Operator

Apply the CRD and PostgreSQL resource:

kubectl apply -f postgresql-crd.yaml
kubectl apply -f postgresql-instance.yaml

Check if the custom resource is recognized:

kubectl get postgresqls.mycompany.com

Check the controller logs to see it processing the custom resource. With make run, the logs stream directly to your terminal; if you deployed the controller into the cluster instead, check the manager pod:

kubectl logs -n <operator-namespace> -l control-plane=controller-manager

If everything works, your PostgreSQL resource is being managed automatically!

Conclusion: Why Use CRDs?

  • Encapsulate Business Logic: No need to manually configure every deployment—just define a custom resource, and the operator handles it.
  • Standard Kubernetes API: Developers can use kubectl to interact with custom resources just like native Kubernetes objects.
  • Automated Workflows: Kubernetes Operators can provision, update, and heal application components automatically.

By implementing Custom Resource Definitions and Operators, you extend Kubernetes the right way—without hacking it!

What are some use cases where CRDs and Operators helped your team? Let’s discuss in the comments!👇