Kubernetes Networking with Cilium: eBPF-Powered Kubernetes Security and Observability
Ahmed Muhi - 07 May, 2024
Introduction
In the previous article, we used Network Policies to control which pods can communicate with each other. We applied a policy in a Kind cluster with Calico and saw it enforce real traffic restrictions.
That works well for controlling access at the level of pods, ports, and protocols. But standard Network Policies operate at Layer 3 and Layer 4 of the networking stack. They can filter based on IP addresses, labels, ports, and protocols. What they can’t do is look inside the traffic itself. They can’t distinguish between two HTTP requests going to the same pod on the same port but hitting different API endpoints. And they don’t give you any visibility into what’s actually flowing through your cluster.
This article introduces Cilium, a CNI plugin built on a technology called eBPF (Extended Berkeley Packet Filter). Where traditional CNIs like Calico process packets using iptables, a chain of rules configured from user space and evaluated one by one by the kernel, Cilium attaches programs directly to hooks in the Linux kernel. That architectural difference is what allows Cilium to inspect packets earlier, enforce policies faster, and support capabilities that iptables-based CNIs can't reach.
We’ll cover what eBPF is and how Cilium uses it to handle networking at the kernel level. Then we’ll build a Kind cluster with Cilium, deploy the same frontend and backend app from Article 2, apply a CiliumNetworkPolicy, and compare the experience to what we did with Calico. By the end, you’ll understand the architectural difference between a traditional CNI and Cilium, and you’ll be set up for the next article where we take Cilium to AKS and see what Layer 7 filtering actually looks like in practice.
What Cilium and eBPF Are
To understand what makes Cilium different, we need to start with the technology underneath it: eBPF.
eBPF: Programs Inside the Kernel
When a packet arrives at a server, it passes through the Linux kernel before reaching your application. Traditionally, if you want to inspect, filter, or route that packet, you use tools like iptables. These tools work, but they operate through a fixed chain of rules that the kernel evaluates one by one. As the number of rules grows (more pods, more policies, more services), the chain gets longer and slower.
iptables also has a fixed set of capabilities. It can match on IP addresses, ports, and protocols, but it can’t look deeper into the packet. If you want to make a decision based on something like an HTTP path or method, iptables can’t help you. You’d need to add a separate proxy layer on top, which introduces more latency and more complexity.
eBPF (Extended Berkeley Packet Filter) takes a different approach. Instead of processing packets through a chain of static rules, eBPF lets you write small programs that attach directly to specific points in the Linux kernel’s networking stack. When a packet hits one of those points, the eBPF program runs, and it can inspect the packet, make routing decisions, collect metrics, or drop the packet entirely.
These programs run in a sandboxed environment inside the kernel. The kernel verifies every eBPF program before it runs, checking that it can’t crash the system, access memory it shouldn’t, or run forever. This makes eBPF safe to use in production, even though the programs are executing at the kernel level.
Because eBPF programs sit inside the kernel, they process packets at the earliest possible point, before iptables rules, before user space proxies, before the packet reaches any application. That’s what gives eBPF its performance advantage: lower latency and less overhead, because the packet doesn’t have to travel through additional layers to be inspected or filtered.
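If you want to see this for yourself on a Linux machine, the bpftool utility can list the eBPF programs currently loaded in the kernel and where they're attached. This is a quick sketch and assumes bpftool is installed; it isn't required for the rest of this article:

```bash
# List all eBPF programs currently loaded in the kernel
sudo bpftool prog show

# Show eBPF programs attached to network interfaces (XDP, tc)
sudo bpftool net show
```

On a node running Cilium, you would typically see many programs attached to the host and pod interfaces; on a plain machine the list may be short or empty.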
The eBPF Ecosystem
eBPF isn’t just used for Kubernetes networking. It has become a foundational technology across the cloud-native ecosystem for networking, security, and observability.
The diagram below shows how the ecosystem is structured.

At the bottom is the kernel runtime. This is where eBPF programs actually execute. The runtime includes a verifier (checks programs for safety before they run), a JIT compiler (turns eBPF bytecode into efficient machine code), maps (shared data structures that programs can read and write), and helper APIs (for interacting with the rest of the kernel).
In the middle is user space, where developers write eBPF programs using SDKs in languages like Go, Rust, and C. These programs are compiled into bytecode and loaded into the kernel at runtime.
At the top are the projects built on eBPF. Cilium uses it for networking and security. Falco uses it for runtime security monitoring. Pixie uses it for application observability. These tools all share the same foundation but apply eBPF to different problems. This is why eBPF has gained so much adoption across the CNCF ecosystem: it provides a single, high-performance mechanism for inspecting and controlling system behavior at the kernel level.
How Cilium Uses eBPF
Cilium is a CNI plugin that uses eBPF as its enforcement engine. Instead of configuring iptables rules to manage pod traffic, Cilium translates your network policies into eBPF programs and loads them directly into the kernel.
The diagram below shows how Cilium is structured.

At the top is the Cilium layer. This includes the Cilium CLI (for managing the installation), a policy repository (where your CiliumNetworkPolicies are stored), plugins (for integrating with Kubernetes and other orchestration systems), and Cilium Monitor (for real-time observability).
All of these feed into the Cilium Daemon, which runs on every node in your cluster. The daemon is the core component. Its job is to take your network policies, compile them into eBPF bytecode, and inject that bytecode into the kernel. This process is labeled “Bytecode injection” in the diagram.
At the bottom is the kernel. Each container, as well as the node’s network interface (eth0), has a BPF program attached to it. These programs are the actual enforcement point. When a packet arrives at a container, the attached BPF program inspects it and applies the policy right there, at the kernel level, before the packet reaches the container’s application.
This is the core architectural difference from a CNI like Calico. With iptables, packets travel through rule chains that grow linearly with the number of policies. With Cilium, policies are compiled into optimized eBPF programs. Adding more policies doesn’t create a longer chain to evaluate. And because the enforcement happens inside the kernel, Cilium can support capabilities that iptables can’t: inspecting HTTP headers, filtering by API path, collecting per-request metrics, and providing real-time traffic visibility.
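Once the Kind cluster we build later in this article is up, you can peek at this kernel state through the Cilium agent itself. A rough sketch (the agent's debug CLI is invoked as cilium inside the pod; in newer releases it may be named cilium-dbg):

```bash
# Quick health check of the agent running on the node
kubectl -n kube-system exec ds/cilium -- cilium status --brief

# Show the BPF endpoint map: pod IPs and the programs/identities attached to them
kubectl -n kube-system exec ds/cilium -- cilium bpf endpoint list
```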
In this article, we’ll use Cilium at the same L3/L4 level at which we used Calico in Article 2. The policies will look similar and the results will be the same: backend traffic allowed, everything else blocked. The difference is how it’s enforced. In Article 4, we’ll take advantage of what this architecture unlocks and apply Layer 7 policies that filter based on HTTP methods and paths.
Hands-On: Cilium in a Kind Cluster
We’ve seen how Cilium and eBPF work architecturally. Now let’s put it into practice.
In this section, we’ll create a Kind cluster with no default CNI, install Cilium using Helm, deploy the same frontend and backend app from Article 2, apply a CiliumNetworkPolicy, and verify that it enforces the rules we expect. If you went through the Calico lab in Article 2, this will feel familiar. The setup is similar, but the enforcement engine underneath is completely different.
Prerequisites
Make sure these tools are installed on your machine:
- Docker: Install Docker
- kubectl: Install kubectl
- Helm: Install Helm
- Kind: Install Kind
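To confirm everything is in place, a quick round of version checks is enough (exact versions will vary):

```bash
docker --version
kubectl version --client
helm version --short
kind version
```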
Create the Cluster
In Article 2, we created a Kind cluster and installed Calico on top of Kind’s default networking. This time, we need to disable Kind’s built-in CNI entirely so Cilium can take full control.
Create a file called kind-config.yaml:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
networking:
  disableDefaultCNI: true
```
The disableDefaultCNI: true flag prevents Kind from installing its default CNI plugin (kindnet). Kindnet doesn’t support Network Policies or eBPF, so we need to remove it and let Cilium handle all pod networking, policy enforcement, and observability.
Create the cluster:
kind create cluster --config kind-config.yaml
Verify the node is up:
kubectl get nodes
The node will show NotReady at first. That’s expected. There’s no CNI installed yet, so the node can’t handle pod networking. It will become Ready once we install Cilium.
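If you're curious why, describing the node shows the reason in its conditions. Assuming the default cluster name, the node is called kind-control-plane:

```bash
# The Ready condition will report that no CNI plugin has been initialized yet
kubectl describe node kind-control-plane
```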
Install Cilium with Helm
Add the Cilium Helm repository:
helm repo add cilium https://helm.cilium.io/
Install Cilium into the cluster:
```bash
helm install cilium cilium/cilium --version 1.15.4 \
  --namespace kube-system \
  --set image.pullPolicy=IfNotPresent \
  --set ipam.mode=kubernetes
```
This deploys the Cilium agent and operator into the kube-system namespace. The image.pullPolicy=IfNotPresent setting tells Kubernetes to reuse locally cached container images if they’re already on the node, avoiding unnecessary downloads. The ipam.mode=kubernetes setting tells Cilium to delegate pod IP address management to Kubernetes, which is the standard approach for Kind clusters.
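If you want to double-check which values the release was installed with, Helm can show the user-supplied overrides:

```bash
# Show the values we passed with --set during installation
helm get values cilium -n kube-system
```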
Watch the installation progress:
kubectl -n kube-system get pods --watch
You’ll see cilium and cilium-operator pods move through Pending, ContainerCreating, and finally Running. Once both are running, your cluster has a fully functional eBPF-powered CNI.
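If you'd rather block until the rollout finishes instead of watching, something like this works (the timeouts are arbitrary):

```bash
kubectl -n kube-system rollout status daemonset/cilium --timeout=180s
kubectl -n kube-system rollout status deployment/cilium-operator --timeout=180s
```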
Verify the node is now ready:
kubectl get nodes
The node should show Ready. Cilium is handling all networking.
Deploy the Application
We’ll use the same frontend and backend setup from Article 2, including a client pod for testing unauthorized access. Create a file called sample-app.yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    app: backend
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    app: client
spec:
  containers:
  - name: nginx
    image: nginx
```
Apply it:
kubectl apply -f sample-app.yaml
Wait for the pods to be running:
kubectl get pods
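Or, to block until all three pods report Ready rather than polling (the timeout is arbitrary):

```bash
kubectl wait --for=condition=Ready pod/frontend pod/backend pod/client-pod --timeout=120s
```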
Before applying any policy, confirm that both the backend and client pods can reach the frontend:
# Backend to frontend (should succeed, no policy yet)
kubectl exec backend -- curl -s frontend-service
# Client to frontend (should also succeed, no policy yet)
kubectl exec client-pod -- curl -s frontend-service
Both commands should return the Nginx welcome page. That’s our baseline: everything can talk to everything, just like in Article 2.
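At this point Cilium already tracks each pod as an endpoint with an identity derived from its labels. You can see that, and the fact that no policy is being enforced yet, from the agent pod (again, the CLI may be named cilium-dbg depending on the version):

```bash
# Each pod appears as an endpoint; policy enforcement should read "Disabled" for all of them
kubectl -n kube-system exec ds/cilium -- cilium endpoint list
```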
Apply a CiliumNetworkPolicy
In Article 2, we used a standard Kubernetes NetworkPolicy. Cilium supports those, but it also provides its own custom resource called CiliumNetworkPolicy. The syntax is slightly different, but the concept is the same: select the pods you want to protect, define the allowed traffic, and everything else is denied.
Create a file called frontend-policy.yaml:
```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-policy
spec:
  endpointSelector:
    matchLabels:
      app: frontend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: backend
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
```
If you compare this to the standard NetworkPolicy from Article 2, the logic is identical but the syntax has a few differences. Here’s how the key fields map:
| Standard NetworkPolicy | CiliumNetworkPolicy | What it does |
|---|---|---|
| kind: NetworkPolicy | kind: CiliumNetworkPolicy | Defines the resource type |
| podSelector | endpointSelector | Selects which pods the policy applies to |
| from: podSelector | fromEndpoints | Defines which pods are allowed to send traffic |
| ports | toPorts.ports | Specifies which ports the traffic can reach |
Cilium uses the term “endpoint” instead of “pod” because its identity model extends beyond just pods, but for our purposes they mean the same thing. The toPorts wrapper around ports is more explicit about direction, but the effect is the same: these are the ports that ingress traffic is allowed to reach.
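For reference, here is roughly what the standard NetworkPolicy from Article 2 looks like, so you can line it up against the CiliumNetworkPolicy above. This is a sketch of that policy's shape, not a copy of its exact manifest:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
spec:
  podSelector:
    matchLabels:
      app: frontend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend
    ports:
    - protocol: TCP
      port: 80
```

Either form produces the same L3/L4 behaviour in this lab.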
The policy does exactly what our Calico policy did: allow ingress traffic to the frontend only from backend pods on TCP port 80. Everything else is denied.
Apply it:
kubectl apply -f frontend-policy.yaml
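You can confirm the policy was accepted by the API server; CiliumNetworkPolicy has the short name cnp:

```bash
kubectl get cnp frontend-policy
```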
Test the Policy
First, test from the backend. This should succeed:
kubectl exec backend -- curl -s frontend-service
You should see the Nginx welcome page. The backend is allowed by the policy.
Now test from the client pod. This should fail:
kubectl exec client-pod -- curl -s --max-time 5 frontend-service
This should time out or return a connection error. The client pod has the label app: client, which is not in the policy’s allowed list. Cilium’s eBPF program at the kernel level drops the traffic before it reaches the frontend.
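If you want to watch those drops happen, the Cilium agent's monitor (the real-time observability piece mentioned in the architecture section) can stream them. A rough sketch, run in one terminal while you re-run the blocked curl in another (the CLI may be cilium-dbg in newer versions):

```bash
# Stream drop events from the datapath, including policy-denied packets
kubectl -n kube-system exec ds/cilium -- cilium monitor --type drop
```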
What’s Different from Calico?
At the L3/L4 level, the outcome is identical. Backend traffic is allowed, unauthorized traffic is blocked. If you only looked at the test results, you wouldn’t be able to tell which CNI was running.
The difference is underneath. With Calico, the policy was enforced through iptables rules. With Cilium, it was enforced by an eBPF program compiled from your policy and loaded into the kernel. That program runs every time a packet arrives at the frontend pod, making the allow/deny decision at the kernel level without going through a chain of rules.
At this layer, the practical benefit is mostly performance: eBPF enforcement scales better as you add more policies and more pods. But the real payoff comes when you move beyond L3/L4. Because Cilium’s enforcement engine runs in the kernel and can inspect full packet contents, it can filter based on HTTP methods, paths, and headers. That’s Layer 7 filtering, and it’s something iptables-based CNIs simply can’t do.
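As a preview of what that looks like, a CiliumNetworkPolicy can carry HTTP rules inside toPorts. The sketch below only shows the shape of the syntax; the method and path are made-up examples, and the real walkthrough comes in the next article:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-l7-example
spec:
  endpointSelector:
    matchLabels:
      app: frontend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: backend
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/api/.*"
```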
We’ll see that in action in the next article.
Clean Up
Delete the cluster:
kind delete cluster
Conclusion
In this article, we looked at what sits underneath Cilium and why it matters: eBPF. Where traditional CNIs like Calico enforce policies through iptables rule chains, configured from user space and evaluated rule by rule, Cilium compiles policies into eBPF programs that run directly in the Linux kernel. Packets are inspected at the earliest possible point, before they travel through a chain of rules, before they reach any user-space proxy or application.
We walked through the eBPF ecosystem and saw how Cilium’s architecture turns policies into bytecode that gets injected into the kernel and attached to each container. Then we built a Kind cluster with Cilium, deployed the same frontend and backend app from Article 2, and applied a CiliumNetworkPolicy that achieved the same result as our Calico policy: backend traffic allowed, everything else blocked.
At the L3/L4 level, the outcome is the same. The difference is how it’s enforced. Cilium’s eBPF-based enforcement scales better as policies and pods grow, and it opens the door to capabilities that iptables can’t support.
The most significant of those capabilities is Layer 7 filtering. Standard Network Policies can control which pods talk to which, on which ports and protocols. But they can’t look inside the traffic. They can’t distinguish between a safe API call and a dangerous one hitting the same pod on the same port. Cilium can, because its enforcement engine has access to the full packet contents at the kernel level.
In the next article, we’ll see that in practice. We’ll deploy Cilium on Azure Kubernetes Service (AKS), take it to the cloud, and use the Star Wars demo to apply Layer 7 policies that filter traffic based on HTTP methods and paths. You’ll see what it looks like when your CNI doesn’t just control who talks to whom, but controls what they’re allowed to say.
Image credits: