
Understanding Kubernetes Network Policies

Unlock the power of Kubernetes Network Policies! Learn how to create and apply them to secure your cluster communications. This guide covers what Network Policies are, how they work, and includes real-world examples and best practices.

Introduction

In our previous article on Kubernetes networking, we explored how pods communicate across a cluster using a flat network model. While this model enables seamless communication, it raises an important question: How do we control and secure this communication?

By default, Kubernetes allows unrestricted communication between all pods in a cluster. Looking at our diagram below, you can see a frontend pod and a backend pod that can freely communicate with each other and receive external traffic. While this default behavior simplifies initial setup, it may not meet your security requirements.

Network Policies address this challenge by allowing you to define rules that control pod communication. Think of them as firewall rules specifically designed for Kubernetes - they let you specify which pods can communicate with each other and with external endpoints.

In this article, you’ll learn:

  • How Network Policies control pod communication
  • The basic components of a Network Policy
  • How to create and test your first policy
  • Practical implementation using Kind

Let’s start by understanding how Network Policies fit into the Kubernetes networking model and what they can help you achieve.

Network Policy Fundamentals: How Kubernetes Controls Pod Traffic

In a Kubernetes cluster, pods start with complete freedom to communicate. Our diagram below illustrates this default state - the frontend pod receives external traffic and communicates with the backend pod without restrictions. This allow-all approach simplifies development and testing but often falls short for production environments.

Diagram showing network policy controlling traffic between pods

Production deployments typically need finer control over network traffic. You might need to:

  • Restrict external access to only specific pods
  • Control which pods can communicate with each other
  • Isolate certain workloads for security or compliance reasons

Network Policies provide this control through rules that filter pod traffic. Our diagram shows a network policy layer (represented by the red rectangle) that intercepts and controls communication between pods and external sources.

How Kubernetes Identifies Pods

Controlling pod communication starts with understanding how Kubernetes identifies pods. Every pod in Kubernetes can have labels - simple key-value pairs that identify pods. For example:

labels:
  app: frontend
  environment: production

These labels help you organize and select pods. Think of labels like name tags that help identify which pods are which.

Applying Network Policies

Network Policies use these labels to determine which pods they affect. When creating a policy, you specify which pods it applies to using label selectors. For example, you might want your policy to affect all pods with the label app: frontend.

Let’s use a simple example: imagine you have:

  • Frontend pods labeled app: frontend
  • Backend pods labeled app: backend
  • Database pods labeled app: database

You could create a policy that applies to frontend pods, controlling exactly how they communicate with backend and database pods.
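
A quick way to check which pods a given selector (and therefore a policy) would match is to query by label. The commands below only assume the example labels used above:

# Show all pods together with their labels
kubectl get pods --show-labels

# List only the pods that a podSelector with "app: frontend" would target
kubectl get pods -l app=frontend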

The Implicit Deny Rule

When you apply a Network Policy to a pod, something important happens: for each traffic direction the policy covers, that pod switches from allowing all traffic to denying everything except what you explicitly permit.

For example, if you create a policy for your frontend pods that only allows incoming traffic from backend pods on port 8080, two things happen:

  1. Frontend pods can receive traffic from backend pods on port 8080
  2. All other incoming traffic to frontend pods is automatically blocked

This “implicit deny” behavior is a crucial security feature - it ensures pods only communicate in ways you specifically authorize.

Note: Network Policies are defined through the Kubernetes API but enforced by your Container Network Interface (CNI) plugin. You’ll need a plugin that implements them, such as Calico, Cilium, or Canal (Flannel combined with Calico); plain Flannel does not enforce Network Policies.
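
If you’re not sure which CNI plugin a cluster runs, the CNI components usually live as pods in the kube-system namespace, so listing them (look for names like calico-node or cilium, which vary by plugin) is a quick check:

# CNI plugins typically run as DaemonSet pods in kube-system
kubectl get pods -n kube-system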

Now that we understand these fundamentals, let’s examine how to create Network Policies using these concepts…

Key Components of a Kubernetes Network Policy

Now that we understand how pods are identified and selected, let’s examine how to build a Network Policy. A policy consists of three main parts that work together to control pod communication:

Pod Selectors

The first thing a Network Policy needs to specify is which pods it affects. We do this using pod selectors that match pod labels. For example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
spec:
  podSelector:
    matchLabels:
      app: frontend

Let’s break this down:

  • kind: NetworkPolicy tells Kubernetes we’re creating a Network Policy
  • metadata.name gives our policy a unique identifier
  • spec.podSelector defines which pods this policy affects
  • matchLabels: app: frontend means this policy applies to all pods with label app: frontend

Any pods not matched by the selector keep their default, unrestricted communication (unless another policy selects them).

Traffic Direction

Network Policies can control two types of traffic:

  • Ingress: incoming traffic to the selected pods
  • Egress: outgoing traffic from the selected pods

You can configure either or both directions in a single policy. For example:

spec:
  podSelector:
    matchLabels:
      app: frontend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend

This configuration:

  • Applies to pods labeled app: frontend
  • Controls incoming (ingress) traffic
  • Allows traffic only from pods labeled app: backend
  • Since no policyTypes field is set, it defaults to Ingress (the only direction with rules here), so outgoing traffic from the frontend pods is not restricted by this policy; to restrict egress as well, list Egress under policyTypes and add egress rules (see the sketch after this list)
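
As a sketch of what restricting both directions would look like, the fragment below adds an explicit policyTypes field and an egress rule; the destination used here (backend pods on TCP port 8080) is only an assumption for the example:

spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: backend
    ports:
    - protocol: TCP
      port: 8080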

Traffic Rules

Within each direction (ingress or egress), you specify rules about:

  • Which pods can communicate (using pod selectors)
  • Which ports are allowed
  • Which protocols can be used

Here’s an example combining all these elements:

ingress:
- from:
  - podSelector:
      matchLabels:
        app: backend
  ports:
  - protocol: TCP
    port: 80

Breaking this down:

  • The from section specifies the source (pods labeled app: backend)
  • The ports section allows only TCP traffic on port 80
  • Any other ports or protocols are blocked
  • Traffic from pods not labeled app: backend is denied

These components combine to create precise rules about pod communication. In the next section, we’ll create a complete Network Policy that puts all these pieces together in a practical example.

Walkthrough: Creating Your First Kubernetes Network Policy

Let’s put these components together to create a complete Network Policy. We’ll use a common scenario: allowing frontend pods to receive traffic only from specific backend pods.

Here’s the complete policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-access
spec:
  podSelector:
    matchLabels:
      app: frontend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend
    ports:
    - protocol: TCP
      port: 80

Let’s walk through what this policy does:

  1. Policy Target:

    • Applies to all pods labeled app: frontend
    • Any other pods remain unaffected
  2. Ingress Rules:

    • Allows incoming connections only from pods labeled app: backend
    • Permits only TCP traffic on port 80
    • All other incoming traffic is blocked
  3. Egress Behavior:

    • The policy defines no egress rules and no policyTypes field, so it only governs ingress
    • Outgoing traffic from frontend pods remains unrestricted (add Egress to policyTypes if you also want to lock that down)

This policy effectively ensures that only backend pods can reach the frontend pods, and only over TCP port 80; all other incoming traffic to the frontend pods is blocked.

In the next section, we’ll set up a test environment using Kind and see this policy in action.

Implementing and Testing Network Policies with Kind (Kubernetes in Docker)

Let’s see our Network Policy in action using Kubernetes in Docker (Kind). We’ll create a simple test environment with frontend and backend services, then apply and verify our policy.

Prerequisites

Ensure these tools are installed on your local machine:

  • Docker (Kind runs its cluster nodes as containers)
  • Kind
  • kubectl
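
To confirm the tools are available, a quick version check of each is enough:

# Verify the required tools are installed
docker version
kind version
kubectl version --client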

Setting Up the Environment

First, let’s create a Kind cluster with its default CNI disabled so we can install Calico, a CNI plugin that enforces Network Policies. Create a file named kind-config.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true      # use Calico instead of kindnet so policies are enforced
  podSubnet: 192.168.0.0/16    # match Calico's default pod CIDR
nodes:
- role: control-plane

Create the cluster and install Calico:

# Create cluster
kind create cluster --config kind-config.yaml

# Install Calico
kubectl apply -f https://docs.projectcalico.org/v3.25/manifests/calico.yaml
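
Calico takes a little while to come up; one way to wait for it (assuming the manifest’s standard k8s-app: calico-node label) before deploying anything is:

# Wait until the Calico node agents report Ready
kubectl -n kube-system wait --for=condition=Ready pods -l k8s-app=calico-node --timeout=180s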

Deploying Sample Applications

Create a file named sample-app.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    app: backend
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    app: client
spec:
  containers:
  - name: nginx
    image: nginx

Deploy the applications:

kubectl apply -f sample-app.yaml
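
Pulling the nginx images can take a moment; if you prefer to block until all three pods are ready, something like the following works:

# Wait for the sample pods to become Ready
kubectl wait --for=condition=Ready pod/frontend pod/backend pod/client-pod --timeout=120s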

Applying the Network Policy

Create our policy in frontend-policy.yaml:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-access
spec:
  podSelector:
    matchLabels:
      app: frontend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend
    ports:
    - protocol: TCP
      port: 80

Apply the policy:

kubectl apply -f frontend-policy.yaml

Verifying the Policy

Let’s test our policy by attempting connections to the frontend service:

# Test connection from backend pod (should succeed)
kubectl exec backend -- curl --max-time 5 frontend-service

# Test connection from client-pod (should time out and fail)
kubectl exec client-pod -- curl --max-time 5 frontend-service

The first command should succeed because our policy allows traffic from pods labeled app: backend, while the second should time out and fail because the client-pod isn’t allowed to reach the frontend pods (the --max-time flag simply keeps curl from hanging).

To verify the pods are running and services are properly configured:

# Check pod status
kubectl get pods

# Check service status
kubectl get services

# Check network policy
kubectl get networkpolicies

If you need to troubleshoot, you can check the policy details:

kubectl describe networkpolicy frontend-access

In the next section, we’ll look at best practices and common challenges when working with Network Policies.

Best Practices and Common Challenges with Kubernetes Network Policies

When implementing Network Policies in Kubernetes, following certain practices can help you maintain secure and manageable network rules.

Start with Default Deny

One of the most important security practices is to start with a default deny policy. Create this policy early in your cluster setup:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}  # Empty selector matches all pods
  policyTypes:
  - Ingress
  - Egress

This policy blocks all traffic to and from every pod in the namespace. You can then add specific policies to allow required communication, following the principle of least privilege.
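
One practical consequence of denying egress by default is that pods can no longer resolve DNS names. A common companion policy re-allows DNS traffic to the cluster DNS service; the sketch below assumes the standard kube-dns labels (k8s-app: kube-dns in the kube-system namespace, identified via the kubernetes.io/metadata.name label), which may differ in your cluster:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53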

Policy Organization

Keep your Network Policies:

  • Focused on specific applications or components
  • Well-documented with clear comments
  • Named consistently and descriptively
  • Organized by namespace

For example, prefix policies with their application name:

metadata:
  name: frontend-allow-backend
  annotations:
    description: "Allows frontend pods to receive traffic from backend services"

Common Challenges

  1. CNI Plugin Compatibility
  • Always verify your CNI plugin supports Network Policies
  • Some features might work differently across CNI plugins
  • Document which CNI plugin your policies are tested with
  2. Troubleshooting Access Issues
  • Network Policy problems often manifest as connection timeouts
  • Use these commands for troubleshooting:
    # Check if policies exist
    kubectl get networkpolicies -A
    
    # View policy details
    kubectl describe networkpolicy <policy-name>
    
    # Check pod labels
    kubectl get pods --show-labels
    
  3. Policy Conflicts
  • Multiple policies affecting the same pod combine additively
  • If any policy allows traffic, it’s allowed (see the sketch below)
  • Keep policies simple to avoid unexpected interactions
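
For example, given the frontend-access policy from earlier, a second policy that also selects the frontend pods simply adds another allowed source; both stay in effect at the same time. The app: monitoring label below is a hypothetical example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-allow-monitoring
spec:
  podSelector:
    matchLabels:
      app: frontend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: monitoring   # hypothetical label for monitoring pods

With both policies applied, frontend pods accept traffic from backend pods on port 80 and from monitoring pods on any port; traffic from every other pod is still denied.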

Performance Considerations

When designing Network Policies:

  • Avoid creating too many individual policies
  • Use label selectors efficiently
  • Consider the impact on large-scale deployments
  • Monitor network performance after applying policies

In our next article, we’ll explore advanced networking capabilities with Cilium, including enhanced Network Policy features and improved observability.

Conclusion

Throughout this article, we’ve explored how Network Policies help secure communication in Kubernetes clusters. Starting from the default allow-all state, we’ve seen how to implement precise controls over pod-to-pod communication using labels, selectors, and traffic rules.

Key takeaways:

  • Network Policies control pod communication using pod labels and selectors
  • Once a policy is applied to a pod, it denies all traffic except what’s explicitly allowed
  • Policies require a compatible CNI plugin like Calico for enforcement
  • Network Policies select pods, not Services; traffic that reaches a pod through a Service is still evaluated against that pod’s policies

We’ve also implemented a practical example using Kind, demonstrating how to:

  • Set up a test environment
  • Create and apply Network Policies
  • Verify policy behavior
  • Troubleshoot common issues

While Network Policies provide essential security controls, they’re just the beginning of Kubernetes networking security. In our next article, we’ll explore Cilium, a powerful CNI plugin that extends these capabilities with enhanced security features, better performance, and improved observability.