Understanding Kubernetes Deployments: Managing Application Releases

Introduction

In the previous articles, we built up the basic shape of a Kubernetes application.

A pod runs your application. A ReplicaSet keeps the right number of pods running. A Service gives those pods a stable address inside the cluster. Ingress gives external HTTP and HTTPS traffic a clean way to reach the right Service.

That gives us a working path from the outside world to the application:

External user → Ingress controller → Service → Pod

But there is still an important piece missing.

So far, we have looked at ReplicaSets directly because they make the core idea easy to see: keep this many pods running. If one pod disappears, the ReplicaSet creates another. If you change the count, the ReplicaSet adjusts.

That is useful, but real applications do not just sit still.

You release new versions. You fix bugs. You change container images. Sometimes a release works. Sometimes it breaks, and you need a clean way to go back.

A ReplicaSet can keep pods running, but it is not the object you usually manage when releasing an application.

That is where Deployments come in.

A Deployment sits above ReplicaSets. You describe the version of the application you want, how many copies should run, and how Kubernetes should roll changes out. The Deployment then manages the ReplicaSets needed to make that happen.

That is the shift in this article.

ReplicaSets keep pods alive. Deployments manage how those pods change over time.

In this article, we will look at what a Deployment does, how it relates to ReplicaSets and pods, how to write one in YAML, how a rollout works, and how Kubernetes can roll back when a new version does not behave the way you expected.

What a Deployment Solves

A ReplicaSet is good at one thing: keeping a number of pods running.

You tell it, “keep three pods like this running,” and it works to keep that true. If one pod disappears, it creates another. If the count changes from three to five, it creates two more.

But releasing an application is not only about the number of pods.

Imagine your frontend is running nginx:1.27, and you want to move to nginx:1.28.

With only a ReplicaSet, you can keep three pods running, but you do not have a clean release process. You still need to think about questions like:

How do I replace the old pods with new pods?
How do I avoid taking the whole application down at once?
How do I check whether the rollout is making progress?
How do I go back if the new version is broken?

That is the problem a Deployment solves.

A Deployment gives you a higher-level way to manage application changes. Instead of managing the ReplicaSet directly, you describe the version of the application you want and let the Deployment manage the ReplicaSets underneath.

The simple idea is this:

ReplicaSet: keep this many pods running.
Deployment: manage this application as it changes over time.

That means a Deployment can create a ReplicaSet, update it when the pod template changes, create a new ReplicaSet for a new version, gradually replace old pods with new ones, and keep enough pods running while the change happens.

This is why Deployments are what you usually create for normal applications.

You still get the benefit of ReplicaSets, because Deployments use ReplicaSets underneath. But you also get a better workflow for releasing, scaling, and rolling back your application.

Deployment, ReplicaSet, and Pod

A Deployment does not replace pods.

It also does not replace the idea of a ReplicaSet.

It sits above both.

The relationship looks like this:

Deployment → ReplicaSet → Pods

The Deployment manages the release. The ReplicaSet manages the number of pods. The pods run the application.

So when you create a Deployment with three replicas, Kubernetes creates more than just one object. It creates a Deployment, then the Deployment creates a ReplicaSet, and the ReplicaSet creates the pods.

You might end up with something like this:

Deployment: frontend
ReplicaSet: frontend-7c8d9f4b6
Pods:
  frontend-7c8d9f4b6-x4k2p
  frontend-7c8d9f4b6-m8kl9
  frontend-7c8d9f4b6-p9j7r

Those names are useful because they show the chain. The pods start with the ReplicaSet name because they belong to that ReplicaSet. The ReplicaSet starts with the Deployment name because it belongs to that Deployment.

This is why we learned ReplicaSets first. A Deployment uses the same mechanism, but adds release management on top of it.

If a pod disappears, the ReplicaSet still brings the count back up.

If you scale the Deployment from three pods to five, the Deployment updates the desired state, and the ReplicaSet creates the extra pods.

If you release a new version, the Deployment creates a new ReplicaSet for that version and starts moving from the old one to the new one.

So the simple mental model is:

Pod: runs the application
ReplicaSet: keeps enough pods running
Deployment: manages ReplicaSets over time

That is the key difference. A ReplicaSet cares about the current count. A Deployment cares about how the application changes from one version to the next.

A Deployment in YAML

Now that the relationship is clear, let’s write a Deployment.

Here is a Deployment that runs three nginx pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80

If this looks familiar, that is a good sign.

The shape is almost the same as the ReplicaSet YAML we wrote earlier. It has apiVersion, kind, metadata, and spec. It still has replicas, a selector, and a pod template.

The first important difference is the kind:

kind: Deployment

This tells Kubernetes that you are not creating a ReplicaSet directly. You are creating a higher-level object that will manage ReplicaSets for you.

The replicas field still means the same thing:

replicas: 3

You want three pods running.

The selector still tells Kubernetes which pods belong to this workload:

selector:
  matchLabels:
    app: frontend

And the template is still the pod recipe:

template:
  metadata:
    labels:
      app: frontend
  spec:
    containers:
      - name: nginx
        image: nginx:1.27
        ports:
          - containerPort: 80

This is the pod the Deployment wants to run: an nginx container using the nginx:1.27 image, listening on port 80.

So far, this feels very close to a ReplicaSet.

The difference is what happens later.

With a ReplicaSet, the main question is:

Do I have the right number of pods?

With a Deployment, Kubernetes can also ask:

Has the pod template changed?

That matters because the pod template includes the container image. If you later change the image from nginx:1.27 to nginx:1.28, the Deployment treats that as a new version of the application.

It can then create a new ReplicaSet for the new version and gradually replace the old pods with new ones.

That is why the YAML looks familiar, but the object is more powerful. A Deployment still describes pods, selectors, and replica counts. But it also gives Kubernetes a way to manage change over time.

Watching It Work

Save the YAML to a file called frontend-deployment.yaml, then apply it to the cluster.

kubectl apply -f frontend-deployment.yaml

Now check the Deployment:

kubectl get deployment

NAME       READY   UP-TO-DATE   AVAILABLE   AGE
frontend   3/3     3            3           20s

This tells you the Deployment is running three pods, and all three are available.

But remember, the Deployment does not manage the pods directly. It manages a ReplicaSet.

Check the ReplicaSet:

kubectl get replicaset

NAME                  DESIRED   CURRENT   READY   AGE
frontend-7c8d9f4b6    3         3         3       20s

Kubernetes created this ReplicaSet for the Deployment. The suffix is not random: it is a hash of the pod template, so each version of the template gets a ReplicaSet with its own distinct name.

Now check the pods:

kubectl get pods

NAME                        READY   STATUS    RESTARTS   AGE
frontend-7c8d9f4b6-x4k2p    1/1     Running   0          20s
frontend-7c8d9f4b6-m8kl9    1/1     Running   0          20s
frontend-7c8d9f4b6-p9j7r    1/1     Running   0          20s

Now you can see the full chain:

Deployment: frontend
ReplicaSet: frontend-7c8d9f4b6
Pods:
  frontend-7c8d9f4b6-x4k2p
  frontend-7c8d9f4b6-m8kl9
  frontend-7c8d9f4b6-p9j7r

You created one Deployment, but Kubernetes created the ReplicaSet and the pods underneath it.

That is the important thing to notice.

When you work with normal applications, you usually talk to the Deployment. The Deployment manages the ReplicaSet. The ReplicaSet manages the pods.

The hierarchy looks like this:

Deployment → ReplicaSet → Pods

At this point, it might feel like the Deployment is only doing the same thing the ReplicaSet already did. It created three pods and kept them running.

But the reason Deployments matter appears when the application changes.

That is what we will look at next.

Rolling Out a New Version

So far, the Deployment has created three pods running nginx:1.27.

Now imagine you want to release a newer version and move to nginx:1.28.

With a Deployment, you do not delete the old pods yourself. You change the version you want, and Kubernetes handles the rollout.

You can do that by updating the image:

kubectl set image deployment/frontend nginx=nginx:1.28

This command changes the container image in the Deployment’s pod template from nginx:1.27 to nginx:1.28.
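
The same change can also be made declaratively. Instead of using kubectl set image, you edit the image line in frontend-deployment.yaml and re-apply the file; the Deployment reacts to the template change either way. A minimal sketch of the edited template section:

```yaml
# frontend-deployment.yaml (template section only)
# Changing this one line is what triggers the rollout.
template:
  metadata:
    labels:
      app: frontend
  spec:
    containers:
      - name: nginx
        image: nginx:1.28   # was nginx:1.27
        ports:
          - containerPort: 80
```

Then re-apply with kubectl apply -f frontend-deployment.yaml. The declarative route has the advantage that the file always describes what should be running, which matters once your manifests live in version control.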

That small change is important.

The Deployment sees that the pod template has changed. The old pods were created from a template that used nginx:1.27. The new desired template uses nginx:1.28. So the Deployment starts a rollout.

You can watch the rollout status:

kubectl rollout status deployment/frontend

deployment "frontend" successfully rolled out

Behind the scenes, Kubernetes creates a new ReplicaSet for the new version. Then it starts creating new pods from the new template and removing old pods from the old ReplicaSet.

If you check the ReplicaSets, you might see something like this:

kubectl get replicaset

NAME                  DESIRED   CURRENT   READY   AGE
frontend-7c8d9f4b6    0         0         0       5m
frontend-5d9f6c8b7    3         3         3       30s

The old ReplicaSet is still there, but it no longer has any active pods. The new ReplicaSet now has the three pods running the new version.

That is the rollout.

The Deployment did not simply delete everything and start again. It moved the application from one version to another in a controlled way. It created new pods, waited for them to become available, and reduced the old pods as the new ones came up.
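
You can tune how aggressively this replacement happens. By default a Deployment uses a rolling update, and the strategy fields control how many extra pods may exist and how many may be unavailable while the change is in flight. A sketch of a stricter setting, added to the Deployment spec we wrote earlier (selector and template unchanged):

```yaml
# Excerpt from the Deployment spec.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow at most one pod above the desired count
      maxUnavailable: 0  # never drop below three available pods
```

With these values, Kubernetes brings up one new pod, waits for it to become available, removes one old pod, and repeats. The defaults (25% surge, 25% unavailable) are fine for most applications; this stricter setting trades rollout speed for availability.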

That is the key difference from managing a ReplicaSet directly.

With a ReplicaSet, you can keep a number of pods running.

With a Deployment, you can change the version of those pods while keeping the application running.

Rolling Back

A rollout is useful when the new version works.

But sometimes it does not.

Maybe the new image has a bug. Maybe the application starts, but returns errors. Maybe you realise the wrong version was released.

With a Deployment, you do not have to rebuild the old pods by hand. Kubernetes keeps rollout history for the Deployment, so you can ask it to go back to the previous version.

kubectl rollout undo deployment/frontend

Kubernetes then moves the Deployment back to the previous ReplicaSet. The old version becomes active again, and the newer version is scaled down.
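
Undo goes back one step, but the history is deeper than that. Each rollout is recorded as a numbered revision, and you can inspect and target revisions directly. A short sketch, assuming the frontend Deployment from earlier:

```shell
# List the recorded revisions for the Deployment.
kubectl rollout history deployment/frontend

# Roll back to a specific revision rather than just the previous one.
kubectl rollout undo deployment/frontend --to-revision=1
```

How much history is kept is controlled by spec.revisionHistoryLimit on the Deployment, which defaults to 10 old ReplicaSets.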

You can check the rollout status the same way:

kubectl rollout status deployment/frontend

deployment "frontend" successfully rolled out

That is one of the big reasons Deployments matter. They do not just create pods. They give you a safer way to release changes and recover when a change goes wrong.

With a ReplicaSet, you are managing a fixed set of pods.

With a Deployment, you are managing the application’s release history.

Scaling a Deployment

Deployments are not only for rolling out new versions.

You also use them when you want to change how many copies of your application are running.

Earlier, the Deployment created three pods. If more people are using the application and you want five pods instead, you scale the Deployment:

kubectl scale deployment frontend --replicas=5

That command updates the desired count on the Deployment.

From there, Kubernetes handles the rest. The Deployment updates the ReplicaSet underneath it, and the ReplicaSet creates the extra pods needed to reach five.

You can check the Deployment again:

kubectl get deployment

NAME       READY   UP-TO-DATE   AVAILABLE   AGE
frontend   5/5     5            5           8m

And if you check the pods, you will see five running copies:

kubectl get pods

NAME                        READY   STATUS    RESTARTS   AGE
frontend-5d9f6c8b7-q7w2d    1/1     Running   0          3m
frontend-5d9f6c8b7-t8r4n    1/1     Running   0          3m
frontend-5d9f6c8b7-z6p9c    1/1     Running   0          3m
frontend-5d9f6c8b7-j3h8r    1/1     Running   0          10s
frontend-5d9f6c8b7-n5k2m    1/1     Running   0          10s

The important point is not just that the count changed. It is where you made the change.

You did not scale the pods directly. You did not scale the ReplicaSet directly. You scaled the Deployment.

That keeps the hierarchy clean:

You update the Deployment.
The Deployment manages the ReplicaSet.
The ReplicaSet manages the pods.

Scaling down works the same way:

kubectl scale deployment frontend --replicas=3

Kubernetes reduces the number of pods until the Deployment is back at the desired count.
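
As with image changes, scaling has a declarative form too: change the replicas field in the file and re-apply it. A minimal sketch of the edited line in frontend-deployment.yaml:

```yaml
# frontend-deployment.yaml (top of the spec)
# Re-apply with: kubectl apply -f frontend-deployment.yaml
spec:
  replicas: 5   # was 3
```

Both routes end in the same place; the declarative one keeps the file as the single source of truth.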

So whether you are releasing a new version, rolling back, or changing the number of running copies, the pattern is the same: you work with the Deployment, and Kubernetes manages the lower-level objects for you.

Where This Leads

You now have the workload side of Kubernetes in a much clearer shape.

A pod runs your application. A ReplicaSet keeps enough copies of that pod running. A Deployment manages ReplicaSets so you can release new versions, roll back bad changes, and scale the application without managing pods directly.

That gives us this chain:

Deployment → ReplicaSet → Pods

The Deployment is the object you usually work with. The ReplicaSet is the mechanism Kubernetes uses underneath. The pods are the running copies of the application.

This is why Deployments are so common in Kubernetes. They give you the normal application workflow: run this version, keep this many copies available, roll out changes carefully, and give me a way back if something goes wrong.

At this point, the main application structure is in place.

Pod: runs the application
ReplicaSet: keeps enough pods running
Deployment: manages ReplicaSets and application releases
Service: gives pods a stable address
Ingress: routes external HTTP/HTTPS traffic to Services

But there is still something we have avoided so far.

Every example has placed the application behaviour inside the container image. The image says what to run, and Kubernetes runs it. That is fine for simple examples, but real applications usually need configuration.

They need environment names, feature flags, database connection details, API endpoints, and sometimes sensitive values like passwords or tokens. You do not want to rebuild the container image every time one of those values changes, and you definitely do not want secrets hardcoded into the image.

That is where ConfigMaps and Secrets come in.

They let you separate application configuration from the container image, so the same application can run in different environments with different settings. That is the next step in the series.