
Docker for Beginners: From Code to Container and Cloud

Build your first Docker image from a real Node.js app, package it with a custom Dockerfile, push it to Azure Container Registry, and deploy it to the cloud using Azure Container Instances—no prior experience required.

Introduction

In Part 1, you learned what Docker is and why it matters. You ran your first containers using images like hello-world and nginx, and saw how containerisation simplifies development. In Part 2, you explored how Docker works behind the scenes — from the CLI to the Docker daemon to the registry — and gained confidence in managing the container lifecycle.

Until now, Docker was a tool you used. Starting today, it becomes a tool you create with.

You’re going to take an application — one you’ve built — and turn it into something portable, consistent, and cloud-ready. You’ll package it in a Docker image, push that image to the cloud, and deploy it to run anywhere Docker is supported.

This is the shift from learning to building.

This is where Docker stops being just a cool tool and starts becoming part of your professional workflow.

Why This Matters

When you create your own Docker images, you’re solving problems developers face every day:

  • “Why does it work on your laptop but not mine?”
  • “How do I set up the same environment on staging?”
  • “Can we just ship the whole thing as-is?”

With your own Docker image:

  • Your app runs the same on every machine, no surprises
  • All your dependencies and setup steps are bundled and versioned
  • You can move from dev to prod with confidence

This isn’t just a skill — it’s a new mindset. You’re not writing scripts or debugging someone else’s system. You’re owning your environment from the first line of code to the final deployment.

What You’ll Build in This Article

By the end of this hands-on guide, you’ll have:

  1. Written a simple but real Node.js web application
  2. Created a custom Dockerfile to package that app
  3. Run your containerised app locally
  4. Pushed your Docker image to Azure Container Registry
  5. Deployed and ran that container in the cloud using Azure Container Instances

That’s not just using Docker — that’s a full development-to-deployment workflow.

Let’s take your app. Put it in a container. And ship it to the cloud. 🚀

Understanding Dockerfiles: The Blueprint for Your Image

Now that you’ve seen how containers are created from images, it’s time to learn how to create the image yourself — using a Dockerfile.

In this section, you’ll learn how to write a Dockerfile — the file that Docker uses to build your own container images from your code. This is where things shift from using Docker to truly creating with Docker.

Before we start, here’s a visual overview to anchor your understanding:

[Figure: Dockerfile to Container Flow]

You write a Dockerfile, Docker builds it into an image, and when you run that image, you get a container.

Let’s break it down step by step.

What Is a Dockerfile?

A Dockerfile is a plain text file that contains a list of instructions. These instructions tell Docker exactly how to:

  • Set up your application’s environment
  • Copy in your code
  • Install its dependencies
  • Define how it should run

Think of it like a recipe. You write it once, and Docker can consistently “bake” your app into a shareable, runnable image — whether on your laptop, in a CI pipeline, or in the cloud.

📝 Note: By default, Docker looks for a file named exactly Dockerfile (no extension) in the build directory. You can point docker build at a differently named file with the -f flag, but sticking with the default keeps things simple.

Let’s Imagine Your App

Let’s say you’ve built a simple Node.js app.

Your project folder looks like this:

my-app/
├── app.js
├── package.json
├── package-lock.json
├── public/
└── src/

You want Docker to package this app, install the dependencies, and run it.

That’s where the Dockerfile comes in.

Writing a Dockerfile: One Step at a Time

Each instruction in your Dockerfile creates a layer — think of it like stacking sheets of instructions. Docker builds your image one layer at a time and caches each layer to speed up future builds.

Let’s write a clean, efficient Dockerfile step by step:

1. Choose a Base Image

FROM node:22

This line tells Docker:

“Start with Node.js version 22 as my foundation.”

That base image is a Linux environment with Node.js pre-installed — so you don’t have to install Node manually. It saves time and keeps your image consistent.

⚠️ Avoid FROM node:latest. Pin a specific version (like 22) to make your builds predictable.

2. Set the Working Directory

WORKDIR /usr/src/app

This tells Docker:

“From here on, run everything inside /usr/src/app inside the image.”

It’s like changing into a folder in your terminal before running commands. If you skip this, you’d have to use full paths like /usr/src/app/app.js everywhere — messy and error-prone.

3. Copy Only What You Need (for Now)

COPY package*.json ./

This command copies only your package.json and package-lock.json files into the container.

Why just these?

Because in the next step, you’ll install dependencies — and that’s all npm install needs. Copying just these files first helps Docker cache the install step, so it doesn’t redo it if your app code changes.

💡 Plain English: This is a build trick. Docker will reuse this “install” layer as long as your dependencies don’t change.

4. Install App Dependencies

RUN npm install

This runs inside the image during build time — not every time the container starts.

Docker reads your package.json, installs all required packages, and locks them into this layer of the image.

🧠 These packages are now baked into the image — so every time you run this image, your app has everything it needs.

5. Copy the Rest of Your App Code

COPY . .

This copies everything else from your project folder into the working directory in the container.

📁 The first . means “from my current directory (host)”, and the second . means “to the current working directory inside the image”.

🚫 Be careful: this also includes things like .git/, .env, and node_modules/.
Use a .dockerignore file to keep your image clean and avoid copying sensitive or unnecessary files.
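A minimal .dockerignore for a project like this might look as follows; these entries are common choices rather than requirements, so adjust them to your own project:

```
node_modules
npm-debug.log
.git
.env
```

Docker reads this file from the root of the build context and skips the listed paths when it processes COPY instructions.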

6. Declare the Port Your App Uses

EXPOSE 3000

This tells Docker:

“My app listens on port 3000 inside the container.”

📢 This is just documentation for Docker and tools like Docker Compose.
To actually access this app from your host machine, you’ll still need to publish the port with -p when running the container:

docker run -p 3000:3000 my-app

7. Define the Default Start Command

CMD ["node", "app.js"]

This tells Docker:

“When someone runs this image, start it by running node app.js.”

This only runs when the container starts — not when the image is built.

💡 You can override this at runtime with another command during development, for example:

docker run my-app node debug.js

A Complete Dockerfile

# Use Node.js 22 as the base image
FROM node:22

# Set working directory inside container
WORKDIR /usr/src/app

# Copy dependency files and install packages
COPY package*.json ./
RUN npm install

# Copy application source code
COPY . .

# Document the port used by the app
EXPOSE 3000

# Default command to start the app
CMD ["node", "app.js"]

Why Order Matters: Layers and Build Caching

Docker builds your image in layers, caching each one. The order you write instructions impacts how fast Docker can rebuild your image later.

Let’s say you changed just one line of code in app.js. Here’s how the caching works:

Change Made               Which Layers Rebuild?
------------------------  --------------------------------------------
package.json changed      Everything from COPY package*.json ./ onward
Only src/ code changed    Only the final COPY . . and onward
Base image changed        Everything, starting from FROM

Pro tip: Always copy package.json before your full app code to avoid reinstalling dependencies when only code changes.
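To make the caching effect concrete, compare two orderings of the same instructions. In the first, any code edit invalidates the COPY . . layer and forces npm install to rerun; in the second (the pattern used throughout this article), the install layer stays cached until package*.json itself changes:

```dockerfile
# Inefficient: code and dependency files are copied together,
# so any code change re-runs npm install
COPY . .
RUN npm install

# Efficient: dependency files first, install, then code;
# npm install is cached until package*.json changes
COPY package*.json ./
RUN npm install
COPY . .
```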

Recap: What You Just Learned

  • A Dockerfile turns your app into a portable image by defining build steps
  • Each command in a Dockerfile creates a layer, and layer order affects rebuild time
  • You can control your runtime environment fully — no more “it works on my machine” surprises
  • You now understand every line in a professional-grade Dockerfile

Next, let’s build this image for real and run it on your machine. You’ve just gone from reading Dockerfiles to writing one. Let’s keep going.

Creating an App You Can Containerize

Now let’s shift from theory into practice.

You’ve written a Dockerfile. You understand how it builds images from source code. Now let’s give it something real to build.

In this section, you’ll create a small but complete Node.js web application — a front end and back end bundled together — that we’ll package into a Docker image and run anywhere.

It’s not a toy “Hello World.” This app mirrors the structure of real projects:

  • A front-end (HTML, CSS, JavaScript)
  • A back-end server (Node.js + Express)
  • A dependency (express)
  • A defined runtime environment (node 22)

Exactly the kind of setup that’s hard to share or deploy without Docker.

Step 1: Set Up the Project Folder

First, create a folder to hold your project:

mkdir docker-color-app
cd docker-color-app

This folder will contain your application code and your Dockerfile.

Step 2: Initialize the Node.js App

Use this command to quickly generate a package.json file with default values:

npm init -y

This doesn’t declare any dependencies yet — it just creates a basic configuration file. You’ll see this file again later in your Dockerfile when we install dependencies during the image build.
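For reference, npm init -y generates a file along these lines; the exact fields and defaults can vary slightly between npm versions, and name is taken from your folder name:

```json
{
  "name": "docker-color-app",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
```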

Step 3: Install Express

Install the Express framework, which helps us build the backend server:

npm install express

This adds Express as a dependency and updates your package.json file automatically.

Later, when Docker builds your image, it will run npm install using this file to install everything your app needs.

Step 4: Create the Backend Server

Create a new file in your root project directory called server.js:

const express = require('express');
const path = require('path');

const app = express();
const PORT = 3000;

app.use(express.static(path.join(__dirname, 'public')));

app.get('/', (req, res) => {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

app.listen(PORT, () => {
  console.log(`Server running at http://localhost:${PORT}`);
});

This file does three things:

  • Serves static assets from a public folder
  • Responds to requests at / with your index.html
  • Starts a server on port 3000

When you run this container later, you’ll tell Docker to execute this file using a CMD instruction like:

CMD ["node", "server.js"]

Step 5: Add the Frontend

Create a public folder to hold your front-end files:

mkdir public

Inside public, create a file named index.html and paste the following into it:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
  <title>Docker Color App</title>
  <style>
    body {
      font-family: Arial, sans-serif;
      text-align: center;
      transition: background-color 0.5s ease;
      padding: 50px;
    }
    h1 { margin-bottom: 30px; }
    .container {
      max-width: 600px;
      margin: 0 auto;
      background-color: rgba(255, 255, 255, 0.8);
      padding: 30px;
      border-radius: 10px;
      box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
    }
    button {
      background-color: #4CAF50;
      border: none;
      color: white;
      padding: 10px 20px;
      font-size: 16px;
      cursor: pointer;
      border-radius: 5px;
    }
    .color-info {
      margin-top: 20px;
      font-weight: bold;
    }
  </style>
</head>
<body>
  <div class="container">
    <h1>Docker Color Changer</h1>
    <p>Click the button to change the background color!</p>
    <button id="changeColorBtn">Change Color</button>
    <div class="color-info">
      Current color: <span id="colorValue">rgb(255, 255, 255)</span>
    </div>
  </div>
  <script>
    document.addEventListener('DOMContentLoaded', () => {
      const button = document.getElementById('changeColorBtn');
      const display = document.getElementById('colorValue');
      button.addEventListener('click', () => {
        const r = Math.floor(Math.random() * 256);
        const g = Math.floor(Math.random() * 256);
        const b = Math.floor(Math.random() * 256);
        const color = `rgb(${r}, ${g}, ${b})`;
        document.body.style.backgroundColor = color;
        display.textContent = color;
      });
    });
  </script>
</body>
</html>
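The color logic inside the <script> block is worth a closer look. Extracted into a standalone function (randomColor is a hypothetical helper name, used here for illustration), it runs just as well in Node as in the browser:

```javascript
// Build a random CSS color string the same way the button handler does:
// each channel is an integer between 0 and 255 inclusive.
function randomColor() {
  const r = Math.floor(Math.random() * 256);
  const g = Math.floor(Math.random() * 256);
  const b = Math.floor(Math.random() * 256);
  return `rgb(${r}, ${g}, ${b})`;
}

console.log(randomColor()); // e.g. "rgb(12, 200, 87)"
```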

Step 6: Check Your Folder Structure

You should now have:

docker-color-app/
├── package.json
├── package-lock.json
├── server.js
└── public/
    └── index.html

Step 7: Test It Locally

Let’s confirm the app works before containerizing it.

From the root project directory, run:

node server.js

If it works, you’ll see something like this in your terminal:

Server running at http://localhost:3000

Open your browser and go to http://localhost:3000

✅ If the page loads and you see a button that changes colors — great! You’re ready to containerize.

When you’re done, press Ctrl+C to stop the server.

Why This Matters

This app might be simple, but it mirrors the real-world structure of many production apps:

  • A backend server with external dependencies
  • A frontend served over HTTP
  • A runtime requirement (Node.js 22)

Without Docker, teammates and deployment servers would need to install Node, manage dependencies, and know which files to run.

With Docker, you’ll wrap everything into a clean, portable image that anyone can run — anywhere.

Let’s do that next. You’ve written the app. Let’s build the image.

From Code to Container

Now that you’ve created a real application, it’s time to package it as a Docker image and run it as a container—just like we do in real-world projects.

This section is where theory meets action. You’ll write a Dockerfile, build an image from your code, and launch a container running your app—all from the command line. By the end, your application will be running in a fully isolated, portable Docker environment.

Step 1: Create the Dockerfile

In the root of your project folder (docker-color-app), create a new file called Dockerfile (with no file extension):

touch Dockerfile

Open it in your favourite editor and add the following, step by step.

Step 2: Write the Dockerfile

Choose a Base Image

FROM node:22

This line tells Docker: start with Node.js version 22. This base image is a Linux environment with Node and npm already installed—so you don’t have to install them manually.

🔍 Why pin the version? Using node:latest sounds convenient, but it can change over time. Pinning to node:22 ensures consistency across builds—even months from now.

Set the Working Directory

WORKDIR /usr/src/app

This tells Docker, “from now on, all commands happen inside /usr/src/app in the container.” It’s like doing a cd before every command. Without it, you’d need to use full paths everywhere—not fun.

Install Dependencies First

COPY package*.json ./
RUN npm install

Copy just your package.json and package-lock.json first. Then run npm install. Why? Because Docker caches layers. So if your code changes but your dependencies don’t, Docker will reuse this step to save time.

⚠️ You’ll often see this pattern in production Dockerfiles—it’s one of the simplest but most effective optimisations.

Copy Your Application Code

COPY . .

This copies everything else into the image—including server.js and your public/ folder.

✅ Don’t forget to create a .dockerignore file so you don’t copy unnecessary files like node_modules, .git, or debug logs.

Create it now:

touch .dockerignore

Add the following:

node_modules
npm-debug.log
.git
.env

Document the App Port

EXPOSE 3000

This tells Docker, “my app listens on port 3000.” It’s mostly for documentation and tooling. To actually open the port to your host machine, you’ll use -p during docker run.

Define the Start Command

CMD ["node", "server.js"]

This tells Docker what to run when someone starts a container from your image. Here, it will launch your Node.js app.

💡 You can override this at runtime if needed—for example, to run tests or debugging scripts.

Your Complete Dockerfile

FROM node:22
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

Step 3: Build the Docker Image

With your Dockerfile written, you’re ready to build your image:

docker build -t color-app:1.0 .

Breakdown:

  • docker build = tell Docker to create an image
  • -t color-app:1.0 = give your image a name and version
  • . = look for the Dockerfile in the current directory

You’ll see step-by-step output as Docker builds each layer.

Step 4: Run the Image as a Container

Let’s run your image:

docker run -p 3000:3000 color-app:1.0

What it does:

  • Starts your container
  • Maps port 3000 in the container to 3000 on your machine
  • Runs your app using the CMD defined in the Dockerfile

If everything worked, your terminal should say:

Server running at http://localhost:3000

Open your browser at http://localhost:3000 — your app is now running from a Docker container!

Step 5: Run in the Background (Detached Mode)

Want to keep the app running in the background?

docker run -d -p 3000:3000 --name my-color-app color-app:1.0

  • -d = detached mode (run in background)
  • --name = give the container a friendly name

To confirm it’s running:

docker ps

You’ll see your container listed.

Step 6: Stop and Remove the Container

When you’re done:

docker stop my-color-app
docker rm my-color-app

Clean, tidy, controlled.

🎉 What You Just Did

Let’s reflect:

✅ You wrote a real web app
✅ You wrote a real Dockerfile
✅ You built a custom image
✅ You ran your app in an isolated Docker container
✅ You now have a shippable, portable container image

This is a major milestone.

You’ve gone from Docker learner to Docker builder.

From Local to Cloud: Pushing and Deploying Your Image with Azure

You’ve built your app, containerized it with Docker, and tested it locally. Now let’s take the final leap: running your app in the cloud.

This is the exact workflow teams use to deploy real-world applications—moving from code to container to cloud. You’re about to do the same.

Step 1: Push Your Image to Azure Container Registry

To share or deploy your image, you need to store it in a container registry. We’ll use Azure Container Registry (ACR), a secure and private registry service.

1.1 Set Up Azure CLI

Make sure you have the Azure CLI installed. Then log in to Azure:

az login

This will open a browser window where you can sign in to your Azure account.

1.2 Create a Resource Group and Registry

az group create --name color-app-resources --location eastus

az acr create --resource-group color-app-resources \
  --name colorappregistry123 --sku Basic

This creates:

  • A resource group called color-app-resources in the East US region
  • A Basic tier Azure Container Registry named colorappregistry123

Important: Registry names must be globally unique and may contain only alphanumeric characters (5–50 characters; hyphens aren’t allowed). Replace colorappregistry123 with a unique name of your choice.

1.3 Enable Admin Access and Get Credentials

For simplicity, we’ll enable admin access to our registry:

az acr update --name colorappregistry123 --admin-enabled true

This allows us to use username/password authentication. In production environments, you might use other authentication methods.

Next, retrieve the registry credentials:

az acr credential show --name colorappregistry123

This will output something like:

{
  "passwords": [
    {
      "name": "password",
      "value": "AbCdEfGhIjKlMnOpQrStUvWxYz=="
    },
    {
      "name": "password2",
      "value": "ZyXwVuTsRqPoNmLkJiHgFeDcBa=="
    }
  ],
  "username": "colorappregistry123"
}

Take note of the username and one of the passwords—you’ll need them to log in to the registry.

Now that we have our registry set up, let’s push our Docker image to it.

1.4 Log in to the Registry

docker login colorappregistry123.azurecr.io \
  --username colorappregistry123

When prompted, enter one of the passwords from the previous step.

You should see Login Succeeded if everything works correctly.

1.5 Tag and Push Your Image

Next, we need to tag our image with the registry’s address:

docker tag color-app:1.0 colorappregistry123.azurecr.io/color-app:1.0

This creates a new tag for your image that includes the registry address. The format is registryname.azurecr.io/imagename:tag.

Now we can push the image to ACR:

docker push colorappregistry123.azurecr.io/color-app:1.0

You’ll see output showing the progress as each layer of your image is uploaded:

The push refers to repository [colorappregistry123.azurecr.io/color-app]
5f70bf18a086: Pushed
d9cbbca60e5a: Pushed
87ea2744f0d7: Pushed
0000e345df6c: Pushed
...
1.0: digest: sha256:4a5573037f358b6cdfa2394c1c9a112e83fdf28c304f646c49f84fa77c5e1d11 size: 3245

Let’s make sure our image was pushed successfully:

az acr repository list --name colorappregistry123 --output table

You should see color-app in the list of repositories.

To get more details about your image:

az acr repository show-tags --name colorappregistry123 --repository color-app --output table

This will show you all the tags for your image, which should include 1.0.

Step 2: Deploy to Azure Container Instances (ACI)

Let’s run your containerized app in the cloud using Azure Container Instances.

Azure Container Instances (ACI) is a serverless container platform that lets you run containers without managing servers. You don’t need to provision virtual machines, configure networking, or worry about scaling—you simply deploy your containers and Azure handles the rest.

It’s perfect for:

  • Simple web applications like our color changer
  • Background processing jobs
  • Scheduled tasks
  • Quick testing in the cloud

2.1 Create the Container Instance

Run the following command to create a container instance from your ACR image:

az container create \
  --resource-group color-app-resources \
  --name color-app-container \
  --image colorappregistry123.azurecr.io/color-app:1.0 \
  --cpu 1 \
  --memory 1.5 \
  --registry-login-server colorappregistry123.azurecr.io \
  --registry-username colorappregistry123 \
  --registry-password <your-registry-password> \
  --dns-name-label color-app-<unique-suffix> \
  --ports 3000

Replace:

  • colorappregistry123 with your actual ACR registry name
  • <your-registry-password> with the password you got earlier
  • <unique-suffix> with something unique like your initials plus a number

This command:

  • Creates a container instance named color-app-container
  • Pulls your image from Azure Container Registry
  • Allocates 1 CPU core and 1.5GB of memory
  • Sets up authentication to your private registry
  • Creates a public DNS name for your container
  • Opens port 3000 for web traffic

The deployment will take a minute or two. You’ll see a lot of JSON output when it completes.

2.2 Get Your App’s Public URL

az container show \
  --resource-group color-app-resources \
  --name color-app-container \
  --query "{Status:instanceView.state, FQDN:ipAddress.fqdn, IP:ipAddress.ip}" \
  --output table

You should see output similar to:

Status    FQDN                                            IP
--------  ----------------------------------------------  -------------
Running   color-app-abc123.eastus.azurecontainer.io      20.81.111.222

This confirms that your container is running and shows its fully qualified domain name (FQDN) and IP address.


2.3 Success

Now you can access your application using the FQDN from the previous step. Open your browser and navigate to:

http://color-app-abc123.eastus.azurecontainer.io:3000

Replace the domain with your actual FQDN.

You should see your color-changing application running in the cloud.

Try clicking the button to change colors. Your application is now running in a global cloud platform, accessible from anywhere in the world!

Step 3: Stopping and Cleaning Up Resources

When you’re done experimenting, it’s important to clean up your resources to avoid incurring costs:

Stop the Container Instance

First, stop the container:

az container stop --resource-group color-app-resources --name color-app-container

Delete Azure Resources

To completely remove all resources we created:

# Delete the container instance
az container delete --resource-group color-app-resources --name color-app-container --yes

# Delete the container registry
az acr delete --resource-group color-app-resources --name colorappregistry123 --yes

# Delete the resource group (this removes everything in it)
az group delete --name color-app-resources --yes

These commands ensure you won’t be charged for any resources after you’re done experimenting.

What You Just Accomplished

  • Built a Docker image from your app
  • Stored it in a private cloud registry
  • Deployed and ran it in the cloud without managing servers

By mastering these concepts, you’ve gained skills that are in high demand across the software industry. Containerization is no longer just a nice-to-have—it’s a fundamental part of modern application development.

Key Takeaways

By the end of this article, you’ve done more than just follow steps — you’ve built a complete, real-world Docker workflow:

  • Created a Node.js web application with a frontend and backend
  • Wrote a Dockerfile to package your app into a portable image
  • Built and ran your application locally in a container
  • Pushed your image to Azure Container Registry
  • Deployed your app to the cloud using Azure Container Instances
  • Cleaned up your cloud resources and saw how this workflow scales

You’ve experienced what it means to go from code to container to cloud — and now you understand how containers help you build, share, and run applications consistently anywhere.

Conclusion: You’re Not Just Using Docker — You’re Building With It

You started this article with code on your laptop. You ended it with a live app running in the cloud, fully containerized, fully portable, and repeatable.

That’s a huge milestone. It’s not just a tutorial win — it’s a career skill.

This is how modern software ships: with images, registries, and cloud-native platforms. And you now understand that workflow from end to end.

👉 What’s Next: Make Your Containers Talk

So far, your container has been working solo. But most real-world systems aren’t built from just one container — they’re made of many, working together.

In the next article in the series, we’ll explore Docker networking and show how containers discover, talk to, and secure communication between each other and the outside world.

Let’s take your container skills to the next level — and go from one service to many.