Introduction
In Part 1, you learned what Docker is and why it matters. You ran your first containers using images like `hello-world` and `nginx`, and saw how containerisation simplifies development. In Part 2, you explored how Docker works behind the scenes — from the CLI to the Docker daemon to the registry — and gained confidence in managing the container lifecycle.
Until now, Docker was a tool you used. Starting today, it becomes a tool you create with.
You’re going to take an application — one you’ve built — and turn it into something portable, consistent, and cloud-ready. You’ll package it in a Docker image, push that image to the cloud, and deploy it to run anywhere Docker is supported.
This is the shift from learning to building.
This is where Docker stops being just a cool tool and starts becoming part of your professional workflow.
Why This Matters
When you create your own Docker images, you’re solving problems developers face every day:
- “Why does it work on your laptop but not mine?”
- “How do I set up the same environment on staging?”
- “Can we just ship the whole thing as-is?”
With your own Docker image:
- Your app runs the same on every machine, no surprises
- All your dependencies and setup steps are bundled and versioned
- You can move from dev to prod with confidence
This isn’t just a skill — it’s a new mindset. You’re not writing scripts or debugging someone else’s system. You’re owning your environment from the first line of code to the final deployment.
What You’ll Build in This Article
By the end of this hands-on guide, you’ll have:
- Written a simple but real Node.js web application
- Created a custom Dockerfile to package that app
- Run your containerised app locally
- Pushed your Docker image to Azure Container Registry
- Deployed and ran that container in the cloud using Azure Container Instances
That’s not just using Docker — that’s a full development-to-deployment workflow.
Let’s take your app. Put it in a container. And ship it to the cloud. 🚀
Understanding Dockerfiles: The Blueprint for Your Image
Now that you’ve seen how containers are created from images, it’s time to learn how to create the image yourself — using a Dockerfile.
In this section, you’ll learn how to write a Dockerfile — the file that Docker uses to build your own container images from your code. This is where things shift from using Docker to truly creating with Docker.
Before we start, here’s a visual overview to anchor your understanding:
You write a Dockerfile, Docker builds it into an image, and when you run that image, you get a container.
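In text form, the flow looks like this:

```
Dockerfile  --(docker build)-->  Image  --(docker run)-->  Container
```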
Let’s break it down step by step.
What Is a Dockerfile?
A Dockerfile is a plain text file that contains a list of instructions. These instructions tell Docker exactly how to:
- Set up your application’s environment
- Copy in your code
- Install its dependencies
- Define how it should run
Think of it like a recipe. You write it once, and Docker can consistently “bake” your app into a shareable, runnable image — whether on your laptop, in a CI pipeline, or in the cloud.
📝 Note: The file must be named exactly `Dockerfile` (with no extension) or Docker won’t recognise it.
Let’s Imagine Your App
Let’s say you’ve built a simple Node.js app.
Your project folder looks like this:
```
my-app/
├── app.js
├── package.json
└── package-lock.json
```
You want Docker to package this app, install the dependencies, and run it.
That’s where the Dockerfile comes in.
Writing a Dockerfile: One Step at a Time
Each instruction in your Dockerfile creates a layer — think of it like stacking sheets of instructions. Docker builds your image one layer at a time and caches each layer to speed up future builds.
Let’s write a clean, efficient Dockerfile step by step:
1. Choose a Base Image
```dockerfile
FROM node:22
```
This line tells Docker:
“Start with Node.js version 22 as my foundation.”
That base image is already a lightweight Linux OS with Node.js pre-installed — so you don’t have to install Node manually. It saves time and keeps your image consistent.
⚠️ Avoid `FROM node:latest`. Pin a specific version (like `22`) to make your builds predictable.
2. Set the Working Directory
```dockerfile
WORKDIR /usr/src/app
```
This tells Docker:
“From here on, run everything inside `/usr/src/app` inside the image.”
It’s like changing into a folder in your terminal before running commands. If you skip this, you’d have to use full paths like `/usr/src/app/app.js` everywhere — messy and error-prone.
3. Copy Only What You Need (for Now)
```dockerfile
COPY package*.json ./
```
This command copies only your `package.json` and `package-lock.json` files into the container.
Why just these?
Because in the next step, you’ll install dependencies — and that’s all `npm install` needs. Copying just these files first helps Docker cache the install step, so it doesn’t redo it if your app code changes.
💡 Plain English: This is a build trick. Docker will reuse this “install” layer as long as your dependencies don’t change.
4. Install App Dependencies
```dockerfile
RUN npm install
```
This runs inside the image during build time — not every time the container starts.
Docker reads your `package.json`, installs all required packages, and locks them into this layer of the image.
🧠 These packages are now baked into the image — so every time you run this image, your app has everything it needs.
5. Copy the Rest of Your App Code
```dockerfile
COPY . .
```
This copies everything else from your project folder into the working directory in the container.
📁 The first `.` means “from my current directory (host)”, and the second `.` means “to the current working directory inside the image”.
🚫 Be careful: this also includes things like `.git/`, `.env`, and `node_modules/`. Use a `.dockerignore` file to keep your image clean and avoid copying sensitive or unnecessary files.
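For example, a minimal `.dockerignore` for a Node.js project might list entries like these (typical suggestions, adjust to your project):

```
node_modules/
.git/
.env
npm-debug.log
```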
6. Declare the Port Your App Uses
```dockerfile
EXPOSE 3000
```
This tells Docker:
“My app listens on port 3000 inside the container.”
📢 This is just documentation for Docker and tools like Docker Compose. To actually access this app from your host machine, you’ll still need to publish the port with `-p` when running the container:
```bash
# "my-app" is a placeholder image name for this example
docker run -p 3000:3000 my-app
```
7. Define the Default Start Command
```dockerfile
CMD ["node", "app.js"]
```
This tells Docker:
“When someone runs this image, start it by running `node app.js`.”
This only runs when the container starts — not when the image is built.
💡 You can override this at runtime with another command during development, for example:
```bash
# start an interactive shell instead of the app ("my-app" is a placeholder image name)
docker run -it my-app bash
```
A Complete Dockerfile
```dockerfile
FROM node:22
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
```
Why Order Matters: Layers and Build Caching
Docker builds your image in layers, caching each one. The order you write instructions impacts how fast Docker can rebuild your image later.
Let’s say you changed just one line of code in `app.js`. Here’s how the caching works:
| Change Made | Which Layers Rebuild? |
| --- | --- |
| `package.json` changed | All layers from `RUN npm install` onward |
| Only app code changed | Only the final `COPY . .` and onward |
| Base image changed | Everything, starting from `FROM` |
✅ Pro tip: Always copy `package.json` before your full app code to avoid reinstalling dependencies when only code changes.
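For contrast, here is a sketch of the slower ordering to avoid: with `COPY . .` placed before the install step, any code change invalidates the cached layer, so `npm install` runs again on every rebuild.

```dockerfile
# Slower ordering (avoid): a code change busts the cache for npm install
FROM node:22
WORKDIR /usr/src/app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "app.js"]
```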
Recap: What You Just Learned
- A Dockerfile turns your app into a portable image by defining build steps
- Each command in a Dockerfile creates a layer, and layer order affects rebuild time
- You can control your runtime environment fully — no more “it works on my machine” surprises
- You now understand every line in a professional-grade Dockerfile
Next, let’s build this image for real and run it on your machine. You’ve just gone from reading Dockerfiles to writing one. Let’s keep going.
Creating an App You Can Containerize
Now let’s shift from theory into practice.
You’ve written a Dockerfile. You understand how it builds images from source code. Now let’s give it something real to build.
In this section, you’ll create a small but complete Node.js web application — a front end and back end bundled together — that we’ll package into a Docker image and run anywhere.
It’s not a toy “Hello World.” This app mirrors the structure of real projects:
- A front-end (HTML, CSS, JavaScript)
- A back-end server (Node.js + Express)
- A dependency (`express`)
- A defined runtime environment (Node 22)

Exactly the kind of setup that’s hard to share or deploy without Docker.
Step 1: Set Up the Project Folder
First, create a folder to hold your project:
```bash
mkdir docker-color-app
cd docker-color-app
```
This folder will contain your application code and your Dockerfile.
Step 2: Initialize the Node.js App
Use this command to quickly generate a `package.json` file with default values:
```bash
npm init -y
```
This doesn’t declare any dependencies yet — it just creates a basic configuration file. You’ll see this file again later in your Dockerfile when we install dependencies during the image build.
Step 3: Install Express
Install the Express framework, which helps us build the backend server:
```bash
npm install express
```
This adds Express as a dependency and updates your `package.json` file automatically.
Later, when Docker builds your image, it will run `npm install` using this file to install everything your app needs.
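Your `package.json` should now contain a dependencies section along these lines (trimmed for brevity; the exact Express version will vary):

```json
{
  "name": "docker-color-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.19.2"
  }
}
```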
Step 4: Create the Backend Server
Create a new file in your root project directory called `server.js`:
```js
const express = require('express');
const path = require('path');

const app = express();
const PORT = 3000;

// Serve static assets from the public folder
app.use(express.static('public'));

// Respond to requests at / with index.html
app.get('/', (req, res) => {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

// Start the server on port 3000
app.listen(PORT, () => {
  console.log(`Server running at http://localhost:${PORT}`);
});
```
This file does three things:
- Serves static assets from a `public` folder
- Responds to requests at `/` with your `index.html`
- Starts a server on port 3000

When you run this container later, you’ll tell Docker to execute this file using a `CMD` instruction like:
```dockerfile
CMD ["node", "server.js"]
```
Step 5: Add the Frontend
Create a `public` folder to hold your front-end files:
```bash
mkdir public
```
Inside `public`, create a file named `index.html` and paste the following into it:
```html
<!DOCTYPE html>
<html>
<head>
  <title>Docker Color App</title>
</head>
<body>
  <h1>Docker Color App</h1>
  <button onclick="changeColor()">Change Color</button>
  <script>
    // Pick a random background color on each click
    const colors = ['#f94144', '#f8961e', '#90be6d', '#577590', '#9b5de5'];
    function changeColor() {
      document.body.style.backgroundColor =
        colors[Math.floor(Math.random() * colors.length)];
    }
  </script>
</body>
</html>
```
Step 6: Check Your Folder Structure
You should now have:
```
docker-color-app/
├── node_modules/
├── public/
│   └── index.html
├── package.json
├── package-lock.json
└── server.js
```
Step 7: Test It Locally
Let’s confirm the app works before containerizing it.
From the root project directory, run:
```bash
node server.js
```
If it works, you’ll see something like this in your terminal:
```
Server running at http://localhost:3000
```
Open your browser and go to http://localhost:3000
✅ If the page loads and you see a button that changes colors — great! You’re ready to containerize.
When you’re done, press `Ctrl+C` to stop the server.
Why This Matters
This app might be simple, but it mirrors the real-world structure of many production apps:
- A backend server with external dependencies
- A frontend served over HTTP
- A runtime requirement (Node.js 22)
Without Docker, teammates and deployment servers would need to install Node, manage dependencies, and know which files to run.
With Docker, you’ll wrap everything into a clean, portable image that anyone can run — anywhere.
Let’s do that next. You’ve written the app. Let’s build the image.
From Code to Container
Now that you’ve created a real application, it’s time to package it as a Docker image and run it as a container—just like we do in real-world projects.
This section is where theory meets action. You’ll write a Dockerfile, build an image from your code, and launch a container running your app—all from the command line. By the end, your application will be running in a fully isolated, portable Docker environment.
Step 1: Create the Dockerfile
In the root of your project folder (`docker-color-app`), create a new file called `Dockerfile` (with no file extension):
```bash
touch Dockerfile
```
Open it in your favourite editor and add the following, step by step.
Step 2: Write the Dockerfile
Choose a Base Image
```dockerfile
FROM node:22
```
This line tells Docker: start with Node.js version 22. This base image is a lightweight Linux OS with Node and npm already installed—so you don’t have to install them manually.
🔍 Why pin the version? Using `node:latest` sounds convenient, but it can change over time. Pinning to `node:22` ensures consistency across builds—even months from now.
Set the Working Directory
```dockerfile
WORKDIR /usr/src/app
```
This tells Docker, “from now on, all commands happen inside `/usr/src/app` in the container.” It’s like doing a `cd` before every command. Without it, you’d need to use full paths everywhere—not fun.
Install Dependencies First
```dockerfile
COPY package*.json ./
RUN npm install
```
Copy just your `package.json` and `package-lock.json` first. Then run `npm install`. Why? Because Docker caches layers. So if your code changes but your dependencies don’t, Docker will reuse this step to save time.
⚠️ You’ll often see this pattern in production Dockerfiles—it’s one of the simplest but most effective optimisations.
Copy Your Application Code
```dockerfile
COPY . .
```
This copies everything else into the image—including `server.js` and your `public/` folder.
✅ Don’t forget to create a `.dockerignore` file so you don’t copy unnecessary files like `node_modules`, `.git`, or debug logs.
Create it now:
```bash
touch .dockerignore
```
Add the following:
```
node_modules
.git
npm-debug.log
.env
```
Document the App Port
```dockerfile
EXPOSE 3000
```
This tells Docker, “my app listens on port 3000.” It’s mostly for documentation and tooling. To actually open the port to your host machine, you’ll use `-p` during `docker run`.
Define the Start Command
```dockerfile
CMD ["node", "server.js"]
```
This tells Docker what to run when someone starts a container from your image. Here, it will launch your Node.js app.
💡 You can override this at runtime if needed—for example, to run tests or debugging scripts.
Your Complete Dockerfile
```dockerfile
FROM node:22
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```
Step 3: Build the Docker Image
With your Dockerfile written, you’re ready to build your image:
```bash
docker build -t color-app:1.0 .
```
Breakdown:
- `docker build` = tell Docker to create an image
- `-t color-app:1.0` = give your image a name and version
- `.` = use the current directory as the build context (where your Dockerfile and code live)
You’ll see step-by-step output as Docker builds each layer.
Step 4: Run the Image as a Container
Let’s run your image:
```bash
docker run -p 3000:3000 color-app:1.0
```
What it does:
- Starts your container
- Maps port 3000 in the container to 3000 on your machine
- Runs your app using the CMD defined in the Dockerfile
If everything worked, your terminal should say:
```
Server running at http://localhost:3000
```
Open your browser at http://localhost:3000 — your app is now running from a Docker container!
Step 5: Run in the Background (Detached Mode)
Want to keep the app running in the background?
```bash
docker run -d -p 3000:3000 --name color-app color-app:1.0
```
- `-d` = detached mode (run in background)
- `--name` = give the container a friendly name
To confirm it’s running:
```bash
docker ps
```
You’ll see your container listed.
Step 6: Stop and Remove the Container
When you’re done:
```bash
docker stop color-app
docker rm color-app
```
Clean, tidy, controlled.
🎉 What You Just Did
Let’s reflect:
✅ You wrote a real web app
✅ You wrote a real Dockerfile
✅ You built a custom image
✅ You ran your app in an isolated Docker container
✅ You now have a shippable, portable container image
This is a major milestone.
You’ve gone from Docker learner to Docker builder.
From Local to Cloud: Pushing and Deploying Your Image with Azure
You’ve built your app, containerized it with Docker, and tested it locally. Now let’s take the final leap: running your app in the cloud.
This is the exact workflow teams use to deploy real-world applications—moving from code to container to cloud. You’re about to do the same.
Step 1: Push Your Image to Azure Container Registry
To share or deploy your image, you need to store it in a container registry. We’ll use Azure Container Registry (ACR), a secure and private registry service.
1.1 Set Up Azure CLI
Make sure you have the Azure CLI installed. Then log in to Azure:
```bash
az login
```
This will open a browser window where you can sign in to your Azure account.
1.2 Create a Resource Group and Registry
```bash
az group create --name color-app-resources --location eastus

az acr create --resource-group color-app-resources \
  --name colorappregistry123 --sku Basic
```
This creates:
- A resource group called `color-app-resources` in the East US region
- A Basic tier Azure Container Registry named `colorappregistry123`
Important: Registry names must be globally unique. Replace `colorappregistry123` with a unique name of your choice. Use only lowercase letters and numbers (ACR registry names don’t allow hyphens).
1.3 Enable Admin Access and Get Credentials
For simplicity, we’ll enable admin access to our registry:
```bash
az acr update --name colorappregistry123 --admin-enabled true
```
This allows us to use username/password authentication. In production environments, you might use other authentication methods.
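For instance, if you’re already signed in with the Azure CLI, you can skip the admin credentials entirely and let `az acr login` authenticate your local Docker client through your Azure identity:

```bash
# Uses your Azure CLI session to log Docker in to the registry
az acr login --name colorappregistry123
```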
Next, retrieve the registry credentials:
```bash
az acr credential show --name colorappregistry123
```
This will output something like:
```json
{
  "passwords": [
    {
      "name": "password",
      "value": "xxxxxxxxxxxxxxxxxxxx"
    },
    {
      "name": "password2",
      "value": "xxxxxxxxxxxxxxxxxxxx"
    }
  ],
  "username": "colorappregistry123"
}
```
Take note of the username and one of the passwords—you’ll need them to log in to the registry.
Now that we have our registry set up, let’s push our Docker image to it.
1.4 Log in to the Registry
```bash
docker login colorappregistry123.azurecr.io --username colorappregistry123
```
When prompted, enter one of the passwords from the previous step.
You should see `Login Succeeded` if everything works correctly.
1.5 Tag and Push Your Image
Next, we need to tag our image with the registry’s address:
```bash
docker tag color-app:1.0 colorappregistry123.azurecr.io/color-app:1.0
```
This creates a new tag for your image that includes the registry address. The format is `registryname.azurecr.io/imagename:tag`.
Now we can push the image to ACR:
```bash
docker push colorappregistry123.azurecr.io/color-app:1.0
```
You’ll see output showing the progress as each layer of your image is uploaded:
```
The push refers to repository [colorappregistry123.azurecr.io/color-app]
a1b2c3d4e5f6: Pushed
...
1.0: digest: sha256:... size: 1234
```
Let’s make sure our image was pushed successfully:
```bash
az acr repository list --name colorappregistry123 --output table
```
You should see `color-app` in the list of repositories.
To get more details about your image:
```bash
az acr repository show-tags --name colorappregistry123 \
  --repository color-app --output table
```
This will show you all the tags for your image, which should include `1.0`.
Step 2: Deploy to Azure Container Instances (ACI)
Let’s run your containerized app in the cloud using Azure Container Instances.
Azure Container Instances (ACI) is a serverless container platform that lets you run containers without managing servers. You don’t need to provision virtual machines, configure networking, or worry about scaling—you simply deploy your containers and Azure handles the rest.
It’s perfect for:
- Simple web applications like our color changer
- Background processing jobs
- Scheduled tasks
- Quick testing in the cloud
2.1 Create the Container Instance
Run the following command to create a container instance from your ACR image:
```bash
az container create \
  --resource-group color-app-resources \
  --name color-app-container \
  --image colorappregistry123.azurecr.io/color-app:1.0 \
  --cpu 1 \
  --memory 1.5 \
  --registry-login-server colorappregistry123.azurecr.io \
  --registry-username colorappregistry123 \
  --registry-password <your-registry-password> \
  --dns-name-label color-app-<unique-suffix> \
  --ports 3000
```
Replace:
- `colorappregistry123` with your actual ACR registry name
- `<your-registry-password>` with the password you got earlier
- `<unique-suffix>` with something unique like your initials plus a number
This command:
- Creates a container instance named `color-app-container`
- Pulls your image from Azure Container Registry
- Allocates 1 CPU core and 1.5GB of memory
- Sets up authentication to your private registry
- Creates a public DNS name for your container
- Opens port 3000 for web traffic
The deployment will take a minute or two. You’ll see a lot of JSON output when it completes.
2.2 Get Your App’s Public URL
```bash
az container show \
  --resource-group color-app-resources \
  --name color-app-container \
  --query "{FQDN:ipAddress.fqdn,IP:ipAddress.ip,Status:instanceView.state}" \
  --output table
```
You should see output similar to:
```
FQDN                                                IP            Status
--------------------------------------------------  ------------  --------
color-app-<unique-suffix>.eastus.azurecontainer.io  20.x.x.x      Running
```
This confirms that your container is running and shows its fully qualified domain name (FQDN) and IP address.
Visit the FQDN in your browser:
```
http://color-app-<unique-suffix>.eastus.azurecontainer.io:3000
```
2.3 Success
Now you can access your application using the FQDN from the previous step. Open your browser and navigate to:
```
http://color-app-<unique-suffix>.eastus.azurecontainer.io:3000
```
Replace the domain with your actual FQDN.
You should see your color-changing application running in the cloud:
Try clicking the button to change colors. Your application is now running in a global cloud platform, accessible from anywhere in the world!
Step 3: Stopping and Cleaning Up Resources
When you’re done experimenting, it’s important to clean up your resources to avoid incurring costs:
Stop the Container Instance
First, stop the container:
```bash
az container stop --resource-group color-app-resources --name color-app-container
```
Delete Azure Resources
To completely remove all resources we created:
```bash
az container delete --resource-group color-app-resources \
  --name color-app-container --yes

az group delete --name color-app-resources --yes --no-wait
```
These commands ensure you won’t be charged for any resources after you’re done experimenting.
What You Just Accomplished
- Built a Docker image from your app
- Stored it in a private cloud registry
- Deployed and ran it in the cloud without managing servers
By mastering these concepts, you’ve gained skills that are in high demand across the software industry. Containerization is no longer just a nice-to-have—it’s a fundamental part of modern application development.
Key Takeaways
By the end of this article, you’ve done more than just follow steps — you’ve built a complete, real-world Docker workflow:
- Created a Node.js web application with a frontend and backend
- Wrote a Dockerfile to package your app into a portable image
- Built and ran your application locally in a container
- Pushed your image to Azure Container Registry
- Deployed your app to the cloud using Azure Container Instances
- Cleaned up your cloud resources and saw how this workflow scales
You’ve experienced what it means to go from code to container to cloud — and now you understand how containers help you build, share, and run applications consistently anywhere.
Conclusion: You’re Not Just Using Docker — You’re Building With It
You started this article with code on your laptop. You ended it with a live app running in the cloud, fully containerized, fully portable, and repeatable.
That’s a huge milestone. It’s not just a tutorial win — it’s a career skill.
This is how modern software ships: with images, registries, and cloud-native platforms. And you now understand that workflow from end to end.
👉 What’s Next: Make Your Containers Talk
So far, your container has been working solo. But most real-world systems aren’t built from just one container — they’re made of many, working together.
In the next article in the series, we’ll explore Docker networking and show how containers discover, talk to, and secure communication between each other and the outside world.
Let’s take your container skills to the next level — and go from one service to many.