Docker for Beginners: Running Multi-Container Apps with Docker Compose

Introduction: Bringing the Docker Pieces Together

So far, we have learned Docker one piece at a time.

We learned that images package application environments, containers run from images, Dockerfiles describe how to build images, registries store and share images, Docker networks let containers communicate, and Docker volumes keep data beyond a container's lifecycle.

Those pieces are useful on their own.

But real applications usually need several of them at the same time.

An API might need a database. The API needs environment variables so it knows how to connect. The database needs persistent storage so its data survives. The API needs a published port so your browser can reach it. Both containers need to be on the same network so they can talk to each other.

You could create all of that manually with separate Docker commands:

Create a network.
Create a volume.
Run the database container.
Build the API image.
Run the API container.
Publish the API port.
Pass the database connection settings.
Make sure both containers can find each other.

That works, but it becomes awkward quickly.

Docker Compose gives us a cleaner way.

With Compose, we describe the application stack in one file. That file says which services exist, how they are built or which images they use, which ports are published, which environment variables are set, and which volumes should be attached.

Then one command starts the whole stack.

In this article, we will build a small multi-container application:

Browser
→ Node.js API
→ PostgreSQL database
→ Docker volume

The API will be available from your machine on localhost:3000. Inside the Docker network, the API will connect to PostgreSQL by service name. PostgreSQL will store its data in a named Docker volume.

This is where the earlier Docker pieces come together:

Dockerfile → builds the API image
Compose service → runs the API and database containers
Docker network → lets the API reach the database by name
Published port → lets your browser reach the API
Docker volume → keeps the database data

By the end, Docker Compose should feel like a practical tool for one clear job:

Describe a multi-container application once,
then run the whole thing locally with one command.

What Docker Compose Solves

Before Compose, we would need to create and connect all the pieces ourselves.

For this small application, that would mean thinking through several separate Docker tasks:

Create a Docker network.
Create a Docker volume for the database.
Run a PostgreSQL container with the right environment variables.
Build the API image from its Dockerfile.
Run the API container on the same network as the database.
Publish the API port so your browser can reach it.
Pass the database connection settings into the API container.

Each piece is something you already understand from the earlier articles.

The problem is not that any one command is difficult.

The problem is that the application is now a stack. The containers, network, volume, ports, and environment variables all belong together. If you run them manually, you have to remember how each piece was created and how they connect.
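
Roughly, the manual version of this stack looks like the commands below. The names here (demo-net, compose-api, the environment variables) are only illustrative, not a required setup:

docker network create demo-net
docker volume create postgres-data

docker run -d --name db --network demo-net \
  -e POSTGRES_DB=demo_db \
  -e POSTGRES_USER=demo_user \
  -e POSTGRES_PASSWORD=demo_password \
  -v postgres-data:/var/lib/postgresql/data \
  postgres:16

docker build -t compose-api .

docker run -d --name api --network demo-net \
  -p 3000:3000 \
  -e DB_HOST=db \
  -e DB_PORT=5432 \
  -e DB_NAME=demo_db \
  -e DB_USER=demo_user \
  -e DB_PASSWORD=demo_password \
  compose-api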

Docker Compose solves that by moving the stack definition into one file.

Instead of describing the application with a chain of manual commands, you describe it in a compose.yaml file.

That file becomes the local blueprint for the application:

api service
→ built from our Dockerfile
→ published on localhost:3000
→ configured with database environment variables

db service
→ runs from the official PostgreSQL image
→ stores data in a named Docker volume
→ reachable by the API using the service name db

Then you can start the whole stack with:

docker compose up

And stop it with:

docker compose down

That is the main value of Compose.

It does not replace images, containers, networks, ports, environment variables, or volumes.

It brings them together.

Compose lets you describe a multi-container application once, then run that same local stack repeatedly without rebuilding the setup by hand each time.

What We Are Building

For this article, we will keep the application small but realistic.

We will build a two-container stack:

Browser
→ API container
→ PostgreSQL container
→ Docker volume

The API will be a small Node.js Express application.

It will have two routes:

GET /
→ returns a simple message so we know the API is running

GET /users
→ connects to PostgreSQL and returns users as JSON

The database will run in a separate PostgreSQL container.

PostgreSQL will store its data in a named Docker volume, so the data can survive even if the database container is removed and recreated.

The API will not connect to the database using an IP address. It will connect using the Compose service name:

db

That is the same networking idea we learned earlier. Containers on the same Docker network can find each other by name. Compose creates that network for the application automatically.

The stack will use these pieces:

api service
→ built from our local Dockerfile
→ published to localhost:3000
→ connects to PostgreSQL using DB_HOST=db

db service
→ runs from the official postgres image
→ receives database settings from environment variables
→ stores data in a named volume

The project will look like this:

compose-api/
├── app.js
├── package.json
├── Dockerfile
└── compose.yaml

This is enough to show the main Compose idea without turning the article into a large application tutorial.

The important part is not the API code itself.

The important part is how Compose describes the whole local stack in one file.

Create the Sample API

First, create a new folder for the project:

mkdir compose-api
cd compose-api

Now create a package.json file:

{
  "name": "compose-api",
  "version": "1.0.0",
  "description": "A small Node.js API for learning Docker Compose",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.18.3",
    "pg": "^8.11.5"
  }
}

This application uses two dependencies:

express
→ creates the HTTP API

pg
→ lets Node.js connect to PostgreSQL

Now create app.js:

const express = require("express");
const { Pool } = require("pg");

const app = express();
const port = process.env.PORT || 3000;

const pool = new Pool({
  host: process.env.DB_HOST || "localhost",
  port: process.env.DB_PORT || 5432,
  database: process.env.DB_NAME || "demo_db",
  user: process.env.DB_USER || "demo_user",
  password: process.env.DB_PASSWORD || "demo_password",
});

async function initialiseDatabase() {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS users (
      id SERIAL PRIMARY KEY,
      name TEXT NOT NULL
    );
  `);

  await pool.query(`
    INSERT INTO users (name)
    SELECT 'Alice'
    WHERE NOT EXISTS (
      SELECT 1 FROM users WHERE name = 'Alice'
    );
  `);
}

app.get("/", (req, res) => {
  res.send("Docker Compose API is running");
});

app.get("/users", async (req, res) => {
  try {
    const result = await pool.query("SELECT id, name FROM users ORDER BY id;");
    res.json(result.rows);
  } catch (error) {
    console.error("Database query failed:", error);
    res.status(500).json({ error: "Database query failed" });
  }
});

initialiseDatabase()
  .then(() => {
    app.listen(port, () => {
      console.log(`API listening on port ${port}`);
    });
  })
  .catch((error) => {
    console.error("Failed to initialise database:", error);
    process.exit(1);
  });

This is a small Express API.

The / route confirms the API is running.

The /users route connects to PostgreSQL and returns the users from the database.

The app also creates the users table automatically and inserts one user, Alice, if she does not already exist. That keeps the walkthrough simple because we do not need a separate database migration step.

Notice the database settings:

const pool = new Pool({
  host: process.env.DB_HOST || "localhost",
  port: process.env.DB_PORT || 5432,
  database: process.env.DB_NAME || "demo_db",
  user: process.env.DB_USER || "demo_user",
  password: process.env.DB_PASSWORD || "demo_password",
});

The app reads its database connection details from environment variables.

That is important for Compose.

Later, the Compose file will set:

DB_HOST=db

The value db will be the name of the PostgreSQL service inside the Compose application. That means the API will connect to the database by service name, not by IP address.
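
The project also needs a Dockerfile so Compose can build the API image. If you do not already have one from the earlier Dockerfile article, a minimal sketch like this is enough for the walkthrough (node:20-alpine and the exact steps are an assumption, not a requirement):

FROM node:20-alpine

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY app.js ./

EXPOSE 3000

CMD ["npm", "start"]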

Write the Compose File

Now we have three pieces ready:

app.js
package.json
Dockerfile

The API code exists, and the Dockerfile explains how to package that API into an image.

Now we need the Compose file.

Create a file called compose.yaml in the same project folder:

compose-api/
├── app.js
├── package.json
├── Dockerfile
└── compose.yaml

Before we write it, there is one important Compose idea to understand.

Compose can work with services in two common ways.

It can build an image from a local Dockerfile:

api:
  build: .

Or it can use an existing image from a registry:

db:
  image: postgres:16

We will use both patterns in this walkthrough.

The api service uses build: . because it is our own application code. Compose will look in the current folder, find the Dockerfile, build the API image, and then run a container from it.

The db service uses image: postgres:16 because we do not need to build our own PostgreSQL image. The official PostgreSQL image already gives us what we need.

Now add this to compose.yaml:

services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      DB_HOST: db
      DB_PORT: 5432
      DB_NAME: demo_db
      DB_USER: demo_user
      DB_PASSWORD: demo_password
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      POSTGRES_DB: demo_db
      POSTGRES_USER: demo_user
      POSTGRES_PASSWORD: demo_password
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:

This file describes the whole local stack.

Let’s walk through it.

The api service is our Node.js API:

api:
  build: .

This tells Compose to build the API image from the Dockerfile in the current folder.

The API is published to your machine on port 3000:

ports:
  - "3000:3000"

The first 3000 is the port on your machine.

The second 3000 is the port inside the API container.

So the mapping means:

localhost:3000
→ port 3000 inside the api container

The API also receives database connection settings as environment variables:

environment:
  DB_HOST: db
  DB_PORT: 5432
  DB_NAME: demo_db
  DB_USER: demo_user
  DB_PASSWORD: demo_password

The important one is:

DB_HOST: db

db is the name of the PostgreSQL service in the same Compose file. Compose creates a network for the application, and services can reach each other by service name on that network.

So the API does not need the database container’s IP address.

It connects to:

db:5432

The depends_on line tells Compose to start the database service before the API service:

depends_on:
  - db

This controls startup order. It does not guarantee the database is fully ready to accept connections, but it is enough for this beginner walkthrough because our API exits if it cannot initialise the database, and Compose can be rerun easily. In production-grade setups, you would usually add health checks or retry logic.
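
If you do want Compose to wait until PostgreSQL is actually accepting connections, one common pattern is a healthcheck on the db service combined with a condition on depends_on. This is a sketch of that pattern, not something we use in this walkthrough; pg_isready ships inside the official postgres image:

services:
  api:
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U demo_user -d demo_db"]
      interval: 5s
      timeout: 5s
      retries: 10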

The db service uses the official PostgreSQL image:

db:
  image: postgres:16

It receives its database name, username, and password through PostgreSQL’s expected environment variables:

environment:
  POSTGRES_DB: demo_db
  POSTGRES_USER: demo_user
  POSTGRES_PASSWORD: demo_password

Finally, the database stores its data in a named volume:

volumes:
  - postgres-data:/var/lib/postgresql/data

The left side is the Docker volume:

postgres-data

The right side is the path inside the PostgreSQL container:

/var/lib/postgresql/data

That is where this PostgreSQL image stores its database files.

At the bottom of the file, we declare the named volume:

volumes:
  postgres-data:

So Compose now knows about the whole stack:

api service
→ build from local Dockerfile
→ publish localhost:3000
→ connect to db:5432

db service
→ use postgres:16 image
→ configure database with environment variables
→ store data in postgres-data volume

Next, we can start the whole stack with one command.

Run the Stack with Docker Compose

Now that compose.yaml describes the stack, Compose can create and run the pieces for us.

Run this command from the project folder:

docker compose up --build -d

The up command starts the application stack.

The --build flag tells Compose to build the API image before starting the containers. We need that here because the api service uses:

build: .

That means Compose should build an image from the Dockerfile in the current folder.

The -d flag runs the stack in detached mode. That means the containers keep running in the background and your terminal is returned to you.

When you run docker compose up --build -d, Compose reads the file and turns it into real Docker resources.

It will:

Build the api image from the Dockerfile.
Pull the postgres:16 image if needed.
Create a default network for the Compose project.
Create the postgres-data volume if it does not already exist.
Start the db container with the POSTGRES_* environment variables.
Start the api container with the DB_* environment variables.
Publish localhost:3000 to port 3000 inside the api container.

That is the important point.

You are not creating the network, volume, database container, API container, environment variables, and port mapping one command at a time. Compose reads the desired stack from compose.yaml and creates those pieces together.

Now check the running services:

docker compose ps

You should see both services:

NAME                IMAGE             COMMAND                  SERVICE   STATUS         PORTS
compose-api-api-1   compose-api-api   "docker-entrypoint.s…"   api       Up             0.0.0.0:3000->3000/tcp
compose-api-db-1    postgres:16       "docker-entrypoint.s…"   db        Up             5432/tcp

The exact container names may be slightly different, but you should see one service for api and one service for db.

One important note: depends_on starts the db container before the api container, but it does not guarantee that PostgreSQL is fully ready to accept connections at the exact moment the API starts.

On a fresh run, PostgreSQL may take a few seconds to initialise. If the API starts too quickly, it may exit with a connection error.

You can spot that with:

docker compose ps

If the API is not running, check its logs:

docker compose logs api

You may see an error like:

Failed to initialise database: Error: connect ECONNREFUSED db:5432

That means the API tried to connect before PostgreSQL was ready.

For this beginner walkthrough, wait a few seconds and start the stack again:

docker compose up -d

Then check again:

docker compose ps

Once both services show as running, the stack is ready.
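
If you would rather have the API wait for the database instead of restarting the stack, one option is a small retry loop in app.js. This is a sketch, not part of the article's code; the attempt count and delay are arbitrary choices:

// Retry the existing initialiseDatabase() a few times before giving up.
async function initialiseDatabaseWithRetry(attempts = 10, delayMs = 2000) {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      await initialiseDatabase();
      return;
    } catch (error) {
      console.log(`Database not ready (attempt ${attempt}/${attempts}), retrying...`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error("Database did not become ready in time");
}

You would then call initialiseDatabaseWithRetry() instead of initialiseDatabase() before starting the server.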

You can also look at the logs for each service:

docker compose logs api
docker compose logs db

At this point, the whole stack is running:

Browser
→ localhost:3000
→ api service
→ db service
→ postgres-data volume

Next, we will test the API and confirm that it can reach the database.

Test the API and Database Connection

Now that both services are running, test the API from your browser or terminal.
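
From the terminal, the same requests can be made with curl (assuming curl is installed):

curl http://localhost:3000/
curl http://localhost:3000/users

In a browser, simply open the same URLs.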

Start with the root route:

http://localhost:3000/

You should see:

Docker Compose API is running

That proves the browser can reach the API through the published port:

Browser
→ localhost:3000
→ api container

Now test the database-backed route:

http://localhost:3000/users

You should see a JSON response like this:

[
  {
    "id": 1,
    "name": "Alice"
  }
]

That proves the API can reach PostgreSQL inside the Compose network.

The request path looks like this:

Browser
→ localhost:3000
→ api service
→ db service
→ postgres-data volume

This is the full Compose stack working.

Your browser does not connect directly to PostgreSQL. PostgreSQL is not published to your machine with a host port. It is only reachable inside the Compose network.

The API is the public entry point for this local stack.

That is an important design habit:

Expose what needs to be reached from outside.
Keep internal services private unless they need to be exposed.

In this example:

api
→ published to localhost:3000

db
→ private inside the Compose network

The API reaches the database using the service name:

db:5432

That service name works because Compose created a network for the application and attached both services to it.
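
If you do want to look inside the database, you still do not need to publish a port. You can open a psql session in the running db container, using the credentials from compose.yaml:

docker compose exec db psql -U demo_user -d demo_db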

So this test proves three things:

Port publishing works:
browser → localhost:3000 → api

Service-name networking works:
api → db:5432

Volume-backed database storage is in place:
db → postgres-data

This is the main result of the article.

Compose has brought together the pieces we learned separately: containers, networking, ports, environment variables, and volumes.

What Compose Created for You

At this point, it is worth slowing down and looking at what Compose created.

You wrote one file:

compose.yaml

Then you ran one command:

docker compose up --build -d

From that, Compose created several Docker resources.

It built the API image from your Dockerfile:

compose-api-api

It pulled the official PostgreSQL image if it was not already available:

postgres:16

It created two containers:

api container
db container

It created a default network for the project so the services could communicate by name:

compose-api_default

That is why the API can connect to:

db:5432

It created a named volume for the database:

compose-api_postgres-data

That is what backs this part of the Compose file:

volumes:
  - postgres-data:/var/lib/postgresql/data

It also published the API port to your machine:

localhost:3000
→ api container port 3000
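
You can see these resources with the same Docker commands from the earlier articles. The exact names depend on your project folder, so treat this as an illustration:

docker compose ps
docker image ls
docker network ls
docker volume ls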

So Compose did not replace the Docker concepts we already learned.

It used them.

The stack now looks like this:

Docker Compose project
├── network: compose-api_default
├── volume: compose-api_postgres-data
├── service: api
│   ├── built from local Dockerfile
│   ├── published on localhost:3000
│   └── connects to db:5432
└── service: db
    ├── uses postgres:16 image
    └── stores data in postgres-data volume

That is the main “aha” moment.

Compose is not magic. It is a way to describe Docker resources together:

services
→ containers

ports
→ host-to-container access

environment
→ container configuration

volumes
→ persistent storage

networks
→ service-to-service communication

Instead of creating each piece manually, Compose reads the file and builds the local application stack for you.

Stop, Restart, and Clean Up the Stack

Now that the stack is running, let’s look at how to manage it.

To stop the running containers without deleting them, run:

docker compose stop

This stops the api and db containers, but keeps the containers, network, and volume.

You can start them again with:

docker compose start

That is useful when you want to pause the stack and resume it later.

If you want to stop and remove the stack containers and network, run:

docker compose down

This removes the containers and the default Compose network.

But it does not remove the named volume by default.

That matters because our PostgreSQL data is stored in:

compose-api_postgres-data

So after:

docker compose down

the containers are gone, but the database volume can still exist. If you bring the stack back up, PostgreSQL can reuse the same volume.
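
You can check that the volume is still there with:

docker volume ls

You should still see compose-api_postgres-data in the list (the exact prefix depends on your project folder name).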

Start it again:

docker compose up -d

Then test:

http://localhost:3000/users

You should still see the user data because the volume survived.

If you want to remove the stack and delete the named volume, run:

docker compose down -v

Be careful with -v.

That flag removes the named volumes created by the Compose project. In this walkthrough, that means deleting the PostgreSQL data volume.

So the cleanup model is:

docker compose stop
→ stop containers, keep everything

docker compose start
→ start the stopped containers again

docker compose down
→ remove containers and network, keep named volumes

docker compose down -v
→ remove containers, network, and named volumes

That connects directly back to the storage lesson from the previous article.

Containers are replaceable.

Volumes are where important data can survive.

The Docker Compose Mental Model

Docker Compose is more than a shorter way to run containers.

It is a way to describe a multi-container application as a repeatable local stack.

Without Compose, the setup is spread across several manual commands. You need to remember the network, the volume, the database settings, the API settings, the port mappings, and the order in which the pieces fit together.

That works for a small demo, but it becomes fragile as soon as the application grows.

A real application might have:

An API
A database
A cache
A background worker
A message queue
Several environment variables
One or more volumes
Published ports
Private internal services

If all of that exists only as a list of manual commands, it is easy to miss a step.

Docker Compose moves that knowledge into one file.

The compose.yaml file becomes the local definition of the application stack.

It says:

These are the services.
These are the images to use.
This service should be built from local code.
These are the ports to publish.
These are the environment variables each service needs.
These are the volumes that should keep data.
These services can talk to each other by name.
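
At that scale, the file is longer, but it follows the same shape. A sketch of what such a compose.yaml might look like (the extra services, images, and names here are illustrative, not part of this article's project):

services:
  api:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
      - cache
      - queue

  worker:
    build: .
    command: node worker.js
    depends_on:
      - db
      - queue

  db:
    image: postgres:16
    volumes:
      - postgres-data:/var/lib/postgresql/data

  cache:
    image: redis:7

  queue:
    image: rabbitmq:3

volumes:
  postgres-data: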

That is the real value.

Compose gives you a repeatable way to bring the stack up, stop it, and bring it back again.

docker compose up --build -d

means:

Read the stack definition.
Build what needs to be built.
Pull what needs to be pulled.
Create the network.
Create or reuse the volumes.
Start the services with the right configuration.

And:

docker compose down

means:

Stop and remove the stack containers and network,
while keeping named volumes unless you explicitly remove them.

So the mental model is:

Dockerfile
→ describes how to build one image

compose.yaml
→ describes how several containers run together

That distinction matters.

A Dockerfile answers:

How do I package this application?

A Compose file answers:

How do I run this application stack locally?

In this article, our stack was small:

api
db
postgres-data volume
localhost:3000

But the same idea scales to larger local setups. The Compose file may become longer, but the purpose stays the same: keep the application’s local runtime shape in one repeatable place.

Compose is not replacing the Docker concepts we learned earlier.

It is bringing them together:

services
→ containers that belong to the application

build
→ create an image from local code

image
→ use an existing image from a registry

ports
→ expose selected services to your machine

environment
→ inject configuration into containers

volumes
→ keep data outside the container lifecycle

networks
→ let services communicate by name

That is the Compose mental model.

It turns a set of separate Docker pieces into one local application stack you can run again and again.

Where This Leads

You now have a working Docker Compose stack.

That is a major step from running individual containers.

In the earlier articles, we learned the Docker pieces one by one:

Images
Containers
Dockerfiles
Networks
Ports
Environment variables
Volumes

In this article, Compose brought those pieces together.

You described a local application stack in one file, then used one command to build the API image, start the API container, start the PostgreSQL container, create the network, create the volume, publish the API port, and inject the configuration each service needed.

That is the local development value of Docker Compose.

It gives you a repeatable way to run a multi-container application on your machine.

But local development is not the end of the story.

Once the application stack works locally, the next question is:

How does this idea translate to the cloud?

A Compose file is excellent for describing a local stack, but cloud platforms usually have their own way of describing and running containerized services.

The same application idea still carries forward:

API service
→ database connection
→ environment variables
→ ingress/public access
→ persistent data
→ container image

But in the cloud, those pieces are usually represented by managed services instead of local Docker resources.

For example, the API image might be pushed to Azure Container Registry. The API might run in Azure Container Apps. The database might move from a local PostgreSQL container to a managed database service. Environment variables become application configuration. Local port publishing becomes cloud ingress.

So the model evolves from:

Docker Compose locally
→ api container
→ db container
→ local Docker network
→ local Docker volume

to something more like:

Azure Container Apps
→ API container app
→ managed database
→ environment variables/secrets
→ ingress
→ registry-based image deployment

That is where we will go next.

In the next article, we will take the same multi-container thinking and look at how it maps to Azure Container Apps.

The goal is not to pretend that Compose and Azure are the same thing.

The goal is to understand what changes when a local container stack becomes a cloud-hosted container application.