Docker for Beginners: Running Multi-Container Apps with Azure Container Apps

Introduction: From Local Multi-Container Apps to Azure Container Apps

In the previous article, we used Docker Compose to run a small multi-container application locally.

The stack had more than one moving part:

Browser
→ API service
→ database service
→ Docker volume

Compose gave us a clean way to describe that local setup in one file. It created the containers, connected them on a Docker network, injected environment variables, published the API port, and attached persistent storage for the database.

That is a great local development model.

But local development is not the same as running containers in the cloud.

In the cloud, the same application idea still exists, but the pieces are represented differently. Instead of a local Docker network, you use a managed cloud environment. Instead of a local container port published to localhost, you use cloud ingress. Instead of images built and kept on your machine, you push images to a registry so Azure can pull and run them.

That is where Azure Container Apps comes in.

Azure Container Apps lets you run containerized services in Azure without managing virtual machines or a Kubernetes cluster directly. It is a good next step because it lets us keep the same multi-service thinking, but move it into a managed cloud platform.

In this article, we will build a small cloud-hosted container application:

Browser
→ public API Container App
→ internal Worker Container App

The API will be the public entry point. It will receive a browser request and call the worker service.

The worker will not be public. It will run as an internal container app and return a response to the API.

That gives us a simple but useful cloud pattern:

Public service
→ internal service

This article is not about deploying a Compose file directly to Azure.

Instead, we are translating the container ideas you already know into Azure resources:

Container image
→ Azure Container Registry

Local service
→ Azure Container App

Local port publishing
→ Azure ingress

Local Docker network
→ Container Apps environment

By the end, you should understand the main shift:

Docker Compose describes and runs a local container stack.

Azure Container Apps runs containerized services
inside a managed cloud environment.

The container image remains portable.

The deployment model changes.

What We Are Building

For this walkthrough, we will build a small Azure Container Apps setup with two services:

Browser
→ API Container App
→ Worker Container App

The API Container App will be public.

It will expose a simple HTTP endpoint that you can open from your browser. When you call that endpoint, the API will call the worker service.

The Worker Container App will be internal.

It will not be exposed directly to the internet. Its job is simple: receive a request from the API, process it, and return a response.

So the request flow will look like this:

Browser
→ public API URL
→ API calls internal worker
→ worker returns a JSON response
→ API returns the result to the browser

This gives us a clean cloud version of a common application pattern:

Public entry point
→ internal service

The project will contain two small Node.js services:

aca-multi-service-demo/
├── api/
│   ├── app.js
│   ├── package.json
│   └── Dockerfile
└── worker/
    ├── app.js
    ├── package.json
    └── Dockerfile

The API and worker will each have their own Dockerfile because they are two separate containerized services.

Later, we will build two images:

api image
worker image

Then we will push both images to Azure Container Registry so Azure Container Apps can pull and run them.

The important part is not the application code itself.

The important part is the cloud container model:

One public Container App
One internal Container App
Images pulled from a registry
Services running inside the same Container Apps environment

That is the Azure Container Apps version of a small multi-service container application.

One Service, One Container App

In the previous article, our local application had multiple services in one Compose file.

In Azure Container Apps, we will model the application slightly differently. Each service will become its own Container App:

api service
→ API Container App

worker service
→ Worker Container App

Both Container Apps will run inside the same Container Apps environment.

That environment gives the services a shared managed boundary. The API and worker can communicate with each other inside that environment, but they do not need to be exposed in the same way.

The API will be public because the browser needs to reach it.

The worker will be internal because only the API needs to call it.

So the pattern will be:

Browser
→ public API Container App
→ internal Worker Container App

This gives us a clean cloud version of a multi-service container application.

Each service has its own container image, its own Container App, and its own ingress behavior. The API becomes the public entry point, while the worker stays private inside the Container Apps environment.

Next, we will create the two small services that make up this application.

Create the API and Worker Apps

Now we will create the two small services for the walkthrough.

The project will have this structure:

aca-multi-service-demo/
├── api/
│   ├── app.js
│   ├── package.json
│   └── Dockerfile
└── worker/
    ├── app.js
    ├── package.json
    └── Dockerfile

Create the folders first:

mkdir aca-multi-service-demo
cd aca-multi-service-demo

mkdir api
mkdir worker

Create the API service

The API is the public-facing service.

It will expose two routes:

GET /
→ confirms the API is running

GET /process
→ calls the internal worker service

Create api/package.json:

{
  "name": "aca-demo-api",
  "version": "1.0.0",
  "description": "Public API service for Azure Container Apps demo",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.18.3"
  }
}

Now create api/app.js:

const express = require("express");

const app = express();
const port = process.env.PORT || 3000;
const workerUrl = process.env.WORKER_URL || "http://worker";

app.get("/", (req, res) => {
  res.send("API Container App is running");
});

app.get("/process", async (req, res) => {
  try {
    const response = await fetch(`${workerUrl}/process`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json"
      },
      body: JSON.stringify({
        source: "api",
        message: "Hello from the public API"
      })
    });

    if (!response.ok) {
      throw new Error(`Worker returned status ${response.status}`);
    }

    const result = await response.json();

    res.json({
      message: "API called the internal worker successfully",
      workerResponse: result
    });
  } catch (error) {
    console.error("Failed to call worker:", error);

    res.status(500).json({
      error: "Failed to call worker",
      details: error.message
    });
  }
});

app.listen(port, () => {
  console.log(`API listening on port ${port}`);
  console.log(`Worker URL: ${workerUrl}`);
});

The important line is this:

const workerUrl = process.env.WORKER_URL || "http://worker";

The API does not hard-code an IP address for the worker. It also uses the global fetch API that ships with Node.js 18 and later, so no extra HTTP client dependency is needed.

Later, when we deploy the API to Azure Container Apps, we will pass the worker address as an environment variable.

Create the worker service

The worker is the internal service.

It will expose two routes:

GET /
→ confirms the worker is running

POST /process
→ receives a request from the API and returns a JSON response

Create worker/package.json:

{
  "name": "aca-demo-worker",
  "version": "1.0.0",
  "description": "Internal worker service for Azure Container Apps demo",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.18.3"
  }
}

Now create worker/app.js:

const express = require("express");

const app = express();
const port = process.env.PORT || 4000;

app.use(express.json());

app.get("/", (req, res) => {
  res.send("Worker Container App is running");
});

app.post("/process", async (req, res) => {
  const timestamp = new Date().toISOString();

  res.json({
    message: "Worker processed the request",
    received: req.body || {},
    processedAt: timestamp
  });
});

app.listen(port, () => {
  console.log(`Worker listening on port ${port}`);
});

The worker does not expose anything directly to the browser.

Its job is to receive a request from the API, process it, and return a response.

At this point, we have two small services:

api
→ public-facing service
→ calls the worker

worker
→ internal service
→ responds to the API
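
Before adding Dockerfiles, you can smoke-test the pair locally. This is a minimal sketch, assuming Node.js 18 or later is installed (the API relies on the global fetch API); run each block in its own terminal from the project root:

# Terminal 1: install dependencies and start the worker on port 4000
cd worker
npm install
npm start

# Terminal 2: install dependencies and start the API, pointed at the local worker
cd api
npm install
$env:WORKER_URL = "http://localhost:4000"
npm start

# Terminal 3: call the API, which calls the worker
Invoke-RestMethod "http://localhost:3000/process"

If the JSON response comes back, both services and the call between them work before any container is involved.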

Next, we need Dockerfiles for both services so each one can be built into its own container image.

Write Dockerfiles for Both Services

Now that we have two small Node.js services, each service needs its own Dockerfile.

The reason is simple: the API and worker are separate containerized services.

Each one will become its own image:

api folder
→ API image
→ API Container App

worker folder
→ worker image
→ Worker Container App

Both services use the same basic Node.js Dockerfile pattern.

API Dockerfile

Create api/Dockerfile:

FROM node:22

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3000

CMD ["npm", "start"]

This Dockerfile packages the API service.

It starts from the Node.js 22 image, installs the API dependencies, copies the API code, documents port 3000, and starts the service with:

npm start

The API listens on port 3000, so later the API Container App will use target port 3000 for ingress.
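
If you want to confirm the image builds and runs before touching Azure, you can do a quick local check from the project root. The image name here is just a local label, and the /process route will fail without a worker running, which is expected:

docker build -t aca-demo-api-local ./api
docker run --rm -p 3000:3000 aca-demo-api-local

Opening http://localhost:3000 should show the API running message.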

Worker Dockerfile

Create worker/Dockerfile:

FROM node:22

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 4000

CMD ["npm", "start"]

This Dockerfile packages the worker service.

It uses the same pattern as the API Dockerfile, but the worker listens on port 4000.

That matters later because the Worker Container App will use target port 4000 for its internal ingress.

At this point, the project should look like this:

aca-multi-service-demo/
├── api/
│   ├── app.js
│   ├── package.json
│   └── Dockerfile
└── worker/
    ├── app.js
    ├── package.json
    └── Dockerfile

We now have everything needed to build two container images:

api image
worker image

Next, we will prepare the Azure CLI and subscription so Azure Container Apps and Azure Container Registry can be created from the command line.

Prepare the Azure CLI and Subscription

To deploy this application to Azure Container Apps, the Azure CLI needs the Container Apps commands, and the subscription needs the required resource providers registered.

Azure Container Apps uses the containerapp Azure CLI extension. The subscription also needs providers for Container Apps, Log Analytics integration, and Azure Container Registry.

First, sign in to Azure:

az login

If you have more than one subscription, set the subscription you want to use:

$SUBSCRIPTION_ID = "<your-subscription-id>"

az account set --subscription $SUBSCRIPTION_ID

Confirm the active subscription:

az account show `
  --query "{Name:name, SubscriptionId:id, TenantId:tenantId}" `
  --output table

Next, install or update the Azure Container Apps extension:

az extension add `
  --name containerapp `
  --upgrade

Now register the resource providers we need:

az provider register --namespace Microsoft.App
az provider register --namespace Microsoft.OperationalInsights
az provider register --namespace Microsoft.ContainerRegistry

The core providers for Azure Container Apps are Microsoft.App and Microsoft.OperationalInsights. We also need Microsoft.ContainerRegistry because our API and worker images will be stored in Azure Container Registry.

You can check the registration status of each provider:

az provider show `
  --namespace Microsoft.App `
  --query "{Provider:namespace, State:registrationState}" `
  --output table

az provider show `
  --namespace Microsoft.OperationalInsights `
  --query "{Provider:namespace, State:registrationState}" `
  --output table

az provider show `
  --namespace Microsoft.ContainerRegistry `
  --query "{Provider:namespace, State:registrationState}" `
  --output table

The expected output should show:

Registered

If one of the providers shows Registering, wait a short while and check again.
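
If you would rather wait in the terminal than re-run the command by hand, here is a simple polling sketch for one provider (swap the namespace to check the others):

while ((az provider show --namespace Microsoft.App --query registrationState --output tsv) -ne "Registered") {
  # Check the registration state every ten seconds until it reads Registered
  Start-Sleep -Seconds 10
}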

Once your subscription and the required providers are ready, we will move on to creating the Azure resources needed for this project: the resource group, Azure Container Registry, and Container Apps environment.

Create the Azure Resources

Now that the Azure CLI and subscription are ready, we can create the Azure resources for the walkthrough.

We will create:

Resource group
Azure Container Registry
Container Apps environment

Start by setting a few variables:

$RESOURCE_GROUP = "rg-aca-multi-service-demo"
$LOCATION = "australiaeast"

$ACR_NAME = "<globally-unique-acr-name>"

$CONTAINERAPPS_ENVIRONMENT = "cae-aca-multi-service-demo"

The Azure Container Registry name must be globally unique.

It also needs to follow Azure naming rules: registry names are 5 to 50 alphanumeric characters, with no hyphens or dots. For this walkthrough, keep it lowercase and simple. For example:

$ACR_NAME = "acamultiservicedemo12345"
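
If you want to confirm the name is free before creating the registry, the Azure CLI has a check command (Available should show True):

az acr check-name `
  --name $ACR_NAME `
  --query "{Available:nameAvailable, Reason:reason}" `
  --output table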

Now create the resource group:

az group create `
  --name $RESOURCE_GROUP `
  --location $LOCATION

Create the Azure Container Registry:

az acr create `
  --resource-group $RESOURCE_GROUP `
  --name $ACR_NAME `
  --sku Basic

This registry will store the API and worker images.

Now create the Container Apps environment:

az containerapp env create `
  --name $CONTAINERAPPS_ENVIRONMENT `
  --resource-group $RESOURCE_GROUP `
  --location $LOCATION

The Container Apps environment is the managed boundary where our API and worker Container Apps will run.

Azure Container Apps also needs a logging destination for the environment. In this beginner walkthrough, we are not creating a Log Analytics workspace ourselves. Because no workspace is provided in the command, the Azure CLI creates one automatically for the environment.

You may see a message like this:

No Log Analytics workspace provided.
Generating a Log Analytics workspace with name "workspace-..."

That is expected.

This command creates the shared Container Apps environment, not the individual API or worker apps. We will create those later with az containerapp create, after the container images have been pushed to Azure Container Registry.
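
You can confirm the environment finished provisioning before moving on (the state should read Succeeded):

az containerapp env show `
  --name $CONTAINERAPPS_ENVIRONMENT `
  --resource-group $RESOURCE_GROUP `
  --query properties.provisioningState `
  --output tsv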

At this point, the Azure foundation is ready:

Azure Container Registry
→ stores the API and worker images

Container Apps environment
→ hosts the API and worker Container Apps

Next, we will build and push the two container images to Azure Container Registry.

Build and Push the Images to Azure Container Registry

Now we have two Dockerfiles and one Azure Container Registry.

The next step is to build one image for each service and push both images to the registry.

We will build:

api folder
→ API image
→ Azure Container Registry

worker folder
→ worker image
→ Azure Container Registry

First, get the registry login server:

$ACR_LOGIN_SERVER = az acr show `
  --name $ACR_NAME `
  --query loginServer `
  --output tsv

Check the value:

$ACR_LOGIN_SERVER

It should look something like this:

acamultiservicedemo12345.azurecr.io

Now set the image names and tag:

$API_IMAGE_NAME = "aca-demo-api"
$WORKER_IMAGE_NAME = "aca-demo-worker"
$IMAGE_TAG = "1.0"

Sign in to Azure Container Registry:

az acr login --name $ACR_NAME

Run the build commands from the project root folder, aca-multi-service-demo.

That matters because the build context paths are ./api and ./worker. If you run the command from inside the api folder, ./api means “an api folder inside the api folder,” which does not exist.

Now build and tag the API image:

docker build `
  -t "$ACR_LOGIN_SERVER/$API_IMAGE_NAME`:$IMAGE_TAG" `
  ./api

This builds the API image from the Dockerfile inside the api folder.

The image name includes the ACR login server:

<acr-name>.azurecr.io/aca-demo-api:1.0

That full name tells Docker where the image should be pushed.

Now build and tag the worker image:

docker build `
  -t "$ACR_LOGIN_SERVER/$WORKER_IMAGE_NAME`:$IMAGE_TAG" `
  ./worker

This builds the worker image from the Dockerfile inside the worker folder.
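
As an aside: if Docker is not available on your machine, az acr build can build an image inside Azure and push it to the registry in one step. A sketch using the same variables, replacing the docker build and docker push steps entirely:

az acr build `
  --registry $ACR_NAME `
  --image "$API_IMAGE_NAME`:$IMAGE_TAG" `
  ./api

az acr build `
  --registry $ACR_NAME `
  --image "$WORKER_IMAGE_NAME`:$IMAGE_TAG" `
  ./worker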

Now push both images to Azure Container Registry:

docker push "$ACR_LOGIN_SERVER/$API_IMAGE_NAME`:$IMAGE_TAG"
docker push "$ACR_LOGIN_SERVER/$WORKER_IMAGE_NAME`:$IMAGE_TAG"

When the pushes finish, both images are stored in Azure Container Registry.

You can confirm that by listing the repositories:

az acr repository list `
  --name $ACR_NAME `
  --output table

You should see:

aca-demo-api
aca-demo-worker

You can also check the tags for each image:

az acr repository show-tags `
  --name $ACR_NAME `
  --repository $API_IMAGE_NAME `
  --output table
az acr repository show-tags `
  --name $ACR_NAME `
  --repository $WORKER_IMAGE_NAME `
  --output table

You should see:

1.0

At this point, Azure can pull the images.

That is the important shift:

Local Dockerfiles
→ local Docker build
→ images pushed to Azure Container Registry
→ images ready for Azure Container Apps

Next, we will deploy the internal worker Container App, then deploy the public API Container App and configure it to call the worker.

Deploy the Internal Worker Container App

Now we will deploy the worker.

The worker is an internal service. The API needs to call it, but the browser should not reach it directly.

First, create the full worker image name:

$WORKER_IMAGE = "$ACR_LOGIN_SERVER/$WORKER_IMAGE_NAME`:$IMAGE_TAG"

For this beginner walkthrough, we will use Azure Container Registry admin credentials so Azure Container Apps can pull the private image from ACR.

In production, managed identity is usually the better pattern, but it adds extra identity and RBAC steps. For now, registry credentials keep the deployment path focused.

Enable the ACR admin user:

az acr update `
  --name $ACR_NAME `
  --admin-enabled true

Get the ACR username and password:

$ACR_USERNAME = az acr credential show `
  --name $ACR_NAME `
  --query username `
  --output tsv

$ACR_PASSWORD = az acr credential show `
  --name $ACR_NAME `
  --query "passwords[0].value" `
  --output tsv

Now create the worker Container App:

az containerapp create `
  --name "worker" `
  --resource-group $RESOURCE_GROUP `
  --environment $CONTAINERAPPS_ENVIRONMENT `
  --image $WORKER_IMAGE `
  --target-port 4000 `
  --ingress internal `
  --registry-server $ACR_LOGIN_SERVER `
  --registry-username $ACR_USERNAME `
  --registry-password $ACR_PASSWORD

The image comes from Azure Container Registry:

--image $WORKER_IMAGE

The worker listens on port 4000, so the target port is 4000:

--target-port 4000

The worker uses internal ingress:

--ingress internal

That means it is reachable from inside the Container Apps environment, but it is not exposed directly to the public internet. Azure Container Apps supports both external and internal ingress; internal ingress limits access to the Container Apps environment.

After the command finishes, check the worker app:

az containerapp show `
  --name "worker" `
  --resource-group $RESOURCE_GROUP `
  --query "{Name:name, ExternalIngress:properties.configuration.ingress.external, Fqdn:properties.configuration.ingress.fqdn}" `
  --output table

You should see that external ingress is set to false.

The worker exists, but it is not a public endpoint for browser traffic.

Next, we will create the public API Container App and configure it to call the worker by name inside the same Container Apps environment. Azure Container Apps supports service-to-service calls inside the same environment using the app name, such as http://worker, and that traffic stays inside the environment.

Deploy the Public API Container App

Now we will deploy the API.

The API is the public entry point for this walkthrough. Your browser will call the API, and the API will call the internal worker.

First, create the full API image name:

$API_IMAGE = "$ACR_LOGIN_SERVER/$API_IMAGE_NAME`:$IMAGE_TAG"

The API needs to know where the worker is. Because both Container Apps will run in the same Container Apps environment, the API can call the worker by app name:

http://worker

Store that value in a variable:

$WORKER_URL = "http://worker"

Now create the API Container App:

az containerapp create `
  --name "api" `
  --resource-group $RESOURCE_GROUP `
  --environment $CONTAINERAPPS_ENVIRONMENT `
  --image $API_IMAGE `
  --target-port 3000 `
  --ingress external `
  --registry-server $ACR_LOGIN_SERVER `
  --registry-username $ACR_USERNAME `
  --registry-password $ACR_PASSWORD `
  --env-vars WORKER_URL=$WORKER_URL

There are a few important parts here.

The API image comes from Azure Container Registry:

--image $API_IMAGE

The API listens on port 3000, so the target port is 3000:

--target-port 3000

The API uses external ingress:

--ingress external

That means Azure gives the API a public endpoint.

The API also receives the worker address as an environment variable:

--env-vars WORKER_URL=$WORKER_URL

Inside the API code, this value is read here:

const workerUrl = process.env.WORKER_URL || "http://worker";

So the API does not need to know the worker’s IP address. It calls the worker by service name inside the Container Apps environment.
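
If the worker address ever changes, you can update the environment variable without rebuilding or re-pushing the image. A sketch (note that changing environment variables creates a new revision of the app):

az containerapp update `
  --name "api" `
  --resource-group $RESOURCE_GROUP `
  --set-env-vars WORKER_URL=$WORKER_URL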

After the command finishes, get the API URL:

$API_FQDN = az containerapp show `
  --name "api" `
  --resource-group $RESOURCE_GROUP `
  --query properties.configuration.ingress.fqdn `
  --output tsv

Now print the full URL:

"https://$API_FQDN"

At this point, the cloud application is deployed:

Browser
→ public API Container App
→ internal Worker Container App

Next, we will test the public API endpoint and confirm that it can call the internal worker.

Test the Public API and Internal Worker Call

Now that the API is deployed, let’s test the public endpoint and confirm that it can call the internal worker.

Open the API root URL in your browser:

https://<your-api-fqdn>

You should see:

API Container App is running

That confirms the browser can reach the public API Container App.

Now test the /process endpoint:

https://<your-api-fqdn>/process

This endpoint makes the API call the worker inside the same Container Apps environment.
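
If you prefer the terminal to the browser, the same two checks work from PowerShell, assuming $API_FQDN is still set from the previous step:

Invoke-RestMethod "https://$API_FQDN"
Invoke-RestMethod "https://$API_FQDN/process"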

You should see a JSON response like this:

{
  "message": "API called the internal worker successfully",
  "workerResponse": {
    "message": "Worker processed the request",
    "received": {
      "source": "api",
      "message": "Hello from the public API"
    },
    "processedAt": "2025-08-14T10:00:00.000Z"
  }
}

The timestamp will be different when you run it, because the worker generates it at request time. That is fine.

This confirms two things:

External ingress works:
Browser
→ public API Container App

Internal service-to-service communication works:
API Container App
→ internal Worker Container App

The important part is that the browser never calls the worker directly.

The worker is internal. It is reachable from inside the Container Apps environment, but it is not exposed as a public browser endpoint.
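
If you want to see the worker actually receiving the request, you can stream its console logs while you hit the /process endpoint (--follow keeps the stream open):

az containerapp logs show `
  --name "worker" `
  --resource-group $RESOURCE_GROUP `
  --follow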

So the deployed cloud flow is:

Browser
→ public API Container App
→ internal Worker Container App
→ JSON response

At this point, you have a working multi-service container application running in Azure Container Apps.

Where the Docker Pieces Went

At this point, it is worth connecting the Azure deployment back to the Docker ideas we have been building throughout the series.

We did not stop using Docker concepts when we moved to Azure.

We used the same concepts in a cloud-managed way.

Dockerfile
→ described how to build each service image

Docker image
→ became the deployable package for the API and worker

Azure Container Registry
→ stored the images so Azure Container Apps could pull them

Container App
→ ran each containerized service

Container Apps environment
→ gave the API and worker a shared managed boundary

External ingress
→ exposed the API publicly

Internal ingress
→ kept the worker private inside the environment

Environment variables
→ passed the worker address into the API

So the shape changed, but the core ideas stayed familiar.

Locally, Docker Compose helped us describe a multi-container stack on our machine.

In Azure, we created managed resources that run the same kind of multi-service application pattern:

Local Docker / Compose
→ services, ports, networks, environment variables

Azure Container Apps
→ container apps, ingress, environment, registry images

The important shift is this:

Docker Compose runs the stack locally.

Azure Container Apps runs containerized services
inside a managed Azure environment.

The container image is still the portable unit.

The platform decides how that image is pulled, configured, exposed, and managed.

Clean Up Azure Resources

This walkthrough created real Azure resources.

When you are finished testing, delete the resource group so you do not leave anything running.

Because we placed the demo resources in one resource group, cleanup is simple:

az group delete `
  --name $RESOURCE_GROUP `
  --yes

Azure may take a few minutes to delete everything.
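
If you would rather not keep the terminal blocked while that happens, the same delete can run in the background:

az group delete `
  --name $RESOURCE_GROUP `
  --yes `
  --no-wait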

If you want to check whether the resource group still exists, run:

az group exists `
  --name $RESOURCE_GROUP

If the result is:

false

the resource group has been removed.

The Azure Container Apps Mental Model

Azure Container Apps gives you a managed way to run containerized services.

The mental model is simple:

Container image
→ the packaged application

Azure Container Registry
→ where Azure pulls the image from

Container App
→ one running containerized service

Container Apps environment
→ the shared managed boundary around related apps

Ingress
→ controls whether the service is public or internal

Environment variables
→ configure the service at runtime

In this article, we used that model to deploy two services:

API Container App
→ public
→ external ingress
→ browser can reach it

Worker Container App
→ internal
→ internal ingress
→ only the API needs to call it

That is the key design pattern.

Not every service needs to be public.

You expose the entry point, and you keep supporting services internal.

So the full model becomes:

Browser
→ public API Container App
→ internal Worker Container App

That is a small example, but it reflects a real cloud application shape.

A front door receives traffic. Internal services do work behind it.

Where This Leads

You have now moved from local Docker workflows into a managed cloud container platform.

You built two images, pushed them to Azure Container Registry, deployed them as separate Container Apps, exposed one publicly, kept one internal, and confirmed that the public API could call the internal worker.

That is a strong next step after Docker Compose.

From here, there are several natural directions.

You could replace ACR admin credentials with managed identity.

You could move sensitive values into Container App secrets instead of plain environment variables.

You could add scaling rules so services scale based on HTTP traffic, queues, or events.

You could explore revisions and traffic splitting for safer deployments.

You could add persistent storage, Azure Blob Storage, or a managed database depending on what the application needs.

You could also automate the deployment with Bicep, Terraform, or Azure Developer CLI once the resource model is clear.

The main lesson is this:

Docker teaches you how to package and run containers.

Azure Container Apps teaches you how to run containerized services
in a managed cloud environment.

The image remains portable.

The deployment model changes.