Docker for Beginners: Docker Networking Explained
Ahmed Muhi - 02 Aug, 2024
Introduction: From One Container to Connected Containers
In the previous article, you built your own Docker image and ran it as a container.
That was a big step. You moved from using images created by other people to packaging and running your own application.
But most real applications are not just one container.
A frontend might need to call an API.
An API might need to connect to a database.
A background worker might need to talk to a queue.
That means we need to understand how containers communicate.
Docker networking answers two practical questions:
How do containers talk to each other?
How does traffic from your machine reach a container?
Those are the two paths this article will focus on.
First, we will create a custom Docker network and run two containers on it. That will show how containers can find each other by name without hard-coding IP addresses.
Then we will look at port mapping, which is how traffic from your browser reaches an application running inside a container.
By the end, the networking model should feel much clearer:
Container to container:
use a Docker network and container names.
Host to container:
use localhost and a published port.
That is the core idea. Containers can be isolated, but they are not trapped. Docker gives you controlled ways to connect them to each other and to the outside world.
The Two Networking Questions
Before we run any commands, it helps to separate Docker networking into two questions.
The first question is:
How do containers talk to each other?
This is container-to-container traffic.
For example, your API container might need to connect to a database container. In that case, the API does not need to go out to the public internet and come back in. Both containers can live on the same Docker network and talk directly.
Inside a Docker network, containers can use each other’s names.
So instead of the API trying to connect to a hard-coded IP address, it can connect to something like:
database:5432
That is easier to read, easier to manage, and much more stable than relying on container IP addresses.
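As a sketch of what this looks like in practice, here is a database container plus a client container that finds it purely by name. The network name app-network, the container name database, and the password are assumptions for illustration, not part of the demo we build below:

```shell
# Hypothetical example: a database container plus a client that finds it by name.
docker network create app-network

docker run -d --name database --network app-network \
  -e POSTGRES_PASSWORD=secret postgres

# Give Postgres a few seconds to start, then connect from a second container.
# The client uses the hostname "database", not an IP address.
docker run --rm --network app-network -e PGPASSWORD=secret postgres \
  psql -h database -p 5432 -U postgres -c 'SELECT 1;'
```

If the database container is later recreated and gets a new IP address, the name database still resolves correctly, so the client command does not change.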
The second question is:
How does traffic from my machine reach a container?
This is host-to-container traffic.
For example, if a web server is running inside a container, your browser cannot automatically reach it. The container might be listening on port 80 or 3000 internally, but that does not mean the same port is open on your machine.
To make it reachable from your browser, you publish a port:
localhost:8080 on your machine
→ port 80 inside the container
That is what -p 8080:80 does when you run a container.
So the two paths are different:
Container to container:
container name → container port
Host to container:
localhost → published host port → container port
This distinction is the key to understanding Docker networking.
Now let’s prove the first path by creating a Docker network and making two containers talk to each other by name.
Create a Custom Docker Network
Let’s start with container-to-container communication.
We will create a small private network and place two containers inside it. Once they are on the same network, they will be able to find each other by name.
Create a custom Docker network:
docker network create demo-network
This creates a Docker network called demo-network.
You can check that it exists with:
docker network ls
You should see demo-network in the list.
By default, this is a bridge network. A bridge network is a private network inside the Docker host. Containers attached to the same bridge network can talk to each other, but containers outside the network are kept separate unless you connect them.
For now, the important idea is simple:
A Docker network gives containers a shared place to communicate.
Next, we will start two containers and attach both of them to this network.
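If you are curious what Docker stored for this network, you can inspect it. The exact output varies by Docker version, but you should see the driver and, once containers are attached, the attached containers:

```shell
# Show the network's configuration as JSON.
# Look for "Driver": "bridge" and, after containers join,
# entries under the "Containers" key.
docker network inspect demo-network
```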
Run Two Containers on the Same Network
Now we will start two small containers and attach both of them to demo-network.
We will use the alpine image because it is a very small Linux image and works well for quick networking tests.
Start the first container:
docker run -dit --name container-a --network demo-network alpine sh
Start the second container:
docker run -dit --name container-b --network demo-network alpine sh
There are a few important parts in these commands.
--name container-a and --name container-b give the containers clear names. Those names matter because containers on the same Docker network can use each other’s names for communication.
--network demo-network attaches each container to the network we created earlier.
-dit combines three flags: -d runs the container in the background, -i keeps STDIN open, and -t allocates a terminal. Together with the sh command, this keeps each container alive with an interactive shell available. We are not running a full application here. We just need two containers that stay alive long enough for us to test networking.
Now check that both containers are running:
docker ps
You should see both container-a and container-b in the output.
At this point, the setup looks like this:
demo-network
├── container-a
└── container-b
Both containers are on the same Docker network. Next, we will test whether one container can reach the other by name.
Test Container-to-Container Communication
Now we have two containers on the same Docker network.
The next question is simple:
Can container-a reach container-b by name?
Let’s test it.
Open a shell inside container-a:
docker exec -it container-a sh
You are now inside the container.
Alpine is intentionally small, so it ships only a minimal BusyBox version of ping. If ping is missing or behaves differently in your container, install the standard version:
apk update && apk add iputils
Now ping container-b by name:
ping container-b
You should see replies from container-b:
PING container-b (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.123 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.101 ms
The IP address may be different on your machine. That is fine.
The important part is that you did not need to know the IP address ahead of time. You used the container name:
container-b
Docker resolved that name to the correct container inside demo-network.
This is the main container-to-container networking idea:
Containers on the same custom Docker network can talk to each other by name.
That is why hard-coding container IP addresses is usually the wrong model. Container IPs can change when containers are recreated. Container names are much easier to work with inside a Docker network.
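You also do not have to decide on a network at docker run time. A container can be attached to, or detached from, a network after it is already running. The container name my-container below is a placeholder:

```shell
# Attach an already-running container to demo-network.
docker network connect demo-network my-container

# Detach it again when it no longer needs access.
docker network disconnect demo-network my-container
```

A container can be attached to several networks at once, which is useful when one container needs to talk to two otherwise isolated groups of containers.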
When you are done, stop the ping with:
Ctrl+C
Then exit container-a:
exit
At this point, we have proved the first networking path:
container-a
→ demo-network
→ container-b
Next, let’s slow down and explain what the bridge network is doing behind the scenes.
What the Bridge Network Is Doing
Now that the ping worked, let’s explain what Docker did for us.
When we created demo-network, Docker created a private network inside the Docker host. Because this is a bridge network, containers attached to it can communicate with each other through that virtual network.
The setup looks like this:
Docker host
└── demo-network
├── container-a
└── container-b
Each container gets its own network interface and its own IP address on that network.
But the important part is that you usually do not need to use those IP addresses directly.
Docker provides name resolution inside custom bridge networks. That means when container-a tries to reach:
container-b
Docker can resolve that name to the current IP address of container-b inside demo-network.
That is why this worked:
ping container-b
This is also why custom networks are better than the default bridge network for multi-container applications. On the default bridge, containers cannot resolve each other by name at all. On a custom network, you get a cleaner boundary and built-in, DNS-based name resolution between the containers attached to it.
So the practical model is:
Same custom Docker network
→ containers can talk by name
That is the container-to-container side of Docker networking.
But this is only one side of the story.
Your browser is not inside demo-network. Your browser runs on your machine, outside the container network. So if a container runs a web server, we still need a way for traffic from your machine to reach it.
That is where port mapping comes in.
Port Mapping: Letting Your Browser Reach a Container
Now we have proved the first networking path:
container-a
→ demo-network
→ container-b
Containers on the same custom Docker network can talk to each other by name.
Now let’s look at the second path:
your browser
→ your machine
→ container
This is where port mapping matters.
A container can run a web server internally, but that does not automatically mean your browser can reach it. The container has its own network space. If you want traffic from your machine to reach the container, you need to publish a port.
You already saw this pattern earlier with Nginx:
docker run -d -p 8080:80 --name web-demo nginx
The important part is:
-p 8080:80
This means:
localhost:8080 on your machine
→ port 80 inside the container
The first number is the host port.
The second number is the container port.
So when you open:
http://localhost:8080
Docker forwards that traffic into the container on port 80, where Nginx is listening.
You can check the running container with:
docker ps
You should see a port mapping that looks something like this:
0.0.0.0:8080->80/tcp
That line means Docker is listening on port 8080 on your machine and forwarding traffic to port 80 inside the container.
So the host-to-container model is:
Browser
→ localhost:8080
→ Docker port mapping
→ container port 80
→ Nginx
This is different from container-to-container communication.
Inside a Docker network, containers use container names:
api → database:5432
From your machine, you use localhost and the published port:
browser → localhost:8080
That distinction is the main thing to remember.
Inside Docker, use the container name.
From your machine, use the published host port.
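To see both paths side by side, here is a sketch. It assumes the web-demo nginx container from above is still running, and that you also attach it to demo-network so other containers can reach it by name:

```shell
# Attach the nginx container to the custom network (assumes both exist).
docker network connect demo-network web-demo

# Path 1, host to container: localhost plus the published port.
curl http://localhost:8080

# Path 2, container to container: container name plus the internal port.
docker run --rm --network demo-network alpine wget -qO- http://web-demo:80
```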
Common Networking Issues
Before we wrap up, let’s cover a few common issues you might hit while testing Docker networking.
localhost does not open the container
If you open:
http://localhost:8080
and nothing loads, first check whether the container is running:
docker ps
If the container is not listed, it may have stopped or failed to start.
If it is running, check the port mapping:
0.0.0.0:8080->80/tcp
You need a published host port for your browser to reach the container. If you started the container without -p, the web server may be running inside the container, but your machine has no port mapped to it.
Port already in use
If you run:
docker run -d -p 8080:80 --name web-demo nginx
and Docker complains that the port is already allocated, something else is already using port 8080 on your machine.
Use a different host port:
docker run -d -p 8081:80 --name web-demo-2 nginx
Then open:
http://localhost:8081
The container still listens on port 80 internally. You only changed the port on your machine.
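If you are not sure what is occupying the port, two quick checks can help. The first only finds other Docker containers; the second finds any process, assuming lsof is installed on your machine:

```shell
# List containers that publish host port 8080, if any.
docker ps --filter "publish=8080"

# List any process listening on port 8080 (macOS/Linux, requires lsof).
lsof -i :8080
```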
Containers cannot talk to each other
If one container cannot reach another by name, check that both containers are on the same Docker network.
Run:
docker network inspect demo-network
Look for both container-a and container-b in the network output.
If one of them is missing, it is not attached to that network.
You can also check a specific container:
docker inspect container-a
Then look at the Networks section.
Name resolution is not working
If ping container-b does not work, do not immediately switch to container IP addresses.
First, check the basics:
Are both containers running?
Are both containers attached to the same custom network?
Did you spell the container name correctly?
Container names resolve only inside the Docker networks to which those containers are attached.
That is why this works:
container-a → container-b
but this does not mean your host machine can automatically resolve container-b in a browser or terminal.
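You can see this asymmetry directly. The same name works from inside the network and fails from your host shell (assuming container-b from earlier is still running):

```shell
# Works: name resolution happens inside demo-network.
docker run --rm --network demo-network alpine ping -c 1 container-b

# Fails: your host's DNS knows nothing about Docker container names.
ping -c 1 container-b
```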
Clean up the demo containers and network
When you are finished testing, remove the demo containers:
docker rm -f container-a container-b web-demo
If you created web-demo-2 while testing another port, remove that too:
docker rm -f web-demo-2
Then remove the custom network:
docker network rm demo-network
This keeps your Docker environment tidy and prevents old demo resources from getting in the way later.
The Docker Networking Mental Model
You have now seen both sides of Docker networking.
First, you created a custom Docker network and placed two containers inside it. That showed the container-to-container path:
container-a
→ demo-network
→ container-b
Because both containers were attached to the same custom network, Docker let them communicate by name. You did not need to hard-code container IP addresses.
Then you looked at port mapping. That showed the host-to-container path:
browser
→ localhost:8080
→ container port 80
→ Nginx
These are two different networking paths, and that is the most important idea in this article.
Inside Docker, containers use Docker networks and container names:
api
→ database:5432
From your machine, you use localhost and a published port:
browser
→ localhost:8080
So the mental model is:
Container to container:
same Docker network + container name + container port
Host to container:
localhost + published host port + container port
Once that distinction is clear, Docker networking becomes much easier to reason about.
A container can be private inside Docker, reachable only by other containers on the same network. Or you can publish a port and make a specific container port reachable from your machine.
That gives you control.
You decide which containers can talk to each other, and which container ports are exposed outside Docker.
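As a final sketch of that control, here is a container that is deliberately private. It runs nginx with no -p flag, so it is reachable from other containers on demo-network but not from your browser. The name internal-web is an assumption for this example:

```shell
# No -p flag: nothing is published on the host.
docker run -d --name internal-web --network demo-network nginx

# Reachable from inside the network, by name:
docker run --rm --network demo-network alpine wget -qO- http://internal-web

# Not reachable from your machine: there is no published port to connect to.
```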
Where This Leads
You now know how containers connect.
That is a big step. A single container is useful, but most real applications are made of multiple parts. A frontend talks to an API. An API talks to a database. A worker talks to a queue.
Docker networks give those pieces a way to communicate without hard-coding IP addresses.
But communication is only one part of running real applications.
The next question is data.
So far, our containers have been easy to create and remove. That is one of Docker’s strengths. But it also raises an important problem:
What happens to data when a container is removed?
If a database writes records inside a container, will those records still exist after the container is deleted?
If an application writes uploaded files, logs, or generated content, where should that data live?
That is where Docker storage comes in.
In the next article, we will look at Docker volumes and bind mounts. We will see why containers are temporary by default, how to keep data beyond a container’s lifecycle, and how to decide where application data should live.