Docker Networking

Every container has its own network stack and resources, isolated in a sandbox. By default, this sandbox cuts the container off from the Internet, other networks, and other containers. The sandbox includes Ethernet interfaces, ports, routing tables, and DNS configuration.

Every time a container is created, it is assigned a unique IP. You can get this IP by inspecting the container using

docker inspect --format='{{.NetworkSettings.IPAddress}}' <Container Name/ID>

By default, the container is connected to the bridge network unless specified to be connected to a user-defined network.

docker network ls  #  list the networks

The Docker engine creates the following three networks for you by default: bridge, host, and none (typical docker network ls output is shown after the list below).

  • none: The loopback (lo) interface for local communication within a container.
  • host: The container gets attached to the host network stack and shares the host’s IP addresses and ports.
  • bridge: The default network if the network is not configured using the --net option of the docker run subcommand.
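On a fresh installation, the output of docker network ls looks roughly like this (the IDs will differ):

NETWORK ID     NAME      DRIVER    SCOPE
a1b2c3d4e5f6   bridge    bridge    local
b2c3d4e5f6a1   host      host      local
c3d4e5f6a1b2   none      null      local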

Since containers are isolated by default, they communicate through endpoints (virtual Ethernet interfaces), with packets traveling over the docker0 network. docker0 acts as a virtual switch, forwarding frames between containers and to the external network.


By default, Docker operates in bridge network mode. A bridge network creates a single network interface on the host that acts as a bridge to another subnet configured on the host. All incoming (ingress) and outgoing (egress) network traffic travels between the container subnet and the host through the bridge network interface.

After installing Docker Engine in a Linux environment, if you run the ifconfig command, you will see that Docker has created a new virtual bridge network interface called docker0. This interface bridges a Docker private subnet that gets created by default (usually 172.17.0.0/16) to the host machine's networking stack. If a container is running in the default Docker network with an IP address of 172.17.8.1 and you attempt to contact that IP address, the internal route tables will direct that traffic through the docker0 bridge interface and pass the traffic to the IP address of the container on the private subnet. Unless ports are published through Docker, this container's IP address cannot be reached from the outside world. The sections below dive deeper into the various network drivers and configuration options provided by Docker.

ifconfig

The Docker bridge interface is called docker0 and has an IP address of 172.17.0.1.
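On newer distributions where ifconfig is not installed, the iproute2 tools show the same information:

ip addr show docker0   # the bridge address, typically 172.17.0.1/16
ip route               # includes a 172.17.0.0/16 dev docker0 entry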

docker inspect webserver1 -f '{{.NetworkSettings}}'

By default, a container lives in the default Docker bridge network, and the gateway it is configured to use is the IP address of the docker0 bridge interface. Docker uses this interface as the egress point for traffic leaving the container subnet, as well as for forwarding traffic from our environment to the containers on the subnet.

Docker and Docker Compose have command-line options to specify a Docker local network that the application will use. Using this Docker local network allows our containers to access another container's ports without having to bind/expose these ports to the host's ports.

You use the docker network create command to create a named network that your containers can use to communicate privately with one another.

docker network create net1

docker run ... --network net1
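As a quick sketch, two throwaway containers on net1 can reach each other by container name without publishing any ports (the name web1 and the nginx/curl images are illustrative):

docker run -d --network net1 --name web1 nginx:alpine
docker run --rm --network net1 curlimages/curl -s http://web1/   # resolves web1 via Docker DNS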

We are no longer defining the HOSTIP environment variable because the Docker local networking system provides a DNS function that lets the programs in our containers look up other containers by name. The name is the container name, which is specified in the docker run command with the --name option.

  const mongo_host = process.env.MONGO_HOST || "mongodb",
        mongo_port = 27017,
        mongoUrl = `mongodb://${mongo_host}:${mongo_port}`;   // template literal (backticks), not single quotes
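For the snippet above to resolve the mongodb hostname, the database container must run on the same user-defined network under that name. A minimal sketch (the app-net network and my-node-app image names are hypothetical):

docker network create app-net
docker run -d --network app-net --name mongodb mongo                # the container name doubles as its DNS name
docker run -d --network app-net -e MONGO_HOST=mongodb my-node-app   # my-node-app is a placeholder for your image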

Communication across links

Links allow containers to discover each other and securely transfer information about one container to another container. When you set up a link, you create a conduit between a source container and a recipient container.

In legacy versions of Docker, simple DNS resolution was possible by establishing links between containers using the --link flag of the docker run command. With linking, Docker creates an entry in the recipient container's hosts file, which enables simple name resolution of the source container.

Docker creates a secure tunnel between the containers that doesn’t need to expose any ports externally on the container; when we started the db container we did not use either the -P or -p flags. That’s a big benefit of linking: we don’t need to expose the source container.

The --link flag format:

--link <name or id>:alias
--link <name or id>

Docker exposes connectivity information for the source container to the recipient container in two ways:

  • Environment variables:
    • All environment variables originating from Docker within the source container are made available to any container that links to it.
    • Docker sets an <alias>_NAME environment variable for each target container listed in the --link parameter.
docker run --rm --name web2 --link db:db training/webapp env
DB_NAME=/web2/db
DB_PORT=...
DB_PORT_5432_TCP_PROTO=...
DB_PORT_5432_TCP_PORT=...
DB_PORT_5432_TCP_ADDR=...

You can see that Docker has created a series of environment variables with useful information about the source db container. Each variable is prefixed with DB_, derived from the link alias.

  • Updating the /etc/hosts file.
docker run -it --rm --link proxy curlimages/curl curl -v http://proxy:15001/status/500
docker run -itd --name containerlink1 alpine:latest
docker run -itd --name containerlink2 --link containerlink1 alpine:latest
docker exec -it containerlink2 /bin/sh
#  ping containerlink1
# cat /etc/hosts

Docker adds an entry for the containerlink1 container name as well as for its container ID, both mapped to its IP address; this is how containerlink2 can reach it by name. Note that a link is a one-directional connection, and the feature is deprecated.
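The resulting /etc/hosts inside containerlink2 looks roughly like this (the IP addresses and container IDs are illustrative):

127.0.0.1    localhost
172.17.0.2   containerlink1 0a1b2c3d4e5f containerlink1
172.17.0.3   f00dfeedbeef    # containerlink2's own hostname (its container ID)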

Recent versions of Docker support a native DNS service between containers running on the same Docker network. This allows containers to look up the names of other containers running in the same Docker network. The only caveat with this approach is that native Docker DNS doesn't work on the default Docker bridge network; thus, other networks must first be created to build your containers in.

For native Docker DNS to work, we must first create a new network using the docker network create command.

docker network create dnsnet --subnet 192.168.54.0/24 --gateway 192.168.54.1
docker network create dnsnet   # alternatively: let Docker allocate the subnet and gateway
docker network ls
docker network inspect dnsnet
ifconfig                       # the new network appears as an additional bridge interface

docker run -itd --network dnsnet --network-alias alpinedns1 --name alpinedns1 alpine:latest
docker run -itd --network dnsnet --network-alias alpinedns2 --name alpinedns2 alpine:latest
docker inspect alpinedns1
docker exec -it alpinedns1 /bin/sh
#ping alpinedns2
docker exec -it alpinedns2 /bin/sh
#ping alpinedns1

The --network-alias flag is used to create a custom DNS entry for the container. The --name flag gives the container a human-readable name within the Docker API, which makes it easy to start, stop, restart, and manage containers by name.

Docker DNS, as opposed to the legacy link method, allows bidirectional communication between containers in the same Docker network. Docker DNS uses a true DNS service rather than /etc/hosts file entries inside the container.
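You can verify this from inside a container on a user-defined network: /etc/resolv.conf points at Docker's embedded DNS server on 127.0.0.11, and the peers have no /etc/hosts entries:

docker exec alpinedns1 cat /etc/resolv.conf   # nameserver 127.0.0.11 (Docker's embedded DNS)
docker exec alpinedns1 cat /etc/hosts         # no entry for alpinedns2; resolution is real DNS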

User-Defined Bridge Network

You can group your containers into user-defined private networks. You explicitly choose which container can connect to which.

It is usually better to create a bridge network yourself, for several reasons:

  • Better isolation across containers. Containers on the same bridge network are discoverable and can talk to each other. They automatically expose all ports to each other, and no ports are exposed to the outside world. Having a separate user-defined bridged network for each application provides better isolation between containers of different applications.

  • Easy name resolution across containers. For services joining the same bridged network, containers can connect to each other by name. For containers on the default bridged network, the only way for containers to connect to each other is via IP addresses or by using the --link flag, which has been deprecated.

  • Easy attachment/detachment of containers on user-defined networks. For containers on the default network, the only way to detach them is to stop the running container and re-create it on the new network.

docker network create --driver bridge mynet        # create user defined bridge network
docker run -itd --net mynet --name c1 ubuntu bash  # assign to mynet 
docker run -itd --net mynet --name c2 ubuntu bash  # assign to mynet
docker run -itd --net mynet --name c3 ubuntu bash  # assign to mynet
docker run -itd --name c4 ubuntu bash              # assign to default docker0 bridge network
docker network connect bridge c3
docker container ls
docker network inspect bridge
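A quick check of the resulting topology (using getent for name lookups, since the ubuntu image does not ship ping by default):

docker exec c1 getent hosts c2   # resolves: c1 and c2 share the user-defined mynet
docker exec c1 getent hosts c4   # fails: c4 is only on the default bridge
docker exec c3 getent hosts c4   # also fails: the default bridge provides no DNS, only IP connectivity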

Exposing Container Ports

By exposing container ports, the services inside the containers can be accessed from outside hosts.

There are two ways to expose the container services and make the port mapping:

  • Bind using the -p or --publish option of the docker run command. The complete syntax for -p is [hostIp:][hostPort:]containerPort.
docker run -d -p 80 nginx  # bind port 80 inside the container to a random free port on the host; the bind IP is 0.0.0.0, which means it is publicly accessible
docker run -p 195.23.0.15:80:80 nginx  # bind a specific host IP and port to container port 80
docker run -p 80:80 nginx              # bind host port 80 to container port 80
docker run -p 80 nginx                 # bind container port 80 to a random free host port
docker run -p 195.23.0.15::80 nginx    # random free host port on a specific host IP
docker container ls -a  # find the port bindings - column PORTS
docker port <container name/ID>
docker inspect --format='{{.NetworkSettings.Ports}}' <container name/ID>
  • Expose the port in the Dockerfile and use the -P option afterward.
Dockerfile:
FROM nginx
EXPOSE 80

docker build -t nginx2 .
# The -P option tells Docker to bind each exposed container port to a random port on the host's interface. 
docker run -d --name nginx2 -P nginx2
docker container ls
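To reach the service, look up which random host port was bound and curl it (the port number below is illustrative):

docker port nginx2             # e.g. 80/tcp -> 0.0.0.0:32769
curl http://localhost:32769/   # hits nginx inside the container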

Differences Between Windows and Linux Networking

  • There are no bridge networks.
  • Ports must be explicitly published, since Windows cannot route localhost to containers.
  • To access the site on the Windows Docker host, you need to make the request using the container's IP address, which you can look up with the ipconfig command.

Bridge Networks

A bridge network is a user-defined network that allows all containers connected to the same network to communicate with each other. The benefit is that containers on the same bridge network can connect, discover, and talk to each other, while those not on the same bridge cannot communicate directly.

Bridge networks are useful when you have containers running on the same host that need to talk to each other—if the containers that need to communicate are on different Docker hosts, then an overlay network is needed.

Host Networks

As the name suggests, with a host network the container is attached directly to the Docker host's network stack. This means that any traffic coming to the host is routed to the container. Since all of the container's ports are directly attached to the host, the concept of publishing ports doesn't make sense in this mode. Host mode is perfect when you have only one container running on the Docker host.

docker run -d --network host nginx:alpine
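Because the container shares the host's network stack, nginx is immediately reachable on the host's port 80, with no -p flag involved:

curl http://localhost:80/   # served directly by the container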

Overlay Networks

Overlay networks create a network spanning multiple Docker hosts. This type of network is called an overlay because it lays on top of the existing host network, allowing containers connected to it to communicate across hosts. Overlay networks are an advanced topic and are primarily used when a cluster of Docker hosts is set up in Swarm mode. Overlay networks also let you encrypt the application data traffic across them.
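As a sketch, on a Swarm manager you can create an attachable, encrypted overlay network like this (the network name is an assumption):

docker swarm init   # put this host into Swarm mode, if it isn't already
docker network create --driver overlay --attachable --opt encrypted my-overlay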

Macvlan Networks

Macvlan networks leverage the Linux kernel's ability to assign multiple logical MAC addresses to a single physical interface. This means that you can assign a MAC address to a container's virtual network interface, making it appear as if the container has a physical network interface connected to the network. This brings unique opportunities, especially for legacy applications that expect a physical interface to be present and connected to the physical network. Macvlan networks require the Network Interface Card (NIC) to support "promiscuous" mode, a special mode that allows the NIC to receive all traffic on the wire and pass it up the stack, instead of receiving only the traffic addressed to it.
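A typical macvlan network is mapped onto the physical LAN; the subnet, gateway, and parent interface below are placeholders for your environment:

docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  macnet
docker run -itd --network macnet --name mac1 alpine:latest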

None Networking

As the name suggests, none networking is when the container isn’t connected to any network interface and does not receive or send any network traffic. In this networking mode, only the loopback interface is created, allowing the container to talk to itself, but not to the outside world or with other containers. A container can be launched with none networking using the command shown here:

docker run -d --name nginx --network=none nginx   # publishing ports (-p) makes no sense with none networking, so it is omitted
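Listing the interfaces inside such a container confirms that only the loopback device exists (using ls, since the nginx image may not include iproute2):

docker exec nginx ls /sys/class/net   # prints only: lo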

Connecting Containers to Named Bridge Networks

Docker lets you easily connect a container to another network on the fly. To do this, type the following command:

docker network connect <network name> <container name>
docker network connect database adminer # to connect the adminer container to the database network
docker inspect adminer | jq ".[0].NetworkSettings.Networks" # two networks 
docker network disconnect bridge adminer # disconnect container from default bridge  network