Docker Networking Made Simple or 3 Ways to Connect LXC Containers



In my previous article, I introduced Docker as a lightweight alternative to hypervisor-based virtualization. The article described the basic usage of Docker. Today, we dig a bit deeper and cover advanced topics regarding Docker networking and how to connect containers with each other as well as the outside world. Finally, I show how to fully integrate containers in your real network. As an example, I use a web service that processes birthdays of famous physicists¹ served by three concurrently running Python web servers backed by a MongoDB. The example is freely available on GitHub as a fully working Vagrant box.

In general, there are three ways to configure networking for Docker containers that complement each other: Link, Port, and Pipework.

Link - Automatically Connect Services

Docker links automatically propagate the exposed ports of one container as shell variables to another container. In this way, the second container can dynamically adjust its network settings on startup without modifying the image or its configuration.

In order to start a container, run
> docker run -name <name> <image id | image repository:tag>
The name denotes a symbolic name by which the running container instance can be identified. For example, the MongoDB container is started with
> docker run -d -name mongodb docker-network-demo/mongodb:latest

Each container gets a dynamic IP address automatically assigned upon start. You can query it by name

> docker inspect <name> | grep IPAddress | awk '{print $2}' | tr -d '",\n'
172.17.0.2

In the Dockerfile of the MongoDB container image, port 27017 is exposed, making it accessible from outside the container.²

VOLUME ["/data/mongodb"]
EXPOSE 27017
ENTRYPOINT ["/usr/bin/mongod"]
CMD ["--port", "27017", "--dbpath", "/data/mongodb", "--smallfiles"]

By passing the additional parameter -link <name:alias> to the run command, a link to the container <name> is established under the name <alias>. For a web server from our example:
> docker run -d -name webserver1 -link mongodb:mongo docker-network-demo/webserver:latest

This link connects the web server to the MongoDB and automatically creates environment variables in the web server container, passing the IP address and exposed port of the MongoDB to it. The environment variable names are prefixed with the capitalized alias and include the exposed port number. To show these variables, run

> docker run -t -i -link mongodb:mongo \
    docker-network-demo/webserver env
HOME=/
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=04a4ee4470e1
TERM=xterm
MONGO_PORT=tcp://172.17.0.2:27017
MONGO_PORT_27017_TCP=tcp://172.17.0.2:27017
MONGO_PORT_27017_TCP_ADDR=172.17.0.2
MONGO_PORT_27017_TCP_PORT=27017
MONGO_PORT_27017_TCP_PROTO=tcp
MONGO_NAME=/insane_albattani/mongo
container=lxc

As you can see, $MONGO_PORT_27017_TCP_ADDR and $MONGO_PORT_27017_TCP_PORT contain the necessary information to connect to the MongoDB instance.

These environment variables can also be used in other Dockerfiles, like the one of our web server:

ADD webserver.py /opt/webserver/webserver.py
EXPOSE 8080
CMD /opt/webserver/webserver.py 8080 $MONGO_PORT_27017_TCP_ADDR $MONGO_PORT_27017_TCP_PORT

In this way, the MongoDB connection parameters are automatically passed to the Python script webserver.py without any external dependency management. It is important to note here that the CMD instruction has several syntax forms and only the shell form used here expands environment variables; cf. [1] and [2].
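
In the example, webserver.py receives these values as command-line arguments. Alternatively, a script could read the link variables straight from the environment. A minimal sketch of that idea (the mongo_address helper is hypothetical and not part of the example code; the variable naming follows Docker's link convention shown above):

```python
import os

def mongo_address(alias="MONGO", port=27017, default=("127.0.0.1", 27017)):
    """Read the MongoDB host/port that 'docker run -link' injected.

    Docker names the variables <ALIAS>_PORT_<port>_TCP_ADDR and
    <ALIAS>_PORT_<port>_TCP_PORT; fall back to a default when the
    container was started without a link.
    """
    addr = os.environ.get(f"{alias}_PORT_{port}_TCP_ADDR")
    tcp_port = os.environ.get(f"{alias}_PORT_{port}_TCP_PORT")
    if addr and tcp_port:
        return addr, int(tcp_port)
    return default
```

Reading the variables in code instead of the CMD line avoids the dependency on the shell form of CMD.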

Putting it all together, let’s start our full example:

> docker run -d -name mongodb docker-network-demo/mongodb:latest
> docker run -d -name webserver1 -link mongodb:mongo docker-network-demo/webserver:latest
> docker run -d -name webserver2 -link mongodb:mongo docker-network-demo/webserver:latest
> docker run -d -name webserver3 -link mongodb:mongo docker-network-demo/webserver:latest

For a quick demo, we define a shell function to retrieve a container’s IP address by name:

function getWebserverIP() {
  docker inspect $1 | grep IPAddress | awk '{print $2}' | tr -d '",\n'
}
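
A more robust alternative to the grep/awk pipeline is to parse the JSON that docker inspect emits. A small Python sketch (the helper names are mine; the field layout matches the early Docker versions used in this article):

```python
import json
import subprocess

def ip_from_inspect(inspect_json):
    """Extract NetworkSettings.IPAddress from 'docker inspect' output.

    'docker inspect' prints a JSON array with one object per container,
    so parsing is sturdier than matching on the textual formatting.
    """
    return json.loads(inspect_json)[0]["NetworkSettings"]["IPAddress"]

def get_container_ip(name):
    # Shell out to docker and parse the structured output.
    out = subprocess.check_output(["docker", "inspect", name])
    return ip_from_inspect(out)
```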

Now we can post birthdays to the three web servers which are persisted to MongoDB:

> curl -X POST -H "Content-Type: application/json" -d '{"name":"James Clerk Maxwell",
  "birthday":"13.06.1831"}' http://$(getWebserverIP webserver1):8080
> curl -X POST -H "Content-Type: application/json" -d '{"name":"Albert Einstein",
  "birthday":"14.03.1879"}' http://$(getWebserverIP webserver2):8080
> curl -X POST -H "Content-Type: application/json" -d '{"name":"Werner Heisenberg",
  "birthday":"05.12.1901"}' http://$(getWebserverIP webserver3):8080

And retrieve these birthdays

> curl http://$(getWebserverIP webserver1):8080
name = James Clerk Maxwell, birthday = 13.06.1831
name = Albert Einstein, birthday = 14.03.1879
name = Werner Heisenberg, birthday = 05.12.1901

As you can see, no parameterization is necessary to connect the web servers to the MongoDB as docker run -link automatically passes the correct network connection information.
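
To illustrate what the web servers do, here is a minimal, self-contained sketch of such a service that stores birthdays in memory instead of MongoDB (the real webserver.py on GitHub persists to the database; class and variable names here are my own):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory store; the real service writes to MongoDB instead.
BIRTHDAYS = []

class BirthdayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Accept a JSON document and remember it.
        length = int(self.headers["Content-Length"])
        BIRTHDAYS.append(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        # List all stored documents, one per line.
        body = "\n".join(
            f"name = {d['name']}, birthday = {d['birthday']}" for d in BIRTHDAYS
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("", 8080), BirthdayHandler).serve_forever()
```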

Port - Expose Services to Host Port

Docker links interconnect containers. But how do we connect containers to the outside world? There are two complementary approaches: Docker's port publishing and pipework.

The docker run -p parameter enables another kind of networking with Docker containers. It comes in three syntax flavors
-p=[]: Publish a container’s port to the host (format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort)

and specifies on which host IP address and port an exposed container port should be provided. In case the host IP address or port are omitted, all host interfaces or the same port as exposed are used, respectively. In this way, a connection to the host port is forwarded to the container.
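
The three flavors can be made explicit with a small parser sketch in Python (parse_port_spec is a hypothetical helper for illustration, not part of Docker; None stands for "all interfaces" or "same as exposed"):

```python
def parse_port_spec(spec):
    """Parse a 'docker run -p' spec into (ip, host_port, container_port).

    Accepted forms, per the docker help text:
      ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort
    Omitted parts are returned as None.
    """
    parts = spec.split(":")
    if len(parts) == 3:
        ip, host, container = parts
        return ip, int(host) if host else None, int(container)
    return None, int(parts[0]), int(parts[1])
```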

In our example, we expose the web server container’s port to the same port of the host:
> docker run -d -name webserver -link mongodb:mongo -p 8080:8080 docker-network-demo/webserver:latest

To check which host IP address and port an exposed port (the so-called private port) is mapped to, run

> docker port webserver 8080
0.0.0.0:8080

In this case, the port is exposed to the same host port as desired and is listening on all host interfaces.

Now you can access the example web service by the host IP address (see Vagrantfile on GitHub):
> curl -X POST -H "Content-Type: application/json" -d '{"name":"Albert Einstein", "birthday":"14.03.1879"}' http://10.2.0.10:8080

Pipework - More Networks for your Containers

Docker creates a special Linux bridge called docker0 on startup. All containers are automatically connected to this bridge, and the IP subnet for all containers is set randomly by Docker. Currently, it is not possible to directly influence the particular IP address of a Docker container. Luckily, there is a shell script called pipework which you can use to add another interface with a specified IP address to a container. I contributed to the script to allow adding an arbitrary number of interfaces. For example
> sudo ./pipework docker0 -i eth1 $(docker ps -q -l) 10.2.0.11/16

adds the interface eth1 with IP address 10.2.0.11 to the last started container.³

Integrate Docker Containers into your Host Network

In addition to specifying additional interfaces, you can even put all containers on the same network as the host with some Linux networking tricks.
Let’s suppose for the following that your host is connected to the local network by interface eth1 with IP address 10.2.0.10/16 as it is configured in the Vagrantfile on GitHub.

First, we need to place the Docker bridge interface into the same network as our host, which is what the -bip parameter of the Docker daemon is intended for. On Ubuntu, you can change this parameter by modifying the file /etc/default/docker:
DOCKER_OPTS="-bip=10.2.0.10/16"

Second, since this leads to two interfaces on the same IP network on your host (docker0 as well as eth1), the TCP/IP stack does not know which hosts of that network are reachable via which interface. Therefore, we need to help the stack with its routing decisions. The easiest way to connect these two network segments is to use the already configured Docker bridge. This way, we don't need additional routing information but rely solely on layer-2 mechanisms.

First, we unbind the network address of eth1 and then, we attach eth1 as another interface to the docker0 bridge:

> ip addr del 10.2.0.10/16 dev eth1
> ip link set eth1 master docker0
> ifconfig docker0
docker0 Link encap:Ethernet HWaddr 08:00:27:09:bc:ad
inet addr:10.2.0.10 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::b023:75ff:fead:c70e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:59816 errors:0 dropped:0 overruns:0 frame:0
TX packets:126844 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2491020 (2.4 MB) TX bytes:112308729 (112.3 MB)

> ifconfig eth1
eth1 Link encap:Ethernet HWaddr 08:00:27:09:bc:ad
inet6 addr: fe80::a00:27ff:fe09:bcad/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:646 errors:0 dropped:0 overruns:0 frame:0
TX packets:159 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:111487 (111.4 KB) TX bytes:11220 (11.2 KB)

In the Vagrant box example on GitHub, I added these two commands to /etc/rc.local so that they are executed automatically after each boot.

When using this mode of networking, it is important to avoid IP address conflicts, because Docker still randomly assigns IP addresses to started containers, and these addresses now come from a real IP network. Therefore, I usually complement this setup with pipework to set static IP addresses for my containers that I know will not create a conflict. Marek Goldman suggests other workarounds that work as well.
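
A small sanity check before handing a static address to pipework can catch such conflicts early. A minimal Python sketch (the conflicts helper is hypothetical, and the 10.2.0.0/16 default merely mirrors the example network from this article):

```python
import ipaddress

def conflicts(candidate, taken, network="10.2.0.0/16"):
    """Check whether a static container IP is unsafe to assign.

    The address must lie inside the shared network and must not
    collide with one that is already in use (host, gateway, or
    other containers).
    """
    net = ipaddress.ip_network(network)
    ip = ipaddress.ip_address(candidate)
    return ip not in net or candidate in set(taken)
```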

There’s more

Docker offers much more functionality for dealing with Linux containers. For example, you can use an image registry, which is like Git for Docker images and can be run publicly or privately within your organization. You can push images to and pull images from such registries; the images are versioned and may be tagged.

I will blog about these topics in the future. So stay tuned, and feel free to contact me if you have questions or comments.

Footnotes

1. You can actually send arbitrary JSON documents which will be correctly processed.
2. By default, only exposed ports of a container are accessible.
3. Protip: Save docker ps -q -l as an alias; it saves you time when working with Docker.


First published at codecentric Blog.