Keep in mind that this chapter covers linking containers via the --link flag. A newer tool, Docker Compose, is the recommended approach moving forward, but for this tutorial I will cover only the --link flag and leave it to the reader to explore Docker Compose.
Most people are introduced to Docker and Linux containers as a way to approach solving a very specific problem they are experiencing in their organization. The problem they want to solve often revolves around either making the dev/test cycle faster and more reliable while simultaneously shortening the related feedback loops, or improving the packaging and deploying of applications into production in a very similar fashion. Today, there are a lot of tools in the ecosystem that can significantly decrease the time it takes to accomplish these tasks while also vastly improving the ability of individuals, teams, and organizations to reliably perform repetitive tasks successfully.
That being said, tools have become such a big focus in the ecosystem that there are many people who haven’t really spent much time thinking about all the ways containers alone can provide interesting solutions to problems that can occur in the course of any technical task.
To get the creative juices flowing and help folks start thinking outside the box, we’ll examine a few scenarios and explore how containers can be used to provide possible solutions. You'll notice that many of these examples utilize file mounts to access data stored on local machines.
Note that all of these examples were tested on Mac OS X running a current stable release of Docker: Community Edition. Most of the examples also assume you have a Unix-based operating system, but they can often be adjusted to work on Windows.
Preparation
If you are planning on running these examples, go ahead and download the following images ahead of time so you can see how the commands run without the additional time required to pull down the images the first time:
Scenario 1
Using containers for console commands
There are often applications that are very useful to have but don't run or are very difficult to compile on the platform we are using. Containers can provide a very easy way to run these applications, despite the apparent barriers (and even if we can run the application natively, containers can be a very compelling approach to packaging and distributing programs). In this example, we are using an ImageMagick container to resize an image. Although this particular example is easy to accomplish in other ways, it should give some insight into how a container can be used to take advantage of a wide variety of similar console-based tools.
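For instance, a resize might look something like the following. The image name, entrypoint, and file names here are placeholders to illustrate the pattern, not a specific recommendation:

```shell
# Mount the current directory into the container and run ImageMagick's
# convert tool on a local file (image name is a hypothetical example)
docker run --rm -v "$PWD":/imgs --entrypoint convert \
  acleancoder/imagemagick-full \
  /imgs/photo.jpg -resize 50% /imgs/photo_small.jpg
```

The key idea is the volume mount: the container reads and writes files on the local machine while the tool itself never needs to be installed on the host.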
Scenario 2
Using containers for development environments
- Have you ever needed to set up your development environment at a new job or on a new computer and struggled to get it right?
- Have you ever had problems because your development and Q/A environments used slightly different versions of the compiler?
By using containers, it is possible to ensure your builds are repeatable, even for complex development environments. In this example, we are using a Docker image that contains a robust Go development environment to compile a small console game for Linux, OS X, and Windows.
The example builds the game for each target platform:
- Mac OS X
- Linux
- Windows
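A cross-compilation sketch using the official golang image is below; the image tag, source layout, and output names are assumptions for illustration:

```shell
# Mount the project directory and cross-compile for all three platforms
docker run --rm -v "$PWD":/src -w /src golang:1.20 sh -c '
  GOOS=darwin  GOARCH=amd64 go build -o game-darwin  .   # Mac OS X
  GOOS=linux   GOARCH=amd64 go build -o game-linux   .   # Linux
  GOOS=windows GOARCH=amd64 go build -o game.exe     .   # Windows
'
```

Because the compiler version is pinned by the image tag, every developer and the Q/A environment build with exactly the same toolchain.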
Scenario 3
Using containers to solve OS version incompatibilities
This example will only work on a Linux server running on Dell hardware, but it provides a good example of how containers can make it easier to run certain classes of software.
Dell's OpenManage Server Administrator (OMSA) is critical for monitoring and configuring Dell hardware, but Dell supports only a few Linux distributions and can be slow to provide updates for newer releases. By using containers, we can ensure that OMSA is packaged with the Linux platform (e.g., CentOS 7) and libraries that it requires, while still having the freedom to run it on the Linux platform (e.g., CoreOS) that we require.
In the example below, we launch a container that runs continuously in the background with a few processes that Dell uses to facilitate communication between the Dell tools and the underlying hardware. We then wait 40 seconds while the container finishes starting up all the background processes that it launches.
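The launch step might look like the following sketch; the image name and option set are assumptions, not Dell's official packaging:

```shell
# Start the OMSA container in the background with broad hardware access
# (centos7-omsa is a hypothetical image built from Dell's OMSA packages)
docker run -d --privileged --name omsa \
  -v /lib/modules:/lib/modules \
  centos7-omsa

# Give the container's startup scripts time to bring up the Dell services
sleep 40
```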
Once the container is running, we can utilize docker exec to treat the running container like a simple command-line tool and query the hardware, just as if the OMSA tools were installed and running in a more traditional manner. With this command, we retrieve the chassis info. And then we can immediately run another command to clear the ESM log, or whatever else we might need.
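Assuming the running container is named omsa (a hypothetical name), the two queries described above could look like this:

```shell
# Query the chassis information through the running OMSA container
docker exec omsa omreport chassis info

# Clear the ESM (Embedded System Management) hardware log
docker exec omsa omconfig system esmlog action=clear
```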
Scenario 4
Using containers to explore the underlying host
Docker: Community Edition (CE) does a great job of making the Docker server feel like it runs natively on Mac OS X and Windows. Honestly, it does too good a job. When you are first trying to learn how Docker works, it can actually be very deceiving because Docker: CE launches a lightweight Linux virtual machine (VM) on both of these platforms, but it is not obvious to the end user that this is the case, and there is no way to log in to this VM and take a look around. So, given all this, how do you learn more about how the Docker VM works?
In this scenario, we can utilize a partially privileged container and Linux namespaces to launch a container that will allow us to see all the processes that are running on the underlying host and explore its filesystem.
Note: This is an important example of why running privileged containers in production can be very dangerous. Although we have only shared the host's PID namespace with this container and given it two Linux capabilities, it can easily access the host's underlying filesystem.
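A sketch of such a container follows; the image choice and the specific pair of capabilities are assumptions consistent with the description above:

```shell
# Share the host's PID namespace and grant two capabilities that
# nsenter needs to join another process's namespaces
docker run --rm -it --pid=host \
  --cap-add=SYS_PTRACE --cap-add=SYS_ADMIN \
  alpine sh

# Inside the container, every host process is now visible:
#   ps aux
# and nsenter can join PID 1's mount namespace to browse the
# host's filesystem:
#   apk add --no-cache util-linux
#   nsenter -t 1 -m sh
```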
Scenario 5
Using containers to sidestep a read-only filesystem
Another quirk of Docker: Community Edition is that the virtual machine's root filesystem has been made read-only in recent releases. This was done to help prevent people from breaking Docker by fooling around inside the VM, utilizing techniques like the one we just showed. This is understandable since they need to be able to support their product in such a wide range of environments, but it can also be problematic.
While teaching classes about Docker and specifically trying to demonstrate how Linux Control Groups (cgroups) impact the resources that are available to a container, it is often desirable to run a simple tool on the Linux host that makes it easy to monitor the processes that are running and see how they are performing.
In class, we will often run a container that stresses the underlying VM by generating some load on the CPU and memory so that we can see what effect it has on the underlying VM.
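A sketch of that class setup, with hypothetical image names and load parameters:

```shell
# Terminal 1: generate CPU and memory load inside a container
docker run --rm -it progrium/stress \
  --cpu 2 --vm 1 --vm-bytes 128M --timeout 60s

# Terminal 2: share the host's PID namespace so a monitoring tool
# can observe every process on the underlying VM
docker run --rm -it --pid=host alpine \
  sh -c 'apk add --no-cache htop && htop'
```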
A simple and visually appealing tool, like htop, is ideal for observing the impact on the VM, but since the root filesystem of the virtual machine is read-only, there is no way to install htop directly into the VM. So, instead of using the VM directly, we can run a container that shares the host's PID (process) namespace so that htop can see all the processes running on the host and allow us to understand how our stress program is impacting the Docker server.

Scenario 6
Using containers to run X11 graphical applications
This specific example is designed for Mac OS X, but it can be easily modified for Linux and Windows. On Windows, you will also need to install a third-party X11 server, like Xming, Cygwin/X or MobaXterm.
On Mac OS X, if you do not already have the homebrew package manager installed, you can get it from Homebrew. To make this X11 container work, we need to prepare our system the first time by installing socat and XQuartz, an X11 server, on the Mac. Once XQuartz is installed, we need to reboot the Mac so that the X11 server is set up properly for the current user.
After this one-time setup, we can now run a Linux X11 graphical application by running socat to facilitate communication between the container and the Mac's X11 server, and then launching the desired container. After a few moments you should see a usable Firefox browser window open on your system.
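Those two steps could look like the following sketch; the network interface (en0) and the jess/firefox image are assumptions:

```shell
# Forward X11 connections from TCP port 6000 to the XQuartz display socket
socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CLIENT:\"$DISPLAY\" &

# Point the container's DISPLAY at the Mac's primary IP and launch Firefox
IP=$(ipconfig getifaddr en0)
docker run --rm -e DISPLAY="$IP:0" jess/firefox
```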
Note: On the Mac, it is actually possible to set the DISPLAY to docker.for.mac.host.internal:0 instead of using the primary IP address. Also, don't forget to kill the socat process when you are done playing around with this.

Conclusion
Hopefully this article has helped expose you to some of the less obvious ways that containers can be used. I can't recommend enough that you take the time to become familiar with the underlying technologies that enable containers and how things like namespaces, cgroups, Linux capabilities, and even hardware virtualization can be combined to solve problems in new and creative ways.
Article image: Container City 2 at Trinity Buoy Wharf, London in September 2012. (source: Cmglee on Wikimedia Commons).
Docker Desktop for Mac provides several networking features to make it easier to use.
Features
VPN Passthrough
Docker Desktop for Mac’s networking can work when attached to a VPN. To do this, Docker Desktop for Mac intercepts traffic from the containers and injects it into the Mac as if it originated from the Docker application.
Port Mapping
When you run a container with the -p argument, Docker Desktop for Mac makes whatever is running on port 80 in the container (in this case, nginx) available on port 80 of localhost. In this example, the host and container ports are the same. What if you need to specify a different host port? If, for example, you already have something running on port 80 of your host machine, you can connect the container to a different port. Then, connections to localhost:8000 are sent to port 80 in the container. The syntax for -p is HOST_PORT:CLIENT_PORT.

HTTP/HTTPS Proxy Support
See Proxies.
Known limitations, use cases, and workarounds
Following is a summary of current limitations on the Docker Desktop for Mac networking stack, along with some ideas for workarounds.
There is no docker0 bridge on macOS
Because of the way networking is implemented in Docker Desktop for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.

I cannot ping my containers
Docker Desktop for Mac can’t route traffic to containers.
Per-container IP addressing is not possible
The docker (Linux) bridge network is not reachable from the macOS host.
Use cases and workarounds
There are two scenarios that the above limitations affect:
I want to connect from a container to a service on the host
The host has a changing IP address (or none, if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purposes and will not work in a production environment outside of Docker Desktop for Mac. The gateway is also reachable as gateway.docker.internal.

I want to connect to a container from the Mac
Port forwarding works for localhost; --publish, -p, or -P all work. Ports exposed from Linux are forwarded to the host. Our current recommendation is to publish a port, or to connect from another container. This is what you need to do even on Linux if the container is on an overlay network, not a bridge network, as these are not routed.
The command to run the nginx webserver shown in Getting Started is an example of this. To clarify the syntax, the following two commands both expose port 80 on the container to port 8000 on the host. To expose all ports, use the -P flag. For example, the following command starts a container (in detached mode) and the -P exposes all ports on the container to random ports on the host. See the run command for more details on publish options used with docker run.
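The commands themselves are not shown in this extract; a sketch consistent with the description (using the nginx image named in the text) might be:

```shell
# Both of these publish container port 80 on host port 8000
docker run -d -p 8000:80 nginx
docker run -d -p 8000:80/tcp nginx

# -P publishes every exposed container port on a random high host port
docker run -d -P nginx
```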