Introduction to Containers in Practice
Containers are complex, but they needn’t be mysterious. Here’s my de-jargoned guide to what they are and why they matter.
I had the pleasure of speaking at SwanseaCon this year for the second time and wanted to spread some love and understanding about containers. I find it’s great to be able to take something technical and make it accessible to a wider audience. I believe it helps us work better together.
As the title suggests, this is about containers “in practice”. As it happens, I spend a lot of time with four actual shipping containers that belong to Beechbrae, a social enterprise and charity based in the Scottish central belt, founded by my partner Ally who I can best describe as a kind, warm and unflinchingly kick-ass woman who’s created a remarkable organisation, out of sheer will and intention, to connect people with nature and each other.
When I say I know something about containers in practice, it’s more visceral than it might seem at first blush. I’ve also been working on projects with software containers using Docker since 2015, which is most of the time that Docker has been around.
It was great to have people in the room from a range of backgrounds including development, QA and delivery roles and a privilege to have been able to give everyone a warmer sense of confidence and understanding.
If you strike up a conversation with Google about what containers are, you’ll find a blizzard of different takes on the subject, from shouty advertising to a bunch of articles doing their best to explain the subject.
There are two primary concepts to understand about building and running containers: images and containers. You create or download images, then create and run containers based on them. Let’s look at each of these in turn.
You can think of an image as pretty much like a hard drive with pre-installed software. Plug that hard drive into a computer and turn it on, and the system will start up and begin running the software that’s been installed. If you’ve ever worked with virtual machines, a Docker image is much like a disk image. A Docker image is primarily made up of:
- a filesystem
- a command to run on startup
- environment variables and other configuration
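A minimal Dockerfile shows all three parts in one place. This is just an illustrative sketch (the base image, file names and port are my own examples, not from the talk):

```dockerfile
# Filesystem: start from a base image and layer files on top of it
FROM node:20-alpine
COPY app.js /app/app.js
WORKDIR /app

# Environment variables and other configuration
ENV PORT=3000

# The command to run on startup
CMD ["node", "app.js"]
```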
The clever part is that, rather than use an image directly, Docker creates containers from images.
To use a Docker image, you create a container. You can create as many containers as you like from an image, and each one will branch off down its own dedicated path. They’ll all share the same base image, but are free to diverge and change independently.
This is great for keeping software installations clean and avoiding “configuration drift” because rather than having to maintain strict configuration of a server, a fresh, clean container can be created and the old one deleted. It’s an instant rewind as the new container branches out from the image starting point.
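As a concrete sketch of that “instant rewind” (assuming Docker is installed, and using the public nginx image purely as an example):

```shell
# Create a container from an image; the image itself is never modified
docker run -d --name web nginx

# Changes made inside the container diverge from the image...
docker exec web touch /tmp/some-config-drift

# ...and rewinding is just deleting the container and creating
# a fresh, clean one from the same image starting point
docker rm -f web
docker run -d --name web nginx
```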
A container is primarily made up of:
- A running process
- A filesystem
- A network interface
A container is literally a single command running in the context of its own dedicated filesystem. Where things get really neat is that this command has access to a dedicated network interface. This is especially interesting when you’re running multiple containers.
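You can poke at all three parts directly with the Docker CLI (again assuming Docker is installed; alpine is just an example image):

```shell
# Start a container: one command, its own filesystem, its own network interface
docker run -d --name demo alpine sleep 300

# The running process
docker top demo

# The dedicated filesystem (this ls runs inside the container, not on the host)
docker exec demo ls /

# The dedicated network interface, with its own IP address
docker inspect --format '{{.NetworkSettings.IPAddress}}' demo

# Clean up
docker rm -f demo
```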
Normally, if you’re running, say, a web server, it will attach itself to a port on the host computer. That means you can only run one copy of the web server on the machine (unless you get into individually assigning each instance a different port configuration). With containers, you can run 1, 2, 10 or 11 (goes to eleven, right?) copies and each one will have its own port range to use. That’s incredibly powerful when you’re running disposable microservices that can come and go by the second. Ain’t nobody got time to configure that.
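For example (assuming Docker, with nginx as a stand-in web server), every copy listens on port 80 inside its own container, and host ports only come into it if you want traffic from outside:

```shell
# Two copies of the same web server, no port conflict:
# each binds port 80 on its own dedicated network interface
docker run -d --name web1 -p 8080:80 nginx
docker run -d --name web2 -p 8081:80 nginx

# Or let Docker pick free host ports for you automatically
docker run -d --name web3 -P nginx
docker port web3
```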
Try it yourself
If this has piqued your interest, head over to the workshop materials repository on GitHub and step through the tutorial. It will guide you through a bunch of interesting steps:
- Installing Docker
- Running your first container
- Running a web server without installing software on your machine
- Building a polyglot environment, running Node, Python, Go and Java simultaneously
- Exploring container scheduling: running a suite of containers using Docker Compose as a placeholder for Kubernetes
These steps are designed to help you explore how containers provide self-contained, disposable software installations that can run side-by-side in their dedicated environments, and how this leads naturally to a declarative infrastructure-as-code deployment model that allows you to scale services up and down in seconds.
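To give a flavour of the declarative side, here’s a hypothetical docker-compose.yml (the service names and images are illustrative, not taken from the workshop repository):

```yaml
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  api:
    image: node:20-alpine
    command: ["node", "server.js"]
    environment:
      PORT: "3000"
```

With a file like this, `docker compose up -d` starts the whole suite, and `docker compose up -d --scale api=3` runs three copies of the api service side by side — the scaling-in-seconds described above.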
Thank you and next steps
If you were able to come along to the session, thank you for being there. If not, I hope this has given you some insight into the simple concepts at the heart of containerisation.
If your interest is piqued, I encourage you to try out the workshop materials and feel for yourself how containers work in practice. The rabbit hole goes much deeper, particularly if you take the Kubernetes road, but the fundamentals are within reach for most people with a base of technical understanding and inclination, and grasping them lets you reason with acuity about how containerisation might fit into your world.
If you’d like to explore containers for your organisation, including advanced uses for clean development environments and simplified build pipelines for continuous delivery, feel free to get in touch.