Docker Containers, huh?

As a developer, there is nothing more frustrating than being unable to run an application you downloaded from a friend or coworker because your machine is missing a component the application requires. In comes the container to save the day.

Docker containers, what are they? A container is a piece of software that packages up the five important parts of an application: code, runtime, system tools, system libraries, and settings. They are all held in one lightweight package called an "image," which Docker builds from a list of instructions called a "Dockerfile." Let's dive a little deeper into what each of those things actually is, and how Docker takes all of them and lets us use them on any machine that can run the Docker Engine.

First is the application code. That word, "code," sounds intimidating, like it's something we're not meant to understand. Code is simply the set of instructions we give a computer to accomplish a task. When we use Docker, the image holds all of the code your program needs to run, and the Dockerfile tells Docker which files to copy into it. Those files can be written in any language your application needs. So if you love ALGOL, first, props to you; the code for your application can be written in it, and your image will keep it nice and tidy when you need to deploy.
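For example, the part of a Dockerfile that pulls your code into the image can be as short as this (the /app directory is just an illustrative choice):

```dockerfile
# Put everything from the build directory into /app inside the image
WORKDIR /app
COPY . .
```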

Next is the runtime of your application: the overarching requirements your application needs to do its job. This means things like the operating system your application runs on, and the background processes that have to be working so that what you want to happen actually happens. It can be summed up as all of the specifics of the language and system your program needs. Docker works well here because the image holds only the specific parts of the runtime your program needs. If your application doesn't need every aspect of the operating system, the image won't carry the whole thing. This is one way images stay small on disk and high on performance.
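The runtime is usually pinned down by the first line of a Dockerfile, the base image. A "slim" variant carries just enough of the operating system and language runtime and nothing more:

```dockerfile
# Base image: a minimal Debian layer with only the Python 3.12 runtime
FROM python:3.12-slim
```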

Images also bundle the system tools your application needs to run. These might be a specific build tool, or utilities that an operating system typically provides a user but that aren't guaranteed to be on every machine, such as curl, git, or a compiler. As long as you are able to write the Dockerfile and run the Docker Engine, the necessary tools can be bundled up into the image and then used on another machine.
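Assuming a Debian-based base image, tools like these can be installed with the distribution's package manager (the specific packages here are just examples):

```dockerfile
# Install system tools the application expects, then clean up the package cache
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl git \
    && rm -rf /var/lib/apt/lists/*
```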

System libraries are external pieces of code that abstract common tasks away so you don't have to write the logic yourself. Some specific examples are Moment.js to handle time in your JavaScript application, AForge.NET to add computer vision and artificial intelligence to a Windows application, and django-extensions to get useful extras in a Django project.
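In a Python image, these libraries are typically listed in a requirements.txt file and installed during the build (the file name is a common convention, not something Docker requires):

```dockerfile
# Install the libraries the application depends on
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
```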

The last part is the settings: the configuration that tells your application how to behave. Inside a container these usually take the form of environment variables and configuration files, covering things like which port the application listens on, where it writes its logs, or whether it runs in debug mode.
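Default settings can be baked into the image as environment variables (the variable names here are hypothetical):

```dockerfile
# Default settings, overridable at run time with docker run -e
ENV APP_PORT=8000 \
    APP_DEBUG=false
```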

How can Docker do all of these things with just one package?

Using a background process, or daemon, called containerd. When you start a container, the Docker daemon hands the image to containerd, which unpacks it and runs your application along with every dependency the image holds. This is what lets you use lightweight applications almost anywhere without needing to have all of the dependencies already on your machine. They come along with the application, so you can get straight to work.
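Putting it all together, using an image looks like this from the command line (the image name myapp is made up for illustration):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp .

# Start a container from that image; under the hood, containerd runs it
docker run --rm myapp
```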

Daemons themselves are fascinating: processes that quietly do their work in the background to keep everything going. Looks like I've got my next topic to research.