Everyone should know Docker! This post is my reference for it: I will guide you through its concepts, installation, everyday handling, and capabilities. Docker is great for running apps and services or for creating development environments.
What is Docker?
Docker is built on a concept called containerization: it is a platform for building and running consistent, isolated app containers.
What is containerization?
Containers are an intelligent way of isolating app workloads. Every app or service gets its own container environment, which can be built, run, and distributed. All containers execute in an isolated runtime. It's also possible to ship containers to different hosts, as long as the Docker platform is available there. Unlike virtual machines, Docker containers don't need a guest operating system: all apps run directly on one host operating system, and communication goes through Docker to the OS kernel.
Docker can run multiple app containers at the same time, even if they run the same app or service. But how do apps get into containers? That's the easy part: images! A container is an instance of an image. Unlimited containers can be built from one image, but the image itself is never edited directly. The Docker Hub provides a ton of images for all the relevant apps and services, for example: MySQL, nginx, Apache httpd, Ubuntu, WordPress and many many more...
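To illustrate one image backing many containers, a quick sketch (the container names web1/web2 are just placeholders I picked):

```shell
# Pull the nginx image once...
docker pull nginx
# ...then start two independent containers from that same image
docker run -d --name web1 nginx
docker run -d --name web2 nginx
# Both appear as separate, isolated containers
docker container ls
```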
Apps/Services in Containers
The daemon (dockerd) is the server-side service that manages Docker objects like containers, images, etc.
The client is primarily a CLI that lets the user interact with the daemon, either on the same system or remotely.
The registry stores images. I already mentioned the Docker Hub, which is the default public registry.
Source: Docker Docs
The installation is fairly simple. Docker supports Linux, Windows and macOS, but I would always recommend going with Linux. You can find all the guides here. This is my short variant for Debian-based systems (the repository URLs below are for Ubuntu):
- Prepare your Linux by updating all system packages first:
"sudo apt-get update && sudo apt-get upgrade"
- Install dependency packages with:
"sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common"
- Add official Docker GPG key with:
"curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -" - the key is used to verify the digital signatures of the packages
- Check the key against output from:
"sudo apt-key fingerprint 0EBFCD88"
the output should show a key whose fingerprint ends in 0EBF CD88; compare it with the fingerprint published in the Docker docs.
- The next step is to add the Docker repository; do this with:
"sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"" - replace "stable" to switch the release channel of the repository | amd64 represents your system architecture, check yours with "dpkg --print-architecture" | "$(lsb_release -cs)" inserts your distribution's release codename
- Now we get to install the Docker engine. Run:
"sudo apt-get update"
"sudo apt-get install docker-ce docker-ce-cli containerd.io"
- We are basically done now. Run "docker run hello-world" to start a test container: Docker will download the hello-world image from the registry and create a container from it. If all of this was successful, you should get a message saying that your Docker installation appears to be working correctly.
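The output should look roughly like this (abbreviated):

```shell
$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
```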
The most important commands are listed here:
Docker basic commands
Create a container
- Pull an image of a service (e.g. ubuntu) from the hub
docker pull ubuntu
- Start the container from this image
docker run --name myfirstcontainer -it ubuntu bash
Let's analyze that:
--name myfirstcontainer = the user-friendly name of the container is "myfirstcontainer" (of course every container also gets a unique ID, but it's a real struggle to work with that)
-it = interactive mode (-i keeps STDIN open, -t allocates a pseudo-terminal)
ubuntu = the image name (remember, images are never edited directly!)
bash = the command to run inside the container, which gives us direct shell access
Now you are in your first container! Notice that the hostname changed (on the CLI it's now a random-looking combination, the container ID). The shell you now have runs in a completely separate, isolated environment. Since the image is ubuntu, all system functions and structures are available just like on a normal Ubuntu machine.
All done! To leave the container just type exit.
With this knowledge, every service can be deployed in less than a minute. And once an image is downloaded, you can create as many containers from it as you need.
- To check the current state of your container type:
docker container ls -a
A new entry named "myfirstcontainer" should be visible in the list, along with additional information about the container.
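The listing should look roughly like this (the ID and timestamps are of course placeholders):

```shell
$ docker container ls -a
CONTAINER ID   IMAGE     COMMAND   CREATED         STATUS                      PORTS     NAMES
1a2b3c4d5e6f   ubuntu    "bash"    2 minutes ago   Exited (0) 10 seconds ago             myfirstcontainer
```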
- Reaccess the container with:
"docker start myfirstcontainer" (exec only works on a running container, so start it again first)
"docker exec -it myfirstcontainer bash"
All changes that you made in the container environment are still there.
When creating a container, many options can be passed. The most useful ones I found are listed here:
Docker basic parameters, full list here
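A few of these parameters combined in one command (the container name, ports, and image are just examples I chose):

```shell
# Run nginx detached (-d), give it a name, and publish container port 80 on host port 8080 (-p)
docker run -d --name webserver -p 8080:80 nginx
```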
There is one more reason Docker is brilliant: every time a development environment is needed, just create a container with "--rm". The container and all its content will be fully deleted when you leave it.
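A throwaway dev environment is then a one-liner:

```shell
# Interactive ubuntu shell; the container is removed automatically on exit
docker run --rm -it ubuntu bash
```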
Creating your own Docker images
Until now, we have always pulled official images from the Docker Hub. But a core feature of Docker is creating your own individual images. This is fundamentally easy but can scale to high complexity. An example could look like this:
I take the ubuntu image from the hub as a base, then add the predefined label "maintainer" with my info, run some commands inside the image (every container built from this image later on will have the results of those commands baked in) and add an output.
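The corresponding Dockerfile could look roughly like this (the maintainer value and the echo text are placeholders; /examplefolder is the folder we will check for later):

```dockerfile
# Take the official ubuntu image from the hub as base
FROM ubuntu
# Predefined label with maintainer info (placeholder value)
LABEL maintainer="Your Name <you@example.com>"
# Commands run inside the image: every container built from it will contain this folder
RUN mkdir /examplefolder
# Output printed when a container starts
CMD ["echo", "Hello from my own image!"]
```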
Now we build the image: Docker goes through all the steps, starting from the base image, executes each command inside a temporary container, and creates a new image from the result.
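Assuming the Dockerfile sits in the current directory (the tag name "myownimage" is just an example):

```shell
# Build an image from the Dockerfile in . and tag it
docker build -t myownimage .
# Start a container from it and look around
docker run -it myownimage bash
```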
And as you can see, examplefolder exists in this container.
So these are the basics of Docker. In my opinion, definitely a great tool for all kinds of needs!