Title: Docker cheatsheet
Author: Solène
Date: 24 September 2020
Tags: docker
Description: 

Simple Docker cheatsheet. This is a short introduction to Docker usage
and answers to common questions I have been asking myself about Docker.

The official documentation for building docker images can be [found
here](https://docs.docker.com/develop/).

## Build an image

Building an image is really easy. You need to be in a directory
containing the data you will use to build the image and, most
importantly, a `Dockerfile`.

The Dockerfile holds all the instructions to build the image.
A simple example would be this description:

    FROM busybox
    CMD "echo" "hello world"

This will create a docker image using the busybox base image
and run `echo "hello world"` when you start a container from it.

To build the image, use the following command in the same
directory as the Dockerfile:

    $ docker build -t your-image-name .
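
Once the build succeeds, the image shows up in your local image list
(a quick check; the image name below is whatever you chose at build time):

    $ docker images your-image-name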


## Advanced image building

If you need to compile sources to distribute a working binary,
you first need an environment with the required build dependencies,
and then you need to compile a static binary so you can ship the
container without all those dependencies.

In the following example we will use a Debian environment to build
software fetched with git.

    FROM debian as work
    WORKDIR /project

    # install the build dependencies and fetch the sources
    RUN apt-get update
    RUN apt-get install -y git make gcc
    RUN git clone git://bitreich.org/sacc /project
    RUN apt-get install -y libncurses5-dev libncurses5
    RUN make LDFLAGS="-static -lncurses -ltinfo"

    # second stage: start from a clean debian image and only copy the binary
    FROM debian

    COPY --from=work /project/sacc /usr/local/bin/sacc

    CMD "sacc" "gopherproject.org"

I won't explain every command here, but you may notice that I
split the package installation into two commands. This was to help
with debugging.
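
Assuming this Dockerfile sits in the current directory, building and
running it would look roughly like this (the image name `sacc` is just
an example):

    $ docker build -t sacc .
    $ docker run -t -i sacc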

The trick here is that the docker build process has a cache feature.
Every time you use a `FROM`, `COPY`, `RUN` or `CMD` instruction,
docker caches the current state of the build process; if you re-run
the build, docker picks up from the most recent cached state before
the change.

I wasn't sure how to compile the software statically at first, and
having to install git, make and gcc and run git clone EVERY TIME
was very time and bandwidth consuming.

In case you run this build and it fails, you can re-run the build
and docker will pick up directly at the last working step.

If you change a line, docker will reuse the cached state from the last
FROM/COPY/RUN/CMD instruction before the changed line. Knowing this
is really important for efficient cache use.
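
Conversely, if you want to ignore the cache entirely and rebuild
everything from scratch, `docker build` takes a `--no-cache` flag:

    $ docker build --no-cache -t sacc .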


## Run an image

We can run the image built previously with the command:

    $ docker run your-image-name
    hello world

By default, when you run an image by name and no local image matches
that name, *docker* will check on the official docker registry whether
an image with that name exists; if so, it will be pulled and run.

    $ docker run hello-world

This is an official sample image that will display some
explanations about docker.
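
If you only want to fetch an image without running it, `docker pull`
does just that:

    $ docker pull hello-world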

If you want to try a gopher client, I made a docker version of it
that you can run with the following command:

    $ docker run -t -i rapennesolene/sacc

Why are the `-t` and `-i` parameters required? The former tells
docker you want a tty because the program manipulates a terminal,
and the latter asks for an interactive session.
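
These two flags are used together so often that they can be combined:

    $ docker run -it rapennesolene/sacc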


## Persistent data

By default, all the data of a docker container gets wiped out
once it stops, which may be really undesirable if you use
docker to deploy a service that has a state and requires an
installation, configuration files, etc.

Docker has two ways to solve this:

1) map a local directory
2) map a docker volume name

This is done with the parameter `-v` with the `docker run` command.

    $ docker run -v data:/var/www/html/ nextcloud

This will map a persistent volume named "data" on the host
to the path `/var/www/html` in the docker instance. By using the
name `data`, docker will check if `/var/lib/docker/volumes/data`
exists; if so it will reuse it, and if not it will create it.

This is a convenient way to name volumes and let docker manage them.
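
The volumes docker manages this way can be listed and inspected with
the `docker volume` subcommand:

    $ docker volume ls
    $ docker volume inspect data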

The other way is to map a local path to a path inside the
container.

    $ docker run -v /home/nextcloud:/var/www/html nextcloud

In this case, the directory `/home/nextcloud` on the host and
`/var/www/html` in the docker environment will be the same directory.
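
A quick sketch to illustrate the persistence (the `-d` and `--name`
options are only there to make the container easy to stop and remove):
after removing the container and starting a new one with the same `-v`
option, the data written in `/home/nextcloud` is still there.

    $ docker run -d --name nextcloud -v /home/nextcloud:/var/www/html nextcloud
    $ docker stop nextcloud && docker rm nextcloud
    $ docker run -d --name nextcloud -v /home/nextcloud:/var/www/html nextcloud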