
Make yourself at home

One line and you feel right at home.

One of Docker’s use cases is providing a ready-made working environment. Let’s call it a toolbox, as it provides some tools, or a sandbox, as once you’re inside you can do whatever you want and it’s always ready.

Why?

I’m not a big fan of customization. I don’t want shortcuts that work on one machine and not on another. I don’t like having to rebuild my environment after an upgrade. And I really hate having to figure out all over again how I fixed some detail.

A loonnng time ago, my approach was to change the default configuration as little as possible. But that doesn’t fix everything. Then I switched to Puppet: one place to describe what I want sounds nice, but I find describing the final state not precise enough. It’s still better than only describing the transitions (yes, I’m pointing at Ansible). But having to declare that a daemon must be restarted whenever you change a configuration file is extra work, and it doesn’t prevent you from ending up with a dirty environment. That’s why I like Docker: the documentation is the Dockerfile and/or the entrypoint script, and that’s all. I found two ways to quickly set up your own working environment.
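
To give an idea of what “the documentation is the Dockerfile” means, here is a hypothetical minimal one (the package list is only an example):

# Hypothetical sketch: the whole environment, documented in one place
FROM debian:stable
RUN apt-get update && apt-get install -y git vim tmux
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]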

The lifeboat

There’s a very interesting approach taken by CoreOS with their toolbox. The idea is to download a Fedora image (by default; it’s customizable), make it persistent on disk, and start it without using Docker at all, just systemd functionality. I would say that half of the work is done, and of course it’s the good half. I love the possibility of having an environment per user, possibly started at login, and without Docker (!). Perfect for debugging, or even for restarting the Docker service itself!

The script is in the coreos/toolbox repo, and I encourage you to read it to see Docker used simply as an easy and universal debootstrap:

# docker pull "${TOOLBOX_DOCKER_IMAGE}:${TOOLBOX_DOCKER_TAG}"
# docker run --name=${machinename} "${TOOLBOX_DOCKER_IMAGE}:${TOOLBOX_DOCKER_TAG}" /bin/true
# docker export ${machinename} | sudo tar -x -C "${machinepath}" -f -
# docker rm ${machinename}

And using systemd to start the “container” without delay:

# sudo systemd-nspawn \
  --directory="${machinepath}" \
  --capability=all \
  --share-system \
  --bind=/:/media/root \
  --bind=/usr:/media/root/usr \
  --bind=/run:/media/root/run \
  --user="${TOOLBOX_USER}" "$@"
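
On a CoreOS host, a session then looks something like this (a sketch; installing tcpdump is just an example of pulling a debugging tool into the toolbox):

$ /usr/bin/toolbox
[root@toolbox ~]# dnf -y install tcpdump
[root@toolbox ~]# tcpdump -i eth0

Thanks to the bind mounts and --share-system above, the tools see the host’s processes and network.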

The biggest drawback is, of course, that we lose some Docker functionality: the image is downloaded only once (on the first call), and what you install can’t be separated from the base when you want a newer environment, so you have to reinstall everything… Well, in the end, you have to build your own image and refresh it yourself.
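
Note that the script sources ~/.toolboxrc, so pointing it at your own image is a one-file change:

# ~/.toolboxrc (this image name is hypothetical)
TOOLBOX_DOCKER_IMAGE=you/your-toolbox
TOOLBOX_DOCKER_TAG=latest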

Home, sweet home

Another approach is to use a Docker image as an environment that isolates you while you work. A bit like running puppet/chef/a script on a machine to set it up the way you want. I created an image, cell/debsandbox, that goes in this direction. The goal is “just” to change the machine around the current directory.

The main difficulty is that you don’t have all the information you need at build time (like the username, the UID, …). So part of the setup has to happen at run time, in the entrypoint script, driven by options:

# docker run -ti --rm -w $PWD -v $PWD:$PWD \
  -v /etc/localtime:/etc/localtime:ro \
  -v $HOME/.ssh:$HOME/.ssh \
  -e USER -e UID=$(id --user) -e GID=$(id --group) \
  -v $SSH_AUTH_SOCK:$SSH_AUTH_SOCK -e SSH_AUTH_SOCK \
  -v $(which docker):$(which docker) \
  -v /var/run/docker.sock:/var/run/docker.sock \
  cell/debsandbox
$ 
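
Note the last two mounts: binding the docker client and /var/run/docker.sock lets you drive the host’s Docker daemon from inside the sandbox (the docker-compose examples below depend on it).

On the other side, the entrypoint consumes the -e options. A minimal sketch of what it has to do (an assumption, not the actual cell/debsandbox script): recreate the calling user, so that files written in the bind-mounted $PWD keep the right owner.

#!/bin/sh
# Hypothetical entrypoint sketch, not the real cell/debsandbox script
groupadd --gid "$GID" "$USER"
useradd --uid "$UID" --gid "$GID" --no-create-home "$USER"
if [ $# -eq 0 ]; then
    exec su "$USER"           # no argument: interactive shell
else
    exec su "$USER" -c "$*"   # one-off command
fi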

In order to maintain this list on all my computers, I just wrap it in a calling script named /usr/local/bin/dsb:

#!/bin/sh
eval $(docker run --rm cell/debsandbox cmd) "$@"

The cmd command generates the whole argument list.
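
It can be as dumb as an echo. A hypothetical version, with the run line single-quoted so that $PWD, $HOME, and friends expand on the host at eval time, not inside the container:

#!/bin/sh
# Hypothetical sketch of the in-image cmd helper
echo 'docker run -ti --rm -w $PWD -v $PWD:$PWD -v /etc/localtime:/etc/localtime:ro -v $HOME/.ssh:$HOME/.ssh -e USER -e UID=$(id --user) -e GID=$(id --group) -v $SSH_AUTH_SOCK:$SSH_AUTH_SOCK -e SSH_AUTH_SOCK -v $(which docker):$(which docker) -v /var/run/docker.sock:/var/run/docker.sock cell/debsandbox'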

With this, I don’t need to deploy hugo or docker-compose to build and run my blog. I just have to be in the right directory, start my environment, and run my command inside:

# dsb
$ hugo server --bind=0.0.0.0 --port=8080 --baseUrl=http://${IP}:8080/ --buildDrafts --watch

or just start my environment for one command:

# dsb hugo
# dsb docker-compose build
# dsb docker-compose up

If I want to change something in this image, I just have to push it to the GitHub repo; thanks to the automated builds, five minutes later the new image is available. “dsb refresh” pulls the latest version, and all my environments are up to date.
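
One way to wire that subcommand into the wrapper (an assumption on my part; the real mechanism could just as well live in the image):

#!/bin/sh
# Hypothetical: intercept "refresh" before delegating to the image
if [ "$1" = "refresh" ]; then
    exec docker pull cell/debsandbox
fi
eval $(docker run --rm cell/debsandbox cmd) "$@"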

Conclusion

So now, it’s just a matter of choosing the right approach for each use case. I’m starting to think about building a rescue image with the coreos/toolbox script for emergencies. I’ll keep my cell/debsandbox for daily business.