Although I’m more than comfortable using command line tools to manage things, there are times when a GUI is just more convenient. Pruning old containers, images and volumes in Docker are all things that are much easier to manage under a new tool I saw via Twitter the other day. Portainer promises to make the task of managing Docker a bit easier, and they’ve made good progress on delivering on that promise. Getting up and running with it is incredibly simple because, as you’d expect, it’s available as a Docker image. Simply issue the following command:

docker run -d -p 9000:9000 -v "/var/run/docker.sock:/var/run/docker.sock" -v portainer:/data --name portainer portainer/portainer

This will get Portainer up and running on your system. The docker.sock mapping is what allows Portainer to talk to the Docker daemon on the host, and the portainer volume provides a persistent store for the little bit of data Portainer generates. For full details, visit their documentation site.

Recently I found myself needing to set up Elasticsearch for a project and thought it’d be the perfect opportunity to utilize Docker. This article covers how I set up three VMs to quickly get Elasticsearch up and running with minimal fuss. Hopefully you find it useful and can adapt the information provided here to your own setup.

I chose to use Docker for this project for a few reasons. For one, it allowed me to avoid iterating over and over on a Puppet module/configuration when the work of installing Elasticsearch was already done for me and nothing more than a command away. It allowed me to concentrate on configuring Elasticsearch itself rather than on how to express that configuration in a different environment. Another reason I went with Docker is that you end up naturally documenting how the service runs on a system because the configuration is actually part of the command line arguments. It’s visible both in the script I use to create the container and in the output of docker inspect. Everything you need to know is centrally located rather than scattered across system files where you might miss something on one machine. In the end, I have a setup that is easier to understand, easier to debug and ultimately more portable.

For my setup I started with three Ubuntu 16.04 VMs configured using Puppet to have the latest version of Docker installed and ready to go. These machines were configured with 16GB of memory, and each was given an IP address (192.168.0.51, 192.168.0.52 and 192.168.0.53). From there I created a simple script called start_elasticsearch.sh to create the container on each server:

#!/bin/bash

# Remove any existing container so the script can be re-run safely
docker rm -f elasticsearch
docker run --restart always \
 --ulimit memlock=-1:-1 \
 --ulimit nofile=65536:65536 \
 --cap-add=IPC_LOCK \
 -d \
 -e ES_JAVA_OPTS="-Xms7g -Xmx7g" \
 -v /var/lib/elasticsearch/:/usr/share/elasticsearch/data \
 -v /etc/timezone:/etc/timezone -v /etc/localtime:/etc/localtime \
 -p 9300:9300 -p 9200:9200 \
 --name elasticsearch \
 elasticsearch:5.2 \
 -Ediscovery.zen.ping.unicast.hosts=192.168.0.51,192.168.0.52,192.168.0.53 \
 -Ebootstrap.memory_lock=true \
 -Enetwork.host=0.0.0.0 \
 -Enode.name=${HOSTNAME} \
 -Enetwork.publish_host=$(ip addr | grep inet | grep ens160 | awk '{print $2}' | cut -d '/' -f 1) \
 -Ediscovery.zen.minimum_master_nodes=2

Be sure to change the IP addresses listed in the -Ediscovery.zen.ping.unicast.hosts line, as well as the interface name in the -Enetwork.publish_host line (grep ens160), to match the values of your systems. You can adjust other values as well to match your setup, such as ES_JAVA_OPTS to set the heap size (use about 50% of total system memory). Save the script and then mark it executable with chmod +x start_elasticsearch.sh. Tuck it away wherever you want; you only need to run it again if these parameters change.
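If you want to derive that 50% heap figure rather than hard-code it, a small sketch like the following works on a Linux host (halve_mem_gb is a hypothetical helper, and reading /proc/meminfo is an assumption about your environment):

```shell
# Compute a heap size of roughly 50% of total system memory, in GB,
# per the guideline above. Assumes Linux with /proc/meminfo.
halve_mem_gb() {
  local total_kb=$1
  echo $(( total_kb / 2 / 1024 / 1024 ))
}

total_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
heap_gb=$(halve_mem_gb "$total_kb")
echo "ES_JAVA_OPTS=\"-Xms${heap_gb}g -Xmx${heap_gb}g\""
```

On the 16GB machines above this lands at 8g; I used 7g to leave a little extra headroom for the OS.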

Next, create a directory at /var/lib/elasticsearch; this is where the actual data files will be stored (you may need to adjust its ownership so the elasticsearch user inside the container can write to it). Repeat these steps on each server in your cluster. You can now run the script, which will download the Elasticsearch version specified and start it with the parameters you provided. Once all instances are started, they’ll discover each other and just work.
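You can confirm the cluster actually formed by hitting the _cluster/health endpoint on any node; a status of green means all three nodes found each other. A quick sketch of the URLs to check (the IPs are the example addresses above; adjust for your setup):

```shell
# Build the health-check URL for each node in the example cluster.
NODES="192.168.0.51 192.168.0.52 192.168.0.53"

health_url() {
  echo "http://$1:9200/_cluster/health?pretty"
}

for node in $NODES; do
  # curl -s "$(health_url "$node")"   # run this against a live cluster
  echo "would check: $(health_url "$node")"
done
```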

From here you can access any of the servers at its IP address on port 9200. For production use, it’s wise to put some kind of load balancer in front of the cluster. I used nginx to get the job done.
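A minimal nginx sketch for that load balancer could look like the following; the upstream IPs are the example addresses above, and this fragment is an assumption about the shape of the setup rather than the exact config I used:

```nginx
# Round-robin HTTP load balancing across the three Elasticsearch nodes.
upstream elasticsearch {
    server 192.168.0.51:9200;
    server 192.168.0.52:9200;
    server 192.168.0.53:9200;
}

server {
    listen 9200;
    location / {
        proxy_pass http://elasticsearch;
    }
}
```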

In a future post, I’ll detail how to achieve this same setup using docker-compose and possibly docker swarm.

I found myself in an odd situation when running Jenkins using Docker. The time displayed was correct, but it claimed to be UTC, which led to some odd behavior. The best way to resolve this is to force the Docker container to use the correct timezone from the host system.

To do so, add the following to your run command:

-v /etc/timezone:/etc/timezone -v /etc/localtime:/etc/localtime
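Put together, a full Jenkins invocation with those mounts might look like the sketch below. The image tag, port mappings and volume name are assumptions, so adjust for your setup; the command is assembled into a variable and echoed so you can review it before running it for real:

```shell
# Sketch: a complete Jenkins run command with the host timezone mounts added.
# Image tag, ports and volume name are assumptions; adjust for your setup.
tz_mounts="-v /etc/timezone:/etc/timezone -v /etc/localtime:/etc/localtime"

cmd="docker run -d -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  $tz_mounts \
  --name jenkins jenkins/jenkins:lts"

# Review the assembled command, then run it with: eval "$cmd"
echo "$cmd"
```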