I was recently introduced to a superb piece of software called Proxmox. Proxmox is a virtualization environment not unlike VMware ESXi. Capable of running full KVM based virtual machines or lightweight LXC based guests, Proxmox has proven to be the perfect solution for a home lab setup. Installing Proxmox is no different from installing any other Linux distribution, and with minimal effort multiple hosts can be clustered together into a system capable of migrating a guest from one host to another. With the right hardware you can even perform live migrations. Although Proxmox supports and is capable of a lot more than I need, it satisfies my desire to have a more “enterprise” like way to virtualize hardware in my home.

Proxmox is free, with support plans available. If I were to use it anywhere other than at home I’d definitely pay for the support subscription, as it gives you access to the proper update repositories as well as, obviously, support. Without a support subscription your Proxmox installation basically pulls from a testing repository, meaning you get faster access to updates but also updates that are less thoroughly tested.

In the coming weeks I’ll detail a bit more how I’m using Proxmox, how to set up KVM or LXC based hosts and how to provision them using Ansible.

Sometimes when using Ansible you need to reboot a server and wait for it to return. This simple recipe lets you achieve that while also giving you some nice feedback so you know what is going on. You can place these tasks into a role or just in your playbook:

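(The tasks below are a sketch of the pattern rather than a drop-in copy; the task names, delay and timeout values are just reasonable defaults to adjust for your environment.)

```yaml
# Sketch of the reboot-and-wait pattern. Names, delays and timeouts here
# are my own choices, not necessarily the exact original recipe.
- name: Reboot server
  shell: sleep 2 && shutdown -r now "Reboot triggered by Ansible"
  async: 1
  poll: 0
  become: true

- name: Wait for server to come back up
  wait_for:
    host: "{{ ansible_host | default(inventory_hostname) }}"
    port: 22
    delay: 15
    timeout: 300
  delegate_to: localhost
  become: false

- name: Verify the host is reachable again
  ping:
```

Newer versions of Ansible also ship a built-in reboot module that wraps this whole dance into a single task, but the pattern above works everywhere.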

Recently I found myself needing to set up Elasticsearch for a project and thought it’d be the perfect opportunity to utilize Docker. This article discusses a portion of how I set up a set of three VMs to quickly get Elasticsearch up and running with minimal fuss. Hopefully you find it useful and can adapt the information provided here to your own setup.

I chose to use Docker for this project for a few reasons. For one, it allowed me to avoid iterating over and over on a puppet module/configuration when the work of installing Elasticsearch was already done for me and nothing more than a command away. It allowed me to concentrate on configuring Elasticsearch itself rather than on how to express that configuration in a different environment. Another reason I went with Docker is that you end up naturally documenting how the service runs on that system, because the configuration is actually part of the command line arguments. It’s visible both in the script I use to create the container and in the output of docker inspect. Everything you need to know is centrally located rather than scattered across system files where you might miss something on one server. In the end, I have a setup that is easier to understand, easier to debug and ultimately more portable.

For my setup I started with three Ubuntu 16.04 VMs configured using puppet to have the latest version of Docker installed and ready to go. These machines were configured with 16GB of memory and were each given an IP address (192.168.0.51, 192.168.0.52 and 192.168.0.53). From there I created a simple script called start_elasticsearch.sh to create the container on each server:
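(The version below is a sketch rather than my exact script; the image tag, cluster name and heap size are assumptions you should adjust for your own environment. The discovery hosts and the ens160 interface match the example network described above.)

```bash
#!/bin/bash
# start_elasticsearch.sh -- create and start the Elasticsearch container.
# Image tag, cluster name and heap size below are assumptions; adjust them.
# Note: Elasticsearch 5.x also expects vm.max_map_count >= 262144 on the
# host (sysctl -w vm.max_map_count=262144).

# Publish the address bound to ens160 so the other nodes can reach this one
PUBLISH_HOST=$(ip -4 addr | grep ens160 | grep inet | awk '{print $2}' | cut -d/ -f1)

# Remove any previous container with the same name
docker rm -f elasticsearch 2>/dev/null

docker run -d \
  --name elasticsearch \
  --restart unless-stopped \
  -p 9200:9200 -p 9300:9300 \
  -v /var/lib/elasticsearch:/usr/share/elasticsearch/data \
  -e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
  elasticsearch:5.6 \
  -Ecluster.name=docker-cluster \
  -Enetwork.host=0.0.0.0 \
  -Enetwork.publish_host="${PUBLISH_HOST}" \
  -Ediscovery.zen.ping.unicast.hosts=192.168.0.51,192.168.0.52,192.168.0.53 \
  -Ediscovery.zen.minimum_master_nodes=2
```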

Be sure to change the IP addresses listed in the -Ediscovery.zen.ping.unicast.hosts line as well as the interface name in the -Enetwork.publish_host line (grep ens160) to match the values of your systems. You can adjust other values as well to match your setup, such as ES_JAVA_OPTS to set the heap size (use about 50% of total system memory). Save the script and then mark it executable with chmod +x start_elasticsearch.sh. Tuck it away wherever you want; you only need to run it again if these parameters change.

Next, create a directory at /var/lib/elasticsearch. This is where the actual data files will be stored. Repeat these steps on each server in your cluster. You can now run the script, which will download the Elasticsearch version specified and start it with the parameters you provided. Once all instances are started they’ll discover each other and will just work.
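For reference, the per-node steps look something like the following; the chown to UID 1000 is an assumption based on the user the official Elasticsearch image runs as:

```bash
# One-time setup on each of the three servers
sudo mkdir -p /var/lib/elasticsearch
sudo chown -R 1000:1000 /var/lib/elasticsearch
./start_elasticsearch.sh

# Once all three are up, the cluster health should report three nodes
curl "http://192.168.0.51:9200/_cluster/health?pretty"
```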

From here you can access any of the servers at its IP address on port 9200. For production use, it’s wise to put some kind of load balancer in front of the cluster. I used nginx to get the job done.
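My actual nginx configuration isn’t anything fancy; a minimal sketch looks something like this (the upstream name and listen port are my own choices, not a copy of my config):

```nginx
# Round-robin the three Elasticsearch nodes behind one address
upstream elasticsearch {
    server 192.168.0.51:9200;
    server 192.168.0.52:9200;
    server 192.168.0.53:9200;
}

server {
    listen 80;

    location / {
        proxy_pass http://elasticsearch;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```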

In a future post, I’ll detail how to achieve this same setup using docker-compose and possibly docker swarm.

I’ve been putting a lot of time into this little project. Nobody uses it (yet?) and truth be told I barely use it in the house, but it’s been such a great way to learn a number of different things, including Python, mDNS (Bonjour), creating installer files for Debian and OS X systems and even Git, that I can’t stop working on it.

I’m now releasing version 0.3.0. This version brings a few changes, but most notably the Linux client is now ready. The next release will be coming shortly and will focus on making the client more robust about how it deals with network disconnects.

You can read more about the 0.3.0 release at https://github.com/dustinrue/Dencoder/wiki

I just spent the last couple of hours trying to figure out why I couldn’t create a new software RAID set on my Ubuntu 10.04 system. Long story short, it turned out to be the device mapper grabbing hold of the drives at boot. No amount of lsof would show that the devices were busy. The key was running dmsetup table and seeing that the drives in question were indeed “locked” by the device mapper.

This thread was the key I needed to get it all figured out – http://www.mail-archive.com/[email protected]/msg10661.html

After issuing dmsetup remove followed by the device name shown in dmsetup table, I was off and running.
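In other words, something along these lines; the mapping name below is just a placeholder, so use whatever dmsetup table reports on your system:

```bash
# List current device mapper mappings and spot the one holding your drives
sudo dmsetup table

# Release that mapping (the name here is a placeholder, not your actual one)
sudo dmsetup remove isw_example_Volume0

# The drives are now free for mdadm to use
```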

Well, not quite. It turns out people have been getting confused by the pricing grid Oracle has on their site for the various products they provide. The confusion comes from the Embedded version of MySQL not supporting InnoDB and from the community edition not being listed as part of the grid.

The community edition still has InnoDB built in as an available storage engine, but you can’t buy support from Oracle.

http://www.mysql.com/products/
http://palominodb.com/blog/2010/11/04/oracle-not-removing-innodb