Pushed some updates to my proxmox packer project at https://github.com/dustinrue/proxmox-packer. The updates add support for Packer 1.9.2 and the most recent Proxmox plugin. The makefile has been updated to run the necessary init command to get the Proxmox plugin installed as well.
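
For anyone curious what that init step looks like, recent Packer releases install plugins declared in a required_plugins block. A minimal sketch, assuming an HCL-based template (the version constraint here is illustrative, not the exact contents of the repo):

packer {
  required_plugins {
    proxmox = {
      version = ">= 1.1.0"
      source  = "github.com/hashicorp/proxmox"
    }
  }
}

With that in place, the makefile only needs to run packer init . before building.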

More importantly, the changes fix an issue that prevented the provisioner from running, which led to broken or missing cloud-init support in the resulting templates.

Semi-related to my previous post, this post quickly touches on the fact that having swap on your system is not always a bad thing. I have seen “disable swap” become a common “performance hack” suggested by a lot of people, and it appears to be growing in popularity. I believe a lot of people are simply parroting something they heard once and don’t actually know when it makes sense to disable swap on a system. In my experience, outright disabling swap can have a detrimental effect on system performance.

The basic idea behind not using swap is sound, on the surface. The argument is that swap is much, much slower than system memory, and that if you are hitting swap you need more memory. Adding to the confusion, a lot of people don’t understand how memory works on Linux (and indeed on all major operating systems). Linux wants to use as much memory as possible. If you give it 1TB of memory (or more), it will do everything it can to eventually use all of it. How it uses that memory, however, can be confusing. Looking at the output of free -m below, it may not be obvious what is happening:

[root@web2 system]# free -m
              total        used        free      shared  buff/cache   available
Mem:            809         407         137          37         263         251
Swap:          1023         282         741

In the above output from free -m you will see the columns total, used, free, shared, buff/cache and available. The values for each, respectively, are 809, 407, 137, 37, 263 and 251.

In a lot of cases, the value most people will look at is “free.” Unfortunately, on a system that has been running for some time, this value will almost always give the impression that the system is low on memory. Like so many things, there is a lot more to it than what the free value shows. In reality, the value you want to pay attention to is available. This value is the kernel’s estimate of how much memory could be made available: free memory plus memory that can be reclaimed at any time for other purposes. The “cache” portion of the buff/cache value is what can be reclaimed, and it represents data from disk that is cached in memory. It is this cache that operating systems try to keep full in order to avoid expensive disk reads, and it is why a system with a lot of memory can have very little free memory.
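
If you want to see where these numbers come from, free reads them out of /proc/meminfo; the MemAvailable field (present since roughly kernel 3.14) is exactly the estimate described above:

# MemAvailable is the kernel's estimate of how much memory can be
# used without swapping; free reports it in the "available" column
grep -E 'MemFree|MemAvailable' /proc/meminfo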

A system that is low on available memory will also not be able to cache many disk reads (remember that available is roughly free plus reclaimable cache), which will lead to lower overall performance. Of course, loading an entire disk into memory won’t necessarily have a positive effect on overall performance either. If a file is read once and never used again, does it really need to be cached? Having a lot of memory can lead to things being needlessly cached. A system with 16GB of memory can perform just as well as a system with 32GB of memory if most of the 32GB is filled with files that are rarely read again.

Getting to why having swap is not evil: some apps, and portions of apps, aren’t always being used even while they are running. For this reason, having swap available on a system is beneficial because the operating system can page that application memory out to disk and free up memory to use as disk cache for more active applications. In some cases, such as the web server hosting this site, having swap available is a necessity because it allows me to run a system with less memory while still maintaining proper performance under normal conditions. Services that are necessary but rarely used are swapped out, leaving room in memory for application code that is actually hot. WordPress is “hot data” whereas systemd is not. Once the system has booted, systemd, while necessary, is not actively doing much and can be paged to disk without affecting performance in a noticeable way. Swap is an issue, however, if you are dipping into it continuously. This will quickly become evident if you have a lack of available memory combined with high swap usage. In that case, you truly do need more memory in the system.
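
A quick way to tell the difference between swap that is merely occupied and swap that is being continuously hit is to watch the si and so columns of vmstat:

# si = memory swapped in from disk per second, so = memory swapped
# out to disk per second; occasional blips are fine, but sustained
# non-zero values mean the system is actively thrashing swap
vmstat 1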

I hope this post helps clear up some of the confusion around memory usage on systems. Have anything to share? Did I get something wrong? Leave a comment!

I have been running Linux as a server operating system for over twenty years now. For a brief period around 2000-2001, I also ran it as my desktop solution. Try as I might, however, I could never really fully embrace it. I have always found Linux as a desktop operating system annoying to deal with and too limiting (for my use cases, your mileage may vary). A recent series by Linus Tech Tips does a great job of highlighting some of the reasons why Linux as a desktop operating system has never really gone mainstream (Chromebooks being a notable exception).

Check out the videos on the Linus Tech Tips YouTube channel.

Arm processors, used in Raspberry Pis and maybe even in a future Mac, are gaining in popularity due to their reduced cost and improved power efficiency over more traditional x86 offerings. As Arm processor adoption accelerates, Docker images that support both x86 and Arm will become more and more of a necessity. Luckily, recent releases of Docker are capable of building images for multiple architectures. In this post I will cover one way to achieve this by combining a recent release of Gitlab (12+), k3s and the buildx plugin for Docker.

I am taking inspiration for this post from two places. First, this excellent writeup was a great help in getting things started – https://dev.to/jdrouet/multi-arch-images-with-docker-152f. This post was also instrumental in getting this going – https://medium.com/@artur.klauser/building-multi-architecture-docker-images-with-buildx-27d80f7e2408.

I assume you already have a working installation of Gitlab with the container registry configured. Optionally, you can use Docker Hub, but I won’t cover that in detail; it amounts to changing the repository URL and logging into Docker Hub instead. You will also need a system capable of running k3s with a kernel of at least Linux 4.15. For this you can use either Ubuntu 18.04+ or CentOS 8. There may be other options but I know these two will work. The kernel version is a hard requirement and is something that caused me some headache; if I had just RTFM I could have saved myself some time. For my setup I installed k3s onto a CentOS 8 VM and then connected it to Gitlab. For information on how to set up k3s and connect it to Gitlab please see this post.
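
For reference, installing k3s itself is a single command using the official install script (as always, review a script before piping it into a shell):

# installs k3s and starts it as a systemd service
curl -sfL https://get.k3s.io | sh -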

Once you are running k3s on a system with a supported kernel you can start building multi-arch images using buildx. I have created an example project available at https://github.com/dustinrue/buildx-example that you can import into Gitlab to get you started. This example project targets a runner tagged as kubernetes to perform the build. Here is a breakdown of what the .gitlab-ci.yml file does, with a condensed sketch after the list:

  • Installs buildx from GitHub (https://github.com/docker/buildx) as a Docker cli plugin
  • Registers qemu binaries to emulate whatever platform you request
  • Builds the images for the requested platforms
  • Pushes resulting images up to the Gitlab Docker Registry
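
The file in the example project is the authoritative version, but condensed, those steps look roughly like this (the buildx release, image tags and login details here are illustrative):

build:
  image: docker:stable
  services:
    - docker:dind
  tags:
    - kubernetes
  variables:
    DOCKER_CLI_EXPERIMENTAL: enabled
  before_script:
    # install the buildx CLI plugin from its GitHub releases page
    - mkdir -p ~/.docker/cli-plugins
    - wget -qO ~/.docker/cli-plugins/docker-buildx https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-amd64
    - chmod +x ~/.docker/cli-plugins/docker-buildx
    # register qemu binfmt handlers so foreign architectures can be emulated
    - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
  script:
    # create a builder and force it to start (see the note below)
    - docker buildx create --use
    - docker buildx inspect --bootstrap
    - docker buildx build --push --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 -t "${CI_REGISTRY_URL}:${CI_COMMIT_SHORT_SHA}" .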

Unlike the linked posts, I also had to add a docker buildx inspect --bootstrap to make things work properly. Without it the new context was never active and the builds would fail.

The example .gitlab-ci.yml builds multiple architectures. You request the architectures to build using the --platform flag. This command, docker buildx build --push --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 -t ${CI_REGISTRY_URL}:${CI_COMMIT_SHORT_SHA} ., will cause images to be built for the listed architectures. If you need a list of architectures you can target, add docker buildx ls right before the build command to print the platforms the builder supports.

Once the build has completed you can validate everything using docker manifest inspect. Most likely you will need to enable experimental features for your client. Your command will look similar to this: DOCKER_CLI_EXPERIMENTAL=enabled docker manifest inspect <REGISTRY_URL>/drue/buildx-example:9ae6e4fb. Be sure to replace the image path with your own. Your output will look similar to this if everything worked properly:

{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:611e6c65d9b4da5ce9f2b1cd0922f7cf8b5ef78b8f7d6d7c02f793c97251ce6b",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:6a85417fda08d90b7e3e58630e5281a6737703651270fa59e99fdc8c50a0d2e5",
         "platform": {
            "architecture": "arm64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:30c58a067e691c51e91b801348905a724c59fecead96e645693b561456c0a1a8",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v7"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:3243e1f1e55934547d74803804fe3d595f121dd7f09b7c87053384d516c1816a",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v6"
         }
      }
   ]
}

You should see multiple architectures listed.

I hope this is enough to get you up and running building multi-arch Docker images. If you have any questions please open an issue on GitHub and I’ll try to get it answered.

A while back I took the time to learn a bit of OpenStack’s Disk Image Builder. Recently I decided to give Packer a try for building templates for Proxmox, and I have released the results as a GitHub repo. You can find the repo at https://github.com/dustinrue/proxmox-packer. The project allows you to build a mostly empty CentOS 7 or CentOS 8 template for Proxmox. You can further customize the image by expanding the provisioner section of the packer.json files.
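
As an illustration (this exact block is not in the repo, just a sketch of the format), adding a shell provisioner to layer in extra packages would look something like:

"provisioners": [
  {
    "type": "shell",
    "inline": [
      "yum -y install epel-release",
      "yum -y install htop vim"
    ]
  }
]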

I was recently introduced to a superb piece of software called Proxmox. Proxmox is a virtualization environment not unlike VMware ESXi. Capable of running full KVM based virtual machines or lightweight LXC based guests, Proxmox has proven to be the perfect solution for a home lab setup. Installing Proxmox is no different than installing any other Linux distribution, and with minimal effort multiple hosts can be clustered together to form a system capable of migrating a guest from one host to another. With the right hardware you can even perform live migrations. Although Proxmox supports and is capable of a lot more than I need, it satisfies my desire to have a more “enterprise” like way to virtualize hardware in my home.

Proxmox is free with support plans available. If I were to use it anywhere other than at home I’d definitely pay for the support subscription, as it gives you access to the proper update repositories as well as, obviously, support. Without the support subscription your Proxmox install basically tracks a testing repository, meaning you get faster access to updates but also updates that are less tested.
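
Concretely, a non-subscription install ships with the enterprise repository enabled but inaccessible; the usual fix is to disable it and enable the no-subscription repository instead (the path and release name shown are for the Debian Buster based Proxmox 6.x, adjust for your version):

# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve buster pve-no-subscription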

In the coming weeks I’ll detail a bit more how I’m using Proxmox, how to setup KVM or LXC based hosts and provision them using Ansible.

UPDATE: This method is old and outdated. Most of the time this is probably what you actually want – https://docs.ansible.com/ansible/latest/modules/reboot_module.html.
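
For the common case, the reboot module reduces everything below to a single task (the timeout shown is arbitrary):

- name: Reboot the server and wait for it to return
  reboot:
    reboot_timeout: 600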

Sometimes when using Ansible there is the need to reboot a server and wait for it to return. This simple recipe will allow you to achieve that while also getting some nice feedback so you know what is going on. You can place these tasks into a role or just in your playbook:

# capture connection details up front so the local_action tasks
# below can still reference them while the host is down
- name: Store target host and user
  set_fact:
    target_host: "{{ ansible_host }}"
    target_user: "{{ ansible_user }}"
 
# async 1 / poll 0 fires the command and moves on immediately, so
# the connection dropped by the reboot does not fail the play
- name: Reboot the server
  shell: sleep 2 && shutdown -r now "Ansible package updates triggered"
  async: 1
  poll: 0
  ignore_errors: true
 
# poll over ssh from the control machine until the host stops answering
- name: Wait for server to shutdown
  local_action: shell ssh -o BatchMode=yes -o ConnectTimeout=2 -o StrictHostKeyChecking=no "{{ target_user }}@{{ target_host }}" true
  register: result
  until: result.rc != 0
  failed_when: result.rc == -1
  retries: 200
  delay: 1
 
# then poll until ssh starts succeeding again, meaning the host is back
- name: Wait for server to be ready
  local_action: shell ssh -o BatchMode=yes -o ConnectTimeout=2 -o StrictHostKeyChecking=no "{{ target_user }}@{{ target_host }}" true
  register: result
  until: result.rc == 0
  retries: 200
  delay: 3

I’ve been putting a lot of time into this little project. Nobody uses it (yet?) and, truth be told, I barely use it in the house, but it’s been such a great way to learn a number of different things, including Python, mDNS (Bonjour), creating installer packages for Debian and OS X systems, and even git, that I can’t stop working on it.

I’m now releasing version 0.3.0. This version brings a few changes but most notably the Linux client is now ready. The next release will be coming shortly and will focus on making the client more robust in how it deals with network disconnects.

You can read more about the 0.3.0 release at https://github.com/dustinrue/Dencoder/wiki