Arm processors, used in Raspberry Pis and maybe even in a future Mac, are gaining in popularity thanks to their lower cost and better power efficiency compared to more traditional x86 offerings. As Arm adoption accelerates, the need for Docker images that support both x86 and Arm becomes more and more of a necessity. Luckily, recent releases of Docker are capable of building images for multiple architectures. In this post I will cover one way to achieve this by combining a recent release of Gitlab (12+), k3s and the buildx plugin for Docker.

I am taking inspiration for this post from two places. First, this excellent writeup was a great help in getting things started – https://dev.to/jdrouet/multi-arch-images-with-docker-152f. This post was also instrumental in getting this going – https://medium.com/@artur.klauser/building-multi-architecture-docker-images-with-buildx-27d80f7e2408.

I assume you already have a working installation of Gitlab with the container registry configured. Optionally, you can use Docker Hub, but I won’t cover that in detail; doing so involves changing the repository URL and logging into Docker Hub. You will also need a system capable of running k3s that has at least Linux kernel 4.15. For this you can use either Ubuntu 18.04+ or CentOS 8. There may be other options, but I know these two work. The kernel version is a hard requirement and is something that caused me some headache; if I had just RTFM I could have saved myself some time. For my setup I installed k3s onto a CentOS 8 VM and then connected it to Gitlab. For information on how to set up k3s and connect it to Gitlab please see this post.
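If you are unsure whether a candidate host qualifies, a quick check of the running kernel before installing k3s can save you the same headache:

# verify the kernel is 4.15 or newer before installing k3s
uname -r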

Once you are running k3s on a system with a supported kernel you can start building multi-arch images using buildx. I have created an example project, available at https://github.com/dustinrue/buildx-example, that you can import into Gitlab to get started. The example project targets a runner tagged as kubernetes to perform the build. Here is a breakdown of what the .gitlab-ci.yml file does, with a condensed command sketch after the list:

  • Installs buildx from GitHub (https://github.com/docker/buildx) as a Docker CLI plugin
  • Registers qemu binaries to emulate whatever platforms you request
  • Builds the images for the requested platforms
  • Pushes the resulting images up to the Gitlab Docker Registry
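In shell terms, the job boils down to roughly the following commands. This is a condensed sketch rather than the exact script from the example project; the pinned buildx version and the qemu registration image are assumptions, so check the repo and the buildx releases page for current values.

# install buildx as a Docker CLI plugin (version pinned here is an assumption)
mkdir -p ~/.docker/cli-plugins
wget -O ~/.docker/cli-plugins/docker-buildx https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-amd64
chmod +x ~/.docker/cli-plugins/docker-buildx

# register qemu binaries so foreign architectures can be emulated during the build
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# create a BuildKit-backed builder and make sure it is bootstrapped and active
docker buildx create --name multiarch --use
docker buildx inspect --bootstrap

# log in to the Gitlab registry, then build and push for each requested platform
docker login -u gitlab-ci-token -p "${CI_JOB_TOKEN}" "${CI_REGISTRY}"
docker buildx build --push --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 -t "${CI_REGISTRY_URL}:${CI_COMMIT_SHORT_SHA}" .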

Unlike the posts linked above, I also had to add a docker buildx inspect --bootstrap to make things work properly. Without it the new builder context was never active and the builds would fail.

The example .gitlab-ci.yml builds for multiple architectures. You request which architectures to build using the --platform flag. This command, docker buildx build --push --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 -t ${CI_REGISTRY_URL}:${CI_COMMIT_SHORT_SHA} . will cause images to be built for the listed architectures. If you need a list of architectures you can target, add docker buildx ls right before the build command to see what the builder supports.

Once the build has completed you can validate everything using docker manifest inspect. Most likely you will need to enable experimental features for your client. Your command will look similar to this: DOCKER_CLI_EXPERIMENTAL=enabled docker manifest inspect <REGISTRY_URL>/drue/buildx-example:9ae6e4fb. Be sure to replace the path to the image with your own. If everything worked properly your output will look similar to this:

{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:611e6c65d9b4da5ce9f2b1cd0922f7cf8b5ef78b8f7d6d7c02f793c97251ce6b",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:6a85417fda08d90b7e3e58630e5281a6737703651270fa59e99fdc8c50a0d2e5",
         "platform": {
            "architecture": "arm64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:30c58a067e691c51e91b801348905a724c59fecead96e645693b561456c0a1a8",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v7"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:3243e1f1e55934547d74803804fe3d595f121dd7f09b7c87053384d516c1816a",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v6"
         }
      }
   ]
}

You should see multiple architectures listed.

I hope this is enough to get you up and running building multi-arch Docker images. If you have any questions please open an issue on Github and I’ll try to get it answered.

Not too long ago I wrote about using Packer to build VM templates for Proxmox and created a Github project with the files. In the end I provided basic information on how to setup cloud-init within the Proxmox GUI. This time we’re going to dive a bit deeper into using cloud-init within Proxmox and customize it as needed.

First, let’s quickly cover what cloud-init is. Cloud-init is a system for configuring an operating system on first boot. It is used on virtually every cloud platform, such as AWS, Azure and OpenStack, and it can also be used on non-cloud systems like Proxmox, VirtualBox or any system where you can present the configuration data as a CD-ROM. Using cloud-init you can pass in instance meta-data, network configuration and user information. As part of the user information you can also provide commands to be run. It is this ability to run commands on first boot that we’re going to tap into.

Out of the box, Proxmox provides a basic cloud-init integration that you can enable through the web interface. It works well if all you need is to create a user with an SSH key and configure the network, but if you want to customize it further you will need to ensure you have snippets enabled and visit the CLI of your Proxmox system.
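As a rough sketch of what that looks like on the CLI (the storage name, VM ID and snippet filename here are placeholders; adjust them and keep whatever content types your storage already has enabled):

# allow the local storage to hold snippets, in addition to its existing content types
pvesm set local --content iso,vztmpl,backup,snippets

# point a VM at a custom cloud-init user-data file stored as a snippet
qm set 9000 --cicustom "user=local:snippets/user-data.yaml"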


Have you ever wanted to write out a large, templated config file using only shell script code? Maybe you are working with a small, low-powered IoT device, or some other device where you want to avoid pulling in additional dependencies for a single task. In these situations a full configuration management tool can be too heavy or just not practical. In this post I’ll explore the envsubst utility as a way to write out a config file from a template. In the end you’ll see that envsubst is a great, lightweight utility for creating config files.
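As a quick taste of where this is going, here is a minimal sketch. The template filename and variable names are just examples; envsubst itself ships with gettext on most distributions.

# app.conf.tpl contains lines such as:
#   server_name ${SERVER_NAME};
#   listen ${LISTEN_PORT};

export SERVER_NAME=example.com
export LISTEN_PORT=8080

# substitute only the listed variables and write out the final config
envsubst '${SERVER_NAME} ${LISTEN_PORT}' < app.conf.tpl > app.conf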


A while back I took the time to learn a bit of OpenStack’s Disk Image Builder. Recently I decided to give Packer a try for building templates for Proxmox, and I released the results as a GitHub repo. You can find the repo at https://github.com/dustinrue/proxmox-packer. The project allows you to build a mostly empty CentOS 7 or CentOS 8 template for Proxmox. You can further customize the image by expanding the provisioner section of the packer.json files.

RancherOS, available at https://rancher.com/rancher-os/, is a lightweight container operating system. It is easy to install and easy to configure, but a bit light on documentation for some specific use cases. Here I will describe how I set up RancherOS (1.5.5 as of this writing) for use with my locally installed, Rancher 2.x based bare metal cluster. I will also touch on using cloud-config to configure RancherOS at boot, including enabling the iSCSI subsystem and automatically joining my cluster.

I run my nodes on a Proxmox based hypervisor and have FreeNAS based storage providing NFS and iSCSI. I’m not going to cover the installation of Rancher, Proxmox or FreeNAS but just focus on basic configuration of RancherOS.

RancherOS is able to accept configuration information using a cloud-config file, which lets you configure a number of things during the first boot. I take advantage of this to configure some persistent volumes, add my SSH key, enable the iSCSI subsystem and even automatically join my cluster. Here is what the file looks like, with some values removed or shortened:

# cloud-config

# create an rc.local which will cause this system to join the cluster. Replace required values for your server URL and your token
write_files:
  - path: /etc/rc.local
    permissions: "0755"
    owner: root
    content: |
      #!/bin/bash
      wait-for-docker
      if [ ! -f /opt/init-done ]; then
        docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.5 --server <your rancher server url> --token <your rancher token> --worker --node-name $(ip ro | grep default | awk '{print $7}')
        touch /opt/init-done
      fi

rancher:
  # in my setup I use iSCSI to provide block storage to pods, for this to work on RancherOS the iSCSI subsystem must be enabled
  services_include:
    open-iscsi: true
  # setup some local persistent storage for a few important volumes
  # this ensures Kubernetes works properly across reboots
  services:
    user-volumes:
      volumes:
        - /home:/home
        - /opt:/opt
        - /var/lib/kubelet:/var/lib/kubelet
        - /etc/kubernetes:/etc/kubernetes
ssh_authorized_keys:
  - <paste your ssh public key here>

For my setup I saved this file onto a web host accessible within my network. Below you will see how we tell RancherOS about the file during the setup process. You can find more configuration options at https://rancher.com/docs/os/v1.x/en/installation/configuration/.

Please note that the most important settings are the persistent mount options. You should at least use those if you plan to connect the RancherOS instance to a Rancher based Kubernetes cluster.

With the cloud-config file created we can now install RancherOS. There are a few options for installing RancherOS, but for my setup I simply used the basic ISO file. For my target machine, a unibody 2008 MacBook, I had to burn the image to a CD-R. I booted the ISO and waited for it to finish the boot process. Once it was ready, I entered my install command:

sudo ros install -d /dev/sda -c http://<hostname>/rancheros.yaml

This command instructs the installer to download the specified config file, save it locally (into /var/lib/rancher/conf) and then install RancherOS to /dev/sda. I answer y to the reboot question and the system reboots into RancherOS. After a short while the system will join your cluster and be ready for use.

That’s it. Your RancherOS node should now be ready for use and will support iSCSI based block storage. In future posts I will try to cover setting up other aspects of a bare metal Kubernetes cluster (where bare metal basically means running it anywhere but a cloud provider). If you have questions please reach out to me via Twitter.

References:
Using iSCSI on RancherOS – https://docs.openebs.io/docs/next/prerequisites.html

There are times when it is necessary or desirable to access servers through a single host, called a bastion. This is the first host you access before using ssh to reach some other host. Access to the other hosts may be limited by firewall rules or simply because they don't have public IPs. Whatever your reason, a bastion host is a great way to increase security by decreasing the number of hosts exposed to the internet.

For the best security, all hosts should be configured to allow only key based authentication. This immediately negates any brute force attempts to access your server. While convenient, it isn't necessary to use the same key on every server you access. Search the web for the best way to achieve key only authentication on your distribution of choice.

Configuring access to any server through a bastion host starts by defining how you will connect to the bastion host itself. To get started, add an entry to your .ssh/config file that describes how to access the bastion host. As an example, let's say you have a bastion host at IP 192.168.0.1 and you've installed your public key for a user called 'bastionuser'. Your entry would look like this:

Host bastionhost
HostName 192.168.0.1
User bastionuser

This entry does two things: it gives you a very easy way to ssh to your bastion host, and it gives you a target you can use as a proxy to reach other hosts. To use the entry, simply issue 'ssh bastionhost' and you'll access your bastion host as user bastionuser using your default private key.

With access to the bastion host itself out of the way, you're now ready to create .ssh/config entries for servers that are only reachable through the bastion host. For this example, let's say a server with IP 192.168.1.2 is available from the bastion host. You'd create an entry that looks like this:

Host targetserver
HostName 192.168.1.2
User targetuser
ProxyCommand ssh bastionhost -W %h:%p

That's it! When you want to ssh to the target server, simply issue ssh targetserver and your connection will first go through the bastion host, which acts as a proxy. Note that, at all times, your local private key is used to make the connection unless you explicitly tell ssh to use something else with IdentityFile <path to file>. Even if you use different keys, those keys must always exist on your local system; keys on remote systems will never be used. It's up to you to find a way to distribute your public key to all of the target servers.
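One lightweight way to handle that distribution, assuming password authentication is still enabled while you bootstrap, is ssh-copy-id, which goes through the same .ssh/config entries (ProxyCommand included):

# copy your default public key to the target server, proxying through the bastion automatically
ssh-copy-id targetserver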

In addition to using a bastion host to access a single server or a set of them, you can also chain multiple bastion hosts together simply by configuring more entries with ProxyCommand. For example, let's say a server at 192.168.2.2 is only accessible from targetserver. You'd create an entry like this:

Host finaldestination
HostName 192.168.2.2
User finaluser
ProxyCommand ssh targetserver -W %h:%p

With this entry in place it is now possible to reach your final destination by issuing ssh finaldestination. This configuration instructs ssh to access finaldestination through targetserver, and to reach targetserver by first going through the bastion host. There is technically no limit to the number of hosts you can proxy through, but you'll eventually hit the limits of latency.

Whatever your reason for placing an NGINX proxy in front of your Gitlab installation, you need to ensure you’re using the right configuration to support all of Gitlab’s features. I recently discovered that although my installation was mostly working, I couldn’t view pipeline/build logs properly, and my proxy configuration was to blame. After some searching I finally found that my config wasn’t quite right. To get the most out of Gitlab and ensure a smooth experience, use the configuration shown below as a template for your own. In my setup I use LetsEncrypt for SSL, so if you’re not you can remove the SSL specific parts. The important configuration is contained in the location block.

upstream gitlab {
  server <ip of your gitlab server>:<port>;
}

server {
    listen          443;
    server_name     <your gitlab server hostname>;

    ssl on;
    ssl_certificate <path to cert>;
    ssl_certificate_key <path to key>;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    server_tokens off;


    gzip on;
    gzip_vary on;
    gzip_disable "msie6";
    gzip_types application/json;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;

    location / {
       client_max_body_size   0;
       proxy_set_header    Host                $http_host;
       proxy_set_header    X-Real-IP           $remote_addr;
       proxy_set_header    X-Forwarded-Ssl     on;
       proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
       proxy_set_header    X-Forwarded-Proto   $scheme;

      proxy_pass https://gitlab;
    }
}

This configuration will properly pass all requests through to your Gitlab server as well as allow CI/CD pipeline logs to pass through properly.
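After dropping the config in place, a quick syntax check and reload (assuming a systemd managed NGINX) confirms everything is wired up:

# validate the configuration, then reload NGINX without dropping connections
sudo nginx -t && sudo systemctl reload nginx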

I was recently introduced to a superb piece of software called Proxmox. Proxmox is a virtualization environment not unlike VMware ESXi. Capable of running full KVM based virtual machines or lightweight LXC based guests, Proxmox has proven to be the perfect solution for a home lab. Installing Proxmox is no different than installing any other Linux distribution, and with minimal effort hosts can be clustered together to form a system capable of migrating guests from one host to another. With the right hardware you can even perform live migrations. Although Proxmox supports and is capable of a lot more than I need, it satisfies my desire to have a more “enterprise” like way to virtualize hardware in my home.

Proxmox is free, with support plans available. If I were to use it anywhere other than at home I’d definitely pay for the support subscription, as it gives you access to the proper update repositories as well as, obviously, support. Without the support subscription your Proxmox installation basically tracks a testing repo, meaning you get faster access to updates but also updates that are less tested.

In the coming weeks I’ll detail a bit more how I’m using Proxmox, how to set up KVM or LXC based hosts and how to provision them using Ansible.

UPDATE: This method is old and outdated. Most of the time this is probably what you actually want – https://docs.ansible.com/ansible/latest/modules/reboot_module.html.
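For reference, here is a minimal sketch of that modern approach, run as an ad-hoc command (the inventory file name is just a placeholder):

# reboot every host in the inventory and wait for each one to come back
ansible all -i inventory.ini -b -m reboot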

Sometimes when using Ansible there is a need to reboot a server and wait for it to return. This simple recipe will allow you to do that while also getting some nice feedback so you know what is going on. You can place these tasks in a role or directly in your playbook:

- name: Store target host and user
  set_fact:
    target_host: "{{ ansible_host }}"
    target_user: "{{ ansible_user }}"
 
- name: Reboot the server
  shell: sleep 2 && shutdown -r now "Ansible package updates triggered"
  async: 1
  poll: 0
  ignore_errors: true
 
- name: Wait for server to shutdown
  local_action: shell ssh -o BatchMode=yes -o ConnectTimeout=2 -o StrictHostKeyChecking=no "{{ target_user }}@{{ target_host }}" true
  register: result
  until: result.rc != 0
  failed_when: result.rc == -1
  retries: 200
  delay: 1
 
- name: Wait for server to be ready
  local_action: shell ssh -o BatchMode=yes -o ConnectTimeout=2 -o StrictHostKeyChecking=no "{{ target_user }}@{{ target_host }}" true
  register: result
  until: result.rc == 0
  retries: 200
  delay: 3