For nearly as long as I've been using Linux I have had some system on my home network acting as a server or test bed for various pieces of software or services. In the beginning this system might have been my DHCP and NAT gateway, later a file server, but over the years I have almost always had some sort of system running that acted as a server of some kind. These systems were often configured using the same operating system I was using in the workplace and ran similar services where it made sense. This has always given me a way to practice upgrades and service configuration changes and to stay as familiar with things as I possibly could.

As I've moved along in my career, the services I deal with have gotten more complex and what I want running at home has grown more complex to match. Although my home lab pales in comparison to what others have done, I thought it would still be fun to go through what I have running.

Hardware

Like a lot of people, the majority of the hardware I'm running is older hardware that isn't well suited for daily use. Some of it I got for free, some of it previously ran Windows, and so on. Unlike most home lab enthusiasts, it seems, I like to keep things as basic as possible. If a consumer grade device is capable of delivering what I need at home then I will happily stick with that.

On the network side, my home is serviced with cable based Internet. This goes into an ISP provided Arris cable modem and immediately behind that is a Google WiFi access point. Nothing elaborate here, just a “basic” WiFi router that handles all DHCP and NAT for my entire network and does a fine job of it. After the WiFi router is a Cisco 3560G 10/100/1000 switch. This sixteen-year-old managed switch does support a lot of useful features, but most of my network just sits on VLAN 1 as I don't have much need for segmenting my network. Attached to the switch are two additional Google WiFi access points, numerous IoT devices, phones, laptops and the like.

Also attached to the switch are, of course, the items that I consider part of the home lab. This includes a 2011 HP Compaq 8200 Elite Small Form Factor PC, an Intel i5-3470 based system built around 2012 and a Raspberry Pi 4. The HP system has a number of HDDs and SSDs, 24GB of memory and a single gigabit ethernet port, and hosts a number of virtual machines. The Intel i5-3470 system has 16GB of memory, a set of three 2TB HDDs and a single SSD for hosting the OS. The Pi4 is a 4GB model with an external SSD attached.

Operating Systems

The base operating system on the HP is Proxmox 7. This excellent operating system is best described as being similar to VMware ESXi. It allows you to host as many virtual machines as your hardware will support, can be clustered and can even migrate VMs between cluster nodes. Proxmox is definitely a happy medium between having a single system and running a full-on cloud like OpenStack. I can effectively achieve a lot of what a cloud stack would provide but with greater simplicity. Although I can create VMs and manually install operating systems, I have created a number of templates to make creating VMs quicker and easier. The code for building the templates is at https://github.com/dustinrue/proxmox-packer.
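
As a rough illustration, once a template exists it only takes a couple of commands to stamp out a new VM from it. The template ID 9000 and VM ID 120 below are placeholders rather than anything specific to the repository above:

# clone the template into a new, full (non-linked) VM
qm clone 9000 120 --name lab-test --full
# point cloud-init at an SSH public key and use DHCP on the first interface
qm set 120 --sshkeys ~/.ssh/id_rsa.pub --ipconfig0 ip=dhcp
qm start 120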

On the Intel i5-3470 based system is TrueNAS Core. This system acts as a Samba based file store for the entire home network, including remote Apple Time Machine support, NFS for Proxmox and iSCSI for Kubernetes. TrueNAS Core is an excellent choice for creating a NAS. Although it is capable of more, I stick to just the basic file serving functionality and don't get into any of the extra plugins or services it can provide.
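
For reference, hooking an NFS export like this into Proxmox is a single command on the Proxmox side; the storage name, server address and export path below are placeholders:

# register the TrueNAS NFS export as a Proxmox storage target
pvesm add nfs truenas-nfs --server 192.168.1.10 --export /mnt/tank/proxmox --content images,backup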

The Raspberry Pi 4 is running the 64-bit version of Pi OS. Although it is a beta release, it has proven to work well enough.

Software and Services

The Proxmox system hosts a number of virtual machines providing a variety of services.

Kubernetes

On top of Proxmox I also run k3s to provide Kubernetes. Kubernetes allows me to run software and test Helm charts that I'm working on. My Kubernetes cluster consists of a single amd64-based VM running on Proxmox and the Pi4, which gives me a true arm64 node. In Kubernetes I have installed:

  • cert-manager for SSL certificates. This is configured against my DNS provider to validate certificates.
  • ingress-nginx for ingress. I do not deploy Traefik on k3s and prefer to use ingress-nginx instead. I'm more familiar with its configuration and have had good luck with it.
  • democratic-csi for storage. This package provides on demand storage for pods that ask for it, backed by iSCSI volumes on the TrueNAS system. It is able to automatically create new volumes and share them using iSCSI.
  • gitlab-runner for Gitlab runners. This provides my Gitlab server with the ability to do CI/CD work.
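
For anyone curious, standing up a small mixed-architecture k3s cluster like this is fairly painless. Here is a minimal sketch, assuming the VM acts as the server and the Pi4 joins as an agent; the server IP is a placeholder and older k3s releases spell the Traefik flag as --no-deploy instead of --disable:

# on the amd64 VM: install the k3s server without the bundled Traefik ingress
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
# grab the join token from the server
cat /var/lib/rancher/k3s/server/node-token
# on the Pi4: join the cluster as an agent (replace the IP and token)
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.50:6443 K3S_TOKEN=<token> sh -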

I don’t currently use Kubernetes at home for anything other than short term testing of software and Helm charts. Of everything in my home lab Kubernetes is the most “lab” part of it where I do most of my development of Helm charts and do basic testing of software. Having a Pi4 in the cluster really helps with ensuring charts are targeting operating systems and architectures properly. It also helps me validate that Docker images I am building do work properly across different architectures.

Personal Workstation

My daily driver is currently an i7 Mac mini. This is, of course, running macOS and includes all of the usual tools and utilities I need. I detailed some time ago the software I use at https://blog.dustinrue.com/2020/03/whats-on-my-computer-march-2020-edition/.

Finishing Up

As you can see, I have a fairly modest home lab setup, but it gives me exactly what I need to run the services I actually use on a daily basis as well as a place to test software and try things out. Although there is a limited set of items I run continuously, I can easily use this setup for testing more advanced configurations if I need to.

As I continue to mess around with various ways of installing and running Kubernetes in my home lab using Rancher I keep coming up with different ways to solve similar problems. Each time I set it up using different host OSs I learn a bit more, which is my primary goal. The latest iteration uses CentOS 8 and allows iSCSI based persistent storage to work properly. I want to use CentOS 8 because it includes a newer kernel required for doing buildx based multi-arch builds. In this post, I'd like to go through the process of setting up CentOS 8 with Docker and what utilities to install to support NFS and iSCSI based persistent storage so that it works properly with Rancher.
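
As a quick preview, the host-level pieces boil down to something like the following sketch on a stock CentOS 8 install (the rest of the post covers the details):

# Docker comes from the upstream docker-ce repository since CentOS 8 does not ship Docker
dnf install -y dnf-plugins-core
dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf install -y docker-ce   # some CentOS 8 releases needed --nobest to resolve containerd.io
systemctl enable --now docker
# client utilities required for NFS and iSCSI based persistent storage
dnf install -y nfs-utils iscsi-initiator-utils
systemctl enable --now iscsid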

Continue reading

[Image: successful connection test]

In this post I'm going to review how I installed Rundeck on Kubernetes and then configured a node source. I'll cover the installation of Rundeck using the available helm chart, configuration of persistent storage, ingress, node definitions and key storage. In a later post I'll discuss how I set up a backup job to perform a backup of the server hosting this site.

For this to work you must have a Kubernetes cluster that allows for ingress and persistent storage. In my cluster I am using nginx-ingress-controller for ingress and freenas-iscsi-provisioner for storage. The freenas-iscsi-provisioner is connected to my FreeNAS server and creates iSCSI based storage volumes, and it is set as my default storage class. You will also need helm 3 installed.
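
If you need to flag a provisioner as the default storage class yourself, it only takes an annotation; the storage class name below is a placeholder:

# list the storage classes, then mark one as the cluster default
kubectl get storageclass
kubectl patch storageclass freenas-iscsi -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'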

With the prerequisites out of the way we can get started. First, add the helm chart repository by following the directions located at https://hub.helm.sh/charts/incubator/rundeck. Once added, perform the following to get the values file so we can edit it:

helm show values incubator/rundeck > rundeck.yaml
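
Once you have edited rundeck.yaml to suit your environment, installing the chart is a single command along these lines; the release name and namespace here are only examples:

# install the chart using the customized values file
helm install rundeck incubator/rundeck -f rundeck.yaml --namespace rundeck --create-namespace
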
Continue reading

Arm processors, used in Raspberry Pis and maybe even in a future Mac, are gaining in popularity due to their reduced cost and improved power efficiency over more traditional x86 offerings. As Arm processor adoption accelerates, the need for Docker images that support both x86 and Arm will become more and more of a necessity. Luckily, recent releases of Docker are capable of building images for multiple architectures. In this post I will cover one way to achieve this by combining a recent release of Gitlab (12+), k3s and the buildx plugin for Docker.

I am taking inspiration for this post from two places. First, this excellent writeup was a great help in getting things started – https://dev.to/jdrouet/multi-arch-images-with-docker-152f. This post was also instrumental in getting things going – https://medium.com/@artur.klauser/building-multi-architecture-docker-images-with-buildx-27d80f7e2408.

I assume you already have a working installation of Gitlab with the container registry configured. Optionally, you can use Docker Hub but I won't cover that in detail; it involves changing the repository URL and then logging into Docker Hub. You will also need a system capable of running k3s with at least Linux kernel 4.15. For this you can use either Ubuntu 18.04+ or CentOS 8. There may be other options but I know these two will work. The kernel version is a hard requirement and is something that caused me some headache; if I had just RTFM I could have saved myself some time. For my setup I installed k3s onto a CentOS 8 VM and then connected it to Gitlab. For information on how to set up k3s and connect it to Gitlab please see this post.

Once you are running k3s on a system with a supported kernel you can start building multi-arch images using buildx. I have created an example project available at https://github.com/dustinrue/buildx-example that you can import into Gitlab to get you started. This example project targets a runner tagged as kubernetes to perform the build. Here is a breakdown of what the .gitlab-ci.yml file is doing:

  • Installs buildx from GitHub (https://github.com/docker/buildx) as a Docker cli plugin
  • Registers qemu binaries to emulate whatever platform you request
  • Builds the images for the requested platforms
  • Pushes resulting images up to the Gitlab Docker Registry

Unlike the posts linked above, I also had to add a docker buildx inspect --bootstrap step to make things work properly. Without it the new context was never active and the builds would fail.

The example .gitlab-ci.yml builds multiple architectures. You can request which architectures to build using the --platform flag. This command, docker buildx build --push --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 -t ${CI_REGISTRY_URL}:${CI_COMMIT_SHORT_SHA} . will cause images to be built for the listed architectures. If you need a list of architectures you can target, add docker buildx ls right before the build command.
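
Put together, the heart of the build job boils down to a handful of commands similar to the sketch below. The qemu registration image and builder name are assumptions on my part rather than necessarily what the example project uses:

# log into the Gitlab registry using the CI provided credentials
docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}
# register qemu emulators so foreign architectures can be built
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# create and bootstrap a buildx builder and make it the active context
docker buildx create --name multiarch --use
docker buildx inspect --bootstrap
# build and push images for every requested platform in one pass
docker buildx build --push --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 -t ${CI_REGISTRY_URL}:${CI_COMMIT_SHORT_SHA} .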

Once the build has completed you can validate everything using docker manifest inspect. Most likely you will need to enable experimental features for your client. Your command will look similar to this DOCKER_CLI_EXPERIMENTAL=enabled docker manifest inspect <REGISTRY_URL>/drue/buildx-example:9ae6e4fb. Be sure to replace the path to the image with your image. Your output will look similar to this if everything worked properly:

{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:611e6c65d9b4da5ce9f2b1cd0922f7cf8b5ef78b8f7d6d7c02f793c97251ce6b",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:6a85417fda08d90b7e3e58630e5281a6737703651270fa59e99fdc8c50a0d2e5",
         "platform": {
            "architecture": "arm64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:30c58a067e691c51e91b801348905a724c59fecead96e645693b561456c0a1a8",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v7"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:3243e1f1e55934547d74803804fe3d595f121dd7f09b7c87053384d516c1816a",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v6"
         }
      }
   ]
}

You should see multiple architectures listed.

I hope this is enough to get you up and running building multi-arch Docker images. If you have any questions please open an issue on Github and I’ll try to get it answered.

Not too long ago I wrote about using Packer to build VM templates for Proxmox and created a Github project with the files. In the end I provided basic information on how to set up cloud-init within the Proxmox GUI. This time we're going to dive a bit deeper into using cloud-init within Proxmox and customize it as needed.

First, let's quickly cover what cloud-init is. Cloud-init is a system for configuring an operating system on first boot. It is always used on cloud based systems like AWS, Azure and OpenStack, and can be used on non-cloud systems like Proxmox, VirtualBox or any system where you can present the information as a CD-ROM. Using cloud-init you can pass in instance metadata, network configuration and user information. As part of the user information you can also provide commands to be run. It is the ability to run commands on initial boot that we're going to tap into.

Out of the box, Proxmox provides a basic cloud-init system that you can enable through the web interface, and it works well if all you need is to create a user with an SSH key and configure the network. But if you want to customize it you will need to ensure you have snippets enabled and visit the CLI of your Proxmox system.
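
As a preview, enabling snippets and attaching a custom user-data file to a VM looks roughly like the following; the storage name, file name and VM ID are placeholders:

# allow the local storage to hold snippets alongside its existing content types
pvesm set local --content iso,vztmpl,backup,snippets
# copy a custom cloud-init user-data file to the snippets directory
# (for the default local storage this is /var/lib/vz/snippets) and attach it to a VM
qm set 120 --cicustom "user=local:snippets/user-data.yaml"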

Continue reading

Once in a while I like to read about what kind of software and utilities other people are using on their systems to make their lives easier. It's always interesting to see what mix of tools people are using, and oftentimes I learn about a new tool I hadn't heard of before. Today I thought I'd do the same, as I've started using a number of new tools on a regular basis just in the past six months.

As a systems engineer that is also familiar with programming I have what may be a unique mix of software and tools on my computer. Let’s take a look.

Operating System(s)

I have been using macOS full time since about 2008. I use macOS because it is a mix of Unix and a GUI (NeXT if you're keeping score), which gives me a familiar and robust command line environment with an excellent desktop environment.

I also use Linux heavily but almost never as a desktop or workstation. I have a laptop that I can dual boot between Linux and macOS for testing. I also run multiple Linux systems to run Proxmox for virtualization. Proxmox is a great way to get use out of otherwise retired computers. In fact, my Proxmox cluster is an older HP desktop with a quad core processor mixed with a pair of old MacBooks. I have written about Proxmox before and you can find it here.

I have one Windows PC that exists mostly because of games but also some business software.

Software Tools

When it comes to software these are the tools I use most frequently.

  • Code Editing and Runtimes/Languages
  • DevOps Type Stuff
  • Kubernetes
    • kubectx/kubens for easy cluster and namespace switching
    • k9s for a text based UI to Kubernetes
  • Utilities
    • Brew
    • Patterns, a tool for working with regular expressions. I've been using it for years, though several similar tools now exist
    • iTerm 2, superior to the default terminal available in macOS
  • Other
    • Spotify for music
    • VirtualBox for testing Ansible roles
    • Twitter client
    • Mail.app
    • RamBox for chat
    • Bear for notes

A quick list of software tools that I find make using Kubernetes even better. I consider these tools must-haves.

In this post I’m going to quickly describe the process of getting Gitlab’s Kubernetes Integration connected to a k3s based Kubernetes setup. Once connected you can use k3s for your build pipelines or deployments as you see fit.

Continue reading

When Docker first came out it was a real mind bender of an experience for me. I simply couldn’t wrap my head around what a Docker image was, how it was different from a virtual machine and so on. “Why not just install the software from rpm?” I said.

I also struggled with how the app in the container was running inside of something and didn’t have access to anything. At the time I saw this just as a silly hurdle that made it more difficult than it should be to get something running rather than a core benefit of using containers.

Over time I got to know Docker and containers better. I gained an understanding of how images are created, how they could be given restricted resources, easily shared and so on. I started creating my own containers to further understand the process, got to know multi-stage builds and so on.

Although I had gained a better understanding of the container itself I still couldn't find a good use case for containers in my line of work. I was too used to creating VMs that ran a static set of services that rarely changed. Docker containers still seemed like just another packaging format with few additional advantages. It wasn't until I started playing with container orchestration that things really started to click.

With container orchestration, and in particular Kubernetes, the power and convenience of containers becomes much harder to ignore. Orchestration was definitely the missing piece of the puzzle for me that sealed the deal. This is because orchestration solves a number of common issues with running larger software infrastructure. One of the biggest issues that Kubernetes solves is how to swap out a running application with little fuss. By simply declaring that a running workload should update the Docker image in use, Kubernetes will go through the process of starting the new container, waiting for it to be ready, adding it to the load balancer and then draining connections from the old container. While it's true you can achieve all of that with a traditional setup, it requires a lot more effort. This feature alone is what sold me on using Kubernetes and, from there, led to my current state of container acceptance.
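
To make that concrete, a rolling image swap can be as simple as one command; the deployment and image names here are hypothetical:

# declare that the deployment should now run a newer image; Kubernetes rolls pods
# over gradually, only shifting traffic to new pods once they report ready
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2
# watch the rollout progress
kubectl rollout status deployment/myapp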

With Kubernetes revealing the huge potential of containers I've since come back to exploring them for other uses outside of orchestration. Now, core features of containers that once bothered me are seen as advantages. I still see containers as a packaging format, but one that works equally well on macOS and Windows as it does on Linux or in Kubernetes. As an “expert” I can provide a container to a user that has everything installed for some tool. Previously this may have required me to write extensive documentation detailing the requirements, installation process and finally the configuration of whatever software it took to meet the user's needs, a process that might end up failing or not working at all because the end user is on a different operating system or because of some other environment specific reason. With containers, if it works for me there is a much greater chance it will work for someone else as well.

Today I find myself building more and more containers for use in CI/CD pipelines. I see them as little utilities that I can chain together to create a larger solution. Similar to the Unix philosophy, I am creating containers that do one thing and do it well. These small containers are easy to maintain, easy to document and easy to use. And this, I believe, is one of the core strengths of containers. They encapsulate a solution into something that is easier to understand. Even though a container is technically more bloated because it contains not only the application itself but also all of its requirements, the end result is something that is ultimately easier to understand. Like writing code, you can write the most incredible for loop ever devised, but if the next person can't understand it, is it still a good solution?

Throughout my career I've always enjoyed trying out new things to see how I can apply them to everyday problems or how they can be used to create great new opportunities. Docker was one of the first things that I really struggled to understand and initially I thought “this is it, this is the tech my kids will understand that I won't.” Today, however, I can see what a game changer containers are. When properly constructed, containers are easier to understand, easier to share with others and easier to document. These are powerful reasons to use containers. There are new hurdles to overcome, like how to maintain them for security, but all things have tradeoffs and it's up to us to decide which ones are worth it.