From the draft archives. This is a post I started over ten years ago but never got around to finishing. It discusses my reaction to someone telling me the web was dead and that mobile was taking over. Their argument was that apps would replace websites. I disagreed. I have left the majority of it untouched, only cleaning up the language a bit, and added some final thoughts at the end.

Someone told me recently that the web is dead and that the future is mobile. What they really meant was that browsing the web with a traditional web browser is dead. But they're wrong. All that has really happened is that mobile devices have finally become viable options for accessing the vast amount of information and resources available on the Internet. The web isn't dead; mobile devices just don't suck anymore.

Thanks to the iPhone there has been a major shift in how people think about the web and how mobile devices fit in. The mobile web experience is no longer limited to a simple list of links with no images. It's fuller and more capable, rich with images, audio and even video. People now care about ensuring their information is fully accessible to people on the go, and that it looks great on small devices. And if a site can't be massaged to work with the iPhone, a specialized app can be created to ensure the end user has a great experience.

Of course, Apple is no longer the only vendor out there trying to create a great end user experience. The most notable competitor to the iPhone is nearly any Android based phone. Android is incredibly young as far as mobile OSes go, but it is already a worthy competitor to Apple's iOS. Either device is capable of providing a full web experience.

Mobile devices won't replace the web experience we all know today; they simply extend it. They are extensions of our desktop computers, devices we can use while on the go to keep up on all of the information available to us. The key is to ensure that end users are able to access the information they want in a convenient manner, whether that means creating a mobile template for your site or even building a dedicated app.

My original post from June 8, 2010

While I don't believe (and continue not to believe in 2020) that mobile devices will completely replace computers, I do think they will become the primary device for a lot of people.

As I continue to experiment with various ways of installing and running Kubernetes in my home lab using Rancher, I keep coming up with different ways to solve similar problems. Each time I set it up using a different host OS I learn a bit more, which is my primary goal. The latest iteration uses CentOS 8 and allows iSCSI based persistent storage to work properly. I want to use CentOS 8 because it includes a newer kernel, which is required for doing buildx based multi-arch builds. In this post, I'd like to go through the process of setting up CentOS 8 with Docker, including which utilities to install to support NFS and iSCSI based persistent storage, so that it all works properly with Rancher.
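
As a preview, the rough shape of the setup looks something like this. This is a minimal sketch, assuming a stock CentOS 8 install; the package names are the standard ones from the CentOS and Docker repositories:

# make sure the config-manager plugin is available, then add the Docker CE repository
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io

# utilities required for NFS and iSCSI based persistent storage
sudo dnf install -y nfs-utils iscsi-initiator-utils

# enable the services now and across reboots
sudo systemctl enable --now docker iscsid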

Continue reading

I keep doing more multi-architecture builds using buildx and continue to find good information out there to help refine the process. Here is a post I found that I thought I'd share; it discusses how to build multi-architecture images using AWS Graviton2 based instances, which are ARM based: https://www.smartling.com/resources/product/building-multi-architecture-docker-images-on-arm-64-bit-aws-graviton2/. I haven't officially tried this yet, but the same process should also work on a Pi 4 with the 64-bit PiOS installed.
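
For reference, the basic buildx invocation is the same no matter where the builder runs. A minimal sketch (the builder and image names are placeholders):

# create and select a builder instance, then build and push for two architectures
docker buildx create --name multiarch --use
docker buildx build --platform linux/amd64,linux/arm64 -t registry.example.com/demo:latest --push .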

Under some conditions, you may find that your Docker in Docker builds will hang or stall out, especially when you combine DIND based builds and Kubernetes. The fix for this isn't always obvious because the problem doesn't exactly announce itself. After a bit of searching, I came across a post that describes the issue in great detail, located at https://medium.com/@liejuntao001/fix-docker-in-docker-network-issue-in-kubernetes-cc18c229d9e5.

As described there, the issue is actually due to the MTU the DIND service uses when it starts. By default, it uses 1500. Unfortunately, a lot of Kubernetes overlay networks set a smaller MTU of around 1450. Since DIND is a service running on an overlay network, it needs to use an MTU equal to or smaller than the overlay network's in order to work properly. If your build process happens to receive a packet larger than the actual MTU, it will wait indefinitely for data that will never arrive, because DIND, and the app using it, think the MTU is 1500 when it is actually 1450.
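
If you want to confirm the mismatch in your own cluster, you can compare the MTU a pod actually sees against the 1500 that DIND assumes. A quick check (the pod name is a placeholder):

# prints the MTU of the pod's interface on the overlay network, often 1450
kubectl exec <pod-name> -- cat /sys/class/net/eth0/mtu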

Anyway, this isn't about what MTU is or how it works; it's about how to configure a GitLab based job that uses the DIND service with a smaller MTU. Thankfully it's easy to do.

In your .gitlab-ci.yml file, where you enable the dind service, add a command parameter that passes the new MTU to the service, like this:

Build Image:
  image: docker
  services:
    - name: docker:dind
      command: ["--mtu 1000"]
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: tcp://localhost:2375

The example shown will work if you are using a Kubernetes based GitLab Runner. With this added, you should find that your build stalls go away and everything works as expected.


In this post I'm going to review how I installed Rundeck on Kubernetes and then configured a node source. I'll cover the installation of Rundeck using the available helm chart, along with configuration of persistent storage, ingress, node definitions and key storage. In a later post I'll discuss how I set up a backup job to perform a backup of the server hosting this site.

For this to work you must have a Kubernetes cluster that allows for ingress and persistent storage. In my cluster I am using nginx-ingress-controller for ingress and freenas-iscsi-provisioner for storage. The freenas-iscsi-provisioner is connected to my FreeNAS server and creates iSCSI based storage volumes, and it is set as my default storage class. You will also need helm 3 installed.
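
A quick way to confirm these prerequisites are in place (the ingress controller's namespace may differ in your cluster):

# a default storage class should be marked (default)
kubectl get storageclass
# the ingress controller pods should be running
kubectl get pods -n ingress-nginx
# helm should report version 3.x
helm version --short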

With the prerequisites out of the way we can get started. First, add the helm chart repository by following the directions located at https://hub.helm.sh/charts/incubator/rundeck. Once added, perform the following to get the values file so we can edit it:

helm show values incubator/rundeck > rundeck.yaml
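
Once you have edited rundeck.yaml to suit your cluster, the install itself is a single helm 3 command. A sketch (the release name and namespace are my choices, not requirements, and --create-namespace needs a recent helm 3):

helm install rundeck incubator/rundeck -f rundeck.yaml --namespace rundeck --create-namespace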
Continue reading

Have you ever wanted to write out a large, templated config file using only shell script code? Maybe you are working with a small, low-powered IoT device or some other constrained system and want to avoid additional dependencies for a single task. In situations like these, a full configuration management tool can be too heavy or simply impractical. In this post I'll explore the envsubst utility as a way to write out a config file from a template; in the end you'll see that it is a great, lightweight tool for creating config files.
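
To give you a taste of what's below the fold, the basic pattern is to export the variables you want substituted and feed a template through envsubst. A minimal sketch (the file names and variables are placeholders):

# nginx.conf.tmpl contains references like ${PORT} and ${SERVER_NAME}
export PORT=8080
export SERVER_NAME=example.com
envsubst < nginx.conf.tmpl > nginx.conf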

Continue reading

If you work with AWS using CLI tools, I highly recommend aws-vault to help keep your AWS keys secure. Be sure to visit the usage guide for full details on setup. I configured my copy to stay unlocked while I am actively using my computer. It's also a good idea to ensure your storage is encrypted.
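
The day to day usage pattern looks something like this (the profile name is a placeholder):

# store a set of keys in your operating system's secure storage
aws-vault add my-profile
# run a command with temporary credentials injected into its environment
aws-vault exec my-profile -- aws s3 ls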

A while back I took the time to learn a bit of OpenStack's Disk Image Builder. Recently I decided to give Packer a try for building templates for Proxmox, and I released the results as a GitHub repo, which you can find at https://github.com/dustinrue/proxmox-packer. The project allows you to build a mostly empty CentOS 7 or CentOS 8 template for Proxmox. You can further customize the image by expanding the provisioner section of the packer.json files.
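
If you want to try it, a build is kicked off with the standard Packer CLI. Roughly (the exact template file name and any required variables come from the repo itself):

git clone https://github.com/dustinrue/proxmox-packer.git
cd proxmox-packer
packer build packer.json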

diagram showing how this site is hosted

A co-worker recently discovered a fun project called diagrams that allows you to create diagrams from code. Documentation, including installation instructions, is available at https://diagrams.mingrammer.com. The image you see above was generated with some simple code that looks like this:

from diagrams import Diagram, Cluster
from diagrams.oci.edge import Cdn
from diagrams.onprem.network import Nginx
from diagrams.onprem.compute import Server
from diagrams.onprem.database import Mariadb
from diagrams.onprem.inmemory import Memcached
from diagrams.onprem.client import Users

# show=False writes the image to disk without opening a viewer
with Diagram("blog.dustinrue.com", show=False):
  cloudflare = Cdn("CloudFlare")
  users = Users("users")

  # clusters group related nodes inside a labeled box
  with Cluster("web server"):
    nginx = Nginx("nginx")
    php = Server("php")

  with Cluster("database server"):
    mariadb = Mariadb("mariadb")
    memcached = Memcached("memcached")

  # the - operator draws an edge between two nodes
  users - cloudflare
  cloudflare - nginx
  nginx - php
  php - mariadb
  php - memcached

Using diagrams is an easy way to quickly create diagrams and track changes to them over time.
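
Getting started is just as quick. Installing the package and running a script like the one above is enough to produce an image (note that diagrams also requires Graphviz to be installed; the script name here is a placeholder):

pip install diagrams
# writes a PNG named after the diagram title to the current directory
python blog-diagram.py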

RancherOS, available at https://rancher.com/rancher-os/, is a lightweight container operating system. It is easy to install and easy to configure, but a bit light on documentation for some specific use cases. Here, I will describe how I set up RancherOS (1.5.5 as of this writing) for use with my locally installed Rancher 2.x based bare metal cluster. I will also touch on using cloud-config to configure RancherOS at boot, enable the iSCSI subsystem and automatically join my cluster.

I run my nodes on a Proxmox based hypervisor and have FreeNAS based storage providing NFS and iSCSI. I'm not going to cover the installation of Rancher, Proxmox or FreeNAS here; I'll just focus on the basic configuration of RancherOS.

RancherOS itself is able to accept configuration information using a cloud-config file, which allows you to configure a number of things during the first boot. I take advantage of this to configure some persistent volumes, add my ssh key, enable the iSCSI subsystem and even automatically join my cluster. Here is what the file looks like, with some values removed or shortened:

# cloud-config

# create an rc.local which will cause this system to join the cluster. Replace required values for your server URL and your token
write_files:
  - path: /etc/rc.local
    permissions: "0755"
    owner: root
    content: |
      #!/bin/bash
      wait-for-docker
      if [ ! -f /opt/init-done ]; then
        docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.5 --server <your rancher server url> --token <your rancher token> --worker --node-name $(ip ro | grep default | awk '{print $7}')
        touch /opt/init-done
      fi

rancher:
  # in my setup I use iSCSI to provide block storage to pods, for this to work on RancherOS the iSCSI subsystem must be enabled
  services_include:
    open-iscsi: true
  # setup some local persistent storage for a few important volumes
  # this ensures Kubernetes works properly across reboots
  services:
    user-volumes:
      volumes:
        - /home:/home
        - /opt:/opt
        - /var/lib/kubelet:/var/lib/kubelet
        - /etc/kubernetes:/etc/kubernetes
ssh_authorized_keys:
  - <paste your ssh public key here>

For my setup I saved this file onto a web host accessible within my network. Below you will see how we tell RancherOS about the file during the setup process. You can find more configuration options at https://rancher.com/docs/os/v1.x/en/installation/configuration/.

Please note that the most important settings are the persistent user-volumes. You should use those, at minimum, if you plan to connect the RancherOS instance to a Rancher based Kubernetes cluster.

With the cloud-config file created we can now install RancherOS. There are a few options for installing RancherOS, but for my setup I simply used the basic ISO file. For my target machine, a unibody 2008 MacBook, I had to burn the image to a CD-R. I booted the ISO and waited for it to finish the boot process. Once it was ready, I entered my install command:

sudo ros install -d /dev/sda -c http://<hostname>/rancheros.yaml

This command instructs the installer to download the specified config file, save it locally (into /var/lib/rancher/conf) and then get everything ready on /dev/sda. Answer y to the reboot question and the system will reboot into RancherOS. After a while the system will join your cluster and be ready for use.
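
You can watch for the node to register from any machine with kubectl access to the cluster:

# the new node should appear and eventually report Ready
kubectl get nodes -w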

That's it. Your RancherOS node should now be ready for use and will support iSCSI based block storage. In future posts I will try to cover other aspects of setting up a bare metal Kubernetes cluster (where bare metal basically means running it anywhere but a cloud provider). If you have questions, please reach out to me via Twitter.

References:
Using iSCSI on RancherOS: https://docs.openebs.io/docs/next/prerequisites.html