For ages, I’ve had an issue with my Mac mini where it wouldn’t shut down properly after some unknown “event” occurred. Maybe it had been running for a certain period of time, maybe I had connected to a network share, or maybe it was something else entirely. Whatever it was, I could never figure out the exact cause.

What felt like every month or so I’d do a search to see if anybody had figured out the issue yet, and recently I found this: https://apple.stackexchange.com/a/412649. Incredibly, the outlined solution seems to have solved the shutdown issues I was having. As it turns out, a few years ago I had implemented a “fix” for slow CLI apps caused by the code signing subsystem of macOS. In newer releases, this fix caused some kind of issue that prevented the system from shutting down properly. By removing Terminal as described, my Mac mini has been able to reboot and shut down without issue.

In the post is a link to this site explaining the issue a bit more deeply – https://sigpipe.macromates.com/2020/macos-catalina-slow-by-design/.
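If memory serves, the workaround I had originally applied was the widely shared one of marking Terminal as a developer tool so it could bypass some of the code signing checks. Roughly, and treat the exact command as my recollection rather than a recommendation:

# the old "speed up CLI apps" workaround: add Terminal to the Developer Tools category
sudo spctl developer-mode enable-terminal

# undoing it means unchecking Terminal under
# System Preferences > Security & Privacy > Privacy > Developer Tools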

If you run Plex at home and access it externally you may want to limit the amount of bandwidth remote access is allowed to use. If you don’t, Plex will affect other users on the same Internet connection, and those playing online games or doing video conferencing will be affected the most. The reason for this is that Plex uses your upload bandwidth, and all of it if you allow it to. Most households have asymmetric connections, meaning one direction is slower than the other, and typically it is the upload speed that is drastically slower. This makes sense, as most people download content rather than push it to the Internet. Plex, however, turns that around and pushes data to the Internet. Since upload speeds are usually much slower than download speeds, remote streaming will quickly use up, or saturate, your upload capacity. Once the upload capacity from your home is at its limit, everything else will suffer in some way. Online video games will get laggy, drop packets, and feel awful. Video conferencing will become glitchy, and even download speeds can drop.

Luckily, Plex offers two ways to limit how much bandwidth it will use (though you probably only need to tap into one of them). The first way, and the one you can probably skip, is to set the Plex client itself to be a good citizen in the “Quality” section of the settings screen. It looks like this:

Set the “Internet Streaming” video quality to Maximum unless your system can’t handle full quality

Most of the time you can leave this set to “Maximum”. If you find your player is still stuttering you can lower it. This will usually happen if the download speed of your connection is slow or if you just want to save bandwidth.

The more beneficial setting is located in the server settings section under “Remote Access”. The settings on this page affect your Plex server globally, for all clients. In my home, my upload speed is about 11 Mbps. To ensure that others in the home have adequate upload capacity I set my upload speed and video quality to 4 Mbps, as you can see here:

Set the Internet upload speed to some fraction of your total upload speed

By configuring Plex with a value lower than your total upload speed you force the entire server to use less than your connection can handle, regardless of how many streams are coming off of it. This leaves room for other applications on your network if they need it.

Keep in mind that the limit applies to all remote streams combined, and if there are enough of them the setting could be too low, causing stuttering on their side as they pause and buffer the content. The setting also applies to downloads, so even if someone downloads content in high quality for offline viewing they will be limited to whatever value you put here.

A while ago I wanted to improve the audio setup in my office. I ordered some bookshelf speakers and hooked them up to an old receiver I had sitting around doing nothing. To improve the setup further I decided to use a USB-C to optical adapter so that I could feed pure digital audio into my receiver. The receiver has a better DAC than my Mac and it reduced the noise a bit as well. Once I did this, however, I noticed that system sound effects and the first little bit of any music would not play. This is because macOS does its best to save power and will power off (or whatever it is doing) the sound card when idle. In turn, the receiver sees no signal and sort of forgets what sound format it was receiving. When audio starts playing again it has to determine what type of audio it is getting before outputting anything. Of course, this takes enough time that most system sounds won’t play at all; instead I’d get a small pop in the speakers as the receiver “came online”.

To combat this I found this little helper – https://github.com/mttrb/antipopd. This app continuously sends what amounts to silence (in the background it is the say utility saying “space”). This keeps the audio chain alive and prevents any pops or delays in audio output. I highly recommend it.
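If you’d rather not run an extra app you can approximate the same trick with a few lines of shell. This is only a sketch of the idea antipopd implements, not its actual code:

#!/bin/sh
# keep the audio output alive by periodically "saying" a space,
# which produces no audible speech but keeps the sound device awake
while true; do
  say " "
  sleep 10
done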

From the draft archives. This is a post I started over ten years ago but never got around to finishing. It discusses my reaction to someone telling me the web was dead and that mobile was taking over. Their argument was that apps would replace websites. I disagreed. I have left the majority untouched, cleaning up the language a bit. I left some final thoughts at the end.

Someone told me recently that the web is dead and that the future is mobile. What they really meant was that browsing the web with a traditional web browser is dead. But they’re wrong; all that has really happened is that mobile devices have now become viable options for accessing the vast amount of information and resources available on the Internet. The web isn’t dead, mobile devices just don’t suck anymore.

Thanks to the iPhone there has been a major shift in how people think about the web and how mobile devices fit in. The mobile web experience is no longer limited to a simple list of links and no images. It’s fuller and more capable. It’s rich with images, audio and even video. People care about ensuring their information is fully accessible to people on the go and looks great while using small devices. And if a site can’t be massaged to work with the iPhone then a specialized app can be created to ensure the end user has a great experience.

Of course, Apple is no longer the only vendor out there trying to create a great end user experience. The most notable competitor to iPhone is nearly any Android based phone. Android is incredibly young as far as mobile OSs go but already it’s a worthy competitor to Apple’s iOS. Either device is capable of providing a full web experience.

Mobile devices won’t replace the web experience we all know today. They simply extend it. They are extensions of our desktop computers, devices we can use while on the go to keep up with all of the information available to us. The key is to ensure that end users are able to access the information they want in a convenient manner, whether that means creating a mobile template for your site or even building a dedicated app.

That was my original post, from June 8, 2010.

While I don’t believe (and continue not to believe in 2020) that mobile devices will completely replace computers, I do think they will become the primary device for a lot of people.

As I continue to mess around with various ways of installing and running Kubernetes in my home lab using Rancher, I keep coming up with different ways to solve similar problems. Each time I set it up using a different host OS I learn a bit more, which is my primary goal. The latest iteration uses CentOS 8 and allows iSCSI based persistent storage to work properly. I want to use CentOS 8 because it includes a newer kernel required for doing buildx based multi-arch builds. In this post, I’ll go through the process of setting up CentOS 8 with Docker and which utilities to install to support NFS and iSCSI based persistent storage so that it works properly with Rancher.
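The storage related prerequisites boil down to a couple of packages. As a rough sketch (these are the standard CentOS 8 package and service names; the full details are in the post):

# utilities required for NFS and iSCSI based persistent volumes
sudo dnf install -y nfs-utils iscsi-initiator-utils

# make sure the iSCSI daemon is running so volumes can be attached
sudo systemctl enable --now iscsid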


I keep doing more multi-architecture builds using buildx and continue to find good information out there to help refine the process. Here is a post I found that I thought I’d share; it discusses how to build multi-architecture Docker images using AWS Graviton2 based instances, which are ARM based: https://www.smartling.com/resources/product/building-multi-architecture-docker-images-on-arm-64-bit-aws-graviton2/. I haven’t officially tried this yet, but the same process should also work on a Raspberry Pi 4 with the 64-bit Raspberry Pi OS installed.
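For reference, the core of a buildx multi-arch build is only a couple of commands regardless of whether the builder is a Graviton2 instance or a Pi. This is a generic sketch with a placeholder image name, not the exact steps from the linked post:

# create and use a builder instance that can target multiple platforms
docker buildx create --name multiarch --use

# build for amd64 and arm64 and push the multi-arch manifest to a registry
docker buildx build --platform linux/amd64,linux/arm64 -t registry.example.com/myapp:latest --push .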

Under some conditions, you may find that your Docker in Docker builds will hang or stall out, especially when you combine DIND based builds and Kubernetes. The fix for this isn’t always obvious because the problem doesn’t exactly announce itself. After a bit of searching, I came across a post that describes the issue in great detail: https://medium.com/@liejuntao001/fix-docker-in-docker-network-issue-in-kubernetes-cc18c229d9e5.

As described, the issue is actually due to the MTU the DIND service uses when it starts. By default, it uses 1500. Unfortunately, a lot of Kubernetes overlay networks set a smaller MTU of around 1450. Since DIND is a service running on an overlay network, it needs to use an MTU equal to or smaller than the overlay network’s in order to work properly. If your build process happens to download a file and the packets coming back are larger than the overlay network can carry, the build will wait indefinitely for data that will never arrive. This is because DIND, and the app using it, think the MTU is 1500 when it is actually 1450.
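If you want to confirm what MTU your pods are actually getting, you can check from inside any running pod on the overlay network. The pod and namespace names here are just placeholders:

# print the MTU of the pod's primary network interface
kubectl exec -n some-namespace some-pod -- cat /sys/class/net/eth0/mtu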

Anyway, this isn’t about what MTU is or how it works; it’s about how to configure a GitLab job that uses the DIND service with a smaller MTU. Thankfully it’s easy to do.

In your .gitlab-ci.yml file, where you enable the dind service, add a command parameter that passes the MTU to the daemon, like this:

Build Image:
  image: docker
  services:
    - name: docker:dind
      command: ["--mtu 1000"]
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: tcp://localhost:2375

The example shown will work if you are using a Kubernetes based GitLab Runner. With this added, you should find that your build stalls go away and everything works as expected.


In this post I’m going to review how I installed Rundeck on Kubernetes and then configured a node source. I’ll cover the installation of Rundeck using the available helm chart, configuration of persistent storage, ingress, node definitions and key storage. In a later post I’ll discuss how I set up a backup job to perform a backup of the server hosting this site.

For this to work you must have a Kubernetes cluster that allows for ingress and persistent storage. In my cluster I am using nginx-ingress-controller for ingress and freenas-iscsi-provisioner for storage. The freenas-iscsi-provisioner is connected to my FreeNAS server and creates iSCSI based storage volumes. It is set as my default storage class. You will also need helm 3 installed.
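Before going any further it’s worth confirming that a default storage class is actually set, otherwise Rundeck’s persistent volume claim will sit in Pending. A quick way to check (the provisioner names will obviously differ in your cluster):

# the default storage class is marked with "(default)" in the output
kubectl get storageclass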

With the prerequisites out of the way we can get started. First, add the helm chart repository by following the directions located at https://hub.helm.sh/charts/incubator/rundeck. Once added, run the following to get the values file so we can edit it:

helm show values incubator/rundeck > rundeck.yaml
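Once rundeck.yaml has been edited (the rest of this post walks through what to change), installing the chart with those values looks roughly like this; the release name and namespace are just my choices:

# install the chart into its own namespace using the customized values
helm install rundeck incubator/rundeck -f rundeck.yaml --namespace rundeck --create-namespace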

Have you ever wanted to write out a large, templated config file using only shell script code? Maybe you are working with a small, low powered IoT device or some other constrained system and you want to avoid additional dependencies for a single task. In these situations a larger configuration management tool can be too heavy or just not practical. In this post I’ll explore the envsubst utility as a way to write out a config file from a template. In the end you’ll see that envsubst is a great, lightweight utility for creating config files.
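As a preview of the pattern the post covers, here is envsubst at its most basic. The template contents and variable names are just examples:

# create a small template containing shell-style variable references
cat > app.conf.template <<'EOF'
listen_port = ${LISTEN_PORT}
server_name = ${SERVER_NAME}
EOF

# export the values, then render the final config file
export LISTEN_PORT=8080 SERVER_NAME=example.com
envsubst '${LISTEN_PORT} ${SERVER_NAME}' < app.conf.template > app.conf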


If you work with AWS using CLI tools I highly recommend aws-vault to help keep your AWS keys secure. Be sure to visit the usage guide for full details on setup. I configured my copy to be unlocked when I am actively using my computer. It’s also a good idea to ensure your storage is encrypted.
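Day to day usage boils down to two commands; the profile name here is just an example:

# store credentials for a profile in your operating system's keychain
aws-vault add my-profile

# run a command using short-lived credentials generated for that profile
aws-vault exec my-profile -- aws s3 ls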