This one is primarily for the people responsible for ensuring a WordPress site remains available and running well. “Systems” people, if we must name them. If you’re a WordPress developer, you might want to ride along on this one as well so you and the systems or DevOps team can speak a common language when things go bad. Oftentimes, systems people will immediately blame developers for writing bad code, but the two disciplines must cooperate to keep things running smoothly, especially at scale. It’s important for both systems people and developers to understand how code works and scales on servers.

What I’m about to cover are some common performance issues that I see come up and then get misdiagnosed or “fixed” incorrectly. They’re the kind of thing that causes a WordPress site to become completely unresponsive or very slow. Some of this may seem obvious, and it is certainly very generalized, but I’ve seen enough bad calls to know there are plenty of people out there who get tripped up by these situations. None of the issues are necessarily code related, nor are they strictly WordPress related; they apply to many PHP-based apps. It’s all about how sites behave at scale. I’m going to explore WordPress site performance issues since that’s where my talents are currently focused.

In all scenarios, I’m expecting that you are running something that gets a decent amount of traffic at the server(s). I assume you are running a LEMP stack consisting of Linux, Nginx, PHP-FPM, and MySQL. Maybe you even have a caching layer like Memcached or Redis (and you really should). I’m also assuming you have a basic level of visibility into the app using something like New Relic.

Let’s get started.


I wanted to take a moment to expand on something that Chris Wiegman wrote over on his site about how to use parameters in a make target. That post expands on a previous post of his called “Automating WordPress Development with Make.” I was really excited about his initial post because make is a tool that I love putting to use. While Make is an older build tool, originally released back in 1976, it is just as useful today as ever.

In addition to the method described by Chris, there is another method that feels a touch more natural than using ENV vars to pass parameters. It looks like this:

# If the first argument is "good", pass the rest of the line to the target
ifeq (good,$(firstword $(MAKECMDGOALS)))
  # use the rest as arguments for "good"
  RUN_ARGS := $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS))
  # ...and turn them into do-nothing targets
  $(eval $(RUN_ARGS):;@:)
endif

.PHONY: bad good

bad:
        @echo "$(MAKECMDGOALS)"

good:
        @echo "$(RUN_ARGS)"

In this example, if you run make bad hello world you will get an error that looks like this:

make bad hello world
bad hello world
make: *** No rule to make target 'hello'.  Stop.

However, if you run make good hello world then the extra parameters are simply passed to the command in the target. The output looks like this:

make good hello world
hello world

The magic is, of course, coming from the ifeq section of the Makefile. It checks whether the first word of the Make command goals is the target we want to wrap. If it is, it captures the remaining words in RUN_ARGS and turns them into do-nothing targets so Make won’t try to build them, leaving them available for use elsewhere in the file.

For reference, I found this trick on https://stackoverflow.com/a/14061796 a while back and have put it to use in some internal tools where I want to wrap some command with additional steps.
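As a hypothetical sketch of that kind of wrapper (the wp target, the compose service name, and the wp-cli invocation below are made up for illustration, not taken from those internal tools), it could look something like this:

# If the first argument is "wp", treat the rest of the line as arguments
ifeq (wp,$(firstword $(MAKECMDGOALS)))
  RUN_ARGS := $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS))
  $(eval $(RUN_ARGS):;@:)
endif

.PHONY: wp

wp:
        @docker compose exec php wp $(RUN_ARGS)

Running make wp plugin list would then run docker compose exec php wp plugin list, with any extra setup steps wrapped around it in the recipe. One caveat: anything starting with a dash is parsed as an option by make itself, so flags need to be handled differently.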

Makefiles are a great tool partly because they give you an interface that stays similar across projects while letting you customize what is actually happening behind the scenes. This makes your project much more approachable for new people since you can simply document “run make” and take care of the difficult stuff for them. Advanced users can still get into the Makefile to see what is going on or even customize it.

I hope you find this tip useful!

While building multi-arch images, I noticed the process was really unreliable when done in my CI/CD pipelines. After a bit of research, I found that buildx and the QEMU emulation system aren’t quite stable when used with Docker in Docker. Although I could retry the job until it finishes, I decided to look into other ways of doing multi-arch builds with buildx.
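For context, a typical emulated multi-arch build with buildx looks roughly like the following (the image name is a placeholder, and this is a generic sketch rather than my exact pipeline configuration): register the QEMU binfmt handlers, create a builder, and build for multiple platforms in one pass.

# Register QEMU handlers so an amd64 host can run arm64 build steps
docker run --privileged --rm tonistiigi/binfmt --install all

# Create and select a buildx builder
docker buildx create --name multiarch --use

# Build and push images for both architectures
docker buildx build --platform linux/amd64,linux/arm64 -t registry.example.com/app:latest --push .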


For ages, I’ve had an issue with my Mac mini where it wouldn’t shut down properly after some unknown “event” occurred. Maybe it was running for a certain period of time, maybe I connected to some network share, or maybe it was something else entirely. Whatever it was, I could never figure out the exact cause.

Every month or so I’d do a search to see if anybody had figured out the issue yet, and recently I found this answer: https://apple.stackexchange.com/a/412649. Incredibly, the outlined solution seems to have solved the shutdown issues I was having. As it turns out, a few years ago I had implemented a “fix” for slow CLI apps, a slowness caused by the code signing subsystem of macOS. On newer releases, that fix caused some kind of issue that prevented the system from shutting down properly. By removing Terminal as described, my Mac mini has been able to reboot and shut down without issue.

The answer links to a site that explains the issue a bit more deeply – https://sigpipe.macromates.com/2020/macos-catalina-slow-by-design/.

If you run Plex at home and access it externally, you may want to limit the amount of bandwidth remote access is allowed to use. Not limiting the bandwidth Plex uses will affect other users on the same Internet connection, and those playing online games or doing video conferencing will be affected the most. The reason for this is that Plex is going to utilize your upload bandwidth, and all of it if you allow it to. Most households have asymmetric connections, meaning one direction is slower than the other, and typically it is the upload speed that is drastically slower. This makes sense as most people download content rather than push it to the Internet. Plex, however, turns that around and does push data to the Internet. Since upload speeds are usually much slower than download speeds, you will quickly use up, or saturate, your upload capacity. Once the upload capacity from your home is at its limit, everything else will suffer in some way. Online video games will get laggy, drop packets, and feel awful. Video conferencing will become glitchy, and even download speeds can drop.

Luckily, Plex offers two ways to limit how much bandwidth it will use (though you probably only need to tap into one of them). The first way, and the one you can probably skip, is to set the Plex client itself to be a good citizen in the “Quality” section of the settings screen. It looks like this:

Set the “Internet Streaming” video quality to Maximum unless your system can’t handle full quality

Most of the time you can leave this set to “Maximum”. If you find your player is still stuttering, you can lower it; that will usually be necessary if the download speed of your connection is slow, or if you just want to save bandwidth.

The more beneficial setting is located in the server settings section under “Remote Access”. The settings on this page affect your Plex server globally, for all clients. In my home, my upload speed is about 11 Mbit. To ensure that others in the home have adequate upload capacity, I set my upload speed and video quality to 4 Mbit, as you can see here:

Set the Internet upload speed to some fraction of your total upload speed

By configuring Plex with a value lower than your total upload speed, you force the entire server to stay under that limit, regardless of how many streams are coming off of it. This leaves room for other applications on your network when they need it.
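To put rough numbers on it using my connection: an 11 Mbit upload minus the 4 Mbit Plex cap leaves about 7 Mbit of headroom, which is comfortably more than a typical video call or online game needs on the upload side.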

Keep in mind that the limit applies to all remote streams combined, and if there are enough of them, the setting could be too low and cause stuttering on their side as they pause and buffer the content. The setting also applies to downloads, so even if someone downloads content to take offline in high quality, they will be limited to whatever value you put here.

For a long time, I’ve wanted a Spotify-like experience but with music I’ve collected over the years. I appreciate Spotify because I can stream music on my desktop and phone anywhere I am, but I can also download music locally to my phone to save on bandwidth. That said, there are some things that aren’t on Spotify, like video game soundtracks or OC ReMix content, that I also want to have with me. While Spotify can sync local content to my phone, the process is a bit cumbersome and the interface is, honestly, not great for it.

Enter Plex and Plexamp. Plex, probably better known as the tool of choice for those heavily into piracy, is actually a great way to store, catalog, and play your music. But Plexamp as a client (along with a paid Plex Pass) really takes things up a notch by offering an experience that is entirely focused on music playback, including hi-res formats and gapless playback. It also has some really clever genre-, mood-, and artist-based automatic playlists that are super slick. Combine this with the ability to stream your music from outside your home, or take it offline at any bitrate you want (including FLAC), and you really have a winner.

While I could do a video showing it off, I don’t think I could do a better job than this one – https://www.youtube.com/watch?v=oLI6S1rncu0.

A while ago I wanted to improve the audio setup in my office. I ordered some bookshelf speakers and hooked them up to an old receiver I had sitting around doing nothing. To improve the setup further, I decided to use a USB-C to optical adapter so that I could feed pure digital audio into my receiver. The receiver has a better DAC than my Mac, and it reduced the noise a bit as well. Once I did this, however, I noticed that sound effects from the system and the first little bit of music would not play. This is because macOS does its best to save power and will power down (or whatever it is doing to) the audio output. In turn, the receiver would see no signal and sort of forget what sound format it was receiving. When audio started to play, it would have to determine what type of audio it was getting before outputting it. Of course, this takes long enough that most system sounds won’t play, and instead I’d get a small pop in the speakers as the receiver “came online”.

To combat this, I found this little helper – https://github.com/mttrb/antipopd. This app sends what amounts to silence (in the background it is the say utility speaking a space). This keeps the audio chain alive and prevents any pops or delays in audio output. I highly recommend it.
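If you’re curious what that looks like, a bare-bones stand-in for the same idea (my own sketch, not the actual antipopd code, and the 10-second interval is an arbitrary guess) would be a simple shell loop:

# Keep the audio output active by "speaking" silence on a loop
while true; do
  say " "
  sleep 10
done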

Working from home doesn’t mean I’m always working from home. Sometimes I am out and about with my laptop. However, I’m also a person who just prefers to use a desktop whenever possible. I find the process of disconnecting and reconnecting all the external devices I use tedious, so I avoid it as much as possible. This is why my main system is a personal Mac mini from late 2018 and my portable system is a company-provided MacBook Pro from 2015. Since the mini is my primary system, most of what I’m working with lives on it, and I will either ssh into the mini from the portable to run some commands or use SMB to mount the files (over a VPN, of course) so I can edit them using VS Code.

One tool I make heavy use of is aws-vault. This tool, which I’ve written about previously, allows you to put your AWS credentials into macOS’s keychain system. Using the keychain keeps the information off of the file system as plain text and allows me to sync the data between Macs (and iPhone). When sitting at a Mac, your keychain is unlocked when you enter your password. However, when accessing a Mac remotely using ssh, the keychain remains locked, which makes aws-vault and some other tools a bit more difficult to use. Luckily, there is a way to unlock the keychain so you can use it properly.
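As a quick illustration of what that day-to-day use looks like (the profile name here is made up), aws-vault stores the credentials once and then injects short-lived ones into whatever command you run:

# Store credentials for a profile in the macOS keychain
aws-vault add myprofile

# Run a command with temporary credentials for that profile
aws-vault exec myprofile -- aws s3 ls

It is the exec step that needs keychain access, and over ssh that is where things stall until the keychain is unlocked.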

Once you have a remote shell into your Mac, you can issue security list to view the keychains that can be unlocked. In my case, I want to unlock the aws-vault keychain. To do so, I issue security unlock /Users/dustin/Library/Keychains/aws-vault.keychain-db. After pressing enter you are asked for your system password. Enter it, press enter again, and the keychain will be unlocked. To unlock your default keychain, simply issue security unlock.
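For reference, here is the same flow using the long-form subcommand names (the keychain path matches my setup; substitute your own user and keychain):

# List the keychains in the current search path
security list-keychains

# Unlock a specific keychain (you will be prompted for your password)
security unlock-keychain /Users/dustin/Library/Keychains/aws-vault.keychain-db

# Unlock the default login keychain
security unlock-keychain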

With your keychain unlocked, tools that depend on it will begin to work properly.

Back in 2008, I bought my second Mac, a unibody MacBook, to give me a more capable and portable system than my existing Mac mini. The mini was a great little introduction to the Mac world but wasn’t portable. The MacBook got used for several years until software got too heavy for it. Rather than getting rid of it, I kept the machine around to run Linux. Eventually, I introduced it as part of my home lab. In my home lab, I use Proxmox as a virtualization system. Proxmox can be set up as a cluster with shared storage so VMs and LXC containers can be migrated between physical hosts as needed. For a while I had Linux installed onto the MacBook and it was part of the Proxmox setup just so I could play around with VM migration.

Eventually, though, the limitations of the hardware were making the hassle of keeping the system running and updated less worthwhile and I removed it from the cluster. Still not wanting to get rid of it, I decided to introduce it into my HiFi system as a way to play music using its built-in optical out (a feature that has been removed from recent Macs) to my receiver. Using optical into the receiver allows me to utilize the DAC that is present in the receiver rather than whatever my current solution is using. In theory, it should sound better. Anyway, this started my adventure in getting macOS running on an older Mac again, which was harder than I had anticipated.

Usually, installing macOS on a Mac is a straightforward affair, at least when the hardware is new. When using older hardware, there are a few extra steps you may need to take to get things going. Installing El Capitan on my old MacBook required the following:

  • External USB drive to install macOS onto
  • USB flash drive to hold the installer files
  • Carbon Copy Cloner
  • Another Mac
  • Install ISO
  • Patience

The first issue I ran into was how to actually get an older version of macOS that runs on the machine. I no longer have the restore CD/DVD for the system; normally I keep these, but for some reason I’m missing the disc for this particular machine. Since I had previous experience installing El Capitan on this Mac, I knew there would be issues I’d need to overcome. To make it easier on myself, I installed an even older version that I could then upgrade from. I also installed the OS onto an external drive so that I could complete a portion of the install using a different machine.

It is generally agreed that Mountain Lion was the last version of macOS (then called OS X) that was not intended to be installed on SSD-based systems. Mountain Lion is also not signed in a way that prevents it from being installed in 2020, an important point as you’ll see later. After some searching, I found a source for the ISO file I needed to install Mountain Lion. Keep in mind that I was installing on a system with a blank hard drive, so I needed to download the fully bootable ISO. The file I downloaded is specifically this one – https://sundryfiles.com/31KE. After downloading the file and using Etcher to copy the ISO to a USB flash drive, I was able to install Mountain Lion without any issues. With a fully working, if outdated, system up and running, I moved on to tackling the El Capitan installation.

With the system running, I took the necessary steps to get signed into the App Store. This alone is a small challenge because the App Store app that ships with Mountain Lion doesn’t know how to natively deal with the extra account protections Apple has introduced in recent years. Pay attention to the messaging on screen and it will tell you how to log in (it amounts to entering your password followed by the security code that appears on your phone or second Mac). Once logged in, I downloaded the El Capitan installer to the disk.

After getting the installer, I had to deal with the first issue: the installer will fail if there is no battery installed! The battery in my MacBook had been removed because it was beginning to swell. To be safe, I removed it so it could be recycled rather than allowing it to become a spicy pillow and burn down my house. If you attempt to install El Capitan on a Mac laptop without a battery installed, you’ll get a cryptic error about a missing or invalid node. To get around this, I removed the external drive from the machine and attached it to another Mac laptop I have that does have a battery. For safety, I also disconnected the internal hard drive prior to finishing the upgrade process.

The next issue I had to deal with was the fact that, while El Capitan is the newest version of macOS that will run on a 2008 MacBook, it is still from 2015. Being fully signed, it will fail to install in 2020 because the certificate used to sign the packages has since expired! To deal with this issue, I followed the steps outlined at https://techsparx.com/computer-hardware/apple/macosx/install-osx-when-you-cant.html. Setting the date back worked great and I was able to finish the upgrade using the second Mac. Once the upgrade was done, I moved the external drive back to my 2008 MacBook and performed the final step.
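For reference, the date trick boils down to a single command in Terminal (opened from the installer’s Utilities menu, or run with sudo on a booted system). The exact date below is just an example; anything inside the installer certificate’s validity window should work:

# BSD date format is mmddHHMMyy: set the clock to June 1st, 2016, 12:00
date 0601120016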

The final step of the process is to move the installation from the external drive to the internal drive. My MacBook still has the original 256GB HD that was included with the system. It is very slow by today’s standards but will be just fine for its new use case. For this task, I turned to the excellent Carbon Copy Cloner. After cloning the external drive to the internal drive my installation of El Capitan was complete. I was then able to connect the laptop to my receiver using an optical cable and enjoy music!

Do a Google search for “2018 mac mini bluetooth issues” and you’ll get a lot of hits. The Bluetooth issues with the 2018 Mac mini are well documented. What isn’t as well documented is how to work around the issue. I say work around because I have yet to find a proper solution to the issue.

To be fair, the issue isn’t unique to the Mac mini itself. The system just seems to suffer from it more readily than others. As it turns out, USB 3 can cause interference in the 2.4–2.5 GHz range. This is the same frequency range that Bluetooth operates in.

Let’s take a look at how the issue manifests itself. If you are using Bluetooth devices like a wireless mouse, keyboard, AirPods or any combination thereof and you are using the type A USB 3 ports on the back of the system, you will most likely experience periods of missed keystrokes, poor mouse tracking or stuttering audio.

To work around the issue, I found a few references in my Google searches pointing at the USB 3 ports. As it turns out, not using the built-in Type-A USB 3 ports really is the key to avoiding the issue. Instead, get yourself a USB-C based hub that offers USB 3 ports, or simply an adapter that converts USB-C to a Type-A USB 3 connector. With this in place, I have eliminated all of the connectivity issues I had been having.

This is an unfortunate hack that removes an otherwise useful feature of the Mac mini. While you can still get full speed using a USB-C adapter it would be better if you didn’t have to lose functionality or ports in order to work around what is an unfortunate coincidence between USB 3 and Bluetooth. There are potentially other ways to solve this using properly shielded cables or ferrite cores. I’d like to test these options in the future and if I do I’ll try to report my findings.