In late August of 2021 the company behind Docker Desktop announced plans to change the licensing model of their popular Docker solution for Mac and Windows. The announcement means that many companies which have been using Docker Desktop will now need to pay for the privilege. Thankfully, the open source community is working on a replacement.

For those not aware, Docker doesn’t run natively on Mac. Docker Desktop is actually a small Linux VM running real Docker inside of it, with a bunch of magic layered on top to make it look and feel like it is running natively on your system. It is for this reason that Docker Desktop users get to enjoy abysmal volume mount performance: shuffling files (especially small ones) across the VM boundary requires too much metadata passing to be efficient. Any solution for running Docker on a Mac will need to behave the same way and will inherit the same limitations.

Colima is a command line tool that builds on top of lima to provide a more convenient and complete-feeling Docker Desktop replacement, and it already shows a lot of promise. Getting started with colima is very simple as long as you already have brew and the Xcode command line tools installed. Simply run brew install colima docker kubectl and wait for the process to finish. You don’t need Docker Desktop installed; in fact, you should not have it running. Once the installation is complete you can start colima with:

colima start

This will launch a default VM with the docker runtime enabled and configure docker for you. Once it completes you will have a working installation. That’s literally it! Commands like docker run --rm -ti hello-world will work without issue. You can also build and push images. It can do anything you used Docker Desktop for in the past.
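
As a quick sanity check once colima start finishes, a short sequence like the following should exercise the basics. The example/app tag is purely illustrative, and the push assumes you are already logged in to a registry.

docker run --rm -ti hello-world
docker build -t example/app:latest .   # builds from a Dockerfile in the current directory (hypothetical tag)
docker push example/app:latest         # assumes you are logged in to the target registry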

Mounting Volumes

Out of the box colima will mount your entire home directory as a read-only volume within the colima VM, which makes it easily accessible to Docker. Colima is not immune to the performance issues that Docker Desktop struggled with, but the read-only option does seem to provide reasonable performance.

If, for any reason, you need the volumes you mount to be read/write, you can do that when you start colima. Add --mount <path on the host>:<path visible to Docker>[:w]. For example:

colima start --mount $HOME/project:/project:w

This will mount $HOME/project as /project within the colima VM, where Docker can see it, and it will be writeable. As of this writing the ability to mount a directory read/write is considered alpha quality, so you are discouraged from mounting important directories like your home directory.

In my testing I found that mounting volumes read/write was in fact very slow. This is definitely an area where I hope some magic solution can be found to bring performance closer to what Docker Desktop was able to achieve, which still wasn’t great for large projects.
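
When you want a container to use a writeable mount, you reference the path as it appears inside the colima VM. A minimal sketch, assuming colima was started with the --mount example above:

# Assumes: colima start --mount $HOME/project:/project:w
docker run --rm -v /project:/work -w /work alpine:3 sh -c 'echo hello > from-container.txt && ls -l'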

Running Kubernetes

Colima also supports k3s-based Kubernetes. To get it started, issue colima stop and then colima start --with-kubernetes. This will launch colima’s virtual machine, start k3s, and then configure kubectl to work against your new, local k3s cluster (this may fail if you have an advanced kubeconfig arrangement).
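
The whole flow, plus a quick check that the cluster is reachable, looks roughly like this (the nginx deployment is just a hypothetical smoke test):

colima stop
colima start --with-kubernetes
kubectl get nodes                              # should report a single Ready node backed by k3s
kubectl create deployment web --image=nginx    # hypothetical smoke test
kubectl get pods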

With Kubernetes running locally you are now free to install apps however you like.

Customizing the VM

You may find the default VM to be a bit on the small side, especially if you decide to run Kubernetes as well. To give your VM more resources, stop colima and then start it again with colima start --cpu 6 --memory 6. This will dedicate 6 CPU cores and 6GB of memory to your colima VM. You can get a full list of options by simply running colima with no arguments.
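
For example, resizing the VM looks like this; colima status is assumed to be available in your version for confirming the new settings:

colima stop
colima start --cpu 6 --memory 6
colima status   # confirm the VM is running with the new resources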

What to expect

This is a very young project that already shows great potential. A lot is changing, and the code base already includes the ability to create additional colima VMs that run under different architectures. For example, you can run arm64 Docker images on your amd64-based Mac or vice versa.

Conclusion

Colima is a young but promising project that can easily replace Docker Desktop, and if you are a Docker user I highly recommend giving it a try and providing feedback if you are so inclined. It can run Docker containers, docker-compose based apps, and Kubernetes, and it can build images. With some effort you can also do multi-arch builds (which I’ll cover in a later post). You will find the project at https://github.com/abiosoft/colima.

Working from home doesn’t mean I’m always working from home. Sometimes I am out and about with my laptop. However, I’m also a person who just prefers to use a desktop whenever possible. I find the process of disconnecting or reconnecting all the external devices I use tedious, so I avoid it as much as possible. This is why my main system is a personal Mac mini from late 2018 and my portable system is a company-provided MacBook Pro from 2015. Since the mini is my primary system, most of what I’m working with lives on that machine, and I will either ssh into the mini from the portable to run some commands or use SMB to mount the files (over a VPN of course) so I can edit them using VS Code.

One tool I make heavy use of is aws-vault. This tool, which I’ve written about previously, allows you to put your AWS credentials into macOS’s keychain system. Using macOS’s keychain keeps the information off of the file system as plain text and allows me to sync the data between Macs (and iPhone). When sitting at a Mac your keychain will be unlocked when you enter your password. However, when accessing a Mac remotely using ssh the keychain will remain locked, which makes aws-vault and some other tools a bit more difficult to use. Luckily, there is a way to unlock the keychain so you can use it properly.

Once you have a remote shell into your Mac you can issue security list to view the keychains that can be unlocked. In my case, I want to unlock the aws-vault keychain, so I issue security unlock /Users/dustin/Library/Keychains/aws-vault.keychain-db. After pressing enter you are asked for your system password. Enter it, press enter again, and the keychain will be unlocked. To unlock your default keychain simply issue security unlock.
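
For reference, here is the same workflow using the long-form subcommand names from the security man page; the keychain path matches the aws-vault example above:

security list-keychains                                              # list the keychains on the search list
security unlock-keychain ~/Library/Keychains/aws-vault.keychain-db   # prompts for your password
security unlock-keychain                                             # unlocks the default (login) keychain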

With your keychain unlocked, tools that depend on the keychain will begin to work properly.

Back in 2008, I bought my second Mac, a unibody MacBook, to give me a more capable and portable system than my existing Mac mini. The mini was a great little introduction to the Mac world but wasn’t portable. The MacBook got used for several years until software got too heavy for it. Rather than getting rid of it, I kept the machine around to run Linux. Eventually, I introduced it as part of my home lab. In my home lab, I use Proxmox as a virtualization system. Proxmox can be set up as a cluster with shared storage so VMs and LXC containers can be migrated between physical hosts as needed. For a while I had Linux installed onto the MacBook and it was part of the Proxmox setup just so I could play around with VM migration.

Eventually, though, the limitations of the hardware were making the hassle of keeping the system running and updated less worthwhile and I removed it from the cluster. Still not wanting to get rid of it, I decided to introduce it into my HiFi system as a way to play music using its built-in optical out (a feature that has been removed from recent Macs) to my receiver. Using optical into the receiver allows me to utilize the DAC that is present in the receiver rather than whatever my current solution is using. In theory, it should sound better. Anyway, this started my adventure in getting macOS running on an older Mac again, which was harder than I had anticipated.

Usually, installing macOS on a Mac is a straightforward affair, at least when the hardware is new. When using older hardware there are a few extra steps you may need to take to get things going. Installing El Capitan on my old MacBook required the following:

  • External USB drive to install macOS onto
  • USB flash drive to hold the installer files
  • Carbon Copy Cloner
  • Another Mac
  • Install ISO
  • Patience

The first issue I ran into was how to actually get an older version of macOS that runs on the machine. I no longer have the restore CD/DVD for the system; normally I keep these, but for some reason I’m missing the disc for this particular one. Since I had previous experience installing El Capitan on this Mac I knew there would be issues I’d need to overcome. To make it easier on myself I installed an even older version that I could then upgrade from. I also installed the OS onto an external drive so that I could complete a portion of the install using a different machine.

It is generally agreed upon that Mountain Lion was the last version of macOS (then called OS X) that was not intended to be installed on SSD based systems. Mountain Lion is also not signed in a way that prevents it from being installed in 2020, an important point as you’ll see later. After some searching, I found this as a source for the ISO file I needed to install Mountain Lion. Keep in mind that I was installing on a system with a blank hard drive, so I needed to download the fully bootable ISO. The file I downloaded is specifically this one – https://sundryfiles.com/31KE. After downloading the file and using Etcher to copy the ISO to a USB flash drive, I was able to install Mountain Lion without any issues. With a fully working, if outdated, system up and running I moved on to tackling the El Capitan installation.

With the system running I took the necessary steps to get signed into the App Store. This alone is a small challenge because the App Store client that ships with Mountain Lion doesn’t natively understand the extra account protections Apple has introduced in recent years. Pay attention to the messaging on screen and it’ll tell you how to log in (it amounts to entering your password plus the security code that appears on your phone or second Mac). Once logged in I downloaded the El Capitan installer to the disk.

After getting the installer I had to deal with the first issue: the installer will fail if there is no battery installed! The battery in my MacBook had been removed because it was beginning to swell. To be safe I removed it so it could be recycled rather than allowing it to become a spicy pillow and burn down my house. If you attempt to install El Capitan on a Mac laptop without a battery installed you’ll get a cryptic error about a missing or invalid node. To fix this I removed the external drive from the machine and attached it to another Mac laptop I have that does have a battery. For safety, I also disconnected that machine’s internal hard drive prior to finishing the upgrade process.

The next issue I had to deal with was the fact that, while El Capitan is the newest version of macOS that will run on a 2008 MacBook, it is still from 2015. Being fully signed, it will fail to install in 2020 because the certificate used to sign the packages has since expired! To deal with this issue I followed the steps outlined at https://techsparx.com/computer-hardware/apple/macosx/install-osx-when-you-cant.html. Setting the date back worked great and I was able to finish the upgrade using the second Mac. Once the upgrade was done I moved the external drive back to my 2008 MacBook and performed the final step.
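
For anyone following along, the date trick amounts to disabling automatic date and time in System Preferences and then setting the clock back to a point when the signing certificate was still valid. A rough sketch, assuming the BSD date format used by macOS:

# Format is [[[mm]dd]HH]MM[[cc]yy]; this sets the clock to Jan 1, 2015 at 12:00
sudo date 0101120015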

The final step of the process is to move the installation from the external drive to the internal drive. My MacBook still has the original 256GB HD that was included with the system. It is very slow by today’s standards but will be just fine for its new use case. For this task, I turned to the excellent Carbon Copy Cloner. After cloning the external drive to the internal drive my installation of El Capitan was complete. I was then able to connect the laptop to my receiver using an optical cable and enjoy music!

macOS Big Sur is set to change a lot about how the interface looks by, primarily, bringing in a lot of elements from iOS. Some changes include updates to notification windows, the inclusion of Control Center into the menu bar and an overall unification of the design language used for app icons and the dock. App icons now sport the same rounded square look that iOS has used for years and the dock itself is very similar to what you see on iPad. The changes help freshen up the look of macOS and bring a sort of familiarity and consistency that didn’t exist before between the two operating systems. Like a certain rug, it really ties things together.

Other changes, however, feel really off or don’t come across as well and I’m holding out hope that future iterations of the beta will adjust these items or even revert to the previous design before we see the full release of Big Sur.

Let’s start with the menu bar:

The new menu bar design is now almost entirely transparent. Because of the transparency, the chosen background comes through loud and clear. So much so that dark backgrounds will make the traditionally black lettering of the menu bar impossible to see. To combat this, the text is rendered in white when the background crosses some threshold so that the text remains legible regardless of the background.

This has a couple of undesirable side effects.

For starters, the new design completely ignores your light versus dark mode preference. Got a dark background? Your menu bar now appears as if you’ve selected dark mode even though the rest of your display is set to light. Of course, the opposite applies if you pick a light background but prefer dark mode. While it is possible to disable the transparency by selecting “Reduce transparency” in the Accessibility options, that option also affects the otherwise excellent looking dock.

A less serious issue is that the lack of any delineation between where your apps live and where the menu bar starts creates a general sense of awkwardness; you just have this floating text. In the previous (and long-standing) design the menu bar was an obvious feature of the overall desktop. Now, it’s just some floating stuff that doesn’t match my light versus dark mode preference.

Jumping over to Notification Center, we’re greeted with additional changes. To be honest, it’s not immediately clear to me why notifications are changing, as they’ve been nearly perfect in the past two revisions of the OS. The changes don’t really feel connected in any way to their iOS counterparts, and how could they be when macOS lacks the contextual swipe options that iOS has? Anyway, the changes really feel like change for the sake of change, and they offer horrible UX for the end user.

Take the following screenshot, captured while the mouse is over the notification:

When the mouse is over the notification you get additional ways to interact with it. This is similar to previous versions of macOS, except now some of your options are hidden away in a small submenu. This small submenu, unlike the previous buttons, is much more difficult to interact with quickly. In the previous design, the right side of the notification was split into two parts that were easy to hit with the mouse with little thought. The new design requires a bit more finesse to hit the intended target. Not impossible, of course, but something that can take you out of “the zone” and gets annoying if you interact with notifications often. The little menu also interferes with the text of the notification in a way that feels sloppy.

The operating system itself is not the only thing seeing changes to look more like iOS. Many, if not all, of the core Apple apps are also receiving modifications to make them look more like they might if they were on iOS. Safari and Mail, for example, both look much more like their iOS counterparts than ever before. All of them also lost nearly all contrast, making everything blend together. This makes some tasks that were once easy, like determining which tab is active in Safari, nearly impossible without really paying attention:

Activity Monitor never looks like anything is active; everything appears to be either unavailable or somehow inactive:

Especially when compared to the battery preferences window:

It’s entirely fair to say that Big Sur is still in beta and a lot of this could still change before the full release. I sincerely hope that changes do happen to at least the elements I’ve featured here, as they are the most glaring items I’ve seen so far. They are also items you interact with on a daily basis, so they have to be right. Apple has always been about sweating the details and nailing the user experience. It is a huge reason I’ve been running macOS since they switched to the Intel platform years ago, but Big Sur really feels like a step in the wrong direction. There are a lot of things to like about Big Sur, but they can easily be negated by missing the mark on the bits we interact with the most.

Whatever your use case may be, it is possible on at least macOS 10.15+ to modify the number of CPU cores that are currently online or available in real time. The available CPU core count can be modified using a utility called cpuctl.

To get a list of currently active CPU cores, issue sudo cpuctl list. Here is the output on my two-core, four-thread MBP:

sudo cpuctl list
Password:
CPU0: online type=7,8 master=1
CPU1: online type=7,8 master=0
CPU2: online type=7,8 master=0
CPU3: online type=7,8 master=0

To limit the core count to just two, issue sudo cpuctl offline 2 3. Now cpuctl list will show the following:

CPU0: online type=7,8 master=1
CPU1: online type=7,8 master=0
CPU2: offline type=7,8 master=0
CPU3: offline type=7,8 master=0

To bring them back online, a simple online operation can be done: sudo cpuctl online 2 3. Now the listing has returned to normal:

CPU0: online type=7,8 master=1
CPU1: online type=7,8 master=0
CPU2: online type=7,8 master=0
CPU3: online type=7,8 master=0

Keep in mind you should not offline the CPU marked as “master”; doing so will cause your system to become unresponsive even if you leave the others running.
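
If you want a second opinion on how many cores are currently active, a couple of sysctl keys can be checked; whether hw.activecpu reflects cpuctl changes immediately is an assumption worth verifying on your own system:

sysctl hw.ncpu        # total logical CPUs the kernel knows about
sysctl hw.activecpu   # logical CPUs currently considered active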

For detailed information take a look at the man page for it (man cpuctl).

MariaDB log output showing encryption is working

Utilizing encryption at rest to protect the database data living on your hard drive is a smart choice, especially when dealing with sensitive customer data. Encryption at rest protects the data files by encrypting the actual files the MySQL/MariaDB server reads and writes on the file system. Although they are binary files, they can still be read relatively easily using standard tools, and they can be “imported” into a different MySQL/MariaDB server. Encryption at rest is just one part of a total solution, and this post is going to cover what it takes to get it running using the AWS Key Management Service for key control on macOS using Homebrew.
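
As a preview of where this is headed, a minimal my.cnf sketch using MariaDB’s aws_key_management plugin might look like the following; the key alias and region are hypothetical, and the option names should be checked against your MariaDB version:

[mariadb]
plugin_load_add = aws_key_management
aws_key_management_master_key_id = alias/mariadb-at-rest   # hypothetical KMS key alias
aws_key_management_region = us-east-1                      # hypothetical region
innodb_encrypt_tables = ON
innodb_encrypt_log = ON
innodb_encryption_threads = 4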
