In this post I'm going to go over some of the things I have picked up about Mastodon over the past few weeks. Mastodon is better described right from the source, but in very basic terms it is a distributed, better known as "federated," Twitter-like system made up of any number of hosts communicating with each other using a standard set of protocols. Users of Mastodon exist on separate installations called instances, each with its own core topic or target audience. For example, I am currently on https://fosstodon.org, which is focused on bringing people interested in Free and Open Source Software together. Being on an instance does not mean you can only interact with people on that instance, or that you can't go off topic. Every instance (unless blocked by the instance admin, more on that later) is connected to other instances by way of users following each other. If I follow someone on a different instance, my instance becomes aware of it and will begin to draw in posts from that instance. In a way, the more people cross-follow each other across instances, the stronger the bond and the more data flows across all of them.

Larger instances with a lot of users will have the most diverse feeds available to you as a user, so there is a certain advantage to being part of a large community: the pool of potential people you can interact with is naturally larger. Each instance offers multiple feeds, consisting of the home feed of people you have chosen to follow, the local feed of everyone on your instance and the federated feed containing everything the server has discovered through its users, so you can change your social media experience on the fly. In addition to the multiple feeds, you have powerful self-moderation tools, including the ones you would expect like muting or outright blocking users, but you can also block entire instances with a couple of clicks. Just not into people posting Waifu stuff? Block the instances dedicated to that material. Even if the content doesn't violate any acceptable use policy, you are able to quickly block entire types of content from your feed.

The most recent release of Mastodon allows you to follow hashtags, making it easy to follow topics you like so long as users tag their posts properly. It is yet another great feature of the Mastodon system. Which brings me to one of my last points about using Mastodon: there is no "algorithm" pushing content towards you. Gone are posts that rise to the top because someone paid money to put them there. No more toxic, hot take garbage from bots filling your feed and drowning out what you're really looking for.

Owners of Mastodon instances are encouraged to commit to the Mastodon Server Covenant. This covenant is an agreement to certain terms in an effort to give users confidence in the Mastodon network and the instances they join. An instance that commits to the Mastodon Server Covenant agrees to, among other things, moderate content, back up the service and provide users with at least a three month warning prior to stopping service. Committing to the covenant is optional, but in exchange the instance will be listed as having committed to it, giving it much better visibility to potential new users than instances that haven't.

Moving on, the more technical, behind the scenes side of Mastodon is just as interesting. The most well known and visible portion of Mastodon is called ActivityPub. ActivityPub is not unique to Mastodon; it can be implemented by any number of systems and indeed there are a number of federated services that use it. ActivityPub is the piece that pushes content around between users and other instances. There is even a WordPress plugin, which this site uses, that allows people to subscribe to a feed of my blog posts. The additional services using ActivityPub are beyond the scope of this post.

Unlike Twitter, you can host your own Mastodon instance using the code available at https://github.com/mastodon/mastodon by following the directions at https://docs.joinmastodon.org/admin/prerequisites/. While I have no interest in hosting my own instance long term for a variety of reasons, I did want to understand how the software works as a whole and what it takes to host and scale it. As you can imagine, hosting a large instance is definitely a technical challenge, let alone the challenges of cost and moderation.

In an effort to learn how to run Mastodon, I created my own Helm chart, available at https://github.com/dustinrue/mastodon-helm-chart. My Helm chart is functional but targets a specific setup and is not currently ready for wide use. However, it taught me a lot about the moving parts that make up a full Mastodon instance. Designed to be highly scalable, Mastodon requires just two external services (ignoring file asset storage): Redis and PostgreSQL. Redis is used for caching various pieces of data while PostgreSQL stores the data that must persist, including user accounts, their settings, posts and so on. You also provide storage, which can be local storage or object storage like S3. Mastodon itself ships with everything else you need, which consists of Sidekiq with various queues, a web socket based streaming application and a web app. The web app is what users see when they access your instance, the streaming portion feeds data to your browser as it comes in, and Sidekiq handles a number of background tasks including shipping your posts to subscribed instances, ingesting updates from other instances and getting images and videos ready (and pushed into S3 if you are using that). All of the Sidekiq queues can be broken out into individual processes (which is absolutely necessary for large installations) so that more work can be done in parallel. It is very clear that the designers of Mastodon have considered each aspect of hosting a large installation and taken care to ensure it is possible. In fact, there is a sort of "brain dump" on what it takes to scale Mastodon from one of the primary stakeholders of the project at https://gist.github.com/Gargron/aa9341a49dc91d5a721019d9e0c9fd11.
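
To make those moving parts more concrete, below is a minimal, hypothetical docker-compose style sketch of the processes involved. It is only meant to illustrate the architecture; image tags, ports and environment handling are assumptions here, and the official docker-compose.yml in the Mastodon repository is the real reference.

# Illustrative only; environment variables are omitted and the official
# docker-compose.yml is what you would actually use.
version: "3"
services:
  db:
    image: postgres:14-alpine            # persistent data: accounts, settings, posts
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
  redis:
    image: redis:7-alpine                # caching and Sidekiq queues
  web:
    image: ghcr.io/mastodon/mastodon     # the web app users interact with
    command: bundle exec puma -C config/puma.rb
    ports:
      - "3000:3000"
  streaming:
    image: ghcr.io/mastodon/mastodon     # pushes new posts to browsers over web sockets
    command: node ./streaming
    ports:
      - "4000:4000"
  sidekiq:
    image: ghcr.io/mastodon/mastodon     # background jobs: federation, media processing
    command: bundle exec sidekiq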

In my short time with Mastodon, it has become one of the top three most exciting pieces of technology I have interacted with in my career, the other two being Linux and Kubernetes. The fact that I can run Mastodon on the other two makes it even better. Mastodon, and the protocols that power it, are the real “web 3.0” because it gives power back to the users. Come to think of it, Mastodon is like web 1.0 where its users hold the power and are in control. It was because of the Twitter shakeup that I discovered Mastodon and I hope that the current social media climate continues to bring greater awareness to the Mastodon network and it is able to continue growing. You can find me on fosstodon.org as @[email protected].

For a number of years I have self-hosted a few things out of my house. This has always meant that I need a way to allow public access to resources hosted on my home lab. In the past this has meant having a single system on my network that could act as a sort of gateway to everything else. This system runs Nginx and a number of vhost configurations to route traffic to the correct VM running whatever it is I’m looking for.
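
As a rough illustration, each vhost is just a small server block that proxies requests to the VM hosting that service. The hostname, certificate paths and upstream address below are placeholders:

# Hypothetical vhost; hostname, certificate paths and upstream IP are placeholders.
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/ssl/app.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/app.example.com.key;

    location / {
        proxy_pass http://192.168.1.50:8080;   # VM running the actual service
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}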

For this to work, I have to insert port forwards into my router so that traffic destined for port 80 or 443 is forwarded to the Nginx proxy. In addition to this, I have to run a script on the proxy that checks my current public IP address and if it has changed, update a DNS record within Cloudflare. This has served me well for years but I have always wondered if there was another option.
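
The update script itself is nothing special. A minimal sketch of the idea, assuming a Cloudflare API token plus the zone and DNS record IDs are already known (every value below is a placeholder), looks something like this:

#!/usr/bin/env bash
# Hypothetical dynamic DNS updater; the token, IDs and hostname are placeholders.
set -euo pipefail

CF_TOKEN="your-api-token"
ZONE_ID="your-zone-id"
RECORD_ID="your-record-id"
RECORD_NAME="home.example.com"

# Determine the current public IP and the last IP that was written out.
CURRENT_IP=$(curl -s https://api.ipify.org)
LAST_IP=$(cat /var/tmp/last_ip 2>/dev/null || true)

# Only talk to the Cloudflare API when the address has actually changed.
if [ "$CURRENT_IP" != "$LAST_IP" ]; then
  curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
    -H "Authorization: Bearer ${CF_TOKEN}" \
    -H "Content-Type: application/json" \
    --data "{\"type\":\"A\",\"name\":\"${RECORD_NAME}\",\"content\":\"${CURRENT_IP}\"}"
  echo "$CURRENT_IP" > /var/tmp/last_ip
fi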

As it turns out, there is and it is called Cloudflare Tunnels. As someone who has used Cloudflare for years as a free CDN in front of my blog, I was super happy when Cloudflare made most of their Zero Trust functionality free for small users. Their Zero Trust platform consists of a number of elements but today I’m going to concentrate on just the Tunnel piece.

With the sudden increase in interest in Mastodon as a social media platform I decided to play around with the software and running it myself. Even though I don’t plan on running it on my own any time soon and am totally happy throwing money at some instance admins (which you should consider doing https://hub.fosstodon.org/support/), I thought it would be a fun exercise. Anyway, once I had the software running I needed a way to get it accessible to the public. This is where I turned to Cloudflare Tunnels.

Of course, for any of this to work you will need a Cloudflare account with a domain configured (which is free…and I am in no way connected with Cloudflare other than I am a customer). A Cloudflare tunnel works using an agent running on a host inside of your internal network that is then authenticated with your Cloudflare account. Creating the tunnel is simple:

  • Sign into your Cloudflare account
  • Click Zero Trust
  • Access and then Tunnels
  • Click the Create Tunnel button
  • Give the tunnel a name and click Save
  • Choose your operating system from the list and then paste in the commands you see. If you are using a Raspberry Pi you may need to get the latest release from https://github.com/cloudflare/cloudflared/releases. Once installed, use the second command if you have an existing installation of cloudflared. Advanced users can simply add the tunnel themselves (see the sketch after this list).
  • Once cloudflared is running on your system it will show up as connected
  • Click Next
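
For reference, the commands Cloudflare shows boil down to installing cloudflared and registering it with the token generated for your tunnel. A rough sketch, with the token reduced to a placeholder:

# Install cloudflared as a service using the tunnel token (placeholder value).
sudo cloudflared service install <your-tunnel-token>

# Or, with an existing installation, run the tunnel directly in the foreground.
cloudflared tunnel run --token <your-tunnel-token>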

On the next screen you will be given the option to connect a hostname to the tunnel. For this post I will describe how I set up access to my mastodon.dustinrue.com instance. You will be presented with the following screen:

For my mastodon.dustinrue.com instance, I entered mastodon as the subdomain and selected my dustinrue.com domain. In the service section I selected https and gave it the IP address of my internal k3s system. My final config looked similar to this:
I use HTTPS because my internal k3s system’s ingress controller uses cert-manager to create certificates. From here I needed to add some additional application settings to ensure everything worked correctly. Note: you don’t want to use an existing subdomain here (unless you really know what you are doing) because Cloudflare is going to want to insert a new DNS record to make the tunnel work.

Since I am using an HTTPS service type, I need to specify the name to send as part of the request so that the ingress controller knows what resource is being accessed. Also, although I do use cert-manager, I didn't set up a TLS record for cert-manager to use, so it just applied a self-signed certificate. This is fine in this case because the public will see the certificate that Cloudflare provides, not what my k3s cluster has. For these reasons, my TLS section is configured like this:

The Origin Server Name and No TLS Verify are the important options.

In addition to the TLS settings, the regular HTTP section also needs some attention. For this, my settings look like this:
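
If you prefer managing a tunnel from a local configuration file instead of the dashboard, the same settings map onto cloudflared's ingress rules. This is only a sketch; the tunnel ID, credentials path and origin IP are placeholders:

# ~/.cloudflared/config.yml - a hypothetical locally managed equivalent of the
# dashboard settings; tunnel ID, credentials file and origin IP are placeholders.
tunnel: <tunnel-id>
credentials-file: /home/user/.cloudflared/<tunnel-id>.json

ingress:
  - hostname: mastodon.dustinrue.com
    service: https://192.168.1.100:443
    originRequest:
      originServerName: mastodon.dustinrue.com  # name sent to the ingress controller
      noTLSVerify: true                         # origin uses a self-signed certificate
      httpHostHeader: mastodon.dustinrue.com    # Host header the vhost expects
  # cloudflared requires a catch-all rule at the end.
  - service: http_status:404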

From here, click Save. Cloudflare will insert the correct DNS record for your domain and the connection from the public to your resource will be made. If everything is configured correctly then you will be able to access your resource at the subdomain you provided. Remember that all of the same rules apply here: everything in the path needs to be aware of whatever vhost you are using. The trick is that there are no port forwards in your router and there is a protected path from Cloudflare into your home network, so your home network remains unexposed to the public.

I hope this quick guide is enough to get you going on using Cloudflare tunnels. In a future post I will describe how to use the same system but for SSH connections. This is a powerful tool when combined with Cloudflare Zero Trust because it allows you to define who is allowed to access systems. Zero Trust can also be used to protect specific routes on a site (or the entire site) if you want. For my blog, I use Zero Trust to protect the admin and login pages.

In a previous post I quickly mentioned that this site now has pfefferle’s ActivityPub plugin installed. This plugin implements enough of the ActivityPub and associated protocols to allow a WordPress site to look and behave a bit like a user on an ActivityPub compatible platform including Mastodon and more. By installing the plugin, you can search for an author of a site and then follow them so you can see a stream of their content whenever they post it. From there you can comment on the post and interact with it from within your favorite Fediverse platform.

In this post I quickly describe how to get started with the plugin. To start, install the plugin (https://wordpress.org/plugins/activitypub/) using your usual method for installing plugins. For me, that means adding it to a composer.json file, for you it might mean simply searching for the plugin in your WordPress admin -> plugins screen. Once installed, activate the plugin. That’s it! Your site is now ready to be followed by anyone within the Fediverse network.
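
For the composer route, a minimal sketch of the relevant parts of composer.json is below. I am assuming the wpackagist repository and its wpackagist-plugin/activitypub package name here, so double check the exact package name before relying on it:

{
  "repositories": [
    { "type": "composer", "url": "https://wpackagist.org" }
  ],
  "require": {
    "wpackagist-plugin/activitypub": "*"
  },
  "extra": {
    "installer-paths": {
      "wp-content/plugins/{$name}/": ["type:wordpress-plugin"]
    }
  }
}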

The plugin implements the bare minimum required, it seems, so it can be a bit confusing when there are no immediately obvious visual changes to anything. Most of what the plugin does is in the background, inserting routes that are necessary to make the webfinger and ActivityPub protocols work on your site. Don’t worry though, the plugin is working!
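
One quick way to confirm those routes are in place is to query the webfinger endpoint the plugin exposes. The domain and author name below are placeholders for your own site and user:

# Query the webfinger endpoint (domain and account are placeholders).
curl -s "https://example.com/.well-known/webfinger?resource=acct:author@example.com"
# A JSON document describing the author indicates the plugin is answering.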

To get the search string people need to use to follow your blog posts or pages visit your user profile page. You should see something similar to this near the bottom of the page:

These profile identifiers can be pasted into the search bar of an instance and from there you can follow the author. Simply take the @[email protected] portion that you see and paste it into the search bar of your Mastodon instance. For simplicity, I put this into my Fosstodon profile https://fosstodon.org/@[email protected]. This allows people to easily see my profile.

This final step was not immediately clear to me, but once I found this in my profile it was super easy to then follow myself. Your followers will appear in Users -> Followers. Anyone who replies to a post on the Fediverse will be added as a comment on your site. On my setup, using Akismet, incoming comments were put into spam for some reason.

I have installed this plugin https://wordpress.org/plugins/activitypub/ and activated it, which means you can now follow my blog as if it were any other user on the Fediverse network. Pasting @[email protected] into the search bar of your favorite Mastodon (and more) instance will allow you to see posts when I publish them. I will soon stop posting directly on my profile whenever I post content in favor of letting people choose when to get spammed with my content.

Sometimes you need to run or build containers for a different architecture than the one you are using natively. While you can tap into buildx for building containers, running containers built for an architecture other than yours requires Docker Desktop with its magic, or the image itself needs to have been built in a specific way, which rarely happens.

Using Colima’s built in CPU architecture emulation it is possible to create a Colima instance, or profile, for either arm64 (aarch64) or amd64 (x86_64) on both types of Mac, the M series or an Intel series. This means M series Macs can run x86_64 containers and Intel Macs can run arm64 based images and the containers won’t be aware that they aren’t running on native hardware. Containers running under emulation will run more slowly than they would if run on native hardware but having the ability to run them at all is really useful at times.

Here is what you do to setup a Colima profile running a different CPU architecture. I’m starting with an M1 based system with no Colima profiles created. You can see the current profiles by running colima list.

colima list
WARN[0000] No instance found. Run colima start to create an instance.
PROFILE STATUS ARCH CPUS MEMORY DISK RUNTIME ADDRESS

From here I can create a new profile and tell it to emulate x86_64 using colima start --profile amd64 -a x86_64 -c 4 -m 6. This command will create a Colima profile called “amd64” using architecture x86_64, 4 CPU cores and 6GB of memory. This Colima profile will take some time to start and will not have Kubernetes enabled. Give this some time to start up and then check your available Docker contexts using docker context ls. You will get output similar to this:

docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
colima-amd64 * moby colima [profile=amd64] unix:///Users/dustin/.colima/amd64/docker.sock
default moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock

From here I can run a container. I'll start with an Alpine container by running docker run --rm -ti alpine uname -a to check what architecture it is running under. You should get this in return if you are on an M series Mac – Linux 526bf44161d6 5.15.68-0-virt #1-Alpine SMP Fri, 16 Sep 2022 06:29:31 +0000 x86_64 Linux. Of course, you can run any container you need that might only be available for x86_64.

Next I am going to run Nginx as an x86_64 container by running docker run --rm -tid -p 80:80 --name nginx_amd64 nginx. Once Nginx is running I will demonstrate how to connect to it from another container running on a different architecture. This is super useful if you are testing different pieces of software together but one isn’t available natively for your platform.

Now I’ll create a native instance of Colima using colima start --profile arm64 -c 4 -m 6. When this completes, docker context ls will now show a new context that it has switched to. Running docker ps will also show there is nothing running. You can switch between contexts using docker context use followed by the name of the context you want to use.

With the new context available I will start a copy of Alpine Linux again and add the curl package using apk add curl. With curl available, running curl host.docker.internal will show a response from Nginx!
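
If you want that as a single step, this one-liner runs the same test, assuming the Nginx container started under the amd64 profile is still running:

# Start an Alpine container, add curl and request the page served by the
# x86_64 Nginx container via host.docker.internal.
docker run --rm -ti alpine sh -c "apk add --no-cache curl && curl -s host.docker.internal"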

Now that I am done testing I can remove the emulated profile using colima delete amd64 and the profile will be removed and cleaned up. Easy.

For the past few years I have been using Kubernetes to host a number of services including custom code, WordPress and all manner of other publicly available projects. In this time I have come to rely on a few, what I call, base services that make the experience of running software in Kubernetes just a bit nicer. In this post I’m going to go through what base services I install and a bit on why.

All of the services listed below are installed using helm. I consider Helm the only method for managing applications running in a Kubernetes cluster. Nothing else is able to manage software as well as helm. If a service I want to run in Kubernetes doesn’t have a helm chart I will create one for it.

Almost every Kubernetes setup I use needs to actually service requests from users and this is almost always done using the Ingress system. My preferred ingress controller is the community maintained ingress-nginx. Do not confuse this controller with nginx-ingress, which is put out by nginx.com. I prefer this fully open source controller for its straightforward feature set and configuration system. It has a large number of features and works equally well in both home lab and cloud environments. As an Nginx user anyway, I find the configuration very familiar. To install ingress-nginx, I add their repo using helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx. You will find additional information at https://kubernetes.github.io/ingress-nginx/deploy/.
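
With the repo added, the install itself is typically a single command. A minimal sketch, where the namespace and release name are simply my own conventions:

# Install ingress-nginx into its own namespace.
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace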

SSL is all but a necessity these days and I have found no better way than to use cert-manager in the cluster. Nearly all of my use cases allow for the use of a cluster wide, DNS based resolver that lets me get SSL certs for resources that are not yet publicly accessible or are internal only. By leveraging DNS services from AWS or Cloudflare (or any supported DNS provider) I am able to automatically create and update certificates with very little intervention. To install cert-manager I use the official helm chart provided by the project using helm repo add jetstack https://charts.jetstack.io. Additional installation directions are available at https://cert-manager.io/docs/installation/helm/.
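
As a sketch, the install itself looks like this; the installCRDs flag has been the usual way to pull in the CRDs, though chart options can change between releases:

# Install cert-manager along with its CRDs.
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

Once cert-manager is running, a cluster wide issuer ties it to a DNS provider for DNS01 validation. The issuer name, contact email and Cloudflare token secret below are assumptions for illustration:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com                  # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-dns-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token      # assumed pre-created secret
              key: api-token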

Speaking of DNS, in clusters where I need to have DNS records pointed towards the cluster I use external-dns. This service looks for ingress entries and manages records in your DNS provider pointing the desired hostname towards your cluster or its external load balancer. I install external-dns using the helm chart by Bitnami. Learn more at https://github.com/bitnami/charts.
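
A sketch of how I might install it for Cloudflare is below. The values keys shown reflect the Bitnami chart but do change between chart versions, so treat them as illustrative and confirm against the chart's README:

# Install external-dns from the Bitnami repo, pointed at Cloudflare.
# The provider and cloudflare.apiToken values are illustrative; check the chart docs.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install external-dns bitnami/external-dns \
  --namespace external-dns --create-namespace \
  --set provider=cloudflare \
  --set cloudflare.apiToken=<your-cloudflare-api-token>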

Getting logs out of a production cluster is important and assuming you have some place to accept the logs, you won’t generally do better than using fluent-bit. Installation and configuration of fluent-bit is highly dependent on what your logging system is so I recommend reading their documentation on how to get going. Fluent-bit is quite popular and it is usually easy to find examples for whatever your logging system is.

Used by a number of other services, metrics-server gathers basic utilization data from pods and nodes in your cluster. This service is so essential many small Kubernetes systems, like k3s, automatically install this service. I install this service again using Bitnami’s charts available at https://github.com/bitnami/charts.
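
Once it is running, the quickest sanity check is the kubectl top commands that metrics-server powers:

# If metrics-server is healthy these return CPU and memory usage figures.
kubectl top nodes
kubectl top pods --all-namespaces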

For managed Kubernetes instances in public clouds I find cluster-autoscaler to be an essential service. When configured correctly, and when combined with metrics-server and properly configured resource settings, cluster-autoscaler will automatically add and remove worker nodes. Information about how to add the cluster-autoscaler helm chart can be found at https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler.
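
As a sketch, installing it on an AWS based cluster with auto-discovery looks roughly like this. The cluster name and region are placeholders, and other cloud providers need a different set of values:

# Cluster name and region are placeholders; adjust values for your provider.
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=my-cluster \
  --set awsRegion=us-east-1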

These services make Kubernetes much easier and automatic and for that reason I find them to be essential in almost every cluster. What services do you find essential?

If you find yourself in the business of creating and testing Helm charts, or you simply want to try one out, then Colima with its built in Kubernetes functionality may be for you. In this post I am going to walk through how to quickly get going with Colima’s Kubernetes integration and an ingress controller for basic Helm chart testing.

I assume you already have Colima and Helm installed and are familiar with the tools and Kubernetes itself. If this is you then continue reading!

For this post I am using Colima 0.4.4, k3s v1.23.6+k3s1, helm 3.9.3 and ingress-nginx 4.2.0. I often find myself creating Helm charts and I want to test my modifications locally before committing my changes. Once in a while I also want to quickly test an available helm chart without messing up an existing Kubernetes installation. In these cases I will create a Colima instance with Kubernetes enabled and install my preferred ingress controller, ingress-nginx.

To get started, ensure that no other Colima instances are running using colima list followed by colima stop <name of profile> for any running instances. You should also ensure that there are no other services running on your system that are listening on ports, especially 80, 443 and 3306. This helps ensure your test instance doesn't interfere with any existing Colima instances or other services. Then, issue colima start helm-test --kubernetes -m4 to start a Colima instance with 4GB memory and Kubernetes enabled. Once Colima has finished creating the instance you can add the ingress-nginx helm repository, if you don't already have it, with helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx followed by helm repo update. You can now install ingress-nginx using helm install -n ingress-nginx --create-namespace --set controller.ingressClassResource.default=true ingress-nginx ingress-nginx/ingress-nginx. This command will install ingress-nginx and set it as the default ingress class for the cluster. At this point you have a basic installation of Kubernetes with an ingress controller, which will allow you to test most Helm charts.

As a test, you could now create a brand new helm chart with helm create nginx. Edit the resulting values.yaml file to enable ingress, then install the chart into your new test cluster. You should see that it is able to download and install the default nginx image and create the proper ingress rule automatically. For my test I used helm install -n default nginx .. Before long you should see this as an ingress record:

kubectl get ingress nginx
NAME    CLASS   HOSTS                 ADDRESS        PORTS   AGE
nginx   nginx   chart-example.local   192.168.5.15   80      54s

Despite what the Address column says, the chart is now available at 127.0.0.1. Create a hosts entry and you will be able to get the default nginx page.
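
The hosts entry and a quick check look like this (chart-example.local is the scaffold's default hostname):

# Point the chart's default hostname at the local machine and fetch the page.
echo "127.0.0.1 chart-example.local" | sudo tee -a /etc/hosts
curl http://chart-example.local/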

Of course, you can use or test other charts too. Here I will install Bitnami's MySQL chart with the following settings in a YAML file:

## MySQL Authentication parameters
##
auth:
  ## MySQL root password
  ## ref: https://github.com/bitnami/bitnami-docker-mysql#setting-the-root-password-on-first-run
  ##
  rootPassword: "password"
  ## MySQL custom user and database
  ## ref: https://github.com/bitnami/bitnami-docker-mysql/blob/master/README.md#creating-a-database-on-first-run
  ## ref: https://github.com/bitnami/bitnami-docker-mysql/blob/master/README.md#creating-a-database-user-on-first-run
  ##
  database: "blog"
  username: "wordpress"
  password: "password"
##
primary:
  persistence:
    ## If true, use a Persistent Volume Claim, If false, use emptyDir
    ##
    enabled: false
  service:
    ## @param primary.service.type MySQL Primary K8s service type
    ##
    type: LoadBalancer
##
secondary:
  ## Number of MySQL Secondary replicas to deploy
  ##
  replicaCount: 0

I install the Bitnami repo using helm repo add bitnami https://charts.bitnami.com/bitnami followed by helm repo update to ensure I have the latest info. To install a copy of MySQL with my settings file I use helm install -f mysql.yaml mysql bitnami/mysql. After a short while MySQL will be installed and also available on localhost through k3s' built in LoadBalancer system. Notice that in the mysql.yaml file I asked the chart to install the primary instance of MySQL with a LoadBalancer based service instead of the default ClusterIP.
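
A quick way to verify the LoadBalancer service is reachable is to connect with any MySQL client using the credentials from the values file above (this assumes a mysql client is installed locally):

# Credentials come from the mysql.yaml values file used during installation.
mysql -h 127.0.0.1 -u wordpress -ppassword blog -e "SELECT VERSION();"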

When you are finished testing a simple colima delete helm-test will remove your testing environment and free up resources.

Hopefully you see now how quickly and easily you can get going with Colima and its Kubernetes integration to get a local Kubernetes cluster up and running for testing. The Kubernetes integration Colima uses is very capable and well suited to learning and testing. Enjoy!

Many audio formats have come and gone over the years, some of them better than others. Music on physical formats, particularly vinyl, has been increasing in popularity year over year. In addition to vinyl records, I personally have been adding to my CD collection. In order to actually play CDs I had to buy a CD player (or two) because I have either gotten rid of the players I had or they failed. To date, I have a total of three CD players: two Sony 5 disc changers and a Sharp DX-200 single disc player. These units all needed some amount of effort to get them back into shape.

Not content with just CD and vinyl, I decided it was time to get into one of the few formats I had never owned or even used before: Minidisc. As described by Wikipedia, Minidisc is an erasable magneto-optical disc format that allows users to record sound either in real time or, using specific equipment, transfer data over a USB connection. The discs can be reused like a cassette but unlike a cassette they are digital and provide near CD quality. Minidisc uses a proprietary, lossy compression system that allows it to fit 60-80 minutes of audio onto a single disc. Unlike CD-R, Minidiscs can be modified, on the player, after the fact with text info, track arrangement and even editing of where tracks are split. All in, the Minidisc format feels incredibly ahead of its time even today. There is no other format that comes remotely close to what Minidisc can do outside of sitting at a computer and fiddling in software. While it is true that Minidisc's features are superseded by music software like Apple's Music or Spotify, there is a real and undeniable charm to the format that makes it fun to use even today. Later iterations of the Minidisc format provided additional capabilities which you can read about at https://www.minidisc.wiki//technology/start.

Minidisc recorders and players come in a number of form factors, from portable players that are barely larger than the discs themselves (aside from thickness) to full HiFi component sized units. While the HiFi component size is my preference, there are few that support NetMD, or a USB connection for interfacing with a computer. Many portable players, even earlier models, support NetMD for quick transfer of music from a computer to the Minidisc.

The player I picked up is a Sony MDS-JE500. This model, from 1997, has an issue with its loading mechanism that causes it to continuously attempt to eject the disc even after the disc has been ejected. A known and common problem, this unit will need to have some microswitches replaced in the near future. I picked this model because it comes from an era when Sony was producing devices with my favorite design language. This device is able to record both analog and digital sources directly to Minidisc and allows for manually setting analog recording levels. Overall a really nice and functional device.

Minidisc, despite being so feature rich, ultimately failed for numerous reasons including high initial cost and the rise of digital formats and players like the iPod. In addition to this, commercially licensed music outside of Sony properties was few and far between. Even today you will not find many Minidiscs and those that you do find fetch a high price.

If you are interested in learning more or getting into the format I highly recommend taking a peek at https://www.minidisc.wiki/. This community run wiki provides a growing collection of articles related to the Minidisc format including where to buy devices and minidiscs, device features and even repair information.

Minidisc is not a format I expect to find a lot of prerecorded music for, but I am enjoying transferring my vinyl records to Minidisc and creating custom mixes. While it's true there are more modern ways to do the same thing, including skipping the process entirely and creating playlists in Spotify, it is still an interesting departure from the norm and, most of all, fun.

Since 2021 I’ve been using a combination of tools to handle my music collection. Today I’m going to talk about the tools I’m using to manage my collection including how I catalog, import, serve and listen to it.

Although I do subscribe to a music streaming service, I have taken an interest in expanding my physical collection as well. My collection consists largely of CDs with some vinyl records mixed in. While I appreciate the convenience of digital streaming, I also enjoy the process and experience of playing physical media, which I've written about before. That said, I also like to take my collection with me in digital formats and enjoy knowing that it comes from my own personal collection. Before we get into how I copy my CDs to digital, let's first discuss how I catalog and keep track of my collection.

Cataloging

A couple of years ago I learned about a site called discogs.com. In their words Discogs is "a platform for music discovery and collection" and this is exactly how I use it. You can search for each piece of physical music media you own or are interested in owning and add it to your collection or wishlist, respectively. The database contains user submitted and curated information about most releases available, with surprising detail. You can choose to be super detailed about how you add items to your collection by selecting the exact release or more simply add the first item you find. How you use Discogs is ultimately up to you, but it is an incredibly handy way to track what you already own, find new stuff you'd like to own and so on. Using Discogs allows me to track the state of my media (some of it is damaged and needs to be replaced, for example) as well as ensure I don't buy the same item twice.

Importing

I import all of my CDs using a tool called XLD, available at https://tmkk.undo.jp/xld/index_e.html. Using an external DVD drive attached to my Mac, XLD is able to look up what CD is in the drive, grab metadata about it and take care of copying the music off of it and onto my NAS. The metadata ensures that the folders are named properly as well as the track titles. I stick to the FLAC format for the files as it ensures the best quality and compatibility with playback software. Whenever I sync music to my phone for offline play in the car, I opt to have the songs encoded on the fly to a smaller format.

Some vinyl records also include digital files that you can download from a site. For these I will typically add them to an appropriate folder of either MP3 encoded music or FLAC encoded music.

Storage

All of my music is stored on a TrueNAS based storage system and then shared out to a virtual machine that is running Plex. TrueNAS exports the data using Samba so it is easy for my Mac and the virtual machine to access without issue. TrueNAS stores the files on a raidz set for redundancy and I periodically back the data up to another disk.

Playback

Once the music is imported and stored on TrueNAS I add it to Plex. Plex is a convenient way to manage music as it detects the music you have added and downloads additional metadata about it, like album reviews. Recent releases of Plex allow you to "sonically fingerprint" music so that it can better find similar music in your collection for building better mixes.

Although Plex is the server part of the music system, the actual playback software I use is called Plexamp. Plexamp is an app dedicated to music playback, offering a slick interface, the ability to download music locally from Plex and gapless playback. If you've ever listened to an album and wondered why there were gaps between tracks that sound like they should flow together, gapless playback is what you're looking for. In addition to gapless playback, when playing a mix you can optionally have Plexamp fade between songs and I find that this works extremely well. Overall, Plex and Plexamp are my favorite tools for listening to music.

The actual hardware I listen on varies depending on where I am. While working and at my desk then I will be using the setup detailed on my audio system page. While out and about it will be through my iPhone connected to headphones or my car.

Conclusion

I’ve long listened to music but only recently have I gotten back into the general process of collecting it and paying attention to the process of listening to it. I enjoy my physical formats but I’m also not blind to the convenience of digital formats. How do you manage your music?