Semi-related to my previous post, this post quickly touches on the fact that having swap on your system is not always a bad thing. I have seen “disable swap” become a common “performance hack” suggested by a lot of people, and it appears to be growing in popularity. I believe a lot of people are simply parroting something they heard once but don’t actually know when it makes sense to disable swap on a system. In my experience, outright disabling swap often has a detrimental effect on system performance.

The basic idea behind not using swap is sound, on the surface. The argument is that swap is much, much slower than system memory and that if you are hitting swap then you need more memory. To add to this, a lot of people don’t understand how memory works on Linux (and indeed all major operating systems). Linux wants to use as much memory as possible. If you give it 1TB of memory (or more) then it will do everything it can to eventually use all of it. However, how it uses this memory can be confusing. Looking at this output from free -m, it may not be obvious what is happening:

[root@web2 system]# free -m
              total        used        free      shared  buff/cache   available
Mem:            809         407         137          37         263         251
Swap:          1023         282         741

In the above example output from free -m you will see the columns total, used, free, shared, buff/cache and available. The values for each, respectively, are 809, 407, 137, 37, 263 and 251.

In a lot of cases, the value most people will look at is “free.” Unfortunately, on a system that has been running for some time, this value will almost always give the impression that the system is low on memory. Like so many things, there is a lot more to it than what the free value shows. In reality, the value you want to pay attention to is available. This value represents the amount of free memory plus memory that can be reclaimed at any time for other purposes. The “cache” portion of the buff/cache value is what can be reclaimed, and it represents the amount of data from disks that is cached in memory. It is this cache that operating systems try to keep full in order to avoid expensive disk reads, and it is why a system with a lot of memory can potentially have very little free memory.
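
If you’d rather script against these numbers than eyeball them, the same values free reports are exposed in /proc/meminfo. A minimal sketch (these field names are standard on any reasonably modern kernel):

    # The kernel's own estimate of free plus reclaimable memory
    grep -E 'MemTotal|MemFree|MemAvailable' /proc/meminfo

    # Or pull just the available column out of free, in MiB
    free -m | awk '/^Mem:/ {print $7 " MiB available"}'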

A system that is low on available memory will also not be able to cache a lot of disk reads (remember that available is free plus reclaimable cache) which will lead to lower overall performance. Of course, loading an entire disk into memory won’t necessarily have a positive effect on overall performance either. If a file is read once and never used again, does it really need to be cached? Having a lot of memory can lead to things being needlessly cached. A system with 16GB of memory can perform just as well as a system with 32GB of memory if most of the 32GB is filled with files that are very rarely read again.

Getting to why having swap is not evil: some apps, and portions of apps, aren’t always being used even if they are running. For this reason, having swap available on a system is beneficial because the operating system can page application memory out to disk and free up memory to use as disk cache for more active applications. In some instances, such as the web server hosting this site, having swap available is a necessity because it allows me to run a system with less memory while still maintaining proper performance in normal conditions. Services that are necessary but rarely used are swapped out, leaving room in memory for application code instead. WordPress is considered “hot data” whereas systemd is not. Once the system has booted, systemd, while necessary, is not actively doing anything and can be paged to disk without affecting performance in a noticeable way. However, swap is an issue if you are dipping into it continuously. This will quickly become evident if you have a lack of available memory as well as a high usage of swap. In this case, you truly do need more memory in the system.
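
A quick way to tell the difference between a system that is merely keeping idle pages parked in swap and one that is actively thrashing is to watch swap activity over time rather than swap usage. A sketch using vmstat:

    # Print memory/swap stats every second, ten samples
    vmstat 1 10
    # Watch the si (swapped in) and so (swapped out) columns:
    # occasional blips are normal, but sustained nonzero values
    # under load mean the system genuinely needs more memory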

I hope this post helps clear up some of the confusion around memory usage on systems. Have anything to share? Did I get something wrong? Leave a comment!

In a previous post I mentioned that this site is hosted across two different hosts. One is dedicated to running MySQL and Redis while the other runs Nginx and PHP. I use this arrangement for a few reasons. First, this is the cheapest way to get two real CPU cores on Digital Ocean. During a web request, multiple processes including Nginx, PHP, MySQL and Redis must run and share CPU time with each other. By using multiple machines, the work is spread across multiple physical CPUs which improves overall performance and throughput. Second, it allows me to configure MySQL to use most of the system memory without fear that it’ll be OOM killed. An OOM kill is what happens on a Linux system when it determines it is out of memory and the biggest user of memory needs to be removed (killed) in order to protect the system from a meltdown. In general, regular triggering of the OOM killer should be considered an error in configuration and capacity planning, but know that it is there to protect the system.

In this post, I want to discuss a scenario where you want to host a common LAMP/LEMP stack on a single machine. In this kind of setup, multiple processes will be competing with each other for resources. Without getting too into the weeds about tuning software on this kind of setup, I’m going to assume that you will likely configure MySQL in such a way that it, as a single process, will consume the most memory of any process on the system. Indeed, most distributions when installing MySQL (or MariaDB) will have a default configuration that allows MySQL to use in excess of 1GB.

Unlike MySQL, the amount of memory that many other processes may use is relatively unknown. Looking at just PHP (using php-fpm), the amount of memory used is fairly dynamic. It is unlikely that you will be able to tune your system to ensure PHP doesn’t use too much memory without sacrificing total throughput. Therefore, it is necessary to configure PHP in such a way that you over provision available memory in an effort to get the most performance you can most of the time. However, in this scenario it is likely that you will eventually face a situation where PHP is asking for a lot more memory than usual and the system will invoke the OOM killer to deal with the sudden shortage of memory. MySQL, being the single largest user of memory on the system, will almost always be selected by the kernel to be removed. Allowing MySQL to be OOM killed is far less ideal than killing a rogue PHP process or two because it will disrupt all requests rather than just the problem requests. So, how do you avoid MySQL being selected?
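
As a rough aid for that over provisioning math, you can measure what your PHP workers actually use on average. A sketch, assuming the worker processes are named php-fpm on your distribution:

    # Average resident memory per php-fpm worker, in MiB
    ps -C php-fpm -o rss= | awk '{sum+=$1; n++} END {if (n>0) printf "%.0f MiB average across %d workers\n", sum/n/1024, n}'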

Most modern systems ship with systemd. Portions of systemd are not well received but, at least in my opinion, the init system is excellent. Using systemd, we are able to customize the startup routines for MySQL (almost any service, actually) so that we can instruct the kernel’s OOM killer to select a different process when the system is low on memory. Here is how it is done:

  • Create a directory – /etc/systemd/system/mysql.service.d. The directory name must match an existing service. For MariaDB it would be mariadb.service.d. You can determine the name by running systemctl list-unit-files
  • In this directory, create a file called oomadjust.conf with the following in it:
    [Service]
    OOMScoreAdjust=-500
  • Run systemctl daemon-reload
  • Restart MySQL
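
Put together as a shell session, the steps above look something like this (assuming the unit really is named mysql.service; substitute mariadb.service as appropriate):

    mkdir -p /etc/systemd/system/mysql.service.d
    printf '[Service]\nOOMScoreAdjust=-500\n' > /etc/systemd/system/mysql.service.d/oomadjust.conf
    systemctl daemon-reload
    systemctl restart mysql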

To confirm your customization was picked up, run systemctl status mysql. In the “Drop-In” section you should see your oomadjust.conf file listed. It’ll look similar to this:

Screenshot showing the oomadjust.conf file was picked up by systemd

This setting adjusts the score the OOM killer calculates when trying to determine which process to kill. By forcing this value lower for MySQL, it is much less likely to be selected. Instead, a problem PHP process will likely be selected first and removed. This will save MySQL and the overall availability of your app. Of course, your mileage may vary and you will need to tune your configuration to reduce, if not eliminate, the need for the OOM killer.
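
If you want to confirm the adjustment actually reached the running process, the kernel exposes the per-process values under /proc. A quick check, assuming a single daemon named mysqld:

    # Should print -500 after the restart
    cat /proc/$(pidof mysqld)/oom_score_adj
    # The kernel's current overall badness score for the process
    cat /proc/$(pidof mysqld)/oom_score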

If you would like to learn more about systemd drop-ins take a look at the documentation by Flatcar Linux at https://www.flatcar.org/docs/latest/setup/systemd/drop-in-units/. Many things can be overridden without having to edit files provided by packages (which you should avoid doing).

Have you used systemd’s drop-in system before? Curious how else you might use it? Leave a comment!

CentOS 7 is now in full maintenance mode until 2024. This means it won’t get any updates except security fixes and fixes for some mission critical bugs. In addition to being in full maintenance mode, the OS is simply beginning to show its age. It’s still a great OS, it’s just that a lot of packages are very far behind the state of the art. Packages like git, bash and even the kernel are missing features that I prefer to have available. With that in mind, and an abundance of time on a Saturday, I decided to upgrade the underlying operating system hosting the site.

The choice of operating system was not as simple as it was just a year ago. In the past I would have simply spun up the next release of CentOS, which was based off of Red Hat Enterprise Linux, and configured it for whatever duty it was to perform. However, Red Hat had a different idea and decided to make CentOS 8 a rolling release that RHEL is based off of, rather than CentOS being a rebadged clone of RHEL. The history of CentOS is surprisingly complex and you can read about it at https://en.wikipedia.org/wiki/CentOS.

Since the change, at least a few options are now available to give people, like me, access to a Linux distribution they know and can trust. Among those, Rocky Linux appears to be getting enough traction for me to adopt it as my next Linux distribution. My needs for Linux are pretty basic and more than anything I just want to know that I can install updates without issue and keep the system going for a number of years before I have to worry about it. Rocky Linux gives me that just like CentOS did before. As of this writing, the web server hosting this site is now running Rocky Linux 8 and I’ll upgrade the database server at a later time. So far it has proven to be identical to RHEL and very familiar to anyone who has used RHEL/CentOS in the past.

Nobody asked for this but today I’m going to discuss why I put a CD player back into my audio setup.

Before we get into that, I want to touch on one of my biggest pet peeves about macOS: the media controls. A few years ago a change was made to the keyboard media controls that allowed them to control more media, even media that is available on web pages like YouTube or the little video widgets on news sites. On the surface this seems like a welcome change but in practice it feels as if the feature was programmed to purposely do the wrong thing at all times. For example, let’s say you have Spotify open playing music in the background and you visit a site that has an autoplay video. Then you get a phone call so you press pause on the keyboard and…the music doesn’t stop? What gives? Well, macOS decided that the keyboard controls should control the video on the webpage and not Spotify. Or, maybe you’re like me and you use multiple music apps like Spotify and Plexamp. You’re listening to music with Spotify in the foreground with Plexamp paused in the background. You press pause on the keyboard and suddenly there are two songs playing because macOS decided that what you really meant was to unpause the inactive music app, not the one you were actively using!

While I certainly appreciate having access to an effectively unlimited supply of music at the click of a button, the overall experience has degraded significantly over the years. I believe a major contributor to this is how powerful today’s computers have become. We’ve added greater functionality and expectations to computers and in a sense they’ve become too capable and complex for their own good. It used to be that browsing the web while running Winamp was about as much as you could reasonably expect a computer to do. I’m not lamenting that computers are more capable, but I am saying that it has come at the expense of some tasks that used to feel simple and straightforward.

Which brings me back to why I’m using a CD player. As I mentioned in my broader post about the state of my audio stack in 2022, I have put a CD player back into my audio setup partially because of the straightforward simplicity that it offers. I turn on my amplifier and CD player, turn the input knob to CD and then put in a CD. That’s it, that’s all it does. Since the device has but one function there is never a question of what pressing a button will do. If a CD is playing, it will always pause it. If it is paused, it will play again. Antoine de Saint-Exupéry wrote in Terre des Hommes, “A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away,” and I believe using a CD player is similar in a way. It’s incredibly refreshing to put down a device that can do anything well enough in favor of a device that does just one thing really well.

Of course, using music apps will always offer greater overall flexibility, what with the huge selection to choose from, the ability to take and play the music anywhere and all the other reasons CDs lost out to file based formats. But like reading an actual book, taking a CD out of its case, placing it onto the tray of a CD player and pressing play provides the sort of tactile experience that isn’t possible with digital files. For these reasons, at least for now, I am back to listening to CDs (along with my vinyl records) at least some of the time.

If you are in the business of creating software, no matter your role, then you owe it to yourself to consider David Farley’s Modern Software Engineering: Doing What Works to Build Better Software Faster. I’m not in any way affiliated with the author and I’m not getting any sort of kickback on that link. I just think it’s a good book.

This book, along with what I consider a sort of companion to it, Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, will likely get you to rethink how you are approaching software development. The Accelerate book provides information backed by data showing that the processes defined in Modern Software Engineering do in fact work to improve the pace of software development, the quality of the software and developer/employee satisfaction.

The overarching message to take away from the books is that being fast is the key. The quicker you can write and release code into production so that you can then get feedback from it the better your code quality will ultimately be. Care should be taken to remove anything that prevents developers from getting their code into production quickly and with minimal roadblocks. This doesn’t mean you are careless, however! Putting a heavy emphasis on testing, the books paint a picture of the ideal system where tests are written first and then code to satisfy the tests. This process helps ensure that your code is divided up into parts that can be tested easily which will, in essence, indirectly force you to write better, more readable and more easily understood code. These tests then have the additional benefit of allowing developers to know that the changes they made either satisfy requirements or at least didn’t break existing functionality. The difference between “I think it works” and “I know it works” pays huge dividends in developer and team satisfaction. It also provides long term benefits as people are rotated out of teams because it codifies intended behavior. Well written and described tests, when they fail, will tell the developer what the intended outcome of a function is.

This may feel counter-intuitive but the Accelerate book does a great job of showing, with data, that these things are in fact true in most if not all cases. While reading the book there were a number of times where I stopped to consider how I was approaching things and realized some of the assumptions I had made were incorrect and needed to be adjusted. Much of what Farley describes sounds difficult to implement, and indeed everything he describes does require a certain amount of discipline amongst the team to ensure the work they do upholds the defined ideals.

If you’re looking for a good read that will make you think about how you are approaching software development, regardless of your role in it, I highly recommend Modern Software Engineering as well as Accelerate.

Like a “what’s on my computer” post, I thought it would be fun to go through a list of what is in the audio system in my home office. Back in 2020 I started down a journey of upgrading my audio equipment. This post details what I’m currently using and a little bit about why.

Amplification

Late in 2021 I upgraded the heart of the system from my Sony STR-DE425 to a Denon PMA-600ne. During the last half of 2021 the Sony started to show some signs of old age where it would randomly half-enable surround sound mode or start the test tone in just one channel. To fix it I’d have to unplug the receiver for a bit and then plug it in again. For this reason, I decided it was finally time to step up the amp I use in my audio chain. I decided on the Denon PMA-600ne integrated amp because it provides a number of analog and digital inputs, a phono pre-amp and basic tone controls. It has more than enough power for my room and reviews very well. What I immediately noticed about this amp was how much brighter it sounded than the Sony. I’ve always thought the Sony receiver had what I could only describe as a sterile sound but what I didn’t realize was how rolled off it was on the top end. With the Denon in place there is a lot more detail on the top end.

Sources

Despite having a nearly limitless selection of music available to me through Spotify I sometimes like to engage in the experience a bit differently depending on my mood. For this reason, feeding the Denon is a mix of devices that I can select from.

Around mid 2021 I picked up a Sony CDP-C245 so that I could have a CD player again. This 5 disc changer was a cheap find on Craigslist that got me listening to my CD collection again, even though most of it is ripped to the computer anyway. What I like about the CD player is that it is dedicated to the task of playing CDs, has its own unique sound signature and has a nice display. Using the CD player is a bit like picking out a skin for WinAmp years ago or selecting what software you want to manage your music collection with today. Like software, the interface on each device is different and unique. It has physical buttons for all of the functions that the device offers. I like the classic Sony CD player display with the calendar grid, the symbols for which disc is selected and the track and timer display. All told, using the CD player adds a bit of nuance to the experience that is just satisfying. The player, being old and used, has its issues. The tray sometimes freaks out and needs to reset itself by opening and closing. It also lacks digital output. I may replace it with a slightly newer model that has digital output but I’ll definitely stick with the classic Sony design.

Since the Denon is not a receiver it doesn’t have a built in radio tuner…but I do listen to the radio sometimes. To solve this I am using the tuner in the Sony and simply outputting it to the Denon. I wasn’t really expecting this to sound as good as it does but the Denon does a great job here.

My computer, which I run Spotify and Plexamp on, is connected to the Denon using a Schiit Modi 3+. Prior to the Modi 3+, I connected my computer to the Sony STR-DE425 using a plain 3.5mm to RCA cable. Oddly this resulted in a bit of hum on some of the inputs on the receiver. To resolve this I picked up the Modi 3+ so that I could add an excellent DAC with digital inputs to the Sony, remove the hum and improve the overall sound quality. The Denon does have digital inputs but unlike the Modi 3+ it doesn’t have a USB input. Rather than adding optical out from the computer I opted to just stick with the Modi 3+ and feed it into an analog input on the Denon. Also connected to the Schiit is my Xbox One X’s optical output.

I have a few other gaming systems in my office in addition to the Xbox One X. These systems are all HDMI based and for these devices I use an HDMI switch that includes digital outputs. The HDMI switch allows me to output all of the systems to a single HDMI input on my monitor and then route digital audio from the switch into a digital input on the Denon.

The last item connected to the Denon is my Audio Technica LP120x turntable. This is a well known and excellent turntable that also reviews very well. Since the Denon has a built in phono preamp I opted to use that instead of the one built into the LP120x. I can’t really say if one sounds better than the other but both are more than acceptable and any remaining differences would certainly fall within the realm of personal preference.

Speakers

The Denon is currently connected to a set of Polk T-15 bookshelf speakers. These speakers are a bit unique in that they aren’t really designed for direct, on axis listening like other speakers. Instead, they were engineered from the point of view that a lot of users aren’t able to create a dedicated listening space and would instead position the speakers in a less than ideal arrangement. For this reason, the speakers offer the best sound when you are about 20 degrees above or below the tweeter. “Luckily for me”, my desk design doesn’t really allow for ideal speaker placement and the T-15s, while inexpensive, sound great to me in this room. I may upgrade in 2022 but before I do I plan to put some acoustic treatments in the room.

To round out the sound, and give it a lot more heft, I also have an old subwoofer connected to the subwoofer output on the Denon. This Yamaha subwoofer is from a home theater kit that I bought to give me a little something while living in an apartment. It is…not great but provides some much needed bass extension that the T-15s lack. This is arguably the weakest link in the audio chain today and is the first thing I am looking to upgrade in 2022.

Software

As I said, my computer (a Mac mini) is one of the sources connected to the Denon using the Schiit Modi 3+’s USB interface. Using this connection, the Modi 3+ appears as an output audio device on my computer providing a direct path from my music software to the DAC which is then converted and fed into the Denon as an analog signal. The software I am using includes:

  • Spotify (with subscription) provides streaming audio
  • Plexamp (requires Plex Pass) allows me to play my ripped CDs from my Mac but also on my iPhone
  • Plex for some of the items in my collection that work better on Plex, like OCRemix tracks and some game soundtracks.

To help route audio on my computer I use Rogue Amoeba’s SoundSource. This app allows me to route audio from the above apps directly to the Modi 3+ while keeping other apps like system audio or Zoom routed elsewhere.

Conclusion

Thank you for joining me today as I go through my audio system as it currently stands. Putting this together has been a lot of fun and listening to it even more so! When it comes to audio, what do you use? What is your favorite piece or what are you looking to improve first? Leave a comment!

As a certified Xennial, I’ve seen and used almost every music format that has come into existence. When I was growing up, I can remember having multiple systems in the house that could play records (both 33 1/3 and 45s) and 8 tracks. For awhile we even had a large console system with a radio, record player with changer feature and an 8 track player. We had a collection of records sitting on the floor in a closet and a set of records would come out around the holidays. There were even a few specifically for us kids. We had tape players and eventually CD players too.

I was too young at the time to really appreciate what was in the record collection or even tell if the systems we listened on were of any quality. The records were beat up and if they were in a sleeve at all the sleeve was tattered at best. I can still remember all of the pops and crackles that I thought were just normal for the medium. Records were all we had at the time and I didn’t know anything different at that age. Eventually tapes replaced vinyl records and CDs replaced tapes. By the time I really got into owning my music it was only on CD, and then digital formats took over. I still have a large number of the CDs I bought over twenty years ago and they still work perfectly fine.

In late 2020 I got into a conversation with my wife about how vinyl records were so popular again, how we both grew up with old Christmas records and how we missed listening to them. Being who I am, I couldn’t help myself and got caught up in the idea of giving records a try again. That year, I bought one of those inexpensive little suitcase players and a couple of Christmas albums. Even with this little inexpensive player I was intrigued by it all. The large artwork, the record itself spinning on the platter and the relative simplicity of it all producing sound. Naturally this meant I had to add a “proper” turntable to my wish list along with some records and hope for the best.

Suffice it to say, a turntable was eventually delivered to the house. In January of 2021 I received an Audio Technica LP120x and I got to work hooking it up to my system. The AT LP120x requires some mild assembly to get started. I had to put on the counterweight and the headshell as well as the platter itself. Then I had to properly balance the tone arm and ensure everything was set just right so that it played records as well as it possibly could. Then I set the needle down on a record.

The first thing I noticed was just how good it sounded. It was nothing like I remembered at all. The sound was full and rich in a way that made me do a double take. Was I really listening to a record and not a CD or stream? How did I not know that records could sound like this?

I’m not here to tell anyone that records are better than any other media format. They’re not. Records simply cannot compete with digital in terms of wow and flutter, jitter, dynamic range, noise floors and every other technical spec you can think of. Records cannot compete with the convenience and portability of digital files and streaming. What I am saying is that records can sound really good and change your listening experience in a way that you might find enjoyable. Vinyl records are an experience that is a bit like reading a book rather than watching a movie. The larger format is tangible and has weight. There is no mystery to how it is played, you can see it plainly spinning on the platter with a needle tracking a groove. Records are delicate and require care to prevent scratches and occasional cleaning to keep them sounding good. Unlike a CD, a scratch in a record is definitely something you’ll hear.

Records, to a greater degree than other formats, allow you to customize the sound a bit. The sound signature of CDs and CD players (as well as digital files) is largely the same between devices. I think most people would find it difficult to tell the difference between one player and the next. Record players, however, because of the physical nature of the medium, will always have tradeoffs imposed by their physical differences and limitations. You can change the tonal characteristics of a record player by changing what type of stylus (needle) you use or the cartridge it is attached to. Maybe you want a warmer sound with more midrange and bass. Or maybe you prefer a brighter sound. Whatever your preference, you can work towards it in subtle ways.

All of this is to say that listening to music on vinyl is more involved, in a good way. You are more a part of the process and for some people this is a benefit of the medium. Vinyl is perfectly capable of excellent sound quality, and the tangible bits layered on top add to the overall experience. If you’re someone who is looking for just a bit more out of listening, it is an excellent direction to go.

Of course, records aren’t perfect. Because they are played by dragging a stylus through a groove, vibrating the stylus in order to reproduce sound, they will pick up anything extra in those grooves like dirt, dust and scratches and reproduce them as annoying clicks, pops and crackles. They’ll even happily play back static shocks that might occur if the air is particularly dry. This was one of the most difficult things for me to get used to when starting out because I was very much used to CDs and digital files being free of any pops and clicks. Records can also come with manufacturing defects like badly pressed grooves, grooves that are off center or even records that are warped. Most of the time these don’t ruin an entire album but I have at least once exchanged a record that I just couldn’t get cleaned properly. This is just part of the process.

https://www.newyorker.com/cartoon/a19180

Through all of 2021 I have been picking up new and old records, adding them to my collection and tracking them on Discogs. I also picked up equipment to make cleaning records easier and more effective so that I could reduce pops and clicks as much as possible. For 2022, I don’t expect that I’ll completely stop adding to my collection but there is one more problem that is unique to vinyl records right now: cost. Records are considered a “premium product” and carry a premium price. Even at the beginning of 2021 a lot of newly released records were priced between $20 and $30. At the end of 2021 I’ve been seeing a lot of records priced between $40 and $50, a significant increase that really makes me rethink a purchase unless it’s something I really want. I’m hoping market pressures will resolve the pricing issues in 2022.

Thank you for joining me today while I ramble on about my vinyl addiction. If you are also into vinyl or thinking about it leave a comment!

You sign into Facebook and you see some new friend notifications from people you know you are already friends with. You browse your feed and you see notes from the same people saying “don’t accept the friend request from me my account was hacked!” What’s actually happening here? Was their account hacked in the traditional sense? Why would someone do this? How can I avoid this happening to me?

To get into this we must first properly define what is happening in these cases. What a lot of people describe as “being hacked” isn’t quite right. Being hacked means someone actually broke into your account and you have now lost control of it. This usually happens because you had a weak password on your account and you were not using two factor authentication. I’ll discuss what this means further down. Most of the time what you’re seeing is known as “account cloning,” where an attacker has taken the publicly available information on your account, created a replica Facebook account and is trying to get people to add them as a friend. You can read more about account cloning at https://connections.oasisnet.org/facebook-account-cloning-scam-what-to-do-when-you-get-a-friend-request-from-a-friend/.

Securing your account password

Ok, with some small clarifications out of the way, let’s talk about what you can do to help prevent both types of attacks. Let’s start with preventing people from taking over your account by guessing your password.

An important first step is to have a strong password. Passwords that contain symbols, differences in capitalization and numbers are stronger than those that don’t. You should avoid using common names and words as these are easily guessed using automated tools that just continuously try combinations of words until they find one that works. Once this happens, an attacker can easily take over an account and prevent you from ever getting it back. So, the first tip is to have a strong password that you don’t use anywhere else. You can change your password on Facebook by visiting https://www.facebook.com/settings?tab=security and then clicking Edit for your password. What I find helps a lot is using the built in password saving feature of my browser so that I have a single password to unlock my browser which can then fill in passwords for the sites I visit.

The second tip that is equally, if not more, important is to use two factor authentication. This way, even if an attacker does guess your password they will, hopefully, not have access to your second factor of authentication which will typically be your phone. You can configure two factor authentication at https://www.facebook.com/security/2fac/settings. For simplicity I recommend having Facebook text a code to your phone number that you input into Facebook when required. For advanced users who are more comfortable with or already have an authentication app (like Google Authenticator) then using that is an even stronger choice.

Protecting yourself from account cloning

From the article (you read at least some of it right?) we know that attackers do this because they want to prey on your trust of family and friends to, usually, scam you out of money. It’s important to understand the difference between having your account taken over and your account simply being cloned.

You may not be aware of this but the default settings of Facebook allow anyone to see at least some information about you even if they are not friends with you or even signed into Facebook. Depending on how you configure your account security, people can see your profile photo, background photo, some photos and your friends list. All of this is more than enough to allow an attacker to download a copy of those items and then create an account that looks just like yours.

Below is what you can do to limit this type of attack. I used the website on my computer to set these settings. Many of these settings are probably available on the phone app as well but you’re on your own.

First, review your privacy settings, which are located at https://www.facebook.com/privacy/checkup?source=settings. Click on “Who can see what you share” and then click continue. Scroll through the list and set each one so that it is something other than “Public.” Note that the trade off to making these values not public is that it will be harder for people to find you (even people who you might want to find you). Continue through this page, setting options as you desire.

Limiting these values goes a long way towards preventing people from gathering enough information about you to create a convincing clone of your account.

If you want to control who can post on your timeline, who can tag you and more visit https://www.facebook.com/settings?tab=timeline.

If you want to limit what people can do with your Public Posts visit https://www.facebook.com/settings?tab=followers.

The more options you set to “friends” or “friends of friends” the better.

One last thing about privacy

There is a saying that if a product doesn’t charge you money then you are the product. Facebook is a tool for gathering your info and sharing it with advertisers so they can target you. Despite this, Facebook offers a decent number of privacy controls that you can leverage and I recommend you do so. This limits both their ability to track you and the material available for account cloning. If you are an iPhone user with a newer phone (one that runs the latest versions of iOS) and use the Facebook app (or even if you don’t), I recommend visiting the settings of your phone and finding Privacy. Tap this option. Find “Tracking.” On this screen you will find an option called “Allow Apps to Request to Track.” Ensure this option is disabled.

These are just some of the steps you can take to help secure your account and reduce the amount of tracking of your information. There is a lot more you can do and if you’re interested then I recommend doing some searches on the web about ensuring Facebook and advertising privacy on your devices.

I have been running this blog on this domain for over ten years now but the “hardware” has changed a bit. I have always used a VPS but where it lives has changed over time. I started with Rackspace and then later moved to Digital Ocean back when they were the new kid on the block and offered SSD based VPS instances with unlimited bandwidth. I started on a $5 droplet and then upgraded to a pair of $5 droplets so that I could get better separation of concerns and increase the total amount of compute at my disposal. This setup has served me very well for the past five years or so. If you are interested in checking out Digital Ocean I have a referral code you can use – https://m.do.co/c/5016d3cc9b25

As of this writing, the site is hosted on two of the lowest level droplets Digital Ocean offers, which cost $5 a month each. I use a pair of instances primarily because it is the cheapest way to get two vCPUs worth of compute. I made the change to two instances back when I was running xboxrecord.us (XRU) as well as a NodeJS app. Xboxrecord.us and the associated NodeJS app (which also powered guardian.theater at the time), combined with MySQL, used more CPU than a single instance could provide. By adding a new instance and moving MySQL to it I was able to spread the load across the two instances quite well. I have since shut down XRU and the NodeJS app but have kept the split server arrangement, mostly because I haven’t wanted to spend the time moving it back to a single instance. Also, how I run WordPress is slightly different now because in addition to MySQL I am also running Redis. Four services (Nginx, PHP, Redis and MySQL) all competing for CPU time during requests is just a bit too much for a single core.

Making the dual server arrangement work is simple on Digital Ocean. The instance that runs MySQL also runs Redis for object and page caching for WordPress. This means Nginx and PHP get their own CPU while MySQL and Redis get their own CPU for doing work. I am now effectively running a dual core system, with the added overhead, however small, of doing some work across the private network. Digital Ocean has offered private networking with no transfer fees between instances for a while now, so I utilize that to move data between the two instances. Digital Ocean also has firewall functionality that I tap into to ensure the database server can only be reached by my web server. There is no public access to the database server at all.
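
Digital Ocean’s cloud firewall handles that for me, but if you wanted the equivalent protection at the host level, a sketch with ufw would look something like this (the web server’s private address is a placeholder):

    # With ufw's default inbound policy set to deny, allow MySQL
    # and Redis only from the web server's private IP
    ufw allow from 10.132.0.2 to any port 3306 proto tcp
    ufw allow from 10.132.0.2 to any port 6379 proto tcp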

The web server is, of course, publicly available. In front of this server is a floating IP, also provided by Digital Ocean. I use a floating IP so that I can create a new web server and then simply switch where the floating IP points to make it live. I don’t need to change any DNS and my cut overs are fairly clean. Floating IPs are free and I highly recommend always putting one in front of an instance.
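
The switch itself can be scripted too. A sketch using Digital Ocean’s doctl CLI, assuming the floating-ip-action subcommand available in current releases (the IP and droplet ID are placeholders):

    # Point the floating IP at the replacement web server droplet
    doctl compute floating-ip-action assign 203.0.113.10 123456789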

Although the server is publicly available, I don’t allow for direct access to the server. To help provide some level of protection I use Cloudflare in front of the site. I have used Cloudflare for almost as long as I’ve been on Digital Ocean and while I started out on their free plan I have since transitioned to using their Automatic Platform Optimization system for WordPress. This feature does cost $5 a month to enable but what it gives you, when combined with their plugin, is basically the perfect CDN solution for WordPress. I highly recommend this as well.

In all, hosting this site is about $15 a month. This is a bit steeper than some people may be willing to pay and I could certainly do it for less. That said, I have found this setup to be reliable and worry free. Digital Ocean is an excellent choice for hosting software and keeps getting better.

Running WordPress

WordPress, if you’re careful, is quite light weight by today’s standards. Out of the box it runs extremely quickly so I have always done what I could to ensure it stays that way and keep the site as responsive as possible. While I do utilize caching to keep things speedy, you can never ignore uncached speeds. Uncached responsiveness will always be felt in the admin area and I don’t want a sluggish admin experience.

Keeping WordPress running smoothly is simple in theory and sometimes difficult in practice. In most cases, doing less is always the better option. For this reason I install and use as few plugins as necessary and use a pretty basic theme. My only requirement for the theme is that it looks reasonable while also being responsive (mobile friendly). Below is a listing of the plugins I use on this site.

Akismet

This plugin comes with WordPress. Many people know what this plugin is so I won’t get into it too much. It does what it can to detect and mark comment spam and does a pretty good job of it these days.

Autoptimize

Autoptimize combines JS and CSS files into single files as much as possible. This reduces the total number of requests required to load content, which fulfills my “less is more” requirement.

Autoshare for Twitter

Autoshare for Twitter is a plugin my current employer puts out. It does one thing and it does it extremely well. It shares new posts, when told to do so, directly to Twitter with the title of the post as well as a link to it. When I started I would do this manually. Autoshare for Twitter greatly simplifies this task. Twitter happens to be the only place I share new content to.

Batcache

Batcache is a simple page caching solution for WordPress that caches pages at the server. Pages served to anonymous users are stored in Redis (memcache(d) is also supported). Additional hits to the server will be served out of the cache until the page expires. This may seem redundant since I have Cloudflare providing full page caching, but caching at the server itself ensures that Cloudflare’s many points of presence get a consistent copy from the server.
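
A crude way to see server-side page caching working is to time repeated anonymous requests; after the first hit primes the cache, subsequent responses should come back noticeably faster (the URL is a placeholder):

    # Time three anonymous requests; the first warms the cache
    for i in 1 2 3; do curl -s -o /dev/null -w '%{time_total}s\n' https://example.com/; done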

Cloudflare

The Cloudflare plugin is good by itself but required if you are using their APO option for WordPress. With this plugin, API calls are made to Cloudflare to clear the CDN cache when certain events happen in WordPress, like saving a new post.

Cookie Notice and Compliance

Cookie Notice and Compliance for that sweet GDPR compliance. Presents that annoying “we got cookies” notification.

Redis Object Cache

Redis Object Cache is my preferred object caching solution. I find Redis, combined with this plugin, to be the best object caching solution available for WordPress.

Site Kit by Google

Site Kit by Google, another plugin by my employer, is the best way to integrate some useful Google services, like Google Analytics and Google Adsense, into your WordPress site.

That is the complete set of plugins that are deployed and activated on my site. In addition to this smallish set of plugins I also employ another method to keep my site running as quickly as I can, which I described in Speed up WordPress with this one weird trick. These plugins, combined with the mentioned trick, ensure the backend remains as responsive as possible. New Relic reports that the typical average response time of the site is under 200ms, even though the traffic to the site is pretty low. This seems pretty good to me while using the most basic droplets Digital Ocean has to offer.

Do you host your own site? Leave a comment describing what your methods are for hosting your own site!

I have been working remote for about eight years now and I thought I’d write a bit about my experience with it. What is it like to work from home most of the time, how have I made it work and what problems have I had?

As I said, about eight years ago I made the transition from working in an office full time to working remotely full time. The transition came about after my wife and I agreed that the timing was right for us to move near her family so that she could fulfill a life long dream of opening her own business.

Luckily for me, the company I was at at the time was receptive to the idea of me changing roles from strictly a systems administrator to more of a developer role. Where I was at, the systems I had to manage were in-house and remote work wouldn’t have been feasible (and cloud just wasn’t an option for the company yet). By changing roles from systems to development I was able to remove the requirement to be physically close to the systems that ran the software. Instead, I was able to hone my skills as a developer and dig into DevOps a lot.

Getting Settled In

I knew right away that if I was going to work remote that having a great Internet connection would be a must. I said early on that we can live anywhere as long as I could get milk from a grocery store late at night and reliable Internet with good speeds was available. Luckily we were able to agree on a town that was close enough to my wife’s parents that also met my requirements. At the time, 40 megabit Internet seemed like a solid idea (coming from the 5 I had before) and lucky for me it has since been upgraded multiple times. 200 megabit down is the normal, base package speed and has been perfect for me.

Upon moving into the new house the first thing I did was pick a room in the house that could be used as a space dedicated to me. While working remotely means that I can basically work anywhere, I knew right away that having a dedicated space in the house would be important. The room I’m in isn’t necessarily dedicated to doing work, it is just a space that is fully mine and isn’t shared in any way. There is no family computer, TV or game console in the room. I know that I can be in this room at any time of day or night, that it’ll be as I left it and that I won’t disturb or be disturbed by anybody else in the house. When I am in this space I know right away that I can concentrate on whatever it is I’m doing. In addition to work, this is also my space to enjoy gaming, music or tinkering on projects.

Working remotely, especially from home, means that you are usually responsible for your office furniture. You may not realize it, but one of the perks for a company with remote workers is that they don’t have to buy very expensive office furniture. But you most certainly should for yourself. I picked up a corner style desk that is a bit more well built than you’ll usually find and an office chair to match. A good chair is very important as it is something you will be sitting in for many hours a day. Get the best chair you can afford.

Building a Reliable Network

Having good Internet delivered to a home means nothing if you can’t distribute it reliably within the home. During the first few days in the house, after we had the furniture in place, I got to work wiring up as much of the house with ethernet as I could. I know from experience that hardwiring as much stuff as possible frees up valuable “air time” for WiFi devices. Not only does it ensure you get maximum throughput, it is also simply more reliable than WiFi. This is extra important on video calls where a laggy connection is much more noticeable than when you are browsing the web. My office has six total ethernet jacks that lead to a central area with a network switch. These six jacks are used for my main computer as well as a few other items like my Xbox and, depending on time of year, another PC or anything else I want to wire up.

WiFi is still important, of course, so the next thing I did was pick up additional WiFi access points to spread throughout the house. I connected these access points to my wired network (avoid wireless based backhauls if you can!). These access points would later be replaced by three Google WiFi access points. While I can service my house with a single WiFi access point, the signal was too weak in some rooms to provide full throughput. I think a lot of people fall into a trap where they assume that if they can get a WiFi connection at all then it is fine. This is not true. Any device that has a weak WiFi signal, ironically, uses more of the available WiFi resources. There are technical reasons for this that I won’t get into but trust me when I say that the most important thing you can do for WiFi performance is ensure that everywhere you use WiFi you are getting as close to a “perfect” signal as you can. This will ensure that your access points are able to use the most efficient methods available to transfer data between them and your devices.

Actually Working Remotely

In addition to having a dedicated space and a solid home network, working remotely takes some discipline. Without it, the work/life balance becomes very murky and difficult to maintain. I have found that keeping “normal” working hours is more effective than not. This means that I go to my office at 8(ish) in the morning and I consider my work day done at 5pm. I take an hour to myself around the lunch hour. This surprises some co-workers when I say that I follow this routine every day but I find that it helps set clear boundaries on when I am available and when I am not. Since I am always at home, that boundary is very easy to violate, even for myself. It’s too easy to sit down in my office “after hours” and work on something. This will eventually lead to burn out and you need to actively avoid it.

In the beginning this was a bit more difficult. Smart phones were still fairly new and didn’t have ways to stop or filter notifications. This level of immediate connectivity meant I could be contacted at all times, which made it easy to feel like I was never truly done and away from work. As remote work has caught on and evolved, so too have the tools used to facilitate remote work. Software like Slack now allows you to silence notifications during certain periods of the day. macOS and iOS now have a shared “focus” mode that you can use to prevent any apps you choose from issuing notifications during times you specify. This allows you to get notifications for things you care about while hiding work related ones that really don’t need your attention (but are hard to ignore).

Working remotely doesn’t have to mean you work from home. One of the freedoms of remote work is being able to literally work from anywhere when you feel like it. In a coffee shop? No problem. Want to try a coworking space? Do it! Working remotely means you aren’t limited on where you work. You will always have the tools you need to properly communicate with co-workers.

Sounds Great But…

Working remote is great but it is not for everyone. There are definitely some aspects of it that a person needs to be aware of before switching to remote work.

People. Most of the time I don’t mind working in my office, alone, because it allows me to concentrate without distractions. I can listen to music at whatever volume I choose and even sing along if I want. But there are days where I wish I could actually interact with a person, in person. There really is something to interacting with people in a local space and collaborating on something together, something that just can’t quite be replicated over a Zoom meeting, though it does depend a bit on what type of work you are doing. Brainstorming on the design of something is, for me, a bit blah over a Zoom session and I find that things just flow better when you’re in person.

One other thing that isn’t bad but can be challenging is timezones. While working locally at some place you can at least assume you’re all working within the same timezone. Maybe you have a team somewhere else but there is at least a core group of people who you work with daily that come and go on the same schedule as you.

And Yet

Working remote is not something I think I could trade. I really like having my own space and the overall flexibility that it affords. I feel that it is less of an issue if I need to do a midday errand like shuttling my kids around or even taking off a bit early to watch them in after school activities. I can always easily make that time up later if I need to. I also like knowing that where I work is not tied to where I live and that, if the opportunity presented itself, I could switch what I do without uprooting the family.

What to Avoid

If you are considering remote work there is at least one thing, in my mind, that you may want to avoid or very carefully evaluate. Hybrid workplaces, where some people are in the office and some are not, need very careful evaluation, especially if they implemented remote work as an option during the pandemic. This arrangement can be made to work, in fact I did this for the first few years of my remote work life, but it comes at an additional cost. You as a remote worker will often be left out of discussions and decisions. If you are in a position or your duties are such that you mostly just “take orders” then this isn’t much of an issue. If you are part of a team designing products or where heavy collaboration is necessary then a remote-first company is more desirable.

Finishing Up

That about wraps up the thoughts I’m able to share about my remote work experience but I’m curious about you, dear reader, what are your thoughts? What has made remote work a success for you or what prevents you from working remotely?