Update – I have also posted an alternative method that preserves the ability to have full page caching enabled. Please find it at WordPress ActivityPub and Cloudflare.

In a previous post I discussed how to deal with the fact that the ActivityPub plugin for WordPress must return author pages in a different format depending on the value of the Accept header. A browser hitting an author page expects HTML to be returned, while Mastodon expects JSON instead. If you use any kind of caching system, be it a CDN, a caching plugin or a combination of the two, then you may run into an issue where the wrong content is cached for a given Accept header. You might see this in your site health report as:

Your author URL does not return valid JSON for application/activity+json. Please check if your hosting supports alternate Accept headers.

In this post I will discuss a method for dealing with this while not totally losing the ability to cache the response. This is useful for busy sites or as a way to help mitigate some forms of DoS attack. The example I provide here is meant for Nginx with php-fpm, but you can apply the same sort of thinking anywhere you have enough control over the configuration to make it work.

Assuming you followed the previous post and created an exception for your author URLs in your CDN, it is now on your server to render author pages each time a request is made. This is a waste of resources and doesn’t provide an ideal experience for end users. To enable caching on this endpoint, we will leverage Nginx’s built-in caching capability while setting the cache key based on the Accept header.

To start, let’s set up basic Nginx caching. At the top of your configuration file, outside of the server{} block (advanced users can adjust as desired), add the following:

fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=wordpress:100m inactive=10m max_size=100m;

You can adjust the path if you want, but in essence we are defining a cache at /etc/nginx/cache with a name of wordpress. We are limiting it to 100MB and telling Nginx to delete anything that hasn’t been accessed in the last 10 minutes. /etc/nginx/cache must exist and must be owned by the same user that runs Nginx. If you have multiple servers, know that this cache is not shared between them, so each server will have its own copy.
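
For example, on a Debian-based system where Nginx runs as www-data (an assumption, so adjust the user and group to match your setup), the directory could be created like this:

# create the cache directory and hand it to the user Nginx runs as
sudo mkdir -p /etc/nginx/cache
sudo chown www-data:www-data /etc/nginx/cache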

Next, add a map to define which Accept headers we want to vary on:

map $http_accept $vary_key {
  default "default";
  "~application/activity\+json" "json";
}

This block creates a new variable called $vary_key that we can use later. Notice that we only create a separate cache entry when application/activity+json is included in the Accept header.

Now, inside the server{} block for your site, let’s add a header we can use to verify that caching is working properly. Adding add_header X-Nginx-Cache $upstream_cache_status; to this section will cause Nginx to report the cache status of each response as BYPASS, MISS or HIT.
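
As a minimal sketch, assuming an otherwise typical server block, the placement looks like this:

server {
  # surface the cache status (BYPASS, MISS or HIT) in every response
  add_header X-Nginx-Cache $upstream_cache_status;

  # ... the rest of your existing site configuration ...
}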

Next, inside the location block that is handling PHP requests, add the following config options:

# cache key
fastcgi_cache_key "$vary_key$host$request_method$request_uri";

# matches keys_zone in fastcgi_cache_path
fastcgi_cache wordpress;

# skip the cache when $no_cache is set (defined below)
fastcgi_no_cache $no_cache;

# defines the default cache time
fastcgi_cache_valid any 10m;

# misc additional settings
fastcgi_cache_use_stale updating error timeout invalid_header http_500;
fastcgi_cache_lock on;
fastcgi_cache_lock_timeout 10s;

The next settings depend on what you want to do. If you are using a CDN and are only exposing author pages then you can use the following settings:

# Cache nothing by default
set $no_cache 1;

# Only cache author pages
if ($request_uri ~* "/author/") {
  set $no_cache 0;
}

If instead, you want to cache everything your CDN might miss then you can use this (this is what I use):

# Cache everything by default
set $no_cache 0;

# Don't cache logged in users or commenters
if ( $http_cookie ~* "comment_author_|wordpress_(?!test_cookie)|wp-postpass_" ) {
  set $no_cache 1;
}

# Don't cache the following URLs
if ($request_uri ~* "/(wp-admin/|wp-login.php)") {
  set $no_cache 1;
}
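
Putting the pieces together, here is a trimmed-down sketch of how everything described above might sit in a single configuration file. The PHP location block, the fastcgi_params include and the php-fpm socket path are assumptions for illustration, so adjust them to match your existing WordPress setup:

fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=wordpress:100m inactive=10m max_size=100m;

map $http_accept $vary_key {
  default "default";
  "~application/activity\+json" "json";
}

server {
  # report cache status so we can verify things are working
  add_header X-Nginx-Cache $upstream_cache_status;

  # cache everything by default, skipping logged in users and admin URLs
  set $no_cache 0;
  if ( $http_cookie ~* "comment_author_|wordpress_(?!test_cookie)|wp-postpass_" ) {
    set $no_cache 1;
  }
  if ($request_uri ~* "/(wp-admin/|wp-login.php)") {
    set $no_cache 1;
  }

  location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # assumption: point this at the php-fpm socket or address you already use
    fastcgi_pass unix:/run/php/php-fpm.sock;

    # cache key varies on the Accept header via $vary_key
    fastcgi_cache_key "$vary_key$host$request_method$request_uri";
    fastcgi_cache wordpress;
    fastcgi_no_cache $no_cache;
    fastcgi_cache_valid any 10m;
    fastcgi_cache_use_stale updating error timeout invalid_header http_500;
    fastcgi_cache_lock on;
    fastcgi_cache_lock_timeout 10s;
  }
}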

If done correctly, hitting an author page will return different content depending on the Accept header being used. To verify, load an author page in a browser; you should get a proper HTML page. Then copy the URL and, using curl, send the following:

curl -I https://dustinrue.com/author/ruedu/ -H "Accept: application/activity+json"

x-nginx-cache: MISS

Your Nginx cache status may already be HIT if someone recently searched for you. It should be HIT if you send the request again.

Debugging

It is important that you debug and properly resolve this endpoint. Failing to do so will result in failed searches for your author/user from ActivityPub clients. To be clear, the following two requests must return different content:

curl -I https://dustinrue.com/author/ruedu
curl -I https://dustinrue.com/author/ruedu -H "Accept: application/activity+json"

Adjust these URLs to match your own author page and ensure the first curl returns HTML content while the second returns JSON. While writing this post I noticed that the plugin still reports a Content-Type of text/html when it should say application/activity+json. Despite this inconsistency, clients will use the returned content.

If the curl calls are returning different content, next pay attention to the x-nginx-cache header to ensure that it is actually caching. You can add another utility header to your Nginx config to assist with this:

add_header x-accept $vary_key always;

This add_header outputs the value the map landed on so you can confirm the Accept header is being picked up properly.
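
For example, assuming the map shown earlier, a request for the JSON representation should land on the json key (the header values below illustrate a working setup):

curl -sI https://dustinrue.com/author/ruedu/ -H "Accept: application/activity+json" | grep -iE "x-accept|x-nginx-cache"
x-accept: json
x-nginx-cache: HIT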

Conclusion

I hope this is enough to help guide you in improving your WordPress + ActivityPub experience.

I have been running this blog on this domain for over ten years now but the “hardware” has changed a bit. I have always used a VPS, but where it lives has changed over time. I started with Rackspace and later moved to Digital Ocean back when they were the new kid on the block and offered SSD-based VPS instances with unlimited bandwidth. I started on a $5 droplet and then upgraded to a pair of $5 droplets so that I could get better separation of concerns and increase the total amount of compute at my disposal. This setup has served me very well for the past five years or so. If you are interested in checking out Digital Ocean I have a referral code you can use: https://m.do.co/c/5016d3cc9b25

As of this writing, the site is hosted on two of the lowest-level droplets Digital Ocean offers, which cost $5 a month each. I use a pair of instances primarily because it is the cheapest way to get two vCPUs worth of compute. I made the change to two instances back when I was running xboxrecord.us (XRU) as well as a NodeJS app. Xboxrecord.us and the associated NodeJS app (which also powered guardian.theater at the time), combined with MySQL, used more CPU than a single instance could provide. By adding a new instance and moving MySQL to it I was able to spread the load across the two instances quite well. I have since shut down XRU and the NodeJS app but have kept the split server arrangement, mostly because I haven’t wanted to spend the time moving everything back to a single instance. Also, how I run WordPress is slightly different now because in addition to MySQL I am also running Redis. Four services (Nginx, PHP, Redis and MySQL) all competing for CPU time during requests is just a bit too much for a single core.

Making the dual server arrangement work is simple on Digital Ocean. The instance that runs MySQL also runs Redis for object and page caching for WordPress. This means Nginx and PHP get their own CPU while MySQL and Redis get their own CPU for doing work. I am now effectively running a dual-core system, with the added overhead, however small, of doing some work across the private network. Digital Ocean has offered private networking with no transfer fees between instances for a while now, so I utilize that to move data between the two instances. Digital Ocean also has firewall functionality that I tap into to ensure the database server can only be reached by my web server. There is no public access to the database server at all.

The web server is, of course, publicly available. In front of this server is a floating IP, also provided by Digital Ocean. I use a floating IP so that I can create a new web server and then simply switch where the floating IP points to make it live. I don’t need to change any DNS and my cutovers are fairly clean. Floating IPs are free and I highly recommend always leveraging a floating IP in front of an instance.

Although the server is publicly available, I don’t allow direct access to it. To help provide some level of protection I use Cloudflare in front of the site. I have used Cloudflare for almost as long as I’ve been on Digital Ocean and while I started out on their free plan I have since transitioned to their Automatic Platform Optimization system for WordPress. This feature does cost $5 a month to enable but what it gives you, when combined with their plugin, is basically the perfect CDN solution for WordPress. I highly recommend this as well.

In all, hosting this site is about $15 a month. This is a bit steeper than some people may be willing to pay and I could certainly do it for less. That said, I have found this setup to be reliable and worry free. Digital Ocean is an excellent choice for hosting software and keeps getting better.

Running WordPress

WordPress, if you’re careful, is quite lightweight by today’s standards. Out of the box it runs extremely quickly, so I have always done what I could to ensure it stays that way and keep the site as responsive as possible. While I do utilize caching to keep things speedy, you can never ignore uncached speed. Uncached responsiveness will always be felt in the admin area and I don’t want a sluggish admin experience.

Keeping WordPress running smoothly is simple in theory and sometimes difficult in practice. In most cases, doing less is the better option. For this reason I install and use as few plugins as possible and use a pretty basic theme. My only requirement for the theme is that it looks reasonable while also being responsive (mobile friendly). Below is a listing of the plugins I use on this site.

Akismet

This plugin comes with WordPress. Many people know what this plugin is so I won’t get into it too much. It does what it can to detect and mark comment spam and does a pretty good job of it these days.

Autoptimize

Autoptimize combines JS and CSS files into as few files as possible. This reduces the total number of requests required to load content, which fulfills my “less is more” requirement.

Autoshare for Twitter

Autoshare for Twitter is a plugin my current employer puts out. It does one thing and it does it extremely well. It shares new posts, when told to do so, directly to Twitter with the title of the post as well as a link to it. When I started I would do this manually. Autoshare for Twitter greatly simplifies this task. Twitter happens to be the only place I share new content to.

Batcache

Batcache is a simple page caching solution for WordPress that caches pages at the server. Pages served to anonymous users are stored in Redis, with memcache(d) also supported, and additional hits to the server are served out of the cache until the page expires. This may seem redundant since I have Cloudflare providing full page caching, but caching at the server itself ensures that Cloudflare’s many points of presence get a consistent copy from the server.

Cloudflare

The Cloudflare plugin is good by itself but required if you are using their APO option for WordPress. With this plugin, API calls are made to Cloudflare to clear the CDN cache when certain events happen in WordPress, like saving a new post.

Cookie Notice and Compliance

Cookie Notice and Compliance for that sweet GDPR compliance. Presents that annoying “we got cookies” notification.

Redis Object Cache

Redis Object Cache is my preferred object caching solution. I find Redis, combined with this plugin, to be the best object caching solution available for WordPress.

Site Kit by Google

Site Kit by Google, another plugin by my employer, is the best way to integrate some useful Google services, like Google Analytics and Google Adsense, into your WordPress site.

That is the complete set of plugins that are deployed and activated on my site. In addition to this smallish set of plugins I also employ another method to keep my site running as quickly as I can, which I described in Speed up WordPress with this one weird trick. These plugins, combined with the mentioned trick, ensure the backend remains as responsive as possible. New Relic reports that the typical average response time of the site is under 200ms, even though traffic to the site is pretty low. This seems pretty good to me while using the most basic droplets Digital Ocean has to offer.

Do you host your own site? Leave a comment describing what your methods are for hosting your own site!

This one is, primarily, for all the people responsible for ensuring a WordPress site remains available and running well. “Systems” people, if we must name them. If you’re a WordPress developer you might want to ride along on this one as well so you and the systems or DevOps team can speak a common language when things go bad. Oftentimes, systems people will immediately blame developers for writing bad code, but the two disciplines must cooperate to keep things running smoothly, especially at scale. It’s important for systems AND developers to understand how code works and scales on servers.

What I’m about to cover are some common performance issues that I see come up and then get misdiagnosed or “fixed” incorrectly. They’re the kind of thing that causes a WordPress site to become completely unresponsive or very slow. What I cover may seem obvious to some, and it is certainly very generalized, but I’ve seen enough bad calls to know there are a number of people out there who get tripped up by these situations. None of the issues are necessarily code related, nor are they strictly WordPress related; they apply to many PHP-based apps and it’s all about how sites behave at scale. I am going to explore WordPress site performance issues since that’s where my talents are currently focused.

In all scenarios I am expecting that you are running something that gets a decent amount of traffic to the server(s). I assume you are running a LEMP stack consisting of Linux, Nginx, PHP-FPM and MySQL. Maybe you even have a caching layer like Memcached or Redis (and you really should). I’m also assuming you have basic levels of visibility into the app using something like New Relic.

Let’s get started.

Continue reading

Have you ever wanted to write out a large, templated config file using only shell script code? Maybe you are working with a small IoT device with limited power, or some other device where you want to avoid additional dependencies for a single task. In these situations a larger config management tool can be too heavy or just not practical. In this post I’ll explore the envsubst utility as a way to write out a config file from a template. In the end you’ll see that envsubst is a great, lightweight utility for creating config files.
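
As a quick preview (the file and variable names here are invented for illustration), envsubst reads a template, replaces the environment variables you reference and writes the result out:

# contents of a hypothetical app.conf.template:
#   listen_port = ${APP_PORT}
#   log_level   = ${APP_LOG_LEVEL}

export APP_PORT=8080 APP_LOG_LEVEL=info

# substitute only the listed variables so any other $ strings in the template are left untouched
envsubst '${APP_PORT} ${APP_LOG_LEVEL}' < app.conf.template > app.conf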

Continue reading

Whatever your reason for placing an NGINX proxy in front of your Gitlab installation, you need to ensure you’re using the right configuration to support all of Gitlab’s features. I recently discovered that although my installation was mostly working, I couldn’t get pipeline/build logs to load properly; my proxy configuration was to blame. After some searching around I finally found that my config wasn’t quite right. To get the most out of Gitlab and ensure a smooth experience, use the configuration shown below as a template for your own. In my setup I use LetsEncrypt for SSL, so if you’re not you can remove any of the SSL-specific parts. The important configuration information is contained in the location block.

upstream gitlab {
  server <ip of your gitlab server>:<port>;
}

server {
    listen          443;
    server_name     <your gitlab server hostname>;

    ssl on;
    ssl_certificate <path to cert>;
    ssl_certificate_key <path to key>;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    server_tokens off;


    gzip on;
    gzip_vary on;
    gzip_disable "msie6";
    gzip_types application/json;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;

    location / {
        client_max_body_size   0;
        proxy_set_header    Host                $http_host;
        proxy_set_header    X-Real-IP           $remote_addr;
        proxy_set_header    X-Forwarded-Ssl     on;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto   $scheme;

        proxy_pass https://gitlab;
    }
}

This configuration will properly pass all requests through to your Gitlab server as well as allow CI/CD pipeline logs to pass through properly.