Sometimes you need to access Docker on a remote machine. The reasons vary: maybe you just want to manage what is running on a remote system, or maybe you want to build for a different architecture. One of the ways Docker allows for remote access is over SSH, which is both convenient and secure. If you can SSH to a remote machine using key-based authentication, then you can access Docker there (provided your user is set up properly). To set this up, read about it at https://docs.docker.com/engine/security/protect-access/.
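
As a quick sanity check before creating any contexts, you can point the Docker CLI at the remote engine directly over SSH. The user and hostname below are placeholders for your own SSH user and machine:

DOCKER_HOST=ssh://user@remote-host docker info

If that prints information about the remote engine, contexts built on the same SSH connection should work as well.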

In a previous post, I went over using remote systems to build multi-architecture images using native builders. This post is similar but doesn’t use k3s. Instead, we’ll leverage Docker’s built-in context system to add multiple Docker endpoints that we can tie together to create a solution. In fact, for this I am going to use only remote Docker instances, driven from my Mac, to build an example image. I assume that you already have Docker installed on your system(s), so I won’t go through that part.

Like in the previous post, I will use the project located at https://github.com/dustinrue/buildx-example as the example project. As a quick note, I have both a Raspberry Pi 4 running the 64-bit version of PiOS and an Intel-based system available to me on my local network. I will use both of them to build a very basic multi-architecture Docker image. Multi-architecture Docker images are very useful if you need to target both x86 and Arm-based systems, like the Raspberry Pi or AWS’s Graviton2 platform.

To get started, I create my first context to add the Intel-based system. The command to create a new Docker context that connects to my Intel system looks like this (substitute your own SSH user and hostname):

docker context create amd64 --docker host=ssh://user@amd64-host

This creates a context called amd64. I can then use this context by issuing docker context use amd64. After that, all Docker commands I run will be run in that context, on that remote machine. Next, I add my Pi 4 with a similar command:
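
For example, switching to the new context and then listing all contexts shows which one is active (the asterisk in the output marks the context currently in use):

docker context use amd64
docker context ls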

docker context create arm64 --docker host=ssh://user@arm64-host

We now have our two contexts. Next, we can create a buildx builder that ties the two together so that we can target it for our multi-arch build. I use these commands to create the builder (note the optional --platform value, which marks that builder node for the listed platforms):

docker buildx create --name multiarch-builder amd64 [--platform linux/amd64]
docker buildx create --name multiarch-builder --append arm64 [--platform linux/arm64]

We now have a single builder named multiarch-builder that we can use to build our image. When we ask buildx to build a multi-arch image, it will use the platform that most closely matches the target architecture to do the build. This ensures you get the quickest build times possible.
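
To verify the builder is wired up correctly, you can list your builders and inspect the new one. The --bootstrap flag starts the build containers on each node; the exact output will depend on your machines:

docker buildx ls
docker buildx inspect --bootstrap multiarch-builder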

With the example project cloned, we now build an image that will work for 64bit arm, 32bit arm and 64bit x86 systems with this command:

docker buildx build --builder multiarch-builder -t dustinrue/buildx-example --platform linux/amd64,linux/arm64,linux/arm/v6 .

This command will build our Docker image. If you wish to push the image to a Docker registry, remember to tag the image correctly and add --push to your command. You cannot use --load to load a multi-architecture image into your local Docker image store, as that is not supported.
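
As a sketch, the push variant of the same build looks like this, assuming you have already logged in to your registry and dustinrue/buildx-example is a repository you can push to:

docker buildx build --builder multiarch-builder -t dustinrue/buildx-example --platform linux/amd64,linux/arm64,linux/arm/v6 --push .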

Using another Mac as a Docker context

It is possible to use another Mac as a Docker engine, but when I did this I ran into an issue: the docker command is not on the PATH that the non-interactive SSH session has available when the remote connection is made. To overcome this, this comment will help: https://github.com/docker/for-mac/issues/4382#issuecomment-603031242.
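
A quick way to see the problem is to check what a non-interactive SSH session can find (the user and hostname here are placeholders). On a Mac, docker typically lives in /usr/local/bin, which is usually not on the default non-interactive PATH:

ssh user@remote-mac 'echo $PATH; command -v docker'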

I keep doing more multi-architecture builds using buildx and continue to find good information out there to help refine the process. Here is a post I found that I thought I’d share, which discusses how to build multi-architecture images using AWS Graviton2 based instances, which are ARM based: https://www.smartling.com/resources/product/building-multi-architecture-docker-images-on-arm-64-bit-aws-graviton2/. I haven’t officially tried this yet, but the same process should also work on a Pi 4 with the 64-bit PiOS installed.

Came across this blog post by Corey Quinn over on lastweekinaws.com discussing the topic of vendor lock-in, specifically with cloud vendors. Corey made some really excellent points about how you are probably already locked in without realizing it. The post reminded me that when I started using AWS after a job change, I was also in the camp of avoiding vendor lock-in. Over time I realized, however, that there are some things you must embrace when it comes to a given cloud provider, but that doesn’t mean you can’t smartly pick the services you use so that you might leverage some tools that are cloud provider agnostic.

Let’s first talk about some additional ways that vendor lock-in is inevitable. For starters, if you are not leveraging some of your cloud provider’s most integral features (speaking purely in AWS terms), like IAM policies and security groups, you are almost certainly doing it wrong. Not using IAM policies when configuring an EC2 instance, or when allowing a CloudFront distribution to access an S3 bucket, is usually the wrong way to go about things. You’re much better off just embracing these AWS-only techniques in order to build a cleaner, more robust solution. These are the kinds of vendor-specific things you should embrace.

However, there are times when you might want to stop and evaluate other options before moving forward. For example, AWS Systems Manager is a tool for managing your systems. Unlike IAM roles, policies and security groups, there are other tools out there that provide similar functionality and may be better suited to your needs. Or maybe you have configuration management that can build and assist in maintaining a database cluster on any provider.

Or maybe you’ve developed your own backup solution that works on any setup. In this case you might want to avoid using RDS unless you really need or want the ease of use that RDS can provide. Maybe the value of having the same tools that you are maintaining work across any cloud provider outweighs the benefits of RDS.

Services like RDS are much easier to cut ties with because your data is actually portable, within reasonable limits. Given a normal MySQL RDS instance, you can copy the data out and import it into some other MySQL system. In these cases I don’t really see RDS as true vendor lock-in, in the sense that you would need to rethink how your software works if you were to move it; rather, it’s the tooling you’ve built around it that can get you into trouble if that tooling is AWS specific.

Other services are certainly not that simple, and this is where you must carefully consider the services that you use, what your sensitivity to being “locked-in” is, and the value that the specific service offers. True vendor lock-in, in my mind, is all about the actual data. Let’s say you are considering a video transcoding service where, once the videos are transcoded, they cannot be transferred out or played without a specific player. This is a great example of a service I would avoid if at all possible, going instead with a service that simply accepts an input and provides you with some output to do with as you please.

At the end of the day, avoiding vendor lock-in is a game of determining if what you are looking at is true lock-in or an opportunity to use a platform well and correctly. Avoiding every cloud provider specific tool is almost always a mistake.

If you work with AWS using CLI tools, I highly recommend aws-vault to help keep your AWS keys secure. Be sure to visit the usage guide for full details on setup. I configured my copy to be unlocked while I am actively using my computer. It’s also a good idea to ensure your storage is encrypted.
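
As a minimal sketch (the profile name here is a placeholder), you add your keys once and then run commands with short-lived credentials instead of exporting keys into your shell:

aws-vault add my-profile
aws-vault exec my-profile -- aws sts get-caller-identity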