In this post I’m going to review how I installed Rundeck on Kubernetes and then configured a node source. I’ll cover installing Rundeck using the available helm chart, configuring persistent storage, ingress, node definitions and key storage. In a later post I’ll discuss how I set up a backup job to perform a backup of the server hosting this site.
For this to work you must have a Kubernetes cluster that allows for ingress and persistent storage. In my cluster I am using nginx-ingress-controller for ingress and freenas-iscsi-provisioner for storage. The freenas-iscsi-provisioner is connected to my FreeNAS server and creates iSCSI based storage volumes, and it is set as my default storage class. You will also need helm 3 installed.
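If you want to confirm the prerequisites are in place before starting, a quick check looks something like this (the ingress-nginx namespace is an assumption; yours may differ):

```shell
# Confirm helm 3 is installed
helm version --short

# Confirm a storage class exists and one is marked (default)
kubectl get storageclass

# Confirm the ingress controller is running (namespace is an assumption)
kubectl get pods -n ingress-nginx
```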
With the prerequisites out of the way we can get started. First, add the helm chart repository by following the directions located at https://hub.helm.sh/charts/incubator/rundeck. Once added, run the following to get the values file so we can edit it:
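At the time of writing, the directions on that page amount to something like the following; the repository URL here is the commonly used incubator chart location, so verify it against the page before running:

```shell
# Add the incubator chart repository and refresh the local index
helm repo add incubator https://charts.helm.sh/incubator
helm repo update
```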
helm show values incubator/rundeck > rundeck.yaml
In the rundeck.yaml file I customized a few of the values used during installation. Since rundeck.yaml overrides the default values for the helm chart, I am free to remove items I don’t want to change. Starting from the top of the file, I edited the image section so that it uses the most recent version of Rundeck (which you can find on hub.docker.com). I will link to the fully customized file later, but my image section looks like this:
image:
  tag: 3.3.0
Next I edited the deployment section to change the update strategy from the default of “RollingUpdate” to “Recreate.” This is required when using persistent storage to prevent multiple Rundeck instances from writing to the same volume, which could lead to corruption. My deployment section looks like this:
deployment:
  strategy:
    type: Recreate
Next I edited the rundeck section to change the RUNDECK_GRAILS_URL value to match the hostname I plan to use, “http://rundeck.test”. Remember to set this to the URL you plan to use. In addition to changing the grails URL I also updated the extraConfigSecret value. I will use this secret to define the nodes I want Rundeck to use. My rundeck section looks like this:
rundeck:
  env:
    RUNDECK_GRAILS_URL: "http://rundeck.test"
  # Name of secret containing additional files to mount into Rundeck's
  # ~/extra directory. This can be useful for populating a file you
  # reference with RUNDECK_TOKENS_FILE above.
  extraConfigSecret: "rundeck-extras"
Next, I enabled persistence by editing the persistence section. Persistence is required so that you don’t lose your configuration or installed plugins whenever you restart or upgrade Rundeck. Enabling persistence is as easy as setting the enabled value to true and specifying the remaining options. My persistence section looks like this:
persistence:
  enabled: true
  claim:
    create: true
    # storageClass:
    accessMode: ReadWriteOnce
    size: 1Gi
You may need to modify this section to fit your setup.
Next I set up my ingress. I plan to use http://rundeck.test (at least for this tutorial) so my ingress section looks like this:
ingress:
  enabled: true
  paths:
    - /
  hosts:
    - rundeck.test
And that’s it. I have now customized all of the values I want to customize for my Rundeck installation. My full config can be seen here.
Before I started Rundeck I created the node definition file I knew I would need later. Nodes are specified using a Kubernetes Secret, which will be made available as a file in an “extras” directory. When I specified an extraConfigSecret earlier, it is this Secret name that is used to populate the extras directory. To create the Secret I created a file called nodes.yaml containing this:
apiVersion: v1
kind: Secret
metadata:
  name: rundeck-extras
type: Opaque
stringData:
  nodes.yaml: |-
    node1:
      nodename: node1
      hostname: <ip address>
      username: <ssh user>
      ssh-key-storage-path: keys/<key name>
The ssh-key-storage-path value will become clearer later. I applied the node data using kubectl, like this:
kubectl apply -f nodes.yaml
With the Secret created I could now install Rundeck into my Kubernetes cluster. To do so I used the following command:
helm install -f rundeck.yaml rundeck incubator/rundeck
When helm is done you’ll get back some details on how to access your Rundeck installation. Kubernetes will also begin the process of getting everything running. I use Rancher in my home lab, so I can see Rundeck being initialized via the web interface; it looks like this:
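If you aren’t running something like Rancher, you can watch the rollout from the command line instead. A sketch, assuming the release was named rundeck as above (the deployment name and label are assumptions based on common chart conventions):

```shell
# Watch the pods come up (press Ctrl+C to stop watching)
kubectl get pods -l app.kubernetes.io/name=rundeck -w

# Or block until the deployment finishes rolling out
kubectl rollout status deployment/rundeck
```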
It takes some time for Rundeck to initialize, so be patient. Once it is ready you can access it at the URL you specified. For my setup I simply added a hosts file entry to point rundeck.test to the exposed IP address. You should be presented with the login screen, and you can log in with the default credentials of admin/admin.
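For reference, the hosts file entry is a single line; 203.0.113.10 below is a placeholder for whatever IP address your ingress controller exposes:

```shell
# Replace 203.0.113.10 with the IP address exposed by your ingress controller
echo "203.0.113.10 rundeck.test" | sudo tee -a /etc/hosts
```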
You will then be presented with an informational page for Rundeck. With the initial sign in complete, I got started by creating a new project, defining nodes, ssh keys and then a job. The first thing I did was generate an ssh key for Rundeck to use and store it in Rundeck’s Key Storage system. Generating the ssh key looks like this:
ssh-keygen -m PEM -f ~/.ssh/rundeck
I then copied the contents of the resulting rundeck.pub file into ~/.ssh/authorized_keys on my target servers. This way Rundeck will be able to ssh directly to the target machines. I also copied the contents of ~/.ssh/rundeck into the Key Storage system, found under the gear menu. I entered the private key contents into the form and gave it a name. The form looks like this:
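Copying the public key into place can be done by hand, or with ssh-copy-id; the user and host below are placeholders for your own target servers:

```shell
# Install the Rundeck public key on a target server
# (user and target-host are placeholders)
ssh-copy-id -i ~/.ssh/rundeck.pub user@target-host
```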
I clicked Save. I could now create a project and define nodes for the project. To create a project I returned to the home page and clicked New Project. I entered only a name and description for the project and clicked Create. After clicking Create, Rundeck takes you directly to the node source creation page, where I defined how to get node information for the project. After clicking Add new Node Source I selected the File option. I then filled out the form, which looks like this:
The File Path is specific to this type of installation. Remember in the rundeck.yaml file how I specified an extraConfigSecret value of rundeck-extras? The extra directory will contain the information stored in the rundeck-extras Secret, mounted as files (if the Secret is created properly), so the nodes.yaml file ends up under Rundeck’s ~/extra directory. From here, I clicked Save and then Save again to finalize the new Node Source. I can now go to the Nodes screen and browse the nodes I have added:
To validate that Rundeck is able to communicate with the new node I clicked the Actions menu and chose to run a command on the one host. The output looked like this:
Excellent! From here I can continue on to creating jobs to be performed against the target servers. In a future post I’ll detail how I created a job to perform a routine backup of the server hosting this site.