Livepatch and LXD

Introduction

In this tutorial we will deploy and configure the Livepatch on-premise server using LXD as our cloud provider.

We will be using LXD, Juju and the Livepatch on-prem charm/bundle.

You do not need any prior or advanced knowledge of LXD, Juju or Charmed Operators to follow this how-to and deploy Livepatch on-premise.

Let us continue by setting up the required snaps and obtaining an Ubuntu Pro token.

If you have deployed Livepatch before and wish to keep your existing configuration, note that the machine charm has been rewritten and its configuration has changed. Please see here for instructions on how to migrate.

jq

jq is a lightweight command-line JSON processor, and we'll use it within this tutorial to extract some values for later use. Install it like so:

sudo apt update
sudo apt install jq
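
As a quick check, you can pipe a JSON snippet through jq and extract a single field, which is exactly how we'll use it later in this tutorial:

echo '{"app": "livepatch"}' | jq -r '.app'
livepatch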

LXD

LXD provides a unified user experience for managing system containers and virtual machines. In this how-to, Juju will use LXD to spawn containers for the Livepatch on-premise services.

LXD can be installed locally via a snap. To install LXD, run:

sudo snap install lxd --channel=5.0/stable

Next, LXD must be initialised. Run lxd init and either accept the defaults or choose different options when prompted; alternatively, pass the --auto flag to accept the defaults non-interactively:

lxd init --auto
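
You can verify that LXD is ready by listing its instances (the list will be empty at this point):

lxc list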

Juju

Juju is an open source orchestration engine for software operators that enables the deployment, integration and lifecycle management of applications at any scale, on any infrastructure using charms.

Juju can be installed locally via a snap. To install Juju, run:

sudo snap install juju --channel=3.1/stable

Ubuntu Pro

Livepatch on-premise requires authorisation against the upstream Livepatch service hosted by Canonical via the use of Ubuntu Pro tokens. To retrieve your Ubuntu Pro token, please go here and save your token for later use.
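
Optionally, you can keep the token in a shell variable for the rest of this session; PRO_TOKEN is a name we've made up for convenience, and you can substitute $PRO_TOKEN wherever a literal <token> placeholder appears below:

# Convenience only: PRO_TOKEN is not required by any tool in this tutorial.
export PRO_TOKEN='<your-ubuntu-pro-token>'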

Deployment Steps

1. Initialise Juju

Let us bootstrap a controller on LXD. The mkdir ensures the local share directory that the Juju snap expects is present:

mkdir -p ~/.local/share
juju bootstrap lxd livepatch-onprem

After a few minutes, the controller will be successfully bootstrapped and Juju will return you to your prompt.
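
You can confirm the new controller is known to Juju via:

juju controllers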

Next, we’ll see the available models and create a model to deploy Livepatch into.

We can check our existing models in the controller via:

juju models

Now we’ll create the model:

juju add-model livepatch

You may run the following to ensure your model has been created:

juju models

2. Deploying the bundle

Ensure you’re on the livepatch model via:

juju switch livepatch

And deploy the bundle like so:

juju deploy canonical-livepatch-onprem --channel=machine

You can watch the status of the deployment via:

juju status --watch 2s

After some time, the deployment will settle and all applications and units will be visible in the status output.

Now we will run a database schema upgrade, using a charm action:

juju run livepatch/0 schema-upgrade

Once the action completes successfully, Livepatch will enter a running state.

You've successfully deployed Livepatch! It needs just a few more steps to get fully up and running.

3. Enabling Ubuntu Pro

First, we'll enable Ubuntu Pro on the machines, giving them ESM (Expanded Security Maintenance) amongst the other features Ubuntu Pro provides. Run:

juju config ubuntu-advantage token='<token>'

On a successful attach, the attachment will be reflected in your status output.

4. Enabling Livepatch

Next, to enable Livepatch on-prem, we’ll run:

juju run livepatch/0 enable token='<token>'

If the action succeeds, its output will confirm that Livepatch is enabled.

Livepatch is now enabled! In the next segment, we’ll configure Livepatch to tell our clients where to download patches from.

5. Configuring Livepatch

URL Template

We’ll need to configure a charm config option called server.url-template.

The URL template specifies the URL where patch files can be downloaded from by Livepatch clients.

In an on-premise environment, this could be the server itself or any file server you have with patches ready to be served.

The URL template looks like so:

http(s)://domain/{filename}

The {filename} segment is a special variable into which Livepatch inserts patch file names as-is.

For example, if you had a bucket in AWS and wished for your clients to download patches from it (given you have configured your on-premise server to synchronise patches into S3), your URL template could look similar to:

https://s3-eu-west-2.amazonaws.com/livepatch/patches/{filename}
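
To illustrate the substitution, here is a small shell sketch; the patch file name is made up for the example:

# The file name below is illustrative only.
TEMPLATE='https://s3-eu-west-2.amazonaws.com/livepatch/patches/{filename}'
echo "$TEMPLATE" | sed 's/{filename}/example-patch.bin/'
https://s3-eu-west-2.amazonaws.com/livepatch/patches/example-patch.bin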

For this tutorial, we'll use the Livepatch server itself as the patch server; it exposes a special endpoint for this under:

/v1/patches/:patch_name

To reach the server, we recommend going through the HAProxy instance included in the bundle. HAProxy acts as a load balancer, allowing you to scale the number of Livepatch server machines. You may point a DNS name at your HAProxy, or, if you just wish to test your deployment, use the address of one of your HAProxy units directly. To retrieve a HAProxy unit's address, run the following, replacing {UNIT NUMBER} with the unit number shown in juju status:

juju status --format json | jq -r '.applications.haproxy.units["haproxy/{UNIT NUMBER}"]["public-address"]'
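
For example, for the first unit, haproxy/0:

juju status --format json | jq -r '.applications.haproxy.units["haproxy/0"]["public-address"]'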

With your DNS or HAProxy address ready, you must format the URL template exactly like so:

http(s)://<DNS/HAProxy IP>:<PORT>/v1/patches/{filename}

Note that <PORT> is only required for non-default ports, i.e. anything other than 80/443.

For this tutorial, we’ll use our HAProxy address, so please run:

juju config livepatch server.url-template="http://10.197.77.80/v1/patches/{filename}"

You can confirm it was successfully set by running:

juju config livepatch server.url-template

Authorisation and Authentication

In order for the administrators of this Livepatch on-premise deployment to manage the server, they'll require user accounts. These can be set up with the following steps.

We'll firstly enable basic authentication/authorisation like so:

juju config livepatch auth.basic.enabled=true

Install apache2-utils, which provides the htpasswd utility we'll use to generate bcrypt password hashes:

sudo apt-get install apache2-utils -y

Next, we'll create a user and password (here, admin with password admin123) like so; -n prints the result instead of writing a file, -B selects bcrypt, -C 10 sets the bcrypt cost, and -b takes the password from the command line:

htpasswd -bnBC 10 admin admin123
admin:$2y$10$jEmTFsxm7dpqxptch8u3UuilVbzzmT6HGTeu6kKMta5Gdqnj9cOHG

With the output as-is, i.e., user:password-hash, run the following (note the single quotes; they are required to stop the shell from expanding the $ characters in the hash):

juju config livepatch auth.basic.users='admin:$2y$10$jEmTFsxm7dpqxptch8u3UuilVbzzmT6HGTeu6kKMta5Gdqnj9cOHG'

If you wish to add more users, this option takes a comma-separated list of user:password-hash pairs.
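
For example, to configure two users, where each <bcrypt-hash> is a placeholder for real htpasswd output:

juju config livepatch auth.basic.users='admin:<bcrypt-hash>,operator:<bcrypt-hash>'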

Now Livepatch is configured to serve patches to clients, and the administrator can log in!

6. A brief introduction to the admin tool

Livepatch can be managed via our administrator tool.

You can download the admin tool via snap here.

To make things a little easier, we'll create a snap alias so the tool can be invoked simply as livepatch-admin.
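
The exact alias command depends on the snap's name; as a sketch, the generic form of a snap alias is shown below, with placeholders you should replace with the real snap and app names from the snap you installed:

# <snap-name> and <app-name> are placeholders; use the names from the installed snap.
sudo snap alias <snap-name>.<app-name> livepatch-admin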

Next, we'll export an environment variable called LIVEPATCH_URL. It must point at your DNS name or HAProxy unit, as discussed previously in this tutorial.

export LIVEPATCH_URL=http(s)://<DNS/HAProxy IP>:<PORT>
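
For example, using the HAProxy address from earlier in this tutorial:

export LIVEPATCH_URL=http://10.197.77.80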

Now you can log in as one of your administrators:

livepatch-admin login -a admin:admin123

The final step before attaching client machines to the server is to download patches from Canonical servers.

This can be done via:

livepatch-admin sync trigger --wait

For further information on the admin tool, see the Administration tool topic.

Enabling machine status reporting

Each Livepatch on-prem instance can optionally send information about the status of the machines it's serving back to Canonical.

This functionality is opt-in.

The information sent back about each machine includes:

  • Kernel version

  • CPU model

  • Architecture

  • Boot time and uptime

  • Livepatch client version

  • Obfuscated machine ID

  • Status of the patch currently applied to the machine’s kernel

To enable this reporting, run the following juju command:

juju config livepatch patch-sync.send-machine-reports=true

This can be disabled at any time by setting the flag to false.
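
For example, to turn reporting back off:

juju config livepatch patch-sync.send-machine-reports=false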

7. Cleaning up the deployment

Should you wish to clean up your deployment, you can do so via:

juju destroy-controller livepatch-onprem --destroy-all-models