Getting started with the Livepatch Server Snap

Canonical Livepatch Server enables the delivery of livepatches to Livepatch clients, allowing reboots of critical infrastructure to be scheduled at a convenient time.

In this tutorial we will set up the Livepatch Server snap.

Please note that the server snap is not designed for high-availability setups!

At minimum, the server requires a PostgreSQL instance (version 12 or later) to persist data. For simplicity, we will use Docker to run PostgreSQL in this tutorial. However, feel free to use an existing instance if you have one available!

Run the following to start a PostgreSQL instance in Docker:

 docker run \
 --name postgresql \
 -e POSTGRES_USER=livepatch \
 -e POSTGRES_PASSWORD=livepatch \
 -p 5432:5432 \
 -d postgres:12.11
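Before moving on, you can confirm the database is accepting connections. This is a quick sketch that assumes the container is named postgresql as above and uses the pg_isready utility shipped inside the Postgres image:

```shell
# Give the container a moment to initialise, then check readiness.
# Assumes the container name "postgresql" and user "livepatch" from above.
docker exec postgresql pg_isready -U livepatch
# Prints "... - accepting connections" once the server is ready.
```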

Installing the snap
To install the server snap, simply run:

 sudo snap install canonical-livepatch-server
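To verify the snap installed and see the state of its services, you can use the standard snap tooling:

```shell
# List the services provided by the snap and whether they are running.
snap services canonical-livepatch-server
```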

Migrating the database
The snap includes an internal tool used to migrate a PostgreSQL database to the Livepatch Server schema. To migrate your database, run:

 canonical-livepatch-server.schema-tool \
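The arguments to the schema tool are omitted above. As a sketch only, assuming the tool accepts an upgrade subcommand and a PostgreSQL connection string (both are assumptions, not confirmed by this tutorial), an invocation against the Docker instance above might look like:

```shell
# Hypothetical invocation: the "upgrade" subcommand and DSN shape are
# assumptions; substitute the password you chose for your instance.
canonical-livepatch-server.schema-tool upgrade \
  "postgres://livepatch:<password>@localhost:5432/livepatch"
```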

Pointing Livepatch at your database
All configuration for the Livepatch Server snap is handled through the snap daemon. To point Livepatch at the DSN of your PostgreSQL instance, run:

 sudo snap \
 set canonical-livepatch-server \
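The configuration key is omitted above. As an illustration only, assuming a key that accepts a PostgreSQL connection string (the key name lp.database.connection-string below is an assumption, not confirmed by this tutorial), the command would take this shape:

```shell
# Illustrative only: the configuration key name is an assumption.
sudo snap set canonical-livepatch-server \
  lp.database.connection-string="postgres://livepatch:<password>@localhost:5432/livepatch"
```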

Validate the server is available
To check that the server is running successfully, you can inspect the snap's logs:

 sudo snap logs \
 canonical-livepatch-server.livepatch -n 100
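Beyond the logs, you can probe the HTTP endpoint directly. This assumes the all-in-one setup where the server listens on port 8080 (as used later in this tutorial); the exact health-check path, if any, is not covered here:

```shell
# Check that something is answering on the server port.
# -s silences progress output, -o /dev/null discards the body,
# -w prints only the HTTP status code.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/
```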

If you're an Ubuntu Pro customer with access to Livepatch on-premise, you can enable on-premise within the snap in the same way as you would for the charm.

Obtain an Ubuntu Pro token
As an Ubuntu Pro customer, you will have Livepatch On-Premise available to you.

You can obtain your token from your Ubuntu Pro dashboard.

Updating Livepatch to use your Ubuntu Pro token
As previously stated, we can update the server's configuration through the snap daemon. Let's update the server to use this token and enable Livepatch On-Premise:

 sudo snap set canonical-livepatch-server token=<Ubuntu Pro token>

Managing the server
To manage Livepatch, we provide an administrator tool, also available as a snap. Install it with:

 sudo snap install canonical-livepatch-server-admin

The administrator tool needs to know where your Livepatch server is hosted. In an all-in-one setup on a single machine, this is simply http://localhost:8080. Export an environment variable like so:

 export LIVEPATCH_URL=http://localhost:8080

Next, for the administrator tool to be able to log in to the server, we will require some form of basic authentication, also set through the snap daemon. For the purpose of this tutorial, we have provided a user with the username admin and the password admin123:

Please note, dollar signs must be escaped and the password must be bcrypt-hashed.

 sudo snap set canonical-livepatch-server \

If you would like to generate your own, you can do so as follows:

 sudo apt-get install apache2-utils
 htpasswd -bnBC 10 <username> <password>
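Because the configuration value passes through a shell, the $ characters in a bcrypt hash must be escaped, as noted above. A small sketch of the escaping step (the hash below is a placeholder, and the sed pipeline is just one way to do it):

```shell
# Placeholder hash, standing in for real output of:
#   htpasswd -bnBC 10 admin admin123
hash='admin:$2y$10$examplesaltexamplehash'

# Backslash-escape every dollar sign so the shell passes the value through intact.
escaped=$(printf '%s' "$hash" | sed 's/\$/\\$/g')
printf '%s\n' "$escaped"
```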

Next, we need to manually enable basic authentication:

 sudo snap set canonical-livepatch-server lp.auth.basic.enabled=true

Finally, we can log in with the administrator tool like so, presuming you have used the example username and password:

 canonical-livepatch-server-admin.livepatch-admin login -a admin:admin123

Synchronising with hosted Livepatch
Now that you have a running, fully configured On-Premise Livepatch server, we can synchronise patches from hosted Livepatch into your server. Run the following with the administrator tool:

 canonical-livepatch-server-admin.livepatch-admin sync trigger 

The current configuration stores patches on the local filesystem of the server. You can view them here:

 ls /var/snap/canonical-livepatch-server/common/patches/

You are now ready to connect clients to your snap Livepatch instance!

Final words
And now you have an On-Premise Livepatch server configured to synchronise with hosted Livepatch!

For further reading please consult the how-to guides!