Problem Description:
I am trying Ubuntu Pro. I want to convince my company to invest in Pro so we can get live patching. I have a small cluster of existing servers attached to my five free licenses. When I run “sudo pro enable livepatch”, it says “Cannot install Livepatch on a container.” If I run “sudo pro status --all”, it shows the livepatch status as “n/a”.
Screenshots or Error Messages:
sudo pro status --all
SERVICE          ENTITLED  STATUS    DESCRIPTION
anbox-cloud      yes       disabled  Scalable Android in the cloud
cc-eal           yes       n/a       Common Criteria EAL2 Provisioning Packages
esm-apps         yes       enabled   Expanded Security Maintenance for Applications
esm-infra        yes       enabled   Expanded Security Maintenance for Infrastructure
fips             yes       n/a       NIST-certified FIPS crypto packages
fips-preview     yes       n/a       Preview of FIPS crypto packages undergoing certification with NIST
fips-updates     yes       n/a       FIPS compliant crypto packages with stable security updates
landscape        yes       enabled   Management and administration tool for Ubuntu
livepatch        yes       n/a       Canonical Livepatch service
realtime-kernel  yes       n/a       Ubuntu kernel with PREEMPT_RT patches integrated
├ generic        yes       n/a       Generic version of the RT kernel (default)
├ intel-iotg     yes       n/a       RT kernel optimized for Intel IOTG platform
└ raspi          yes       n/a       24.04 Real-time kernel optimised for Raspberry Pi
ros              yes       n/a       Security Updates for the Robot Operating System
ros-updates      yes       n/a       All Updates for the Robot Operating System
usg              yes       disabled  Security compliance and audit tools
sudo pro enable livepatch
One moment, checking your subscription first
Cannot install Livepatch on a container.
Could not enable Livepatch.
What I’ve Tried: I tried the same steps on a fresh install of Ubuntu 24.04.2 LTS, and there they work correctly.
Are you saying that you installed Ubuntu 24.04 server, using the same installer image and the same install method, onto two different bare-metal machines, and that one thinks it’s in a container?
We’re going to ask you a lot of probing questions about how the two systems or the two installers might differ. You can save a lot of time by describing those differences now.
I don’t keep track of which image was used to install a server. As far as I can tell, the servers being detected as containers were installed with ubuntu-24.04.1-live-server-amd64.iso. The cluster I want to test and demonstrate live patching on runs an OpenStack cluster. The other server is just a plain install with updates applied.
I did some troubleshooting. I found that pro uses systemd-detect-virt to determine whether it is running in a container. In turn, systemd-detect-virt checks for and finds the file “/run/.containerenv”, so it concludes that it is running in a container. If I delete that file, it comes back after a reboot. Several of the containers deployed by Kolla Ansible appear to bind-mount the host’s /run at /run inside the container, so I assume that is where the file comes from; the commands below show how I checked.
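Roughly, this is what the verification looks like. It assumes a Docker-based Kolla Ansible deployment (substitute podman for docker if that is your container engine), and the loop is only illustrative:

systemd-detect-virt --container   # names a container runtime on the affected hosts, prints "none" on the fresh install
ls -l /run/.containerenv          # the marker file that triggers the detection
# find running containers that bind-mount the host's /run
for c in $(sudo docker ps --format '{{.Names}}'); do
  sudo docker inspect -f '{{.Name}}: {{range .Mounts}}{{.Source}} -> {{.Destination}} {{end}}' "$c" | grep -F '/run ->'
done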
Is it enough for me to just remove /run/.containerenv to trick systemd-detect-virt, and thus pro, into enabling Livepatch? Or does Livepatch itself perform the check as well?
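Concretely, I mean something like this (just a sketch; as noted above, the file reappears after a reboot, so at best it is a temporary workaround):

sudo rm /run/.containerenv
systemd-detect-virt --container   # should now print "none"
sudo pro enable livepatch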
It would be good if there were a parameter somewhere to override the check, as I can imagine the detection will often be incorrect.
I read the whole discussion, and it seems to me that “/run/.containerenv” and “/.dockerenv” should only be checked to determine which container runtime is in use, not to determine whether you are running in a container at all. To determine whether a process is running in a container, the check should be whether the environment variable “container” exists and is set to a value.
If I am reading that correctly, then systemd’s virt.c has a bug, because it doesn’t require the environment variable “container” to exist; the file “/run/.containerenv” alone is enough.
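To illustrate the distinction on one of the affected bare-metal hosts (a sketch; reading /proc/1/environ requires root, and inside a real container I would expect PID 1 to carry something like container=lxc):

sudo cat /proc/1/environ | tr '\0' '\n' | grep '^container='   # no output on the bare-metal host
ls -l /run/.containerenv                                       # yet the marker file is present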