LXD 5.21.1 LTS has been released

Introduction

The LXD team would like to announce the release of LXD 5.21.1 LTS!

This is the first bugfix release for LXD 5.21, which is supported until June 2029.

Thank you to everyone who contributed to this release!

Bugfixes and improvements

Restricted metrics client certificate security regression fix

This release fixes a security regression introduced in LXD 5.21.0 that incorrectly converted existing restricted metrics client certificates to unrestricted metrics identities.

This allowed a client using a metrics certificate to access read-only metric information about all instances in a system, even when the client certificate had been configured to allow access only to metric information about instances in specific projects.

The fix re-classifies the converted unrestricted metrics identities as restricted identities, which means that in some cases genuinely unrestricted metrics identities will need to be manually set back to unrestricted.

This can be done using the lxc config trust edit <fingerprint> command.
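As a sketch of what restoring an unrestricted identity looks like (the fingerprint below is a placeholder; use the list command to find the real one):

```shell
# List trusted client certificates to find the fingerprint of the metrics certificate
lxc config trust list

# Open the trust entry in an editor and set "restricted: false"
# (the fingerprint shown here is hypothetical)
lxc config trust edit 1abd2ca98e30
```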

The 5.21.0 release was never pushed to any stable snap channels.

For those updating from a pre-5.21.0 release, the earlier database update has been amended to avoid incorrectly converting restricted metrics certificates to unrestricted ones.

New image server remote for non-Ubuntu images

There is now a new image server available (images.lxd.canonical.com) that provides non-Ubuntu images. This remote is now bundled with the lxc command by default under the remote name images.

To see a list of available images run:

lxc image list images:
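From there you can launch a container from one of the listed images. The alias below (alpine/3.19) is illustrative only; check the image list output for what is actually published on the remote:

```shell
# Launch a container named "c1" from a non-Ubuntu image on the new remote
# (the alias "alpine/3.19" is an illustrative example)
lxc launch images:alpine/3.19 c1
```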

List all storage volumes API and CLI support

A new API endpoint /1.0/storage_volumes and API extension storage_volumes_all have been added to support listing all storage volumes in a single API call. The lxc storage volume list command has been updated accordingly: the pool name is now an optional argument, and by default the command lists all volumes in the project across all storage pools.
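In practice this means the pool argument can now be dropped. A sketch, assuming a pool named default exists:

```shell
# List volumes from a single pool, as before
lxc storage volume list default

# List volumes across all storage pools in the current project (new behaviour)
lxc storage volume list
```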

Support for modifying permissions of existing files with lxc file push

A new API extension instances_files_modify_permissions has been added. When the user specifies the --uid, --gid or --mode flags with lxc file push and the push overwrites an existing file, the file’s permissions and ownership are now updated to those requested.
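A hedged sketch of the new behaviour (the instance name and file paths are placeholders):

```shell
# Overwrite an existing file inside instance "c1", updating its ownership and mode
# (instance name and paths are hypothetical examples)
lxc file push --uid 0 --gid 0 --mode 0644 ./myapp.conf c1/etc/myapp.conf
```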

Updated storage volume volatile.uuid database patch

The database patch in LXD 5.21.0 that was supposed to add a volatile.uuid setting to all storage volume database records did not reliably do so for remote storage volumes in cluster setups. A new patch has been added to address this issue.

Replaced UI X-Xss-Protection header with Content-Security-Policy

The X-Xss-Protection header is deprecated, so it has been replaced with a Content-Security-Policy header when LXD serves the LXD UI.

Updated LXC and LXCFS versions

The LXD snap now comes with the LXC and LXCFS 6.0.0 LTS releases.

Complete changelog

Here is a complete list of all changes in this release:

Full commit list
  • test/suites/basic: check version number format (X.Y.Z for LTSes, X.Y otherwise)
  • lxd/storage/s3/miniod: Specify a port for minio --console-address
  • lxc: Add context to socket access errors
  • doc/devices/nic: add missing spaces
  • doc/devices/unix-*: add configuration examples
  • doc/explanation: Add authorization explanation page.
  • doc: Add instructions for OIDC clients post ‘access_management’ extension.
  • doc: Update authentication page for authorization.
  • doc: Add links to authorization page.
  • doc: Add IAM related words to wordlist.
  • lxd/auth: Remove no-op methods from authorizer interface.
  • lxd/instance/drivers: Remove authorizer calls to no-op methods.
  • lxd/storage: Remove authorizer calls to no-op methods.
  • lxd: Remove authorizer calls to no-op methods.
  • doc/devices: add CLI examples for more device types
  • doc: except commands from the spelling check
  • lxc: Correctly parse remote when listing permissions.
  • doc/devices/proxy: add CLI examples for proxy device
  • doc/devices/gpu: add configuration examples for gpu devices
  • lxd/patches: Add patchStorageSetVolumeUUIDV2
  • lxd/patches: Deactivate patchStorageSetVolumeUUID
  • lxd/storage/backend_lxd: Ensure new images have a volatile.UUID
  • lxd: Pre-check permissions when performing bulk state update.
  • scripts: Add bash completions for lxc auth
  • lxd: Improves efficiency of operation cancel with permission checker.
  • lxd: Update X-Xss-Protection (deprecated) for Content-Security-Policy
  • lxd: add explanations on the security headers provided for the UI responses.
  • lxd/storage/drivers/btrfs: Add createVolumeFromCopy for copy and refresh
  • lxd/storage/drivers/btrfs: Use createVolumeFromCopy when copying a volume
  • lxd/storage/drivers/btrfs: Use createVolumeFromCopy when refreshing a volume
  • shared/api: Implement xerrors.Unwrap for StatusError.
  • lxd/auth: Wrap errors in api.StatusErrorf.
  • lxd/response: Wrap errors in api.StatusErrorf.
  • lxd: Wrap errors in api.StatusErrorf.
  • lxc: Wrap errors in api.StatusErrorf.
  • lxd/auth: Return appropriate HTTP error codes when getting request details.
  • lxd/request: Add a CtxTrusted context key.
  • lxd/auth: Get authentication status from request.
  • lxd/auth: Handle untrusted requests in authorizer.
  • lxd: Add trusted value to context.
  • lxd: Remove checkTrustedClient method.
  • lxd: Update allowAuthenticated access handler.
  • lxd: Remove call to checkTrustedClient.
  • lxd: Handle certificate creation from untrusted users.
  • lxd: Remove Authenticate call from operation wait handler.
  • lxd: Remove isTrustedClient call from image export handler.
  • lxd: Remove isTrustedClient call from image alias get handler.
  • lxd: Remove isTrustedClient call from image get handler.
  • lxd: Remove isTrustedClient call from images get handler.
  • lxd: Remove isTrustedClient call from images post handler.
  • lxd/project: Update cluster target restriction tests.
  • build(deps): bump github.com/mdlayher/ndp from 1.0.1 to 1.1.0
  • lxc/file: Get owner mode only if --gid or --uid is unset
  • lxd/device/nic: fix default IP for routed NIC (ipv4.host_address)
  • lxdmetadata: update metadata
  • github: Add stable-5.21 branch to dependabot config
  • lxd: Add security response headers to documentation
  • lxd: enable server side gzip compression on all API routes
  • scripts/bash/lxd-client: use column to select the image alias
  • scripts/bash/lxd-client: fix lxc storage <TAB>
  • scripts/bash/lxd-client: add missing keys to lxc storage <TAB>
  • scripts/bash/lxd-client: show pool names on lxc storage info <TAB>
  • scripts/bash/lxd-client: Use long option names
  • lxd/instance/drivers/common: Clone the device config
  • scripts/bash/lxd-client: add missing args to lxc network completion
  • lxc: handle GetImage logic inside dereferenceAlias
  • i18n: update .pot files
  • doc/reference: reorder pages and update the landing page
  • doc/explanation: reorder pages and update the landing page
  • lxd/storage/drivers/btrfs: Clarify fallback in case UUID discovery times out
  • lxd/storage/drivers/btrfs: Move config modifications into FillConfig
  • doc/howto: reorder pages and update the landing pages
  • doc: update the start page and add links to sections
  • doc: fix exceptions for Markdown linter
  • lxd/patches: Add selectedPatchClusterMember for patch coordination
  • lxd/patches: Add patchStorageRenameCustomISOBlockVolumesV2
  • lxd/patches: Supersede patchStorageRenameCustomISOBlockVolumes
  • lxd/patches: Add patchStorageUnsetInvalidBlockSettingsV2
  • lxd/patches: Supersede patchStorageUnsetInvalidBlockSettings
  • instance/drivers/driver_lxc: do not set “soft” limit when hard limit is set
  • incusd/instance/qemu: Fix handling of > 64 limits.cpu
  • doc: workaround for undefined references
  • lxd/api: Revert gzip compression on API
  • build(deps): bump github.com/openfga/openfga from 1.5.0 to 1.5.1
  • lxd/storage/drivers/generic: Return cleanup hooks from genericVFSCopyVolume
  • lxd/storage/drivers/ceph: Use the revert pattern for local refreshes
  • lxd/storage/drivers/dir: Use cleanup hooks from genericVFSCopyVolume
  • lxd/storage/drivers/lvm: Use cleanup hooks from genericVFSCopyVolume
  • lxd/storage/drivers/powerflex: Use cleanup hooks from genericVFSCopyVolume
  • lxd/storage/drivers/zfs: Use cleanup hooks from genericVFSCopyVolume
  • lxd/storage/drivers/generic: Return cleanup hooks from genericVFSCreateVolumeFromMigration
  • lxd/storage/drivers/ceph: Use the revert pattern for migrations
  • lxd/storage/drivers/btrfs: Use cleanup hooks from genericVFSCreateVolumeFromMigration
  • lxd/storage/drivers/dir: Use cleanup hooks from genericVFSCreateVolumeFromMigration
  • lxd/storage/drivers/lvm: Use cleanup hooks from genericVFSCreateVolumeFromMigration
  • lxd/storage/drivers/powerflex: Use cleanup hooks from genericVFSCreateVolumeFromMigration
  • lxd/storage/drivers/zfs: Use cleanup hooks from genericVFSCreateVolumeFromMigration
  • lxd/storage/backend_lxd.go: remove unused parameters
  • lxd/api_internal.go: remove impossible conditions
  • lxd: Update instance types URL
  • lxd/shared/util: create function for applying device overrides
  • lxc/utils: create function for getting profile devices
  • lxd/api_internal: eliminate duplicated code
  • lxc/init: eliminate duplicated code
  • lxc/copy: apply profile expansion on device override
  • test: add test for device overriding on copy
  • i18n: update translations
  • grafana: connect nulls and use instant type where appropriate
  • grafana: add legend to stats
  • shared: Move ParseIPRange to shared/
  • lxd/network: Use shared.ParseIPRanges
  • doc: remove nesting for the tutorial
  • doc/server settings: change display of /etc/sysctl.conf settings
  • api: Add storage_volumes_all extension
  • shared/api: Add Pool field to api.StorageVolume
  • lxd: Remove uncecessary parameter from URL function
  • shared/api: Update call to URL function
  • lxd: Remove uncecessary parameter from storagePoolVolumeUsedByGet
  • lxd: Update storagePoolVolumeUsedByGet usage
  • lxd/db: Update get volume query
  • lxd: Add endpoints to list all volumes
  • client: Add functions to get all volumes
  • lxc/storage_volume.go: Update lxc storage volume list
  • test: Add tests for listing volumes from all pools
  • i18n: Update translations
  • doc: Run make update-api
  • doc/config options: update the config option index
  • doc/config options: link to config options where possible
  • instances: fix typo in config option
  • doc/api extensions: link to config options
  • shared: Ignore invalid uid/gid values and truncate mode to perm bits
  • lxd: Update uid/gid/mode API docs
  • doc: Run make update-api
  • gitignore: Ignore all pycache under doc/
  • shared/ioprogress: Support simple readers
  • lxd/storage/drivers/btrfs: Report migration progress from receiver
  • lxd/storage/drivers/btrfs: Use daemons shutdown context
  • test/lint/client-imports: rename godeps.list file
  • test/lint/client-imports: export LC_ALL for predictable sorting
  • test/lint: add lxd-agent-imports
  • gitignore: Ignore all .bak
  • shared/api: Fix typo
  • lxd/api_metrics: Check individual project permissions if set
  • lxd/metrics: Use label aware permission check when filtering samples
  • lxd/api_metrics: Filter metrics by looping only once
  • lxd/auth/driver_tls: Allow viewing metrics for unrestricted metrics certs
  • lxd/db/cluster: Add identityTypeCertificateMetricsRestricted and identityTypeCertificateMetricsUnrestricted
  • lxd/db/cluster/identities: Handle unrestricted metrics certificates
  • shared/api/auth: Replace IdentityTypeCertificateMetrics with a restricted and unrestricted type
  • lxd/daemon: Use IdentityTypeCertificateMetricsRestricted and IdentityTypeCertificateMetricsUnrestricted
  • lxd/db/cluster/certificates: Use IdentityTypeCertificateMetricsRestricted and IdentityTypeCertificateMetricsUnrestricted
  • lxd/identity: Use IdentityTypeCertificateMetricsRestricted and IdentityTypeCertificateMetricsUnrestricted
  • lxd/auth/openfga: Extend can_view_metrics entitlement to projects
  • lxd/db/cluster/update: Fix updateFromV69
  • test/suites/auth: Update test to account for can_view_metrics
  • test/suites/metrics: Add restricted and unrestricted certificate tests
  • shared: Return new structure from ParseLXDFileHeaders
  • lxd: Refactor calls to shared.ParseLXDFileHeaders
  • client: Refactor calls to shared.ParseLXDFileHeaders
  • api: Add instances_files_modify_permissions extension
  • shared: Parse X-LXD-modify-perm header
  • lxd: Allow setting permissions for existing files via API
  • client: Send X-LXD-modify-perm on file POST
  • lxc/file: Set ModifyExisting when --mode, --uid, or --gid are passed
  • doc: Run make update-api
  • gomod: Update dependencies
  • incusd/instance/qemu: Set auto-converge on all migrations
  • incusd/device/disk: Remove bad comment
  • lxc/config/default: Add images remote for images.lxd.canonical.com
  • Revert “driver_lxc: Include running state in metrics”
  • lxd/instance/drivers/lxc: default some metrics to 0 instead of -1
  • lxd/metrics: Replace lxd_containers and lxd_vms metrics by lxd_instances
  • lxd/api_metrics: Make lxd_instances and internal metrics visible
  • tests: Fix metrics tests

Downloads

The release tarballs can be found on our download page.

Binary builds are also available for:

  • Linux: snap install lxd
  • macOS: brew install lxc
  • Windows: choco install lxc

Notes on upgrading when using ZFS on Ubuntu 18.04

If you are using LXD on Ubuntu 18.04 with ZFS and LXD does not start after upgrading, you may find this error in the /var/snap/lxd/common/lxd/logs/lxd.log log file:

Error: Required tool 'zpool' is missing

This is due to LXD 5.21.x requiring ZFS 2.1 or later in the kernel.

Because of database schema changes in LXD 5.21.x, if you revert to a previously installed version, LXD will still fail to start.

To resolve this, we have started including ZFS 0.8 support in the 5.21/stable channel.

To get ZFS 0.8 support on an Ubuntu 18.04 system, please upgrade the kernel to the Ubuntu Hardware Enablement (HWE) kernel:

sudo apt-get install --install-recommends linux-generic-hwe-18.04

Please note that the latest/stable channel still requires ZFS 2.1 or higher, and so is no longer compatible with Ubuntu 18.04 when using ZFS, even with the HWE kernel.

At this time it is possible to switch from the latest/stable to the 5.21/stable channel, as the DB schemas are the same. So if you are seeking to switch to an LTS series, you can currently do so using:

sudo snap refresh lxd --channel=5.21/stable

Going forward, the default track for new LXD installs is 5.21, which means that new users won’t inadvertently install from the rolling latest/stable channel, where the minimum system requirements do change over time.

Documentation: Choose your release


LXD 5.21.1 LTS is now available in the new 5.21 LTS track in the snap channel 5.21/candidate (as well as latest/candidate), and will be rolled out to the 5.21/stable and latest/stable channels soon.

Please refresh to the 5.21/* channels if you want to switch to the new LTS series, as the latest/* channels will continue on to future feature releases (6.x) in the coming weeks.


This is progressively rolling out to 5.21/stable channel now.


Hey! We have put some of our hosts on the 5.21/stable track because it’s going to be the new LTS, woohoo!
This morning we noticed that we couldn’t use one of our LXD clouds because the LXD unix socket was not accessible.

While troubleshooting I noticed that the host had updated itself to 5.21.1-feaabe1 from 5.20-f3dd836, which is totally fine.

But I can’t start the service again without getting these errors:

WARNING[2024-04-09T11:26:47Z]  - Couldn't find the CGroup blkio.weight, disk priority will be ignored 
WARNING[2024-04-09T11:26:47Z]  - Couldn't find the CGroup memory swap accounting, swap limits will be ignored 
ERROR  [2024-04-09T11:26:47Z] Unable to run feature checks during QEMU initialization: Unable to locate the file for firmware "OVMF_CODE.4MB.fd" 
WARNING[2024-04-09T11:26:47Z] Instance type not operational                 driver=qemu err="QEMU failed to run feature checks" type=virtual-machine
WARNING[2024-04-09T11:26:49Z] Failed to initialize fanotify, falling back on inotify  err="Failed to initialize fanotify: invalid argument"

Journal log when lxd stopped working:

2024-04-09T06:42:50Z systemd[1]: Stopping Service for snap application lxd.daemon...
2024-04-09T06:42:50Z lxd.daemon[1839705]: => Stop reason is: snap refresh
2024-04-09T06:42:50Z lxd.daemon[1839705]: => Stopping LXD
2024-04-09T06:42:50Z lxd.daemon[531348]: => LXD exited cleanly
2024-04-09T06:42:51Z lxd.daemon[1839705]: ==> Stopped LXD
2024-04-09T06:42:51Z systemd[1]: snap.lxd.daemon.service: Succeeded.
2024-04-09T06:42:51Z systemd[1]: Stopped Service for snap application lxd.daemon.
2024-04-09T06:43:27Z systemd[1]: Starting Service for snap application lxd.activate...
2024-04-09T06:43:27Z lxd.activate[1856399]: => Starting LXD activation
2024-04-09T06:43:27Z lxd.activate[1856399]: ==> Loading snap configuration
2024-04-09T06:43:27Z lxd.activate[1856399]: ==> Checking for socket activation support
2024-04-09T06:43:28Z lxd.activate[1856399]: ==> Setting LXD socket ownership
2024-04-09T06:43:28Z lxd.activate[1856399]: ==> Setting LXD user socket ownership
2024-04-09T06:43:28Z lxd.activate[1856399]: ==> Checking if LXD needs to be activated
2024-04-09T06:43:28Z systemd[1]: Started Service for snap application lxd.daemon.
2024-04-09T06:43:29Z lxd.daemon[1856662]: => Preparing the system (28057)
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Loading snap configuration
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Setting up mntns symlink (mnt:[4026532865])
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Setting up kmod wrapper
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Preparing /boot
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Preparing a clean copy of /run
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Preparing /run/bin
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Preparing a clean copy of /etc
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Preparing a clean copy of /usr/share/misc
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Setting up ceph configuration
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Setting up LVM configuration
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Setting up OVN configuration
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Rotating logs
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Unsupported ZFS version (0.8)
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Escaping the systemd cgroups
2024-04-09T06:43:29Z lxd.daemon[1856662]: ====> Detected cgroup V1
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Escaping the systemd process resource limits
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Exposing LXD documentation
2024-04-09T06:43:29Z lxd.daemon[1856662]: => Re-using existing LXCFS
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> snap base has changed, restart system to upgrade LXCFS
2024-04-09T06:43:29Z lxd.daemon[1856662]: ==> Cleaning up existing LXCFS namespace
2024-04-09T06:43:29Z lxd.daemon[1856662]: => Starting LXD
2024-04-09T06:43:29Z lxd.daemon[1857232]: time="2024-04-09T06:43:29Z" level=warning msg=" - Couldn't find the CGroup blkio.weight, disk priority will be ignored"
2024-04-09T06:43:29Z lxd.daemon[1857232]: time="2024-04-09T06:43:29Z" level=warning msg=" - Couldn't find the CGroup memory swap accounting, swap limits will be ignored"
2024-04-09T06:43:31Z lxd.daemon[1857232]: time="2024-04-09T06:43:31Z" level=error msg="Failed loading storage pool" err="Required tool 'zpool' is missing" pool=default
2024-04-09T06:43:31Z lxd.daemon[1857232]: time="2024-04-09T06:43:31Z" level=error msg="Failed loading storage pool" err="Required tool 'zpool' is missing" pool=juju-zfs
2024-04-09T06:43:31Z lxd.daemon[1857232]: time="2024-04-09T06:43:31Z" level=error msg="Failed to start the daemon" err="Failed applying patch \"storage_move_custom_iso_block_volumes_v2\": Failed loading pool \"juju-zfs\": Required tool 'zpool' is missing"
2024-04-09T06:43:31Z lxd.daemon[1857232]: Error: Failed applying patch "storage_move_custom_iso_block_volumes_v2": Failed loading pool "juju-zfs": Required tool 'zpool' is missing
2024-04-09T06:43:31Z lxd.activate[1856484]: Error: Get "http://unix.socket/1.0": EOF
2024-04-09T06:43:31Z lxd.activate[1856399]: ====> Activation check failed, forcing activation
2024-04-09T06:43:31Z systemd[1]: snap.lxd.activate.service: Succeeded.
2024-04-09T06:43:31Z systemd[1]: Finished Service for snap application lxd.activate.
2024-04-09T06:43:32Z lxd.daemon[1856662]: Killed
2024-04-09T06:43:32Z lxd.daemon[1856662]: => LXD failed to start
2024-04-09T06:43:32Z systemd[1]: snap.lxd.daemon.service: Main process exited, code=exited, status=1/FAILURE
2024-04-09T06:43:32Z systemd[1]: snap.lxd.daemon.service: Failed with result 'exit-code'.
2024-04-09T06:43:32Z systemd[1]: snap.lxd.daemon.service: Scheduled restart job, restart counter is at 1.
2024-04-09T06:43:32Z systemd[1]: Stopped Service for snap application lxd.daemon.

ZFS is installed and working.

One strange thing is that the latest release for 5.21/stable is 5.20-f3dd836.
[screenshot of snap channel listing]

Current installed version (snap list)
[screenshot of snap list output]

Any idea what happened?

This is due to snap’s progressive rollout. We originally put LXD 5.20 into the 5.21/stable channel before it was released so that Ubuntu could use that channel for its pre-release Noble testing.

Then, when LXD 5.21.0 was released, it was placed into the 5.21/candidate channel, but due to the metrics certificates issue it never made it to 5.21/stable. Now that LXD 5.21.1 is released, it has been promoted to 5.21/stable, but because of snap’s progressive rollouts not all clients will see it yet.

If you use the following snap refresh invocation you should get it:

sudo snap refresh lxd --cohort="+" --channel=5.21/stable

See https://documentation.ubuntu.com/lxd/en/latest/howto/cluster_manage/#upgrade-cluster-members

With regard to this message, LXD is trying to apply a patch to one of your storage pools, but it is marked offline due to missing ZFS tooling:

2024-04-09T06:43:31Z lxd.daemon[1857232]: time="2024-04-09T06:43:31Z" level=error msg="Failed to start the daemon" err="Failed applying patch \"storage_move_custom_iso_block_volumes_v2\": Failed loading pool \"juju-zfs\": Required tool 'zpool' is missing"

This is because the LXD snap package has not detected a compatible ZFS kernel module on your system, and so it is not providing the ZFS tooling to LXD.

ZFS 2.1 or higher is required, see https://documentation.ubuntu.com/lxd/en/latest/requirements/#zfs


Thanks @tomp for the fast answer!

Just wanted to let you know that we were aware that hosts tracking 5.21/stable were still using 5.20.

But we weren’t prepared for the ZFS requirements. We’re still using Focal on some hosts and this was one of them. I have now put every Focal host on hold until we have upgraded them to Jammy.

Thanks again!


It may be of interest that, because the LXD snap provides its own ZFS tooling, only the kernel’s ZFS module needs to be ZFS 2.1 or higher, not the OS userspace.

So it may be that the Focal HWE kernel supports ZFS 2.1, as it should be similar to Jammy’s kernel.


This is progressively rolling out to latest/stable channel now.


Dear,

Something went wrong this morning (2024-04-10 11h00):

Error: Required tool 'zpool' is missing

uname -a:

Linux ta3352215 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

snap list:

Name    Version         Rev    Tracking          Publisher   Notes
core22  20240111        1122   latest/stable     canonical#  base
lxd     5.21.1-43998c6  28155  latest/candidate  canonical#  -
snapd   2.61.2          21184  latest/stable     canonical#  snapd

modinfo zfs:

version:        0.8.3-1ubuntu12.17
srcversion:     3CA966C3C34DC2BFBA99C85
vermagic:       5.4.0-176-generic SMP mod_unload modversions 

So how do I resolve this problem?

Regards,

Error: Required tool 'zpool' is missing

Please see LXD 5.21.1 LTS has been released - #6 by tomp and LXD 5.21.1 LTS has been released - #8 by tomp

Hello,
I am posting in this thread because, after the upgrade to 5.21.1, I am experiencing a weird bug.

I have an unprivileged container which can’t start if 2 specific devices are added to the config while the container is stopped, but if I add those devices while the container is running, they work as expected.

The 2 devices are 2 folders under /mnt on the LXD host, which in turn are mount points for 2 CIFS folders.
This config has been working since about 2019, and I’ve been tracking lxd latest/stable in snap with this container since the beginning, first on Ubuntu 18.04 LTS, then 20.04 LTS, and now on 22.04 LTS.

lxd-host $ ls -l /mnt/
total 0
drwxrwxr-x 2 1065534 1065534 0 Feb  3 00:47 dati_for_internal
drwxrwxr-x 2 1065534 1065534 0 Feb 12  2023 monitoring_for_internal

lxd-host $ mount | grep internal
systemd-1 on /mnt/dati_for_internal type autofs (rw,relatime,fd=54,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=581)
systemd-1 on /mnt/monitoring_for_internal type autofs (rw,relatime,fd=55,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=584)
//10.10.70.20/Dati on /mnt/dati_for_internal type cifs (rw,nosuid,nodev,noexec,relatime,vers=3.0,cache=strict,username=nucleo,uid=1065534,noforceuid,gid=1065534,noforcegid,addr=10.10.70.20,file_mode=0775,dir_mode=0775,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=1,x-systemd.automount)
//10.10.70.20/Monitoring on /mnt/monitoring_for_internal type cifs (rw,nosuid,nodev,noexec,relatime,vers=3.0,cache=strict,username=monitoring,uid=1065534,noforceuid,gid=1065534,noforcegid,addr=10.10.70.20,file_mode=0775,dir_mode=0775,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=1,x-systemd.automount)

This is the container config snip

lxd-host $ lxc config edit internal-lxd
<...>
devices:
  monitoring:
    path: /mnt/monitoring
    source: /mnt/monitoring_for_internal
    type: disk
  dati:
    path: /mnt/dati
    source: /mnt/dati_for_internal
    type: disk
<...>

When the config is present before starting the container I get

lxd-host $ sudo lxc info --show-log internal-lxd
Name: internal-lxd
Status: STOPPED
Type: container
Architecture: x86_64
Created: 2019/05/11 15:59 CEST
Last Used: 2024/04/10 19:22 CEST

Log:

lxc internal-lxd 20240410172253.865 WARN     idmap_utils - ../src/src/lxc/idmap_utils.c:lxc_map_ids:165 - newuidmap binary is missing
lxc internal-lxd 20240410172253.865 WARN     idmap_utils - ../src/src/lxc/idmap_utils.c:lxc_map_ids:171 - newgidmap binary is missing
lxc internal-lxd 20240410172253.866 WARN     idmap_utils - ../src/src/lxc/idmap_utils.c:lxc_map_ids:165 - newuidmap binary is missing
lxc internal-lxd 20240410172253.866 WARN     idmap_utils - ../src/src/lxc/idmap_utils.c:lxc_map_ids:171 - newgidmap binary is missing
lxc internal-lxd 20240410172253.930 ERROR    conf - ../src/src/lxc/conf.c:mount_entry:2262 - Operation not permitted - Failed to mount "/var/snap/lxd/common/lxd/devices/internal-lxd/disk.dati.mnt-dati" on "/var/snap/lxd/common/lxc//mnt/dati"
lxc internal-lxd 20240410172253.930 ERROR    conf - ../src/src/lxc/conf.c:lxc_setup:3915 - Failed to setup mount entries
lxc internal-lxd 20240410172253.930 ERROR    start - ../src/src/lxc/start.c:do_start:1273 - Failed to setup container "internal-lxd"
lxc internal-lxd 20240410172253.932 ERROR    sync - ../src/src/lxc/sync.c:sync_wait:34 - An error occurred in another process (expected sequence number 3)
lxc internal-lxd 20240410172253.946 WARN     network - ../src/src/lxc/network.c:lxc_delete_network_priv:3671 - Failed to rename interface with index 0 from "eth0" to its initial name "veth5244401e"
lxc internal-lxd 20240410172253.946 ERROR    lxccontainer - ../src/src/lxc/lxccontainer.c:wait_on_daemonized_start:837 - Received container state "ABORTING" instead of "RUNNING"
lxc internal-lxd 20240410172253.946 ERROR    start - ../src/src/lxc/start.c:__lxc_start:2114 - Failed to spawn container "internal-lxd"
lxc internal-lxd 20240410172253.946 WARN     start - ../src/src/lxc/start.c:lxc_abort:1037 - No such process - Failed to send SIGKILL via pidfd 17 for process 9795
lxc 20240410172254.415 ERROR    af_unix - ../src/src/lxc/af_unix.c:lxc_abstract_unix_recv_fds_iov:218 - Connection reset by peer - Failed to receive response
lxc 20240410172254.416 ERROR    commands - ../src/src/lxc/commands.c:lxc_cmd_rsp_recv_fds:128 - Failed to receive file descriptors for command "get_init_pid"

But if I remove the config, start the container, wait for it to boot, and then put the config back, the 2 folders are correctly mounted in the container and all works as expected.

Can someone shed some light? At the next system update/reboot this container is going to fail and I’ll be without DNS once again :sweat_smile:

Thanks in advance

I’ve asked @amikhalitsyn to investigate this.


Let me know how I can facilitate troubleshooting and fixing, and I’ll follow up.

The snap upgrade with the ZFS dependency has hit me. I’m on 20.04 LTS, and I can’t see a way to get to the required ZFS version. Possibly something I’m missing? A rollback of LXD then produced a DB version error.

Good call! I just installed the HWE kernel on Focal, rebooted, and everything came back up normal.

Thanks!


Hey there! There’s no need to roll back. All I had to do was install the HWE kernel per @tomp’s suggestion and I was back up and running. Give it a try.

Good luck!


Any good RTFM resources you could point me to? Ubuntu kernel lifecycle and enablement stack | Ubuntu?

You already got it, but here’s the specific page for Focal:

https://ubuntu.com/kernel/lifecycle#installation-20-04

I did exactly as the guide says, except without the --install-recommends flag. Rebooted and voila!


I’m back up and running. Thank you all. My fault for not paying enough attention to my snap config for prod and pre-prod/test.
