LXD 5.20 has been released

LXD 5.20 is now available in the latest/candidate snap channel and will be rolled out to stable users in the new year.
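If you want to try it ahead of the stable rollout, you can switch a machine over with:

snap refresh lxd --channel=latest/candidate

(Best done on a test machine, since moving back to an older channel isn’t always possible once the database schema has changed.)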

It appears the snap is not showing the correct license: Incorrect license information for the LXD snap - snap - snapcraft.io

SPDX is designed to support complicated cases like LXD, so the snap’s license should be able to reflect this complexity.
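For example, an SPDX license expression can combine licenses, so (if I understand the relicensing correctly) something like the following could describe the snap instead of a single license:

Apache-2.0 AND AGPL-3.0-only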

At the very least, having the snap show the most restrictive license is a good start, imo.

Nice to see that, FINALLY, the open source community remembers the AGPLv3.
MySQL could have been saved if this idea had spread earlier…

LXD 5.20 is now being progressively rolled out to the latest/stable channel.

Since the initial release notes above, it should be noted that because the upstream EDK2 project has dropped support for CSM mode, LXD’s security.csm mode now switches QEMU to boot via SeaBIOS directly rather than through EDK2.

Just out of interest, this update has broken my server setup. I have

  • Ubuntu 22.04 LTS
  • ZFS for my storage, which is where I put the volumes for data for the containers.
  • LXD to run the containers.

LXD has upgraded to 5.20, which removed shiftfs support. Ubuntu 22.04 LTS has ZFS 2.1.5, which doesn’t support idmapped mounts. Some containers (which share volumes) now fail to start with:

$ lxc start backups
Error: Failed to setup device mount "paperlessng": idmapping abilities are required but aren't supported on system
Try `lxc info --show-log backups` for more info

I suspect I can’t downgrade to 5.19, which has shiftfs support. Maybe I should have pinned it to 5.19, but then I had no idea this would happen, and LXD also didn’t check whether my system supported idmapped mounts before it upgraded, which is user hostile.

What can I do to solve this problem, please?

As an interim thought, it might be useful to add shiftfs support back in for all the LTS users for the next 8 years?

Hi Alex,

As an interim thought, it might be useful to add shiftfs support back in for all the LTS users for the next 8 years?

The LXD 5.0.x LTS series is associated with the Ubuntu 22.04 release, and indeed shiftfs support has not been removed from it, for the reason you suggest. It is available in the 5.0/stable channel and is supported until June 2027. Unfortunately you cannot switch back from latest/stable to 5.0/stable due to DB schema changes.
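For reference, on a fresh machine you’d select the LTS track at install time with:

snap install lxd --channel=5.0/stable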

If you’re getting refreshed onto LXD 5.20, it suggests you are following latest/stable, which gets a new release approximately once a month, with each version only supported until the next one is released. So we wouldn’t recommend pinning an unsupported monthly version on a server.
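You can check which channel your snap is currently following with:

snap list lxd

and looking at the Tracking column.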

See here for more info on snap channels:
https://documentation.ubuntu.com/lxd/en/latest/support/#how-to-get-support

Historically, if LXD was installed manually, it defaulted to the latest/stable track, whereas the version pre-seeded into Ubuntu LTS releases was set to the associated LTS track.

I’d be interested to know if you made a conscious decision to upgrade to the latest/stable track or whether it occurred implicitly via a manual installation at some point.

Going forward, for our next LTS, 5.21.x, we are going to make the LTS track the default for manual installs to avoid this scenario in the future (although non-LTS Ubuntu releases will still install the latest/stable track by default to show off the new features).

Let me try to reproduce the issue locally and I’ll come back to you with a workaround.

Hi Tom,

Thanks for your reply:

Historically, if LXD was installed manually, it defaulted to the latest/stable track, whereas the version pre-seeded into Ubuntu LTS releases was set to the associated LTS track.

This server has been around since 18.04, when LXD was a package rather than a snap. I guess it got auto-upgraded to the snap and ended up on latest/stable. I don’t think I ever changed it. I would definitely have pinned it, though, had I understood the implications.

Let me try to reproduce the issue locally and I’ll come back to you with a workaround.

Yes, please. I’m pondering my options for this, as bits of my server aren’t currently running!

Thanks.

If you need to temporarily pin to LXD 5.19 you can do this:

snap refresh lxd --channel=5.19/stable

I’ve tested and it seems to work for downgrading from LXD 5.20 (but won’t work from LXD 5.21, as there will be schema changes that prevent it).
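If you just want to stop the snap refreshing any further on its own while you investigate, recent snapd (2.58 or later, I believe) can also hold updates:

snap refresh --hold lxd

and snap refresh --unhold lxd resumes them later.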

I just tried installing LXD 5.19, enabling shiftfs, creating a container on a ZFS pool, and then refreshing up to latest/stable. On the next container restart the root filesystem was manually shifted (because shiftfs was unavailable), but it started fine.
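For reference, that test amounted to something like this (the pool and container names are just examples):

snap install lxd --channel=5.19/stable
snap set lxd shiftfs.enable=true
lxc storage create zpool zfs
lxc launch ubuntu:22.04 c1 --storage zpool
snap refresh lxd --channel=latest/stable
lxc restart c1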

So I suspect from your error you have a custom disk device attached to the instance with the shift setting enabled.

Please can you show output of lxc config show <instance> --expanded?

I think if you want to switch to the 5.0/stable channel you’ll need to export your instances and custom volumes to tarballs using lxc export and lxc storage volume export, then set up a fresh installation of LXD, and then import them back in.
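That would be along these lines (file names are just examples):

lxc export backups backups.tar.gz
lxc storage volume export default paperless-ng-data paperless-ng-data.tar.gz

and then, on the fresh installation:

lxc import backups.tar.gz
lxc storage volume import default paperless-ng-data.tar.gz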

Please can you show output of lxc config show <instance> --expanded?

This is the output; there are some volatile.idmap.* entries that may be relevant? The volume paperless-ng is shared with a paperless-ngx instance that did start, did some ID remapping at startup, and its relevant items are shown below this:

$ lxc config show backups --expanded
architecture: x86_64
config:
  image.architecture: amd64
  image.description: ubuntu 20.04 LTS amd64 (release) (20210510)
  image.label: release
  image.os: ubuntu
  image.release: focal
  image.serial: "20210510"
  image.type: squashfs
  image.version: "20.04"
  user.network-config: |
    version: 1
    config:
      - type: physical
        name: eth0
        subnets:
          - type: static
            ipv4: true
            address: 172.16.1.3
            netmask: 255.255.255.0
            gateway: 172.16.1.1
            control: auto
      - type: nameserver
        address: 172.16.1.1
  user.user-data: |
    package_update: true
    package_upgrade: true
    package_reboot_if_required: true
  volatile.base_image: 52c9bf12cbd3b06d591c5f56f8d9a185aca4a9a7da4d6e9f26f0ba44f68867b7
  volatile.eth0.hwaddr: 00:16:3e:87:81:3c
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: STOPPED
  volatile.uuid: 20fba133-fdaa-46f5-aa76-d5433ec407dd
  volatile.uuid.generation: 20fba133-fdaa-46f5-aa76-d5433ec407dd
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  paperlessng:
    path: /srv/paperless-ng
    pool: default
    source: paperless-ng-data
    type: disk
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- backups
stateful: false
description: ""

from paperless-ngx:

$ lxc config show paperless-ngx --expanded
architecture: x86_64                
config:
  ..
  ..
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  ..
  ..

Not sure if this helps?

Re: fixing it by exporting, re-installing, and importing: that’s going to be fairly tricky for my syncthing container, as it contains around 100 GiB of data in the volume.

Thanks again.

Please can you show output of lxc storage show default and lxc storage volume show default paperless-ng-data? Thanks

$ lxc storage show default
config:
  source: pool4t/lxd
  volatile.initial_source: pool4t/lxd
  zfs.pool_name: pool4t/lxd
description: ""
name: default
driver: zfs
used_by:
- /1.0/images/97b9236df59497b28eebeb91eee7a2bd815e428613e49e478837ffa401d39da0
- /1.0/instances/backups
- /1.0/instances/focal-builder
- /1.0/instances/paperless-ngx
- /1.0/instances/plexd
- /1.0/instances/postgresql
- /1.0/instances/smb-timemachine
- /1.0/instances/syncthing
- /1.0/instances/taskd
- /1.0/profiles/backups
- /1.0/profiles/default
- /1.0/profiles/paperless-ngx
- /1.0/profiles/plexd
- /1.0/profiles/postgresql
- /1.0/profiles/smb-timemachine
- /1.0/profiles/syncthing
- /1.0/profiles/taskd
- /1.0/storage-pools/default/volumes/custom/paperless-ng-data
- /1.0/storage-pools/default/volumes/custom/paperless-ng-syncthing
- /1.0/storage-pools/default/volumes/custom/postgresql-data
- /1.0/storage-pools/default/volumes/custom/syncthing-folders
- /1.0/storage-pools/default/volumes/custom/taskd
status: Created
locations:
- none

and

$ lxc storage volume show default paperless-ng-data
config:
  security.shifted: "true"
  volatile.idmap.last: '[]'
  volatile.idmap.next: '[]'
description: ""
name: paperless-ng-data
type: custom
used_by:
- /1.0/profiles/backups
- /1.0/profiles/paperless-ngx
location: none
content_type: filesystem
project: default
created_at: 0001-01-01T00:00:00Z

Note that the paperless-ng-data volume has been ‘touched’ by LXD 5.20 and thus the idmap entries may have disappeared due to the idmapping exercise. Compare with syncthing-folders, which does have entries:

$ lxc storage volume show default syncthing-folders
config:
  volatile.idmap.last: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
description: ""
name: syncthing-folders
type: custom
used_by:
- /1.0/profiles/syncthing
location: none
content_type: filesystem
project: default
created_at: 0001-01-01T00:00:00Z

Again, this may be me misreading/misunderstanding the situation!

The issue is with the security.shifted setting, as that is no longer supported in LXD 5.20 unless you’re using a filesystem with a kernel that supports idmapped mounts.

Try setting it to false with lxc storage volume set default paperless-ng-data security.shifted false and see if that allows you to start the instance. You will likely find that the share is effectively read-only in the container, as the dynamic mappings won’t apply.

I need the r/w shared volumes. I’ve pinned LXD to 5.20/stable to ensure it doesn’t go any further whilst I try to sort this out.

How would you react to the idea of upgrading the kernel to HWE (to get Linux 6.2) and then upgrading ZFS to 2.2? This would enable VFS idmaps, and then LXD 5.20 would be okay? My nervousness is around doing the ZFS upgrade and whether LXD 5.20 would be happy with ZFS 2.2.
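For reference, I believe installing the HWE kernel stack on 22.04 is just:

sudo apt install linux-generic-hwe-22.04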

LXD 5.20 supports ZFS 2.2 in the kernel fine. It bundles several versions of ZFS userland tooling and switches to the correct version as needed.

So, a bit of a surprising outcome. I upgraded the kernel to the HWE one, 6.5, and on a whim decided to start the containers just to see what happened. The ZFS version on the host is still (AFAICT) 2.1.5, and the containers have started, and everything seems to be working okay, read/write included. The only difference is that Linux 6.5 supports VFS idmaps.

Yeah, the LXD snap bundles several versions of the ZFS userland, including 2.2, so if the host kernel provides the right ZFS module version it will use it.

Yeah, the LXD snap bundles several versions of the ZFS userland, including 2.2, so if the host kernel provides the right ZFS module version it will use it.

I may be being a bit dense here, but although my server has Linux 6.x on it with VFS idmap support, it only has ZFS 2.1.5, which supposedly doesn’t support idmapping? Yet it is working (or at least it seems to be). I’m not sure what I’m missing!?

Your kernel has a newer version of ZFS than the userland tooling.
But because LXD bundles its own userland tooling in the snap, it can benefit from your newer kernel without you needing to update the tooling on your system.
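You can see both versions on the host with:

zfs version

which prints the userland tooling version and the kernel module (zfs-kmod) version separately.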