LXD recover fails with Invalid option for volume option "block.filesystem"

Hello,

I had LXD set up on a personal home server. I was trying to change the config from local-only to being accessible on my network. I’m not sure if it was already broken or I managed to stuff something up in that process, but I reached the point where lxc list just hung for ages. I tried a few things to get it going and ended up removing and reinstalling the snap. Now I don’t have any containers/VMs.
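
(For reference, exposing LXD on the network is normally done through the core.https_address server setting, so my change would have been something along these lines; I don’t remember my exact commands:)

lxc config set core.https_address :8443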

I stored the containers/VMs on ZFS, so I tried to follow the lxd recover process:

sudo lxd recover
This LXD server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: lxd
Name of the storage backend (cephfs, cephobject, dir, lvm, powerflex, zfs, btrfs, ceph): zfs 
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): dozer/lxd
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: 
The recovery process will be scanning the following storage pools:
 - NEW: "lxd" (backend="zfs", source="dozer/lxd")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: 
Scanning for unknown volumes...
The following unknown storage pools have been found:
 - Storage pool "lxd" of type "zfs"
The following unknown volumes have been found:
 - Container "photoprism-docker" on pool "lxd" in project "default" (includes 1 snapshots)
 - Container "pi-hole" on pool "lxd" in project "default" (includes 0 snapshots)
 - Container "wireguard-server" on pool "lxd" in project "default" (includes 2 snapshots)
 - Virtual-Machine "pihole" on pool "lxd" in project "default" (includes 0 snapshots)
 - Virtual-Machine "wireguard-server-vm" on pool "lxd" in project "default" (includes 0 snapshots)
Would you like those to be recovered? (yes/no) [default=no]: yes
Starting recovery...
Error: Failed import request: Failed importing instance "wireguard-server-vm" in project "default": Invalid option for volume "wireguard-server-vm" option "block.filesystem"

Any ideas how I can resolve this invalid option?

Hi @aaron-whitehouse

So the block.filesystem option was recently removed from being valid for VMs, and a patch applied during upgrade removes that key from affected instances. This is because the option was only ever relevant for the VM’s config drive, which is an internally managed volume that should always be ext4.
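(As an aside, on a healthy install you can check the options stored for a VM’s volume with something like the following; the pool and instance names here are just the ones from this thread:)

lxc storage volume show lxd virtual-machine/wireguard-server-vm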

I’m wondering whether the VM in question was never started after that upgrade, so its on-disk config was not re-saved before the database was removed along with the snap. That would explain why the old key is still being picked up during recovery.

For future reference, if LXD won’t start then getting the contents of /var/snap/lxd/common/lxd/logs/lxd.log for inspection is often useful.
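
For example (the path is the snap default; snap logs is an alternative route to the same information):

sudo tail -n 100 /var/snap/lxd/common/lxd/logs/lxd.log
sudo snap logs -n 100 lxd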

For now though, do you know what version of LXD you were on previously? And could you please provide the output of snap info lxd?

Thanks
Tom

Thanks Tom,

Good tip, thanks.

$ snap info lxd
name:      lxd
summary:   LXD - container and VM manager
publisher: Canonical✓
store-url: https://snapcraft.io/lxd
contact:   https://github.com/canonical/lxd/issues
license:   AGPL-3.0
description: |
  LXD is a system container and virtual machine manager.
  
  It offers a simple CLI and REST API to manage local or remote instances,
  uses an image based workflow and support for a variety of advanced features.
  
  Images are available for all Ubuntu releases and architectures as well
  as for a wide number of other Linux distributions. Existing
  integrations with many deployment and operation tools, makes it work
  just like a public cloud, except everything is under your control.
  
  LXD containers are lightweight, secure by default and a great
  alternative to virtual machines when running Linux on Linux.
  
  LXD virtual machines are modern and secure, using UEFI and secure-boot
  by default and a great choice when a different kernel or operating
  system is needed.
  
  With clustering, up to 50 LXD servers can be easily joined and managed
  together with the same tools and APIs and without needing any external
  dependencies.
  
  
  Supported configuration options for the snap (snap set lxd [<key>=<value>...]):
  
    - ceph.builtin: Use snap-specific Ceph configuration [default=false]
    - ceph.external: Use the system's ceph tools (ignores ceph.builtin) [default=false]
    - criu.enable: Enable experimental live-migration support [default=false]
    - daemon.debug: Increase logging to debug level [default=false]
    - daemon.group: Set group of users that have full control over LXD [default=lxd]
    - daemon.user.group: Set group of users that have restricted LXD access [default=lxd]
    - daemon.preseed: Pass a YAML configuration to `lxd init` on initial start
    - daemon.syslog: Send LXD log events to syslog [default=false]
    - daemon.verbose: Increase logging to verbose level [default=false]
    - lvm.external: Use the system's LVM tools [default=false]
    - lxcfs.pidfd: Start per-container process tracking [default=false]
    - lxcfs.loadavg: Start tracking per-container load average [default=false]
    - lxcfs.cfs: Consider CPU shares for CPU usage [default=false]
    - lxcfs.debug: Increase logging to debug level [default=false]
    - openvswitch.builtin: Run a snap-specific OVS daemon [default=false]
    - openvswitch.external: Use the system's OVS tools (ignores openvswitch.builtin) [default=false]
    - ovn.builtin: Use snap-specific OVN configuration [default=false]
    - ui.enable: Enable the web interface [default=false]
  
  For system-wide configuration of the CLI, place your configuration in
  /var/snap/lxd/common/global-conf/ (config.yml and servercerts)
commands:
  - lxd.buginfo
  - lxd.check-kernel
  - lxd.lxc
  - lxd
services:
  lxd.activate:    oneshot, enabled, inactive
  lxd.daemon:      simple, enabled, active
  lxd.user-daemon: simple, enabled, inactive
snap-id:      J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking:     5.21/stable
refresh-date: yesterday at 22:16 BST
channels:
  5.21/stable:      5.21.1-d46c406 2024-04-29 (28460) 108MB -
  5.21/candidate:   5.21.1-d46c406 2024-04-26 (28460) 108MB -
  5.21/beta:        ↑                                       
  5.21/edge:        git-f1fea03    2024-04-29 (28503) 108MB -
  latest/stable:    5.21.1-2d13beb 2024-04-30 (28463) 107MB -
  latest/candidate: 5.21.1-2d13beb 2024-04-26 (28463) 107MB -
  latest/beta:      ↑                                       
  latest/edge:      git-89828eb    2024-04-30 (28526) 107MB -
  5.20/stable:      5.20-f3dd836   2024-02-09 (27049) 155MB -
  5.20/candidate:   ↑                                       
  5.20/beta:        ↑                                       
  5.20/edge:        ↑                                       
  5.19/stable:      5.19-8635f82   2024-01-29 (26200) 159MB -
  5.19/candidate:   ↑                                       
  5.19/beta:        ↑                                       
  5.19/edge:        ↑                                       
  5.0/stable:       5.0.3-d921d2e  2024-04-23 (28373)  91MB -
  5.0/candidate:    5.0.3-5e9b586  2024-04-26 (28461)  91MB -
  5.0/beta:         ↑                                       
  5.0/edge:         git-8cd0db9    2024-04-24 (28440) 117MB -
  4.0/stable:       4.0.9-a29c6f1  2022-12-04 (24061)  96MB -
  4.0/candidate:    4.0.9-a29c6f1  2022-12-02 (24061)  96MB -
  4.0/beta:         ↑                                       
  4.0/edge:         git-407205d    2022-11-22 (23988)  96MB -
  3.0/stable:       3.0.4          2019-10-10 (11348)  55MB -
  3.0/candidate:    3.0.4          2019-10-10 (11348)  55MB -
  3.0/beta:         ↑                                       
  3.0/edge:         git-81b81b9    2019-10-10 (11362)  55MB -
installed:          5.21.1-d46c406            (28460) 108MB -

The server has been powered on 24x7 for years and I haven’t changed the default update behaviour for the snap, so I would have expected it to have run all the recent versions. I haven’t used that Wireguard VM for ages, though – it may have been stopped since something like January 2023.

OK, makes sense, thanks.

You could check whether snapd saved a snapshot of the LXD data by looking at snap saved, and then restore it using snap restore. See the Snapshots page in the Snapcraft documentation for more info.
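
Roughly this (snapd usually saves a snapshot automatically on snap remove; <set-id> below is a placeholder for the number shown by snap saved):

snap saved
sudo snap restore <set-id> lxd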

The other option is to install zfsutils-linux on the host (apt install zfsutils-linux), mount the affected volume, edit the backup.yaml file inside it to remove the stale block.filesystem setting, and then run lxd recover again.
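
Concretely, something like this (the dataset path assumes LXD’s usual ZFS layout of <pool dataset>/virtual-machines/<instance>, and instance datasets typically use a legacy mountpoint, hence mount -t zfs; treat it as a sketch rather than exact steps):

sudo apt install zfsutils-linux
sudo zfs list -r dozer/lxd        # find the VM's config dataset
sudo mkdir -p /mnt/recover
sudo mount -t zfs dozer/lxd/virtual-machines/wireguard-server-vm /mnt/recover
sudo grep -n block.filesystem /mnt/recover/backup.yaml   # locate the stale key
sudoedit /mnt/recover/backup.yaml # delete that line, then save
sudo umount /mnt/recover
sudo lxd recover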

Brilliant, thanks – good suggestion. I have done this and will ask about getting my LXD working again here:
