Recover instances after Dqlite error caused by upgrade from 5.21 to 6.6

Hi forum,

I'm struggling to recover my instances after having (dangerously, I now know) upgraded LXD from 5.21 to 6.6.
At first LXD wouldn't start anymore; the issue is described exactly as I experienced it in Unable to upgrade from 5.21 to 6.6: Assertion `header.wal_size == 0' failed · Issue #17174 · canonical/lxd · GitHub.

I then went on to back up my database folder, deleted the corrupted one, and LXD started again.
All my instances and configuration were lost, though.
I read some other threads and the docs concerning recovery of lost instances, and tried lxd recover after recreating my pools (loop-backed ZFS and btrfs), whose .img files are still present in the filesystem.
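For reference, the backup-and-delete step was roughly the following (a sketch; paths assume the snap package, and removing the global database is exactly what wipes all configuration):

```shell
# Rough reconstruction of the backup-and-delete step (snap package paths assumed)
sudo snap stop lxd

# keep a full copy of the database directory before touching anything
sudo cp -a /var/snap/lxd/common/lxd/database /var/snap/lxd/common/lxd/database.bak

# remove the corrupted global (dqlite) database; LXD re-initialises an
# empty one on start, which is why all configuration is gone afterwards
sudo rm -rf /var/snap/lxd/common/lxd/database/global

sudo snap start lxd
```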

lxd recover then prompted me to also add a profile, and after scanning the pools for lost volumes it returned a list of all missing instances.
Still, when trying to recover them, they didn't appear, even though lxd recover seemingly returned without error. No error was visible in the logs either.
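For completeness, these are the places I checked for errors (snap install assumed):

```shell
# Log locations for a snap-installed LXD
sudo snap logs lxd -n 100
sudo journalctl -u snap.lxd.daemon -n 100
sudo tail -n 100 /var/snap/lxd/common/lxd/logs/lxd.log
```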

The output of lxd recover:

This LXD server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: docker
Name of the storage backend (powerflex, alletra, btrfs, ceph, cephfs, lvm, pure, zfs, cephobject, dir): btrfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): /var/snap/lxd/common/lxd/disks/docker.img
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: default
Name of the storage backend (powerflex, alletra, btrfs, ceph, cephfs, lvm, pure, zfs, cephobject, dir): zfs
Source of the storage pool (block device, volume group, dataset, path, ... as applicable): /var/snap/lxd/common/lxd/disks/default.img
Additional storage pool configuration property (KEY=VALUE, empty when done): zfs.pool_name=default
Additional storage pool configuration property (KEY=VALUE, empty when done): 
Would you like to recover another storage pool? (yes/no) [default=no]: 
The recovery process will be scanning the following storage pools:
 - NEW: "docker" (backend="btrfs", source="/var/snap/lxd/common/lxd/disks/docker.img")
 - NEW: "default" (backend="zfs", source="/var/snap/lxd/common/lxd/disks/default.img")
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: 
Scanning for unknown volumes...
The following unknown storage pools have been found:
 - Storage pool "docker" of type "btrfs"
 - Storage pool "default" of type "zfs"
The following unknown volumes have been found:
 - Volume "appcon-strapi" on pool "docker" in project "default" (includes 0 snapshots)
 - Volume "yopass" on pool "docker" in project "default" (includes 0 snapshots)
 - Container "keycloak-tmp" on pool "" in project "default" (includes 0 snapshots)
 - Virtual-Machine "keycloak-vm" on pool "" in project "default" (includes 0 snapshots)
 - Container "appcon-hyapp" on pool "" in project "default" (includes 0 snapshots)
 - Container "proxycon-02" on pool "" in project "default" (includes 0 snapshots)
 - Container "appcon-strapi" on pool "" in project "default" (includes 0 snapshots)
 - Container "dbcon-main" on pool "" in project "default" (includes 0 snapshots)
 - Container "proxycon" on pool "" in project "default" (includes 1 snapshots)
 - Container "yopass" on pool "" in project "default" (includes 0 snapshots)
You are currently missing the following:
 - Network "lxdbr0" in project "default"
 - Profile "docker" in project "default"
Please create those missing entries and then hit ENTER: 

As you can see, for some reason the instances on the default pool are not recognized as pertaining to it (their pool shows up as "").

What can I do to tell the command to recover the instances? Or should I instead try to get the data in question out of the storage volumes by mounting them manually?
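In case manual extraction is the way to go, this is roughly what I had in mind (untested sketch; pool names and image paths are taken from the lxd recover output above, and LXD must not be holding the images while doing this):

```shell
# Untested sketch: pull data out of the loop-backed pool images directly
sudo snap stop lxd    # make sure LXD is not holding the images

# btrfs pool ("docker"): attach the image to a loop device and mount it
loopdev=$(sudo losetup --find --show /var/snap/lxd/common/lxd/disks/docker.img)
sudo mkdir -p /mnt/docker-pool
sudo mount -t btrfs "$loopdev" /mnt/docker-pool
# container root filesystems should then be under /mnt/docker-pool/containers/<name>/rootfs

# zfs pool ("default"): import the pool from the image file
sudo zpool import -d /var/snap/lxd/common/lxd/disks default
sudo zfs list -r default    # datasets like default/containers/<name>
sudo zfs mount -a           # mount all datasets of the imported pool
```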

Eagerly awaiting a solution, and thanks for your support

Best regards

Have you managed to fix this? Posts like this (and the one referenced) are the reason I'm still scared to upgrade my cluster from 5.21 to 6.