Failed mounting pool after upgrading from 5.21 to 6

Hello,
After changing the snap channel from 5.21/stable to 6/stable, the storage pool failed to mount.
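(For reference, the standard way to switch a snap channel is:

sudo snap refresh lxd --channel=6/stable
)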

Ubuntu 24.04.4
One SSD (ext4) for the system + one SSD (btrfs) for LXD.

I tried sudo lxd recover:

This LXD server currently has the following storage pools:

- Pool “data” using driver “btrfs”

Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]:

Scanning for unknown volumes…

Error: Failed validation request: Failed mounting pool "data": Failed to mount "/dev/nvme0n1" on "/var/snap/lxd/common/lxd/storage-pools/data" using "btrfs": invalid argument

lxc storage show lxd-storage-pool:

Error: Storage pool not found

Thanks

Could you please share the output of: sudo blkid /dev/nvme0n1 and sudo fdisk -l /dev/nvme0n1?

Thanks for helping me.

It seems you’ve hit the nail on the head. Before, /dev/nvme0n1 was the ‘LXD’ disk, now it’s the system disk.

sudo blkid /dev/nvme0n1
/dev/nvme0n1: PTUUID="8c215351-ed6b-4a19-9b09-eb06336f7a8a" PTTYPE="gpt"

sudo fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 990 EVO Plus 1TB
Units: sectors of 1 × 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8C215351-ED6B-4A19-9B09-EB06336F7A8A

Device            Start        End    Sectors   Size Type
/dev/nvme0n1p1     2048    2203647    2201600     1G EFI System
/dev/nvme0n1p2  2203648    6397951    4194304     2G Linux filesystem
/dev/nvme0n1p3  6397952 1953521663 1947123712 928.5G Linux filesystem

sudo fdisk -l /dev/nvme1n1
Disk /dev/nvme1n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 990 EVO Plus 2TB
Units: sectors of 1 × 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

That’s one of the reasons why it’s recommended to use an ID/UUID/label/etc. (see /dev/disk/by-*/) instead of bare /dev paths, as those are “enumeration order” dependent.
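For example, listing /dev/disk/by-id/ shows stable names that follow the disk regardless of enumeration order (serial number below is made up, and the ls output is trimmed):

ls -l /dev/disk/by-id/ | grep nvme1n1
nvme-Samsung_SSD_990_EVO_Plus_2TB_S7XXNX0X000000 -> ../../nvme1n1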

Now, to get your setup back to a functioning state, I wonder if you can’t just fix the source value with lxc storage edit?
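Something along these lines, with the by-id path purely illustrative; the idea is to change the source key in the pool’s YAML:

lxc storage edit data

config:
  source: /dev/disk/by-id/nvme-Samsung_SSD_990_EVO_Plus_2TB_...
...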

I can’t save the new config:

Config parsing error: Pool source cannot be changed when not in pending state

I have no containers/VMs in this pool (I deleted the test container just before upgrading to 6.7). Maybe it’s easier to redo an init (if possible)?

If you have no data to preserve, indeed, doing another init is likely the path of least resistance.

Edit:

I used /dev/disk/by-id/ to create the storage pool.

Then changed the default pool (with the UI).

Then deleted the old pool. (That part is a bit scary, because it doesn’t say whether it wipes the data, and the pool’s source was now my system disk :slight_smile: )
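For anyone doing the same from the CLI, the equivalent steps would be roughly as follows (pool names and by-id path illustrative):

lxc storage create data2 btrfs source=/dev/disk/by-id/nvme-Samsung_SSD_990_EVO_Plus_2TB_...
lxc profile device set default root pool=data2
lxc storage delete data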

Thank you very much for the help!

Sorry, but I don’t see how to create a storage pool with a disk UUID.

If the disk is formatted:

ERROR: /dev/disk/by-uuid/4e4b850c-911d-4d46-b2cc-e1a152689ac8 appears to contain an existing filesystem (btrfs)

(the same response from both init and storage create)

If the disk is unformatted, the UUID doesn’t exist yet (entries under /dev/disk/by-uuid/ only appear once a filesystem has been created).

Indeed, suggesting the UUID approach was wrong; you are better off with /dev/disk/by-id/ like you did :slight_smile:

I’m not entirely clear on whether you’ve been able to fix the issue, but just in case…

To clear the data: sudo blkdiscard /dev/.... This is ideal, as it lets the NVMe do some garbage collection while removing the data. If it’s not supported by your device, sudo wipefs -a should work to remove the traces of the old btrfs FS and let LXD put a fresh filesystem on top.
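In this thread that would be something like the following, assuming /dev/nvme1n1 is the old LXD disk; double-check with lsblk before running either command, since both destroy data:

sudo blkdiscard /dev/nvme1n1
sudo wipefs -a /dev/nvme1n1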

You could use the lxd sql global command to update the storage_pools_config table, pointing the existing pool’s source at the new location of the block device, rather than wiping and reinstalling.
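A sketch of that approach; the exact schema can vary between LXD versions, so inspect first and take a backup with lxd sql global .dump. The by-id path is illustrative, and if you have more than one pool you’d want to add a storage_pool_id filter to the UPDATE:

lxd sql global "SELECT * FROM storage_pools_config WHERE key = 'source';"
lxd sql global "UPDATE storage_pools_config SET value = '/dev/disk/by-id/nvme-Samsung_SSD_990_EVO_Plus_2TB_...' WHERE key = 'source';"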
