Hi Tom…
I didn’t want to wait to resolve this, so after a lot of searching I eventually worked out a process that resolved this problem for me.
Original Problem
I had a long-standing Ubuntu LXD host system (all disks 2TB BTRFS) installed on Disk1 (a 2TB M.2 NVMe on the motherboard), but during the original:
$ sudo lxd init
I had configured the LXD storage pool to be on a removable SATA3 SSD (Disk2) to make it easier to back up/restore, since I could clone Disk2 and keep the clone offline.
However, after a failure of Disk1 I had to reinstall Ubuntu & LXD, but I wanted to re-use my original LXD storage pool on Disk2.
Resolution Process that worked for me
Reinstall Ubuntu on Disk1.
Reinstall LXD if needed
$ sudo snap install lxd
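Optionally, double-check the snap installed correctly before continuing:
$ snap list lxd
$ lxd --version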
IMPORTANT
If you are reinstalling the Ubuntu host system and want to re-use a previous storage pool on Disk2, do NOT create a new storage pool during lxd init (see the following example):
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]: no <== answer “NO”
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]:
Port to bind LXD/Incus to [default=844x]: <== If installing both LXD and Incus set one to 8443 and the other to 8444
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML “lxd init” preseed to be printed? (yes/no) [default=no]:
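Since we answered “no” to the storage pool question, the storage pool list should be empty at this point; you can confirm before continuing:
$ lxc storage list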
NOTE: The socket for LXD is not enabled by default. I’m not sure whether this should be done before lxd init or after, so I am repeating it here just in case.
Enable socket with:
$ sudo systemctl enable --now lxd.socket
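Either way, you can check that the LXD daemon is up and responding:
$ lxc info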
If you are reinstalling Ubuntu & want to re-use the previous storage pool on Disk2
Create a mount point where LXD normally expects its DEFAULT storage pool:
sudo mkdir /var/snap/lxd/common/lxd/storage-pools/default
In the following, change /dev/sdXY to the device partition of the old/existing “lxd-storage-pool” on Disk2 that we want to re-use.
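If you’re not sure which partition that is, lsblk/blkid will list the filesystems on each disk (look for the BTRFS partition on Disk2):
$ lsblk -f
$ sudo blkid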
For LXD, enter the LXD snap’s mount namespace using nsenter and mount the old pool’s partition there (reference: see Stéphane Graber’s post here):
$ sudo nsenter --mount=/run/snapd/ns/lxd.mnt mount /dev/sdXY /var/snap/lxd/common/lxd/storage-pools/default
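To verify the mount took effect inside the snap’s namespace, list the mount point through the same nsenter command:
$ sudo nsenter --mount=/run/snapd/ns/lxd.mnt ls /var/snap/lxd/common/lxd/storage-pools/default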
Edit the new LXD “Default” Profile
$ lxc profile edit default
Add the following to the new “default” profile created by the sudo lxd init above (note the root entry must sit under the devices: section):

devices:
  root:
    path: /
    pool: default
    type: disk

Save the edit.
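You can confirm the root device is now present in the profile:
$ lxc profile show default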
Next, recover the previous storage pool’s containers & VMs
$ sudo lxd recover
[Note: the following is an example]
This LXD server currently has the following storage pools:
Would you like to recover another storage pool? (yes/no) [default=no]: yes
Name of the storage pool: default
Name of the storage backend (dir, lvm, powerflex, zfs, btrfs, ceph, cephfs, cephobject): btrfs
Source of the storage pool (block device, volume group, dataset, path, … as applicable): /dev/sdXY <== change this to match the physical device the old pool was on (i.e. Disk2)
Additional storage pool configuration property (KEY=VALUE, empty when done):
Would you like to recover another storage pool? (yes/no) [default=no]:
The recovery process will be scanning the following storage pools:
- NEW: “default” (backend=“btrfs”, source=“/dev/sdXY”)
Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]:
Scanning for unknown volumes…
The following unknown storage pools have been found:
- Storage pool “default” of type “btrfs”
The following unknown volumes have been found:
- Container “cn-test” on pool “default” in project “default” (includes 0 snapshots)
- Virtual-Machine “vm-test” on pool “default” in project “default” (includes 0 snapshots)
Would you like those to be recovered? (yes/no) [default=no]: yes
Starting recovery…
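Once the recovery finishes, the pool should show up as a registered storage pool; you can confirm with:
$ lxc storage list
$ lxc storage info default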
The last step is to configure the host firewall to allow LXD traffic in/out:
$ sudo ufw allow in on lxdbr0
$ sudo ufw route allow in on lxdbr0
$ sudo ufw route allow out on lxdbr0
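You can verify the new rules with:
$ sudo ufw status verbose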
When the above recover is complete, check that all the previous LXD containers/VMs are back:
$ lxc ls
Now you can use the Containers/VMs in your “old” storage pool that existed before you had to reinstall the HOST or LXD itself.
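For example, using the sample container name from the recovery output above (cn-test; substitute your own container name):
$ lxc start cn-test
$ lxc exec cn-test -- hostname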
*NOTE: for Windows VMs only
For LXD Windows VMs you should increase the CPU count and memory from the LXD defaults:
$ lxc config set win11 limits.memory=8GB
$ lxc config set win11 limits.cpu=4
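To confirm the limits took effect (win11 is just my example VM name):
$ lxc config show win11 | grep limits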
Maybe this will help someone else who uses, or wants to use, a separate disk for LXD storage.