Bootstrapping LXD to use existing zpool

Hi All.

I need my cluster to use an existing zpool because the preseed script doesn’t implement encryption.

lxc storage create lxd-zpool zfs source=lxd-zpool

responds with

Error: Storage pool directory "/var/snap/lxd/common/lxd/storage-pools/lxd-zpool" already exists

True, it does exist. I want to initialise and use the existing pool, not create it. I get similar output when defining the existing pool in the YAML file I’m using for the preseed. Is there a better way to do this?
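
For context, the storage section of the preseed I’ve been trying looks something like this (same pool name as above; it fails the same way):

storage_pools:
- name: lxd-zpool
  driver: zfs
  config:
    source: lxd-zpool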

@sdeziel1 IIRC you did this before right?

Prepare the encrypted pool:

root@v1:~# touch /root/zfs.key && chmod 0600 /root/zfs.key && dd if=/dev/urandom bs=32 count=1 of=/root/zfs.key
1+0 records in
1+0 records out
32 bytes copied, 8.7104e-05 s, 367 kB/s
root@v1:~# zpool create -O encryption=on -O keyformat=raw -O keylocation=file:///root/zfs.key default /dev/sdb
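
Side note: with keylocation=file:///root/zfs.key, the key is read from that file when the pool is imported, so after a reboot you may need to load it before LXD can use the pool. A sketch, assuming the key file is still in place:

# import the pool and load its encryption key in one go
zpool import -l default
# or, if the pool is already imported but the key isn't loaded yet:
zfs load-key default && zfs mount -a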

root@v1:~# zpool list
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
default  9.50G   218K  9.50G        -         -     0%     0%  1.00x    ONLINE  -
root@v1:~# zfs get encryption default
NAME     PROPERTY    VALUE        SOURCE
default  encryption  aes-256-gcm  -

Install LXD

root@v1:~# snap install lxd
2025-05-07T15:18:55Z INFO Waiting for automatic snapd restart...
lxd (5.21/stable) 5.21.3-c5ae129 from Canonical✓ installed

Initialize LXD

Note: the key part is to answer "no" when asked "Create a new ZFS pool?" and then provide the name of the existing (and encrypted) zpool.

root@v1:~# lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (zfs, btrfs, ceph, dir, lvm, powerflex) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: no
Name of the existing ZFS pool or dataset: default
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 
Would you like the LXD server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
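
For the preseed route, those answers should translate to something roughly like the following. This is an untested sketch; the key part is source pointing at the existing zpool, with no size set, so LXD reuses the pool instead of creating one:

storage_pools:
- name: default
  driver: zfs
  config:
    source: default        # name of the existing (encrypted) zpool
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic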

Check that encryption applies to child FSes

root@v1:~# zfs get encryption -r default
NAME                              PROPERTY    VALUE        SOURCE
default                           encryption  aes-256-gcm  -
default/buckets                   encryption  aes-256-gcm  -
default/containers                encryption  aes-256-gcm  -
default/custom                    encryption  aes-256-gcm  -
default/deleted                   encryption  aes-256-gcm  -
default/deleted/buckets           encryption  aes-256-gcm  -
default/deleted/containers        encryption  aes-256-gcm  -
default/deleted/custom            encryption  aes-256-gcm  -
default/deleted/images            encryption  aes-256-gcm  -
default/deleted/virtual-machines  encryption  aes-256-gcm  -
default/images                    encryption  aes-256-gcm  -
default/virtual-machines          encryption  aes-256-gcm  -
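
It can also be worth checking that the keys are actually loaded, not just that encryption is enabled; keystatus should report available on each dataset:

zfs get keystatus -r default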

Thanks for the very thorough feedback, but I was referring to building with the preseed. Sorry, I didn’t make that very clear.

I’ve managed to get it working on the initial node as follows. The requirement was to create the storage in LXD before running the init.

storage_pools:
- name: lxd-zpool
  driver: zfs

profiles:
- name: default
  devices:
    root:
      path: /
      pool: lxd-zpool
      type: disk

It was the same with the other nodes: once the storage already existed in LXD, they picked it up and used it as is. I’m not sure why I had such issues with this; I had some ghost instances and had to delete everything to get it working reliably.
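
Concretely, the sequence that worked was roughly this (preseed.yaml is just my name for a file holding the YAML above):

# register the existing zpool with LXD first
lxc storage create lxd-zpool zfs source=lxd-zpool
# then run the init with the preseed on stdin
lxd init --preseed < preseed.yaml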

