On a standalone Debian 12 server running LXD 5.0.2, I have a 600 GiB partition on the SSD and I used all of it to create a ZFS pool for LXD called zfspool.
I just launched a fresh Debian 12 VM using the command below, hoping that it would give my instance a 25 GiB root filesystem.
lxc launch local:deb12-vm podman1 --vm --profile vlan88 --storage zfspool --device root,size=25GiB
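You can confirm the size the root device actually received straight from the host:

```shell
# Ask LXD what the root disk device's "size" property is set to.
# Note this reports the configured volume size, not the size of the
# filesystem inside the guest.
lxc config device get podman1 root size
```

In my case this reports 25GiB, so the device configuration itself looks fine.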
When I check the disk size with the command below, it says root is 3.8G
lxc exec podman1 -- df -h
Here’s the output of that command:
Filesystem      Size  Used Avail Use% Mounted on
udev            1.9G     0  1.9G   0% /dev
tmpfs           391M  592K  391M   1% /run
/dev/sda2       3.8G  3.0G  734M  81% /
tmpfs           2.0G   84K  2.0G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            50M   13M   38M  25% /run/lxd_agent
/dev/sda1        99M   12M   87M  12% /boot/efi
tmpfs           391M     0  391M   0% /run/user/1000
Any idea why / is already 81% when I specified a root of 25 GiB? Here is what I get when I run
lxc config show podman1
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Debian bookworm amd64 (20230803_05:24)
  image.os: Debian
  image.release: bookworm
  image.serial: "20230803_05:24"
  image.type: disk-kvm.img
  image.variant: default
  limits.cpu: "2"
  limits.memory: 4GiB
  volatile.base_image: 3c74ef8c1fd80e90238aa4f0145ca399b8f0fb4b83fbc20ff7dd3d904b575c32
  volatile.cloud-init.instance-id: 0be94a72-cf9e-40a7-af51-88cd018b473b
  volatile.eth0.host_name: tap686ea8ab
  volatile.eth0.hwaddr: 00:16:3e:e2:a8:5d
  volatile.last_state.power: RUNNING
  volatile.uuid: e0d7d796-4d11-49ec-857f-38aae1c87d1e
  volatile.vsock_id: "10"
devices:
  root:
    path: /
    pool: zfspool
    size: 25GiB
    type: disk
ephemeral: false
profiles:
- vlan88
stateful: false
description: ""
I don’t see where I am going wrong.
To add some further diagnostics to my post, here's the output of some storage-related commands.
lxc storage info zfspool
info:
  description: ""
  driver: zfs
  name: zfspool
  space used: 6.89GiB
  total space: 577.50GiB
used by:
  images:
  - 3c74ef8c1fd80e90238aa4f0145ca399b8f0fb4b83fbc20ff7dd3d904b575c32
  instances:
  - podman1
That clearly shows the pool is configured with nearly 600 GiB of capacity. Now, here’s where I am confused:
lxc storage volume info zfspool virtual-machine/podman1
Name: podman1
Type: virtual-machine
Content type: block
Usage: 2.85GiB
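The gap between the 25 GiB I asked for and the 2.85 GiB reported can also be inspected from the host with the zfs tool. The dataset name below is an assumption based on LXD's usual naming scheme for VM block volumes:

```shell
# Host-side check; "zfspool/virtual-machines/podman1.block" is the
# dataset name LXD normally uses for this VM's block volume (assumed).
zfs get -H -o property,value volsize,used,referenced \
    zfspool/virtual-machines/podman1.block
# volsize should read 25G while used/referenced stay small, which is
# what a thin (sparse) zvol looks like.
```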
Okay, so my VM is "using" only 2.85 GiB even though I specified 25 GiB when I created it. Does the ZFS driver thin-provision these volumes? From inside the instance, I checked the disk size with fdisk.
root@podman1:~# fdisk -l
Disk /dev/sda: 25 GiB, 26843545600 bytes, 52428800 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E1C3C436-CBC7-4E48-AFED-4B540A1A148C
Device       Start     End Sectors  Size Type
/dev/sda1     2048  206847  204800  100M
/dev/sda2   206848 8388574 8181727  3.9G
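The mismatch is visible in the fdisk numbers themselves: the virtual disk spans 25 GiB, but sda2 only covers about 3.9 GiB of it. A quick arithmetic check:

```shell
# Sanity-check the fdisk numbers: the whole virtual disk really is
# 25 GiB, but /dev/sda2 covers only a fraction of it.
disk_bytes=$(( 52428800 * 512 ))   # total sectors * sector size
part_bytes=$(( 8181727 * 512 ))    # sda2's sector count * sector size
echo "disk: $(( disk_bytes / 1024 / 1024 / 1024 )) GiB"  # disk: 25 GiB
echo "sda2: $(( part_bytes / 1024 / 1024 )) MiB"         # sda2: 3994 MiB
```

So the 25 GiB device is there; it is the partition (and the filesystem on it) that never grew.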
When I tried to create a large file within this storage volume to see if usage expands towards 25 GiB, it failed.
root@podman1:~# fallocate -l 1G ./data.img
fallocate: fallocate failed: No space left on device
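That failure is consistent with the df output above: / only had 734M available, so a 1 GiB allocation cannot fit no matter how large the backing volume is.

```shell
# The df output showed 734M available on /, so a 1 GiB allocation
# can never succeed, whatever size the backing volume is.
req_mib=1024    # fallocate -l 1G, expressed in MiB
avail_mib=734   # "Avail" for /dev/sda2 in df -h
echo "short by $(( req_mib - avail_mib )) MiB"  # short by 290 MiB
```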
Okay, I’ve solved this for now. Leaving a note in case it helps someone else.
Since I am not using the snap, I will assume for now that what I am experiencing is a Debian packaging issue. I was able to manually expand the root partition and filesystem to a sufficient size using the parted, e2fsck, and resize2fs commands.
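For anyone hitting the same thing, the sequence looked roughly like this. Treat it as a sketch rather than a recipe: it assumes the VM's disk is /dev/sda with an ext4 root on /dev/sda2, and growing a mounted root filesystem is worth a backup first.

```shell
# Grow partition 2 to the end of the (now 25 GiB) virtual disk.
parted -s /dev/sda resizepart 2 100%

# If the filesystem were unmounted, check it first:
#   e2fsck -f /dev/sda2
# An ext4 root can be grown online, so resize2fs works while mounted.
resize2fs /dev/sda2
```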
I am now going to create three image templates that use 15GiB, 30GiB, and 50GiB root filesystems. Those will be my small, medium, and large VM templates and I’ll just use them from my local: repository.
UPDATE
After reading a bunch of forum topics from the past, I see this has come up many times and Tom has addressed it. So, I switched to cloud-init images, built some cloud-init configs and put them in my profiles. All is well.