@didrocks Thanks for the clarification. I’m manually migrating the layout from desktop to server. Has the default server layout (in terms of persistent vs system datasets) been finalised? I’ve seen some documentation, but I’m not sure how final it is. E.g. /usr/local would be persistent or system, depending on where you look.
@didrocks I’m migrating the layout from desktop to server using zfs rename. When trying to rename /var/log to make it persistent I hit an issue:
umount: /var/log: target is busy.
cannot unmount '/var/log': umount failed
Seems like zfs rename needs to unmount the dataset even if the mountpoint doesn’t change. Any suggestion as to how to get around this? I checked with lsof, and there’s a long list of processes with open files in /var/log, including the kernel.
@didrocks, thank you for this article!
It helped me understand some things about swap on ZFS systems.
By the way, I am a little bit disappointed that we can’t easily and safely have a swap area > 2GiB. I’m using Ubuntu 20.04 on a laptop that has 8GiB of memory, and the Hibernate feature doesn’t work really well. I would have been happy to know this at the time I chose to install with ZFS instead of LVM partitions.
Anyway, is there a way to safely increase the current swap partition to 8 GiB?
And is there a technical reason for not letting the user set the size of swap partition during the Ubuntu 20.04 installation?
Thank you !
Regards
There is no technical reason, we just use the same default as the one used by automated ubiquity partitioning. If one day there is a manual partitioning option, this will of course be configurable. Until then, you can edit the zsys-setup script before you start ubiquity (as I explained in this post). Hope that helps!
Thank you @didrocks !
I already checked that file but I’m not very comfortable with it…
Could you help me find the line where I can change the swap partition size?
Thank you for your help
[EDIT] After all, I think I got it.
I think I have to set the “SWAPSIZE” variable or the “SWAPVOLSIZE” variable in the following part:
# Disable swap and get the swap volume size:
if [ -n "${SWAPFILE}" ]; then
    SWAPSIZE=$(stat -c%s "${SWAPFILE}")
    echo "I: Found swapfile with size ${SWAPSIZE}. Disabling"
    swapoff "${SWAPFILE}"
fi

# Convert to MiB to align the size on the size of a block
SWAPVOLSIZE=$(( SWAPSIZE / 1024 / 1024 ))
Am I right?
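For what it’s worth, a quick sanity check of that arithmetic (my own sketch, not from the script): hard-coding SWAPSIZE in bytes at that point should yield an 8 GiB volume, since SWAPVOLSIZE is expressed in MiB, as the script’s own comment implies.

```shell
# Hypothetical override: ignore the detected swapfile size and force 8 GiB.
SWAPSIZE=$((8 * 1024 * 1024 * 1024))        # desired swap, in bytes
SWAPVOLSIZE=$(( SWAPSIZE / 1024 / 1024 ))   # converted to MiB, as in the script
echo "$SWAPVOLSIZE"                          # 8192
```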
Also, could you confirm that there is no (safe) way right now to increase this swap size after the installation?
Thank you!
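For reference, the usual post-install approach (from the OpenZFS root-on-ZFS guides, not confirmed in this thread) is to add a second swap zvol rather than resize the existing one. The dataset name rpool/swap2 is a placeholder; the property list mirrors what those guides recommend for swap on ZFS.

```shell
#!/bin/sh
# Sketch: build the command for an additional 8 GiB swap zvol.
PAGESIZE=$(getconf PAGESIZE)   # the zvol block size should match the page size
CMD="zfs create -V 8G -b $PAGESIZE \
  -o compression=zle -o logbias=throughput -o sync=always \
  -o primarycache=metadata -o secondarycache=none \
  -o com.sun:auto-snapshot=false rpool/swap2"
echo "review, then run with sudo: $CMD"
echo "then: sudo mkswap /dev/zvol/rpool/swap2 && sudo swapon /dev/zvol/rpool/swap2"
```

The block only prints the commands so they can be reviewed first; note that swap on a zvol is not suitable for hibernation.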
Hi @didrocks, and thanks for zsys!
I am curious about some snapshots of directories that seemingly don’t actually exist in /var/lib. Even if zsys has nothing to do with them, I’m hoping you might be able to help me figure out where they are coming from and what they are.
Here is an ls of my /var/lib directory:
chrism@thinkm:/var/lib$ ls
AccountsService containerd ghostscript nfs shim-signed udisks2
acpi-support dbus git openvpn smartmontools unattended-upgrades
alsa dhcp grub os-prober snapd update-manager
app-info dictionaries-common hp PackageKit snmp update-notifier
apport dkms initramfs-tools pam sudo upower
apt docker ispell plymouth systemd usb_modeswitch
aspell dpkg libreoffice polkit-1 tlp usbutils
avahi-autoipd emacsen-common libxml-sax-perl postfix tor vim
binfmts flatpak locales private tpm whoopsie
bluetooth fprint logrotate python ubiquity xfonts
boltd fwupd man-db redis ubuntu-advantage xkb
BrlAPI gconf misc rpm ubuntu-drivers-common xml-core
colord gdm3 mplayer samba ubuntu-release-upgrader
command-not-found geoclue NetworkManager sgml-base ucf
However, at times when zsys takes snapshots, I also get snapshots of datasets with long hexadecimal ids, seemingly of (currently) nonexistent directories within /var/lib (such as rpool/ROOT/ubuntu_zb3uo/var/lib/94cd4fafd38200458bb3c385eea0907ac832ef47d3fc84dc974b2eb7ab8db0c7@autozsys_2g8zn0). A listing with the ids elided a bit follows as an example (because otherwise the forum won’t let me code-highlight them):
chrism@thinkm:~$ zfs list -t snap -o name,creation|grep autozsys
...
rpool/ROOT/../var/lib/94cd4fafd382004....@autozsys_m1b09c Fri Sep 18 6:53 2020
rpool/ROOT/../var/lib/94cd4fafd382004....@autozsys_2g8zn0 Sat Sep 19 1:33 2020
rpool/ROOT/../var/lib/94cd4fafd382004....@autozsys_3phixj Sun Sep 20 3:16 2020
rpool/ROOT/../var/lib/94cd4fafd382004....@autozsys_7htrx0 Tue Sep 22 18:59 2020
...
Of course those dates line up with zsys states snapshotted from other runs (e.g. rpool/ROOT/ubuntu_zb3uoi/var/lib/dpkg@autozsys_m1b09c).
I am not manually making the datasets for which these snapshots are made, but if I were, I wouldn’t expect them to be managed by zsys (although maybe I’d be wrong about that). There are hundreds of such snapshots (there would be thousands, but I think I deleted many by adding an /etc/zsys.conf that keeps very little and running zsysctl service gc --all).
I’m not sure what’s in these snapshots. I’ve tried to mount them in a temporary location, but when I try, I get:
sudo mount -t zfs rpool/ROOT/ubuntu_zb3uoi/var/lib/e04e97....@autozsys_m1b09c /mnt/tmp
filesystem 'rpool/ROOT/ubuntu_zb3uoi/var/lib/e04e97...@autozsys_m1b09c' cannot be mounted at '/mnt/tmp' due to canonicalization error 2.
And:
chrism@thinkm:/var/lib$ sudo mount -t zfs rpool/ROOT/ubuntu_zb3uoi/var/lib/e04e97...
rpool/ROOT/ubuntu_zb3uoi/var/lib/e04e97...: can't find in /etc/fstab.
When I do a zfs list, they do appear, but as “legacy”:
rpool/ROOT/ubuntu_zb3uoi/var/lib/94cd4fafd382004.... 8K 683G 82.3M legacy
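For inspecting the contents, a sketch that may help (an assumption on my part, not confirmed in the thread): mount the dataset itself rather than the snapshot, then browse the hidden .zfs/snapshot directory that ZFS exposes inside every mounted dataset. DATASET_ID stands in for the elided hexadecimal id.

```shell
#!/bin/sh
# Sketch: browse a snapshot through the dataset's hidden .zfs directory.
DATASET="rpool/ROOT/ubuntu_zb3uoi/var/lib/DATASET_ID"   # placeholder id
SNAP="autozsys_m1b09c"
MNT="/mnt/tmp"
SNAPDIR="$MNT/.zfs/snapshot/$SNAP"
echo "run: sudo mount -t zfs -o ro $DATASET $MNT"   # mount the dataset, not the snapshot
echo "then: ls $SNAPDIR"                            # snapshot contents appear here
echo "finally: sudo umount $MNT"
```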
Is this a zsys thing or something else entirely? I’m attempting to make full backups of my system using zfs send (actually syncoid), and the presence of this many snapshots of this many directories can cause it to take hours, when it really should take seconds or minutes.
FWIW @tabulo and @mcamou I made a short video describing how to encrypt the Ubuntu root pool at 20.04 install time by editing zsys-setup:
Replying to myself: after thinking enough to search for “hexadecimal directories in /var/lib zfs”, I realized the proliferation of these datasets is the work of Docker.
https://www.medo64.com/2020/02/zfs-guid-galore/
And after reading https://didrocks.fr/2020/06/16/zfs-focus-on-ubuntu-20.04-lts-zsys-dataset-layout/ I see Docker is called out specifically, embarrassingly, in the “Persistent Datasets” section. So I did mv /var/lib/docker{,_aside}, created a dataset with zfs create -o mountpoint=/var/lib/docker rpool/var-lib-docker, and moved all my docker_aside files into the newly created /var/lib/docker directory.
Then I computed the list of datasets I would have to rename (please forgive the terrible style, I’m in a hurry):
zfs list -o name| grep rpool/ROOT/ubuntu_zb3uoi/var/lib/[0-9a-fA-F][0-9a-fA-F][0-9a-fA-F][0-9a-fA-F]| cut -c 34- > torename
Then I did something like:
cat torename|while read dsid; do zfs rename "rpool/ROOT/ubuntu_zb3uoi/var/lib/$dsid" "rpool/var-lib-docker/$dsid"; done
Finally I deleted all the autozsys snapshots that were made in the past from those datasets:
zfs list -t snap -r rpool/var-lib-docker|cut -f1 -d' '|grep autozsys|xargs -n1 zfs destroy
And deleted my docker_aside directory. I haven’t fully tested it, but it should work. I am getting a little weirdness out of the zsys commands now, however:
Persistent Datasets:
- &{rpool/var-lib-docker %!!(MISSING)s(bool=false) {/var/lib/docker on %!!(MISSING)s(bool=true) %!!(MISSING)s(bool=false) %!!(MISSING)s(int=0) {local }} [%!!(MISSING)s(*zfs.Dataset=&{rpool/var-lib-docker/bb622231370ccb985a2e4d9b4ee44f69ed0ee143899efeb1638a148b8ffc745c false {legacy on false false 0 rpool/var-lib-docker/3e4bf9c27797c33a6592d603d3d2c2cb75e1e1a24a2a5c4f5226f29bbd8c7c13@187162287 {local }} [0xc000230100 0xc000230200] {0xc000227920}}) %!!(MISSING)s(*zfs.Dataset=&{rpool/var-lib-docker/2c243bda925b404884dfa27dc416ba95fc8cd86bd6ac4c7b946c36507001bb38 fal
Yes! Those are Docker datasets that we discovered post-release (it seems Docker has a ZFS storage driver). There was not a lot of upstream guidance about what to do with those, hence the decision to write about creating persistent datasets for them (and a new Docker install from the Ubuntu package does that).
This is just a printing bug that I addressed yesterday; the fix will be in the next ZSys release.
Is it possible to disable Zsys automated snapshots, take a manual snapshot, and then add it to the grub menu? If that makes any sense?
Auto snapshots are triggered by an apt hook. You need to disable the hook.
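For example (a sketch; the hook’s exact filename and the zsysctl flags are assumptions, not confirmed here): the hook is a file under /etc/apt/apt.conf.d/, so finding and diverting it disables the automatic snapshots, and manual states can then be saved with zsysctl.

```shell
#!/bin/sh
# Sketch: locate the zsys apt hook (its exact filename may differ by release).
HOOK=$(grep -rl zsys /etc/apt/apt.conf.d/ 2>/dev/null | head -n1)
echo "zsys apt hook: ${HOOK:-not found}"
# To disable it (run manually after review):
#   sudo dpkg-divert --rename --divert "${HOOK}.disabled" "$HOOK"
# A manual system state can then be saved with something like:
#   sudo zsysctl save mystate -s
```

Using dpkg-divert rather than deleting the file means the change survives package upgrades and is easy to undo.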
Sadly, as a point-and-click user, I’m not that skilled, so it would be nice if this were added to Zsys; I can’t imagine I’m the only one who would rather use manual snapshots.
With Zsys manual snapshots it would also be nice if you could select a snapshot to be automatically restored at every boot, e.g. for a guest computer for friends and family to use that resets itself to a default state on every restart.
I noticed that when I create a system account with a home directory no dataset is created in USERDATA. Is that by design or is it a bug?
So what I did: useradd -r -m -U newusername
A home directory is created but no corresponding dataset in USERDATA.
Indeed, this is by design. System accounts are treated as part of the system, so they follow the system snapshotting rules and no separate dataset is created. (Nothing prevents you from creating one manually, though.)
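A sketch of that manual route, based on the dataset layout described in the blog post (the username, suffix, and machine dataset are placeholders; zsys associates user datasets with a machine via the com.ubuntu.zsys:bootfs-datasets property):

```shell
#!/bin/sh
# Sketch: hand-create a USERDATA dataset for an account (placeholder names throughout).
USERNAME="newusername"
SUFFIX="abc123"                       # zsys normally appends a random suffix
MACHINE="rpool/ROOT/ubuntu_xxxx"      # your system dataset, from `zfs list`
DS="rpool/USERDATA/${USERNAME}_${SUFFIX}"
echo "run: sudo zfs create -o canmount=on -o mountpoint=/home/$USERNAME $DS"
echo "then: sudo zfs set com.ubuntu.zsys:bootfs-datasets=$MACHINE $DS"
```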
Thanks for the reaction. Makes perfect sense; I just wanted to double check.
I recently installed Ubuntu 20.04 on zfs to replace 18.04 LTS. I have been using zfs for some time, and had existing user pools and datasets from my 18.04 system. These seem to have mounted very nicely in the correct places. I have a multi-boot UEFI system - which has been happily booting between Windows 10 and Ubuntu for over 12 months. Grub is not doing the multi-boot. I am using the UEFI boot menu. Grub resembles that for a single boot system.
I was very much looking forward to being able to roll-back my system, and try alternate configurations.
PROBLEM: I am not getting any ‘History for…’ menu entries in grub, even though package installs say they are firing them off, and zsysctl show displays what seem to be appropriate lists of snapshots.
There does not seem to be any information on troubleshooting this problem. I would really appreciate some help here. If this is not the appropriate forum, I have not been able to find another - In that case, please suggest where I should ask.
Cheers,
–Peter G
Tried rebuilding grub. It seems to be trying to do the right thing:
% sudo update-grub
[sudo] password for peterg:
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_8119za
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_8119za
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_2do2vz
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_2do2vz
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_8cafek
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_8cafek
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_xa4rql
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_xa4rql
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_695lh0
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_695lh0
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_y714dt
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_y714dt
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_oi1x0s
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_oi1x0s
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_uxu824
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_uxu824
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_jwqeg3
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_jwqeg3
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_wk6nbf
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_wk6nbf
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_7zqlik
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_7zqlik
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_9jl3q7
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_9jl3q7
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_a7siu4
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_a7siu4
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_8lrc3j
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_8lrc3j
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_lobwmq
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_lobwmq
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_0tp1jx
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_0tp1jx
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_4ncc98
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_4ncc98
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_2bunvl
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_2bunvl
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_qjli6l
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_qjli6l
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_cvj24e
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_cvj24e
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_g0nn8n
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_g0nn8n
Found linux image: vmlinuz-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_crv845
Found initrd image: initrd.img-5.4.0-49-generic in rpool/ROOT/ubuntu_ccmaud@autozsys_crv845
device-mapper: reload ioctl on osprober-linux-sdb4 failed: Device or resource busy
Command failed.
device-mapper: reload ioctl on osprober-linux-sdc1 failed: Device or resource busy
Command failed.
device-mapper: reload ioctl on osprober-linux-sdd1 failed: Device or resource busy
Command failed.
Found Windows Boot Manager on /dev/nvme0n1p2@/EFI/Microsoft/Boot/bootmgfw.efi
Found Windows Boot Manager on /dev/sdb1@/efi/Microsoft/Boot/bootmgfw.efi
Adding boot menu entry for UEFI Firmware Settings
done
It seems that grub is being updated on the wrong drive, the nvme one, which is not supported by my older HP UEFI BIOS. The system is booting from the SSD and is running the old grub configuration.
The solution is here: Move bootloader or remove efi partition in second drive
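For anyone hitting the same thing, the generic fix is to make sure /boot/efi is the ESP on the drive the firmware actually boots from, then reinstall grub there. A sketch (the ESP path and --bootloader-id are assumptions for a standard Ubuntu install):

```shell
#!/bin/sh
# Sketch: reinstall grub into the ESP the firmware actually boots from.
ESP="/boot/efi"   # assumption: the intended ESP mountpoint
CMD="grub-install --target=x86_64-efi --efi-directory=$ESP --bootloader-id=ubuntu"
echo "check which partition is mounted there: findmnt $ESP"
echo "review, then run with sudo: $CMD && update-grub"
```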
I successfully performed a very useful restore to dispose of an awful CUDA install :-).
Cheers…
Thanks for the feedback! Yeah, pointing to the wrong grub configuration seems to be the most obvious issue in your case, good catch!