Enhancing our ZFS support on Ubuntu 19.10 - an introduction

What you describe is probably the bug we mentioned (no kernel installed on the target). The casper revert has just been published and an image will soon build.

If you can reproduce it with debootstrap, please report a different bug (and ensure you have a kernel+initramfs in /boot).
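For instance, a quick check from the target system (just a sketch; the versions will differ):

```
# there should be at least one matching kernel and initramfs
ls -l /boot/vmlinuz-* /boot/initrd.img-*
```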

Where can I find some info about zsys that is longer than one paragraph?

I’ll write about it in the coming weeks, once we’ve released (can’t do bug fixing, enhancements and writing at the same time ;))

OK, bug fixing is more important, but is there no specification or design documentation?

And here you go http://cdimage.ubuntu.com/daily-live/pending/

I did a quick smoke test of the image in a VM and installation was successful (I could reboot and log in, with ZFS on root).


I tried the new 20191009.2/eoan-desktop-amd64.iso and I’m still having the same issue when I use the graphical installer. I installed with the graphical installer, selected ZFS, rebooted, and I still have only one grub menu entry, which is the System Firmware entry.

From there I boot back into the live CD (USB, really), import the pools, bind things like /dev, chroot into rpool, run update-grub, and it still has kernel/initrd issues. And yes, they do exist in /mnt/boot (since I’m in the chroot).
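For reference, the steps I run from the live session look roughly like this (dataset names are from my system and may differ on yours):

```
# import both pools under /mnt, then mount the root dataset and the rest
sudo zpool import -f -R /mnt rpool
sudo zpool import -f -R /mnt bpool
sudo zfs mount rpool/ROOT/ubuntu_xxxxxx   # the root dataset first
sudo zfs mount bpool/BOOT/ubuntu_xxxxxx   # then /boot, if it isn't mounted automatically
sudo zfs mount -a                         # everything else

# bind the virtual filesystems and run update-grub from the chroot
for d in dev dev/pts proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt update-grub
```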

Here is the output from terminal, this is after I installed eoan and rebooted back to the USB installer.

Also, when you receive the warning message that your drive is about to be wiped, it shows that it will use Ext4, not ZFS. I attached three pictures of this: one before the warning without any typos, one of the actual warning with the typo, and another instance where the installer claims it’s using Ext4 instead of ZFS.

I had to make another pastebin for the pictures, since I can only post two links in a single post as I am a new user.

I have not tested this with a virtual machine yet. Maybe that gives different results? Or maybe I am missing something…

Thanks. This works great so far. Just quickly added a pool, only 2 drives since it’s a laptop. I’ll work more on it later.

I’m also having issues with 20191009.2/eoan-desktop-amd64.iso.

It seems like sometimes, during the first boot, it doesn’t import bpool. Since bpool isn’t mounted, the kernels aren’t there. And since /boot/grub does get mounted, you CAN still run update-grub; grub doesn’t like it when that happens because it has no kernels.

Oddly, the third time doing an install it worked fine. No idea why.
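For what it’s worth, when bpool is missing, something along these lines should bring /boot back before re-running update-grub (the dataset name is a placeholder):

```
# check whether bpool is there at all, then import and mount it
zpool status bpool || sudo zpool import -f bpool
zfs mount | grep /boot                     # see what's currently mounted on /boot
sudo zfs mount bpool/BOOT/ubuntu_xxxxxx    # placeholder dataset name
sudo update-grub
```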

So, it seems your issue is different from the one we described before (or the one @xalucardx is still getting): the grub menu was fine, but you couldn’t boot your system because it doesn’t import bpool in the initramfs.

I’m wondering if you clicked the last ubiquity step, “installation done” or “ready to reboot”, when doing the install? If ubiquity didn’t stop properly, the pools aren’t exported. Then zpool refuses to import the new pools because, for zfs, they are still potentially imported on another system (the live one), which is something new in 0.8.

Do you think that may be the case and could explain the difference between the 3 installations? (Did you reboot your machine without closing ubiquity?)

If you can reproduce it, can you try adding zfs.force=1 to the kernel command line when booting in grub?
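For example (just a sketch; the actual linux line will be longer on your machine):

```
# at the grub menu: highlight the Ubuntu entry and press 'e',
# then append the parameter to the end of the line starting with "linux":
linux ... root=ZFS=rpool/ROOT/ubuntu_xxxxxx ro quiet splash zfs.force=1
# Ctrl+x (or F10) boots once with the change; it is not persistent
```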

Indeed, this one is known and will be in the release notes. It’s difficult to fix for 19.10 (as we actually create multiple partitions and a main ext4 partition to let partman handle the ESP, and then overwrite the ext4 partition with ZFS), so it’s technically not lying :wink: It’s something we will fix before getting out of the “experimental” phase.

@xalucardx Thanks for the details, it’s very useful. From the pastebin everything seems in order, and it’s really strange that grub doesn’t find a kernel which is obviously there. Could you report a bug against ubiquity in Launchpad and attach the installation logs located in /var/log/installer on the installed system? Thanks.
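For instance, from the installed system (or its chroot), something like this should do; if the logs aren’t collected automatically, the files under /var/log/installer can be attached to the bug by hand:

```
# file the bug against ubiquity through apport
ubuntu-bug ubiquity
```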

To add to what @jibel said, please report on the bug as well whether your machine is using secure boot (then it could be an ubiquity regression which doesn’t install the signed kernel when it should; if this is the case, it would be worth testing on a traditional pure ext4 install).

An additional note: thanks for the excellent pastebin :slight_smile: That saves us a lot of round trips, and it’s exactly the level of curated detail, command lines and output, that is awesome for remote debugging!

@xalucardx: some follow-up (still on the idea that you are booting with secure boot enabled):

We found and filed https://bugs.launchpad.net/ubuntu/+source/grubzfs-testsuite/+bug/1847581 which may be your case. A fix in grub2 (2.04-1ubuntu11) is building and will be available in the next hours in the archive; do you mind upgrading your chroot with that one once available and checking whether the grub menu now has valid entries?
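Roughly, inside the chroot once the archive has it (a sketch):

```
apt update
apt-cache policy grub2-common   # candidate should now show 2.04-1ubuntu11
apt full-upgrade                # pulls in the fixed grub packages
update-grub                     # regenerate the menu entries
```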

EDIT: grub is now available in the released archive. A new iso image is currently building.

Could you report a bug against ubiquity in Launchpad

It looks like a bug was filed in grubzfs-testsuite and a patch for grub2 was created (per didrocks’ post). However, I don’t see the updated package in the archive; I still see 2.04-1ubuntu10 for grub2. The same thing can be seen when I do: apt update ; apt-cache policy grub2

Would you still like for me to file a bug against ubiquity?

And yes, I am using a system that is using EFI + Secure Boot.

You may be on a different mirror, so you need to wait for it to sync with the main archive.
No need to file a bug since you confirm you are using EFI + SB, but please let us know here, once you get a chance to try the update, whether it fixes it! (or not ;))

Should I store virtual machines, or data that should be shared between users, in the vms dataset that I created myself?:

rpool/USERDATA
rpool/USERDATA/bertadmin_r9tu79
rpool/USERDATA/root_r9tu79
rpool/USERDATA/vms

I gave access rights to a group of users and it seems to work.
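What I did was roughly this (the mountpoint and group name are just my examples):

```
# shared dataset, writable by one group
sudo zfs create -o mountpoint=/vms rpool/USERDATA/vms
sudo chgrp vmusers /vms    # example group
sudo chmod 2775 /vms       # group writable, setgid keeps the group on new files
```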

I used the latest daily iso, and that worked! 20191010.2

I used the Try Ubuntu without Installing option, opened ubiquity and installed. I opted to use a signed shim, and used a password to load the shim into the system’s EFI. Once installed, I selected the option to reboot right away (instead of the Continue Testing option).

Once I was loaded into the newly created rpool, I ran update-grub, and no major issues were found.

I think my particular issue is now fixed. Thanks so much for looking into this!!!

Zsys does not create snapshots on my system, so I created my own snapshot today, at what I hope is the right level in the dataset hierarchy:
rpool/ROOT/ubuntu_2gm468@191010
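Created with plain zfs, something like:

```
sudo zfs snapshot rpool/ROOT/ubuntu_2gm468@191010
zfs list -t snapshot   # confirm it shows up
```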

Afterwards I saw the history entry in grub again and thought, let’s try it, the entry with “revert system”.
Afterwards a new Ubuntu had been added to the system:
zfs list displays:
rpool
rpool/ROOT
rpool/ROOT/ubuntu_2gm468
rpool/ROOT/ubuntu_2gm468/srv

rpool/ROOT/ubuntu_2gm468/var/spool
rpool/ROOT/ubuntu_2gm468/var/www
rpool/ROOT/ubuntu_tc4d0a

Note the last entry, a new system. Looking at its size, it looks like a complete copy.

And my snapshot disappeared; we now have:
rpool/ROOT/ubuntu_tc4d0a@191010

I have no clue, what happened, but I’m afraid that I lost some control of the system.

I assume rpool/ROOT/ubuntu_2gm468 is booted now and rpool/ROOT/ubuntu_tc4d0a is the backup from before the grub revert action?
Can I destroy rpool/ROOT/ubuntu_tc4d0a if I want to continue to use the reverted system?
I assume the original snapshot is gone in all cases due to the revert, or do I still have a backup in rpool/ROOT/ubuntu_tc4d0a@191010?
Can I undo the grub revert by e.g. renaming tc4d0a?

Excellent news! Thanks for the feedback.

You need to take them on every child and also create them on the USERDATA/* datasets. However, as we haven’t added the snapshot feature to zsys (yet), you are mostly on your own until we support this.
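As a rough sketch (not the supported zsys path yet, so take it with a grain of salt):

```
# snapshot the system datasets with all their children, plus the user datasets,
# so a revert has a consistent set to work from
sudo zfs snapshot -r rpool/ROOT/ubuntu_2gm468@mysnapshot
sudo zfs snapshot -r rpool/USERDATA@mysnapshot
```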

No, the new set of datasets has been promoted. The snapshot has thus moved to the promoted set. But as you are missing some datasets by going halfway without zsys, it’s probably better to revert the revert: after an update-grub, you should see a ubuntu_tc4d0a entry in history.

Then, you can delete your snapshot and cloned datasets.
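A sketch of that cleanup, once you are sure which set you are booted on (names are placeholders; check with zfs list first):

```
# clones show a non-"-" origin; don't remove the set you are running from
zfs list -o name,origin,mountpoint -r rpool/ROOT
sudo zfs destroy -r rpool/ROOT/ubuntu_xxxxxx       # the unwanted cloned set
sudo zfs destroy rpool/ROOT/ubuntu_yyyyyy@191010   # then the leftover snapshot
```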

Then, once we have an official “snapshot” command in zsys (which will come soon via a PPA), you will be able to get onto the supported path :wink:

I’ve tested the Ubuntu MATE Oct. 11 daily iso via 4 different qemu/kvm configurations, i.e., Q35+BIOS, +OVMF_CODE.fd, +OVMF_CODE.secboot.fd, and +OVMF_CODE.ms.fd; all were successfully installed and booted.
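For reference, the UEFI runs were started roughly like this (a sketch; the firmware paths come from the host’s ovmf package, the disk and iso names are placeholders):

```
qemu-img create -f qcow2 zfs-test.qcow2 20G
cp /usr/share/OVMF/OVMF_VARS.fd .
qemu-system-x86_64 -machine q35 -enable-kvm -m 4096 \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=OVMF_VARS.fd \
  -drive file=zfs-test.qcow2,format=qcow2,if=virtio \
  -cdrom ubuntu-mate-daily.iso
# swap OVMF_CODE.fd for the .secboot.fd / .ms.fd variants for the other runs
```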

However, I did spot a small issue, i.e., after choosing the experimental ZFS feature, the next pop-up shows “#1 ESP…#2…, ext4”, which is the same pop-up as for the first “Erase the whole disk and install Ubuntu” option. This pop-up should be fixed before release.