Enhancing our ZFS support on Ubuntu 19.10 - an introduction

@xalucardx: some follow-up (still on the idea that you are booting with secure boot enabled):

We found and filed https://bugs.launchpad.net/ubuntu/+source/grubzfs-testsuite/+bug/1847581 which may match your case. A fix in grub2 (2.04-1ubuntu11) is building and will be available in the archive in the next few hours. Do you mind upgrading your chroot with that version once it's available and checking whether the grub menu now has valid entries?

EDIT: grub is now available in the released archive. A new iso image is currently building.

Could you report a bug against ubiquity in Launchpad?

It looks like a bug was filed in grubzfs-testsuite and a patch for grub2 was created (per didrocks' post). However, I don't see the updated package in the archive; I still see 2.04-1ubuntu10 for grub2. The same thing can be seen when I do: apt update ; apt-cache policy grub2

Would you still like for me to file a bug against ubiquity?

And yes, I am using a system that is using EFI + Secure Boot.

You may be on a different mirror, so you need to wait for it to sync with the main archive.
No need to file a bug since you confirm you are using EFI + SB, but please let us know here, once you get a chance to install the update, whether it fixes it! (or not ;))

Should I store virtual machines, or data that should be shared between users, in the vms dataset I created?

rpool/USERDATA
rpool/USERDATA/bertadmin_r9tu79
rpool/USERDATA/root_r9tu79
rpool/USERDATA/vms

I did give the access rights to a group of users and it seems to work.
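For anyone wanting to reproduce this, a minimal sketch (the dataset name is from the post above; the mountpoint and group name are my assumptions, not from the original):

```shell
# Create a shared dataset for VMs; mountpoint and group are hypothetical examples
sudo zfs create -o mountpoint=/vms rpool/USERDATA/vms
sudo chgrp vmusers /vms   # grant a group access via ordinary Unix permissions
sudo chmod 2770 /vms      # setgid bit keeps new files group-owned
```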

I used the latest daily iso (20191010.2), and that worked!

I used the Try Ubuntu without Installing option, opened ubiquity and installed. I opted to use a signed shim, and used a password to enroll the shim in the system's EFI. Once installed, I selected the option to reboot right away (instead of the Continue Testing option).

Once I was loaded into the newly created rpool, I ran update-grub, and no major issues were found.

I think my particular issue is now resolved. Thanks so much for looking into this!!!

Zsys does not create snapshots on my system, so today I created a snapshot of my own at (hopefully?) the right level in the dataset hierarchy:
rpool/ROOT/ubuntu_2gm468@191010
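A manual snapshot like the one above can be created with (dataset and snapshot names taken from the post):

```shell
# Take a snapshot of the root dataset by hand (since zsys was not doing it)
sudo zfs snapshot rpool/ROOT/ubuntu_2gm468@191010
# Confirm it exists
zfs list -t snapshot
```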

Afterwards I saw the history entry in grub again and thought, let's try it: the entry with "revert system".
After that, a new Ubuntu dataset had been added to the system:
zfs list displays:
rpool
rpool/ROOT
rpool/ROOT/ubuntu_2gm468
rpool/ROOT/ubuntu_2gm468/srv
…
rpool/ROOT/ubuntu_2gm468/var/spool
rpool/ROOT/ubuntu_2gm468/var/www
rpool/ROOT/ubuntu_tc4d0a

Note the last entry, a new system. Looking at its size, it looks like a complete copy.

And my snapshot disappeared; we now have:
rpool/ROOT/ubuntu_tc4d0a@191010

I have no clue what happened, but I'm afraid I have lost some control of the system.

I assume rpool/ROOT/ubuntu_2gm468 is booted now and rpool/ROOT/ubuntu_tc4d0a is the backup from before the grub revert action?
Can I destroy rpool/ROOT/ubuntu_tc4d0a, if I want to continue to use the reverted system?
I assume the original snapshot is gone in all cases due to revert system or do I still have a backup in rpool/ROOT/ubuntu_tc4d0a@191010?
Can I revert the grub-revert by e.g. renaming tc4d0a?

Excellent news! Thanks for the feedback.

You need to set them on every child and create them on the USERDATA/* datasets. However, since we haven't (yet) added the snapshot feature to zsys, you are mostly on your own until we support this.

No, the new set of datasets has been promoted. The snapshots have thus moved to the new promoted set. But as you are missing some datasets by going half-way without zsys, it's probably better to revert the revert: after an update-grub, you should see an ubuntu_tc4d0a entry in history.

Then, you can delete your snapshot and cloned datasets.
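A cautious sketch of that cleanup. Which dataset to destroy depends on which side of the revert you keep, so the names below are placeholders, not instructions for this specific system:

```shell
# Dry run first: -n does nothing, -v lists what would go, -r recurses into children
sudo zfs destroy -nvr rpool/ROOT/<unwanted_clone>
# If the list looks right, destroy the clone for real
sudo zfs destroy -r rpool/ROOT/<unwanted_clone>
# And remove the now-unneeded snapshot on the dataset you keep
sudo zfs destroy rpool/ROOT/<kept_dataset>@191010
```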

Then, once we have an official "snapshot" command on zsys (which will come soon via a PPA), you will be able to get on the supported path :wink:

I've tested the Ubuntu MATE Oct. 11 daily iso via 4 different qemu/kvm configurations, i.e., Q35 + BIOS / OVMF_CODE.fd / OVMF_CODE.secboot.fd / OVMF_CODE.ms.fd; all were successfully installed and booted.

However, I did spot a small issue: after choosing the experimental ZFS feature, the next pop-up shows "#1 ESP…#2…, ext4", which is the same pop-up as the first "Erase the whole disk and install Ubuntu" option. This pop-up should be fixed before release.

As I wrote previously:
"Indeed, this one is known and will be in the release notes. It's difficult to fix for 19.10 (as we actually create multiple partitions and a main ext4 partition to let partman handle the ESP, and then overwrite the ext4 partition with ZFS), so it's technically not lying :wink: It's something we will fix before getting out of the "experimental" phase."


That's not just performance, it's fixing a regression as I understand it.

How is ZFS support in Kubuntu going?

I'm linking part 2 of the current series on our work for those interested: https://didrocks.fr/2019/10/11/ubuntu-zfs-support-in-19.10-zfs-on-root/.

There will be a 3rd part describing more what ZSYS is about.


What is the advantage of ZFS over the excellent ext4?

Why doesn't any Linux distribution offer ZFS by default when it has already existed for several years?

Will ext4 remain the default file system on Ubuntu in the coming years? Will ZFS just remain an alternative, never becoming the default?

Kubuntu should be equal to Xubuntu, Ubuntu MATE and Ubuntu itself. I used those three flavours, and with respect to ZFS they are exactly the same.


Advantages of ZFS, for me:

  • It stopped my music files from being corrupted by the frequent power failures in Santiago de Los Caballeros. The same is true for all system crashes. That is thanks to COW (copy-on-write).
  • It supports built-in snapshots, which are created and deleted in a second; a rollback also only takes a few seconds. I have used it a few times, most recently to roll back my upgrade from 19.04 to 19.10, because there was not yet a VirtualBox version supporting Linux 5.3.
  • It supports compression at the dataset (folder) level. I use LZ4 compression, which approximately halves the storage needed and approximately halves the number of disk IOs needed to e.g. load an app or boot a host OS or virtual machine. For folders with archives you can use e.g. gzip levels 1 to 9.
  • Backups/replication are based on snapshots, so only the modified blocks are sent to the backup. It easily beats rsync on speed.
  • It supports all RAID options, including something comparable to RAID-6 and even an imaginary "RAID-7" (triple parity), and it avoids the write hole.
  • Since version 0.8.x (the one in Ubuntu 19.10) it supports native encryption.

Probably I'm forgetting some advantages.
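A few of the points above as commands, assuming a hypothetical dataset named rpool/USERDATA/data (the names, hosts and snapshot labels here are illustrative, not from the thread):

```shell
# Per-dataset LZ4 compression, and a check of how well it pays off
sudo zfs set compression=lz4 rpool/USERDATA/data
zfs get compression,compressratio rpool/USERDATA/data

# Snapshots and rollbacks are near-instant
sudo zfs snapshot rpool/USERDATA/data@before-upgrade
sudo zfs rollback rpool/USERDATA/data@before-upgrade

# Incremental replication sends only the blocks changed since the earlier snapshot
sudo zfs snapshot rpool/USERDATA/data@after-upgrade
sudo zfs send -i @before-upgrade rpool/USERDATA/data@after-upgrade | \
    ssh backuphost sudo zfs receive tank/backup/data
```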

Why isn't it used as the default?

  • It is more complex than ext4, so you get more facilities but also more to learn.
  • The main reason is that there are legal incompatibilities between the open-source licences of the Linux kernel and ZFS (Sun's CDDL licence). That is why ZFS is not part of the mainline Linux kernel. Canonical has ignored those legalities since Ubuntu 16.04, and so far nobody has sued them.

Quite the opposite: it has not been ignored by Canonical, but considered very seriously. For those interested, I recommend reading this blog post from Dustin Kirkland about ZFS licensing and Linux.


I tried the daily ISO of Kubuntu 19.10 released on the 12th, and its installer didn't have the option for installing on ZFS.


Hi,

I need some technical support regarding auto-importing a previously created ZFS pool; can you help me?

Before this release cycle (on 19.04), I had already started using ZFS to host my project files. It worked fine, and the file system was auto-imported/mounted after reboot without trouble.

And now comes Eoan: I've tried to install both Ubuntu and Ubuntu MATE 19.10 on a real PC, and both installed fine using the latest iso, even with the experimental ZFS root enabled. That's the good part.

However, no matter how I try to import the pool or tweak the /etc/default/zfs settings, this data pool simply refuses to cooperate: 19.10 always fails to auto-import it. Assume the partition is /dev/disk/by-id/ata-VBOX_HARDDISK_VB4749ea2c-77ee538e-part4 and the pool name is zdata. I can import and use the file system before rebooting, but after a reboot it is not auto-imported. This has become a problem for me.

The commands I've tried include these two:

sudo zpool import -a

and

sudo zpool import -d /dev/disk/by-id/ata-VBOX_HARDDISK_VB4749ea2c-77ee538e-part4 zdata

Tweaking the /etc/default/zfs settings also fails to make it auto-import.

Do you know why? How to resolve it?

On my laptop I used: sudo zpool import -f zdata
(-f forces the import, even if the pool appears to be potentially active or was last used by another system.)
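If forcing the import works but it still does not survive a reboot, one common cause (an assumption on my part, not confirmed for this machine) is that the pool is missing from the cache file that the zfs-import-cache service reads at boot:

```shell
# Import once (use -f only if the pool looks in use by another system)
sudo zpool import -f zdata
# Record the pool in the cache file consulted at boot
sudo zpool set cachefile=/etc/zfs/zpool.cache zdata
# Check that the import service is enabled and active
systemctl status zfs-import-cache.service
```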