Could you report a bug against ubiquity in Launchpad?
It looks like a bug was filed against grubzfs-testsuite and a patch for grub2 was created (per didrocks' post). However, I don't see the updated package in the archive; I still see 2.04-1ubuntu10 for grub2. The same thing shows when I do: apt update ; apt-cache policy grub2
Would you still like for me to file a bug against ubiquity?
And yes, I am using a system that is using EFI + Secure Boot.
You may be on a different mirror, so you need to wait for it to sync with the main archive.
No need to file a bug, as you confirm you are using EFI + SB, but please let us know here once you get a chance to install the update whether it fixes it! (or not ;))
I used the latest daily iso, and that worked! 20191010.2
I used the Try Ubuntu without Installing option. Opened ubiquity and installed. I opted to use a signed shim, and used a password to load the shim into the system's EFI. Once installed, I selected the option to reboot right away (instead of the Continue Testing option).
Once I was loaded into the newly created rpool, I ran update-grub, and no major issues were found.
I think my particular issue is now resolved. Thanks so much for looking into this!!!
Zsys does not create snapshots on my system, so today I created my own snapshot at the (I hope) right level in the dataset hierarchy:
rpool/ROOT/ubuntu_2gm468@191010
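For reference, creating it came down to a single command (dataset name as above; this needs root on a ZFS system):

```shell
# Create a snapshot of the root dataset under the name 191010
sudo zfs snapshot rpool/ROOT/ubuntu_2gm468@191010

# Confirm the snapshot exists
sudo zfs list -t snapshot rpool/ROOT/ubuntu_2gm468
```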
Afterwards I saw the history entry in GRUB again and thought, let's try it: the entry with "Revert system".
Afterwards, a new Ubuntu had been added to the system:
zfs list displays:
rpool
rpool/ROOT
rpool/ROOT/ubuntu_2gm468
rpool/ROOT/ubuntu_2gm468/srv
…
rpool/ROOT/ubuntu_2gm468/var/spool
rpool/ROOT/ubuntu_2gm468/var/www
rpool/ROOT/ubuntu_tc4d0a
Note the last entry, a new system. Looking at its size, it looks like a complete copy.
And my snapshot disappeared; now there is:
rpool/ROOT/ubuntu_tc4d0a@191010
I have no clue what happened, but I'm afraid that I lost some control of the system.
I assume rpool/ROOT/ubuntu_2gm468 is booted now and rpool/ROOT/ubuntu_tc4d0a is the backup from before the grub revert action?
Can I destroy rpool/ROOT/ubuntu_tc4d0a if I want to continue using the reverted system?
I assume the original snapshot is gone in any case due to the system revert, or do I still have a backup in rpool/ROOT/ubuntu_tc4d0a@191010?
Can I undo the GRUB revert, e.g. by renaming tc4d0a?
You need to set them on every child dataset and also create them on the USERDATA/* datasets. However, since we haven't shipped the snapshot feature in zsys (yet), you are mostly on your own until we support this.
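As a sketch of what that would mean by hand (the user_abcdef dataset name is hypothetical; check zfs list for your actual names):

```shell
# -r creates the same snapshot recursively on every child dataset
sudo zfs snapshot -r rpool/USERDATA/user_abcdef@mysnap

# And likewise on the system datasets for a complete state
sudo zfs snapshot -r rpool/ROOT/ubuntu_2gm468@mysnap
```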
No, the new set of datasets has been promoted. The snapshots have thus moved to the newly promoted set. But as you are missing some datasets by going halfway without zsys, it's probably better to revert the revert: after an update-grub, you should see a ubuntu_tc4d0a entry in history.
Then, you can delete your snapshot and cloned datasets.
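As a rough sketch, and only after verifying with zfs list which dataset tree you are no longer booting (the tree named below is just the example from the earlier post):

```shell
# Inspect the full tree, including snapshots, before removing anything
sudo zfs list -r -t all rpool/ROOT

# Recursively destroy the unused clone tree and its snapshots
sudo zfs destroy -r rpool/ROOT/ubuntu_tc4d0a
```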
Then, once we have an official "snapshot" command in zsys (which will come soon via a PPA), you will be able to follow the supported path.
I've tested the Ubuntu MATE Oct. 11 daily ISO via 4 different qemu/kvm configurations, i.e., Q35 + BIOS / OVMF_CODE.fd / OVMF_CODE.secboot.fd / OVMF_CODE.ms.fd; all were successfully installed and booted.
However, I did spot a small issue: after choosing the experimental ZFS feature, the next pop-up shows "#1 ESP…#2…, ext4", which is the same pop-up as for the first "Erase the whole disk and install Ubuntu" option. This pop-up should be fixed before release.
As I wrote previously:
"Indeed, this one is known and will be in the release notes. It's difficult to fix for 19.10 (as we actually create multiple partitions and a main ext4 partition to let partman handle the ESP, and then overwrite the ext4 partition with ZFS), so it's technically not lying. It's something we will fix before getting out of the 'experimental' phase."
What is the advantage of ZFS over the excellent ext4?
Why doesn't any Linux distribution offer ZFS by default when it has already existed for several years?
Will ext4 remain the default file system under Ubuntu in the coming years? Will ZFS just remain an alternative and never become the default?
It stopped my music files from being corrupted by the frequent power failures in Santiago de los Caballeros. The same holds for system crashes in general. That is thanks to COW (copy-on-write).
It supports built-in snapshots which are created and deleted in a second. A rollback also only takes a few seconds. I have used it a few times, most recently when I rolled back my upgrade from 19.04 to 19.10, because there was not yet a VirtualBox version supporting Linux 5.3.
It supports compression at the dataset (folder) level. I use LZ4 compression, which roughly halves the storage needed and roughly halves the number of disk IOs needed to e.g. load an app or boot a host OS or virtual machine. For folders with archives you can use e.g. gzip levels 1 to 9.
Backups/replication are based on snapshots, and because of that only the modified blocks are sent to the backup. So it easily beats rsync on speed.
It supports all RAID options, including something comparable to RAID-6 and even an imaginary RAID-7. It avoids the write hole.
Since version 0.8.x (in Ubuntu 19.10) it supports native encryption.
Probably I'm forgetting some advantages.
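A few of these advantages translate directly into commands. This is only an illustrative sketch: the tank pool and its datasets are made-up names, the backuphost is hypothetical, and everything needs root on a machine with ZFS installed:

```shell
# Snapshots and rollback: near-instant, thanks to copy-on-write
sudo zfs snapshot tank/home@before-upgrade
sudo zfs rollback tank/home@before-upgrade

# Per-dataset compression: enable LZ4 and check the achieved ratio
sudo zfs set compression=lz4 tank/home
sudo zfs get compressratio tank/home

# Stronger compression for archive folders
sudo zfs set compression=gzip-9 tank/archive

# Incremental replication: only blocks changed since @before-upgrade
# are sent, which is why it beats rsync on speed
sudo zfs snapshot tank/home@today
sudo zfs send -i tank/home@before-upgrade tank/home@today | \
  ssh backuphost zfs receive -F backuppool/home

# Native encryption (ZFS 0.8+): per-dataset, passphrase-protected
sudo zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure
```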
Why isn't it used as the default?
It is more complex than ext4, so you get more facilities but also more to learn.
The main reason is that there are some legal incompatibilities between the open-source licences of Linux and ZFS (Sun's open-source licence). That is why ZFS is not part of the Linux kernel. Canonical has set those legalities aside since Ubuntu 16.04, and so far nobody has sued them.
Quite the opposite: it has not been ignored by Canonical but considered very seriously. For those interested, I recommend reading this blog post from Dustin Kirkland about ZFS licensing and Linux.
I need some tech support regarding auto-importing a previously created ZFS partition here; can you help me?
Before this release cycle (19.04), I had already started using ZFS to host my project files. It worked fine, and the file system was auto-imported/mounted after reboot without trouble.
And now comes Eoan: I've tried to install both Ubuntu and Ubuntu MATE 19.10 on a real PC, and both installed fine using the latest ISO, even with the experimental ZFS root enabled. That's the good part.
However, no matter how I try to import the pool or tweak the /etc/default/zfs settings, this data partition simply refuses to cooperate: 19.10 always fails to auto-import it. Assume the partition is /dev/disk/by-id/ata-VBOX_HARDDISK_VB4749ea2c-77ee538e-part4 and the pool name is zdata. I can import it and use the filesystem before reboot, but after reboot it is not auto-imported. This has become a problem for me.
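For reference, the manual import that works until the next reboot looks like the first command below; the cachefile refresh is a common suggestion for making imports persistent at boot, though I don't know yet whether it applies to my case:

```shell
# Manual import, scanning the by-id device directory (works until reboot)
sudo zpool import -d /dev/disk/by-id zdata

# Record the pool in the default cachefile, which the
# zfs-import-cache service reads at boot time
sudo zpool set cachefile=/etc/zfs/zpool.cache zdata
```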