Monday I will have an exciting day, since I should receive my 512GB NVMe SSD from Newegg. I will put the ZFS experimental system on it, but I do NOT want to use the WHOLE DISK.
At midnight I expect the system storage to look like:
PLANNED END RESULT
The 512GB Silicon Power NVMe SSD (sda):
- ext partition + bpool, 2.1GB: the standard Ubuntu stuff as generated by the installer.
- rpool, 37.9GB: the standard Ubuntu pool, but REDUCED in size.
- free space, 15GB: a fallback to restore the 55GB of the current ZFS installation, if needed.
- L2ARC for an HDD datapool dpool, say 15GB (or 30GB); no ZIL/LOG, since that pool hardly uses sync writes.
- vpool, 442GB: storage for my virtual machines.
The HDDs, 500GB Seagate (sdc) and 1TB WD-Black (sdb):
- archives pool, unchanged, 584GB at sdb4.
- Ubuntu 19.10 on ext4, 18GB at sdc5, copied from the WD-Black to the Seagate.
- swap, 18GB at sdc6, to try hibernation in the future.
- dpool, 870GB striped (454 + 416 GB) with one dataset with copies=2.
Friday, I did a minimal install of Ubuntu 19.10 on a 40GB USB disk (IDE). The rpool has ashift=12, so that is OK. The system functions as a VirtualBox host and only needs system utilities and Firefox.
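The alignment the installer chose can be verified with a quick check like the one below; rpool is the pool name from the standard install, and if zpool get does not list ashift on a given ZFS version, zdb shows it:

    # report the ashift the pool was created with (12 = 4K sectors)
    zpool get ashift rpool
    # lower-level alternative
    zdb -C rpool | grep ashift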
I booted the installation from the USB HDD. I prepared the USB system as far as possible: conky with tricks for ZFS, zram/lz4; installed hddtemp and lm-sensors. That system will be moved to the NVMe SSD with “sudo dd if=/dev/sdd of=/dev/sda”. It works; I have tested it in VirtualBox.
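In full, the clone command would look something like the sketch below; it assumes the USB disk really is sdd and the NVMe really is sda, so the device names should be double-checked with lsblk first:

    # double-check the device names, a mix-up here destroys data
    lsblk -o NAME,SIZE,MODEL
    # clone the whole USB disk to the NVMe disk
    sudo dd if=/dev/sdd of=/dev/sda bs=4M status=progress conv=fsync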
Saturday is my weekly backup day, so I will incrementally back up all datasets (send/receive). I will create two additional partition image files (GNOME Disk Utility) for my current ZFS install and for the ext4 install. If something goes terribly wrong, I can restore my current ZFS system's partition. I will reduce the number of snapshots for each dataset to two.
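Per pool, such an incremental send/receive boils down to something like the sketch below; the snapshot names and the backup pool name are placeholders, not my actual ones:

    # take a new recursive snapshot of the pool to back up
    zfs snapshot -r rpool@backup-new
    # send everything changed since the previous backup snapshot
    # and receive it into a pool named "backup" on the backup disk
    zfs send -R -i rpool@backup-old rpool@backup-new | zfs receive -Fdu backup/rpool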
Sunday I will use the new ZFS 0.8 device removal command (zpool remove) and remove a 320GB laptop disk from its ZFS datapools; a sketch of the commands follows the step list below.
- destroy ZFS caches and logs, export datapools;
- physically insert the NVME disk;
- physically remove the SATA-SSD (128GB) and laptop HDD (320GB);
- boot from ext4; check drive assignment sda (NVMe), sdb (Seagate) and sdc (WD);
- move the installation from USB to NVMe: “sudo dd if=/dev/sdd of=/dev/sda”;
- remove USB; update grub and reboot from NVME;
- create other NVME partitions with gnome-disk-utility;
- create vpool datapool and datasets;
- move the virtual machines to vpool on the NVME;
- delete the old dpool and vpool partitions.
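For reference, the device removal and the later vpool creation would look roughly like this; the device paths, partition numbers and dataset names are placeholders, not my actual ones:

    # evacuate and remove the 320GB laptop disk from a pool (repeat for each pool using it)
    sudo zpool remove dpool /dev/disk/by-id/ata-LAPTOP-320GB-example-part1
    # removal runs in the background; check progress with:
    zpool status dpool

    # later: create the VM pool on the new NVMe partition and a first dataset
    sudo zpool create -o ashift=12 vpool /dev/disk/by-id/nvme-SiliconPower-example-part5
    sudo zfs create -o compression=lz4 vpool/VMS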
Reorganize the HDD partitions as indicated and restore dpool from the backup over my 1 Gbps link.
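Restoring dpool over the network can be a send/receive piped through ssh; the backup host name and snapshot name below are placeholders:

    # pull the backed-up dpool from the backup machine over the 1 Gbps link
    ssh backuphost "zfs send -R backup/dpool@backup-new" | sudo zfs receive -Fu dpool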
At the beginning of December I will try dual boot by adding Mate to the existing bpool and rpool. This time I will install Mate on the USB disk and send/receive the contents of bpool/BOOT/Ubuntu, rpool/ROOT/Ubuntu and rpool/USERDATA/ to the existing bpool and rpool, update grub and see what happens.
RESULTS till THURSDAY:
Most of the operation went as planned; however, after I moved the system from the USB disk to the NVMe disk, the system did not want to boot from the NVMe, not even after running update-grub. I think there is a difference between booting from SATA and NVMe, so I went to plan B and installed the system from scratch on the NVMe disk.
I introduced two datasets for my virtual machines, rpool/VBOX and rpool/VKVM, and moved my VMs to those datasets.
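Creating those datasets comes down to something like the lines below; the properties shown are only an illustration of what one might set for VM images, not necessarily what I used:

    # datasets for VirtualBox and KVM images; properties are illustrative
    sudo zfs create -o recordsize=64K -o compression=lz4 rpool/VBOX
    sudo zfs create -o recordsize=64K -o compression=lz4 rpool/VKVM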
I’m happy with the end result! The system is really fast, and host and VMs boot twice as fast as before. In the past the host booted from the SATA SSD and the VMs from the L2ARC and 3 striped HDDs.
I measured the disk throughput in the VMs with CrystalDiskMark and consistently got ~900MB/s in Win 7. In gnome-disk-utility I got ~620MB/s the first time, and the following measurements were ~1100MB/s.
I can’t yet explain those differences, but I like the disk throughput in my VMs.
I noticed that by default Ubuntu only added a swap partition of 2GB, so I also added the /dev/sdb swap partition of the ext4 system and gave it a lower priority; now my total swap is 18GB.
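Activating that second swap partition with a lower priority can be done with swapon or via /etc/fstab; the partition number below is an assumption, the real one may differ:

    # activate the extra swap partition with a lower priority than the default swap
    sudo swapon -p 1 /dev/sdb6
    # or permanently, as a line in /etc/fstab:
    # /dev/sdb6  none  swap  sw,pri=1  0  0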
I have added 4 scripts: one to create a snapshot of all system datasets, one to destroy those snapshots, and one to roll back those snapshots. The 4th script changes the canmount property of all system datasets, to allow updating ZFS in the ext4 system too. I set everything to canmount=noauto, but I had to change it to “on” for the userdata, apt and dpkg related datasets. We will see how that develops in the future.
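As an illustration, the snapshot script can be as simple as the sketch below; the snapshot name and the list of pools are placeholders, the real scripts enumerate my actual system datasets:

    #!/bin/bash
    # create a recursive, date-stamped snapshot of the system pools
    SNAP="system-$(date +%Y%m%d)"
    for pool in bpool rpool; do
        zfs snapshot -r "${pool}@${SNAP}"
    done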