Enhancing our ZFS support on Ubuntu 19.10 - an introduction

Yes, I was a little bit sloppy in my wording, but ZFS is not yet officially part of Linus’s Linux kernel because of these (fake?) licensing issues.

No, the license issues aren’t fake (although they are sometimes greatly exaggerated). OpenZFS is licensed under a license that is incompatible with the GPL (I don’t remember its exact name), so it can’t be part of the kernel tree. It can still be used in Linux distros, however.

Is there going to be anything like Solaris/FreeBSD beadm? That’s something I have been craving for a long time. Being able to make a backup boot environment whenever a new package/update is installed, with the possibility of going back, is a must-have for me. I have hacked it in several times, but the updates always broke it eventually.

As a long-time Solaris user I am really EXCITED to finally see ZFS support in the init scripts and can’t wait to see how it is going to evolve. I have been trying the proposed betas daily to see the progress. :slight_smile:

I will be submitting bugs, if I find any, once the 19.10 release is out.

Thanks, guys!

1 Like

This is amazing, thank you so much. Personally I think ZFS has three killer features: data integrity, snapshots and RAID.

Correct me if I’m wrong - and I hope that I am wrong - but your current partitioning will wreck the smooth ZFS experience when one decides to mirror a bootable drive. It will require a lot of extra steps and involve mdadm when you need to replace a drive.

And then you are considering moving GRUB onto the ESP - in that case, is it even possible to have EFI on software RAID?

Are you looking into ZFS RAID support right in the Ubuntu installer?

1 Like

@jetpac: Have a look at the blog posts I referenced in that thread, and you will have your answer about a beadm-like experience (named Zsys). A third blog post will detail that a little bit more.

1 Like

ZFS RAID support will come later on in the installer, once it’s out of the first experimental phase. We have those use cases in mind for moving GRUB onto the ESP as well. Don’t worry :wink:

Using the current daily-live installer I’m getting this update-grub fatal error when setting up zfs:
(replicated by opening a shell & chroot and trying the command manually)

sudo update-grub
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
cannot open 'This': no such pool

Boot fails at initramfs, but I can get in manually by this:
zpool import -R /root -N rpool
zfs mount -a
exit

(And then bpool isn’t mounting properly and I still can’t run update-grub.)

Any ideas?

Can you file a bug report please, with the content of grub.cfg and ls /etc/default/grub/ as well?
Ideally, set -x and redirect stderr to a file that you attach to the bug report.

Then just post the bug report number and we’ll follow up there, thanks
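Roughly something like this (the log path is just an example): add a set -x near the top of the grub script you suspect, run update-grub again and keep the stderr output:

sudo update-grub 2> /tmp/update-grub-trace.log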

Did you want the contents of /etc/default/grub or /etc/grub.d ?

Also it looks like /etc/grub.d/10_linux_zfs is the culprit here, as update-grub seems to die when it reaches that file.

Thx! Let’s follow up on the bug :slight_smile:

1 Like

Dear all,

After installing the brand new Ubuntu 19.10, if one wants to create a new zpool/ZFS filesystem on another disk, what’s the best practice? I mean, how does one avoid pitfalls like race conditions?

Assume I installed eoan on /dev/sda, and now I want to use /dev/sdb as another new zpool named “dpool”, with its mountpoint set to /home/[user]/projs. Is there any way to make dpool be imported after rpool? Because if not, dpool might be imported earlier than rpool, which in turn makes rpool’s import fail.
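For context, what I have in mind is roughly this (the names and options are just the example from above):

# create dpool on the second disk, with its root dataset mounted under my home
sudo zpool create -f -O mountpoint=/home/user/projs dpool /dev/sdb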

I wasn’t sure how one is supposed to officially add a user home directory to the USERDATA pool, so I wrote up this hack:

#!/bin/bash -ex
# Hack to add a user dataset under USERDATA on an Ubuntu 19.10 system.
# I have this at /usr/local/bin/zfsuseradd.sh
user="${1}"
[[ -n "$user" ]] || (echo "User not specified." && exit 1)
zfs_user_suffix=$(mount | grep rpool/USERDATA/root_ | awk '{print $1}' | sed 's/rpool\/USERDATA\/root_//')
[[ -n "$zfs_user_suffix" ]] || (echo "Can't get ZFS user suffix." && exit 1)

echo "creating rpool/USERDATA/""${user}""_""${zfs_user_suffix}"""
[[ -e /"${user}"_"${zfs_user_suffix}" ]] && (echo "mount point already exists" && exit 1)
zfs create rpool/USERDATA/"${user}"_"${zfs_user_suffix}"
zfs set mountpoint=/"${user}"_"${zfs_user_suffix}" rpool/USERDATA/"${user}"_"${zfs_user_suffix}"
rsync -haHAX /home/"${user}"/ /"${user}"_"${zfs_user_suffix}"
rm -r /home/"${user:?}"
chown "${user}"."${user}" /"${user}"_"${zfs_user_suffix}"
zfs set mountpoint=/home/"${user}" rpool/USERDATA/"${user}"_"${zfs_user_suffix}"

Is there a proper way to do this?

2 Likes

I believe this is still coming, but you should generally set all the “local” properties that the installer set for the home of the user created during installation.
Doing a grep for local on that dataset, I get:

$ zfs get all rpool/USERDATA/<user>_<suffix> | grep local
rpool/USERDATA/<user>_<suffix>  mountpoint               /home/<user>               local
rpool/USERDATA/<user>_<suffix>  canmount                 on                         local
rpool/USERDATA/<user>_<suffix>  org.zsys:bootfs-datasets rpool/ROOT/ubuntu_<ANOTHER_SUFFIX> local
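So, on a manually created dataset that would mean roughly the following (placeholders as above; the suffix values are whatever your install generated):

zfs set mountpoint=/home/<user> rpool/USERDATA/<user>_<suffix>
zfs set canmount=on rpool/USERDATA/<user>_<suffix>
zfs set org.zsys:bootfs-datasets=rpool/ROOT/ubuntu_<ANOTHER_SUFFIX> rpool/USERDATA/<user>_<suffix>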

Is there a recommended way of getting zpools other than bpool, rpool, and their children to mount at boot?

(I assume without using zpool-import-cache and similar systemd scripts?)

@didrocks: Will the new ZFS layout/installation option described in the blog posts also be available in conjunction with MAAS (at least with Ubuntu 19.10 images or newer)? That is, will we see a follow-up to the “Deploying Ubuntu root on ZFS with MAAS” post in the near future as well? :grinning:

I’m trying to introduce ZFS into my life, but before I dive into it I’d like to know how to recover my data when I eventually screw something up, so I’m being safe.

I’m simulating a crash scenario where, for example, I can’t boot the system and have to access the data somehow, possibly to make a backup.

So, I have Ubuntu 19.04 installed on my laptop SSD, but I boot from an Ubuntu 19.04 live USB.
How do I access the data on the SSD? It’s not quite clear to me. Nautilus shows bpool and a few rpool entries under ‘Other locations’, but there’s nothing mounted on /, /mnt, or /media.
sudo zpool status returns nothing while zpool status returns:

$ zpool status
  pool: bpool
 state: ONLINE
status: The pool is formatted using a legacy on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on software that does not support
        feature flags.
  scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
bpool       ONLINE       0     0     0
  sda4      ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
  sda5      ONLINE       0     0     0

errors: No known data errors

I’ve been messing around with zfs mount and zpool import, but I’ve basically gotten nowhere. Can someone please lay out the method for accessing data on that SSD via a live USB? It would also be preferable if the method were non-destructive, meaning that if the system on the SSD was previously bootable, it stays bootable after the operations performed while accessing it via the live USB.

@KristijanZic You might want to have a look at the zfsonlinux wiki; the “Ubuntu 18.04 Root on ZFS” page contains a “Troubleshooting” section which basically explains how you can mount your datasets using a different root/base directory with “zpool import -R ...” (this is what you’re missing).
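In other words, something along these lines from the live session (assuming the default layout where the root dataset is canmount=noauto; /mnt and the ubuntu_<suffix> name are placeholders for whatever your install created):

sudo zpool import -f -R /mnt rpool          # import with an alternate root, without touching the on-disk mountpoints
sudo zfs mount rpool/ROOT/ubuntu_<suffix>   # mount the root dataset explicitly (it is canmount=noauto)
sudo zfs mount -a                           # then mount the remaining datasets under /mnt
sudo zpool export rpool                     # export before rebooting so the installed system imports it cleanly again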

2 Likes

This is fine, great work! (with the additional property that @ahayes mentioned)

You only need one <pool>/USERDATA on any pool which is imported during boot, and it will be considered the current user data pool. Any new user created with “adduser”, or any tool calling it, will get the corresponding dataset created. If there are multiple of them, they will all be considered, and any newly created user will prefer the current one.
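For example, if that hook is in place, creating a user should end up with a matching dataset (the user name here is just an example):

sudo adduser alice
zfs list -r rpool/USERDATA   # expect a new rpool/USERDATA/alice_<suffix> mounted at /home/alice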

In general, I don’t think people should create any additional pools, as you can extend any pool by adding disks. At worst, add other pools only to handle persistent datasets, but not system ones (more on persistent vs. system datasets in a new blog post in the next few days).

I don’t think there are desktop images for MAAS yet. The day there are, I’m sure the option will be supported. Right now, remember this is only an experimental option and we are at the very early beginning :slight_smile:

canmount=on seems to be the default for user datasets under USERDATA. Is there any need to set it as a local property for those?

I have this for the other property:

zfs set org.zsys:bootfs-datasets=rpool/ROOT/ubuntu_"${zfs_system_suffix}" \
rpool/USERDATA/"${user}"_"${zfs_user_suffix}"

Also, creating a user in GNOME or using adduser doesn’t seem to create the dataset for the user. Is there a script somewhere which needs to be called for that to work properly?

Re: additional pools, the question was regarding other pools for persistent datasets. Doing a zpool import doesn’t seem to make those get mounted automatically at the next boot.

I’m not sure which zpool targets one should enable otherwise to make that work properly.

Rather than enable any of the upstream ZFS infrastructure which has been intentionally disabled, I’m simply using a small systemd service to mount the pools I want, and am then making that a dependency for the services that use those pools by adding zpool-local.service to the “After=” section of their service files. Obviously this is also a hack, but I’m going to wait until the dust settles and see what you guys come up with as a cleaner way to do this. (Especially since /etc/zfs/zpool.cache seems to get recreated at every boot.)

/lib/systemd/system/zpool-local.service

[Unit]
Description=Create Local ZFS mounts
# Placed in /lib/systemd/system/zpool-local.service
#After=network.target
#StartLimitIntervalSec=0
Requires=systemd-udev-settle.service
After=systemd-remount-fs.service
After=systemd-udev-settle.service

[Service]
# oneshot + RemainAfterExit so that units ordered After= this one actually wait
# for the pool imports to finish (Restart= makes no sense for a one-shot script).
Type=oneshot
RemainAfterExit=yes
User=root
# Usage: ExecStart=/usr/local/bin/mount_zfs.sh zpool1 zpool2 zpool3 ... zpoolN
ExecStart=/usr/local/bin/mount_zfs.sh

[Install]
WantedBy=multi-user.target

/usr/local/bin/mount_zfs.sh

#!/bin/bash
# Place in /usr/local/bin/mount_zfs.sh
USAGE="Usage: $0 zpool1 zpool2 zpool3 ... zpoolN"

if [ "$#" == "0" ]; then
        echo "$USAGE"
        exit 0
fi

while (( "$#" )); do
pool="${1}"
if zpool list | grep "${pool}" > /dev/null; then
    echo "${pool} already mounted."
else
    echo "Mounting ${pool}."
    zpool import "${pool}"
    if zpool list | grep "${pool}" > /dev/null; then
        echo "Mount of ${pool} failed." && exit 1
    fi
fi

shift

done

Those scripts are here too: https://gist.github.com/satmandu/4da5e900c2c80c93da38c76537291507
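If it helps anyone copying this: a unit that needs one of those pools only has to declare the dependency, e.g. with a drop-in like the following (myapp.service is just a placeholder):

# /etc/systemd/system/myapp.service.d/zpool.conf
[Unit]
Requires=zpool-local.service
After=zpool-local.service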

1 Like