Enhancing our ZFS support on Ubuntu 19.10 - an introduction

Can you file a bug report please, with the content of grub.cfg and ls /etc/default/grub as well?
Ideally, you can set -x and redirect stderr to a file that you attach to the bug report.

Then, just post the bug report number here and we’ll follow up there, thanks

Did you want the contents of /etc/default/grub or /etc/grub.d?

Also, it looks like /etc/grub.d/10_linux_zfs is the culprit here, as update-grub seems to be dying when it hits that file.
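
For reference, something like this should capture the trace and stderr that were asked for (the log file paths are arbitrary, and running the snippet on its own won’t have quite the same environment update-grub gives it):

# capture update-grub's stderr as requested
sudo update-grub 2> /tmp/update-grub.err
# optionally trace the suspect snippet directly (it is itself a shell script)
sudo sh -x /etc/grub.d/10_linux_zfs 2> /tmp/10_linux_zfs-trace.log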

Thx! Let’s follow up on the bug :slight_smile:

Dear all,

After installing the brand new Ubuntu 19.10, if one wants to create a new zpool/ZFS filesystem on another disk, what’s the best practice? I mean, how does one avoid pitfalls like race conditions?

Assume I installed eoan on /dev/sda, and now I want to use /dev/sdb for a new zpool named “dpool”, with its mountpoint set to /home/[user]/projs. Is there any way to make sure dpool is imported after rpool? If not, dpool might be imported earlier than rpool, which in turn makes rpool’s import fail.

I wasn’t sure how one is supposed to officially add a user home directory to the USERDATA pool, so I wrote up this hack:

#!/bin/bash -ex
# Hack to add a user's USERDATA dataset to an Ubuntu 19.10 system.
# I have this at /usr/local/bin/zfsuseradd.sh
user="${1}"
[[ -n "$user" ]] || (echo "User not specified." && exit 1)
# Derive the per-install suffix from the already-mounted root_ dataset.
zfs_user_suffix=$(mount | grep rpool/USERDATA/root_ | awk '{print $1}' | sed 's/rpool\/USERDATA\/root_//')
[[ -n "$zfs_user_suffix" ]] || (echo "Can't get ZFS user suffix." && exit 1)

echo "creating rpool/USERDATA/${user}_${zfs_user_suffix}"
[[ -e /"${user}"_"${zfs_user_suffix}" ]] && (echo "mount point already exists" && exit 1)
# Create the dataset on a temporary mountpoint, copy the home over, then move it into place.
zfs create rpool/USERDATA/"${user}"_"${zfs_user_suffix}"
zfs set mountpoint=/"${user}"_"${zfs_user_suffix}" rpool/USERDATA/"${user}"_"${zfs_user_suffix}"
rsync -haHAX /home/"${user}"/ /"${user}"_"${zfs_user_suffix}"
rm -r /home/"${user:?}"
chown "${user}:${user}" /"${user}"_"${zfs_user_suffix}"
zfs set mountpoint=/home/"${user}" rpool/USERDATA/"${user}"_"${zfs_user_suffix}"
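
It’s meant to be run as root with the username to migrate as its only argument; “alice” below is just an example:

sudo /usr/local/bin/zfsuseradd.sh alice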

Is there a proper way to do this?

I believe a proper way to do this is still coming, but for now you should generally set all the “local” properties that the installer set on the home of the user created during installation.
Doing a grep for local on that dataset, I get:

$ zfs get all rpool/USERDATA/<user>_<suffix> | grep local
rpool/USERDATA/<user>_<suffix>  mountpoint               /home/<user>               local
rpool/USERDATA/<user>_<suffix>  canmount                 on                         local
rpool/USERDATA/<user>_<suffix>  org.zsys:bootfs-datasets rpool/ROOT/ubuntu_<ANOTHER_SUFFIX> local
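
So, as a rough sketch, a new home dataset created by hand should presumably get those same three properties set at creation time (<user>, <suffix> and <ANOTHER_SUFFIX> stand for the values on your own system):

# note: if /home/<user> already contains data, you still need to shuffle it
# aside first, as in the script above, or the mount will fail
zfs create -o canmount=on \
           -o mountpoint=/home/<user> \
           -o org.zsys:bootfs-datasets=rpool/ROOT/ubuntu_<ANOTHER_SUFFIX> \
           rpool/USERDATA/<user>_<suffix>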

Is there a recommended way of getting zpools other than bpool and rpool (and their children) to mount at boot?

(I assume without using the zfs-import-cache and similar systemd scripts?)

@didrocks: Will the new ZFS layout/installation option described in the blog posts also be available in conjunction with MAAS (at least in conjunction with Ubuntu 19.10 images or newer), that is, will we see a follow-up for the “Deploying Ubuntu root on ZFS with MAAS” post in the near future as well? :grinning:

I’m trying to introduce ZFS into my life, but before I dive into it I’d like to know how to recover my data when I eventually screw something up, so I’m playing it safe.

I’m simulating a crash scenario where, for example, I can’t boot the system and have to access the data somehow, possibly to make a backup.

So, I have Ubuntu 19.04 installed on my laptop’s SSD, but I boot from an Ubuntu 19.04 Live USB.
How do I access the data from the SSD? It’s not quite clear to me. Nautilus recognizes bpool and a few rpool entries under ‘Other Locations’, but there’s nothing mounted on /, /mnt, or /media.
sudo zpool status returns nothing while zpool status returns:

$ zpool status
  pool: bpool
 state: ONLINE
status: The pool is formatted using a legacy on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on software that does not support
        feature flags.
  scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
bpool       ONLINE       0     0     0
  sda4      ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
  sda5      ONLINE       0     0     0

errors: No known data errors

I’ve been messing around with zfs mount and zpool import but I basically got nowhere. Can someone please lay out the method to access data from that SSD via the Live USB? It would also be preferable if the method were non-destructive, meaning that if the system on the SSD was previously bootable, it stays bootable after the operations performed while accessing it via the Live USB.

@KristijanZic You might want to have a look at the zfsonlinux Wiki; the “Ubuntu 18.04 Root on ZFS” page contains a “Troubleshooting” section which basically explains how you can mount your datasets under a different root/base directory with “zpool import -R ...” (this is what you’re missing).
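
In short, the procedure from the live session looks roughly like this; a sketch only, assuming the rpool from your output above, /mnt as an arbitrary staging directory, and <suffix> as a placeholder for your install’s ID (zfs list -r rpool/ROOT will show it):

# Import the root pool without mounting anything yet, with /mnt as an alternate root
# (-f may be needed if the pool was not cleanly exported by the installed system):
sudo zpool import -f -N -R /mnt rpool
# The root dataset is canmount=noauto on this layout, so mount it first, then the rest:
sudo zfs mount rpool/ROOT/ubuntu_<suffix>
sudo zfs mount -a
# Copy out whatever you need from under /mnt, then export cleanly so the installed
# system stays bootable and can re-import the pool on its next boot:
sudo zpool export rpool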

This is fine, great work! (with the additional property that @ahayes mentioned)

You only need one <pool>/USERDATA on any pool which is imported during boot, and it will be considered the current user data pool. Any new user created with “adduser”, or any tool calling it, will get the corresponding dataset created. If there are multiple of them, they will all be considered, and for any newly created user the current one will be preferred.

In general I don’t think people should create additional pools, as you can extend any pool by adding disks. At worst, add other pools to handle only persistent datasets, but not system ones (more on persistent vs system datasets in a new blog post in the next few days).
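
For the “dpool” question earlier in the thread, those two options would look roughly like this; a sketch only, with /dev/sdb and [user] as placeholders from that question:

# Option 1: extend an existing pool with another disk (hard to undo, so be sure):
sudo zpool add rpool /dev/sdb

# Option 2: a separate pool holding only persistent (non-system) data:
sudo zpool create -m /home/[user]/projs dpool /dev/sdb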

I don’t think there are desktop images for MAAS yet. The day there are, I’m sure the option will be supported. Right now, remember this is only an experimental option and we are at the very early beginning :slight_smile:

canmount=on seems to be the default for user datasets under USERDATA. Is there any need to set it as a local property for those?

I have this for the other property:

zfs set org.zsys:bootfs-datasets=rpool/ROOT/ubuntu_"${zfs_system_suffix}" \
rpool/USERDATA/"${user}"_"${zfs_user_suffix}"

Also, creating a user in GNOME or using adduser doesn’t seem to create the dataset for the user. Is there a script somewhere which needs to be called for that to work properly?

Re: additional pools, the question was regarding other pools for persistent datasets. Doing “zpool import” doesn’t seem to make those pools mount automatically at the next boot.

I’m not sure which zpool targets one should enable otherwise to make that work properly.

Rather than enable any of the ZFS upstream infrastructure which has been intentionally disabled, I’m simply using a small systemd service to mount the pools I want, and am then making that a dependency for services that use those pools by adding zpool-local.service to the “After=” section of their unit files. Obviously this is also a hack, but I’m going to wait until the dust settles and see what you guys come up with as a cleaner way to do this. (Especially since /etc/zfs/zpool.cache seems to get recreated at every boot.)

/lib/systemd/system/zpool-local.service

[Unit]
Description=Create Local ZFS mounts
# Placed in /lib/systemd/system/zpool-local.service
#After=network.target
#StartLimitIntervalSec=0
Requires=systemd-udev-settle.service
After=systemd-remount-fs.service
After=systemd-udev-settle.service

[Service]
# oneshot + RemainAfterExit so that units ordered After= this one actually wait
# until the pools have been imported (the script runs once and exits).
Type=oneshot
RemainAfterExit=yes
User=root
# Usage: ExecStart=/usr/local/bin/mount_zfs.sh zpool1 zpool2 zpool3 ... zpoolN
ExecStart=/usr/local/bin/mount_zfs.sh

[Install]
WantedBy=multi-user.target

/usr/local/bin/mount_zfs.sh

#!/bin/bash
# Place in /usr/local/bin/mount_zfs.sh
USAGE="Usage: $0 zpool1 zpool2 zpool3 ... zpoolN"

if [ "$#" == "0" ]; then
    echo "$USAGE"
    exit 0
fi

while (( "$#" )); do
    pool="${1}"
    # Exact-match the pool name against the list of already-imported pools.
    if zpool list -H -o name | grep -qx "${pool}"; then
        echo "${pool} already imported."
    else
        echo "Importing ${pool}."
        zpool import "${pool}"
        if ! zpool list -H -o name | grep -qx "${pool}"; then
            echo "Import of ${pool} failed." && exit 1
        fi
    fi
    shift
done
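
For completeness, after dropping those two files in place and listing the pools on the ExecStart= line, enabling it is just the usual:

sudo systemctl daemon-reload
sudo systemctl enable zpool-local.service

and any service that needs those pools gets an After=zpool-local.service line in its [Unit] section, as mentioned above.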

Those scripts are here too: https://gist.github.com/satmandu/4da5e900c2c80c93da38c76537291507

That’s odd, my USERDATA dataset (and I mean the literal USERDATA one, which acts as a container dataset) has canmount=off. I experienced some odd behavior regarding this, as I set this property to on in my dataset but it kept being switched to noauto after a reboot. I filed #1849179 about this.

Yes, you need to have ZSYS installed (apt install zsys) to get the annotations on any new user (contrary to the default one, which is hardcoded in the installer). Those will be needed once we ship it by default, as zsys needs those annotations to be able to roll back system data, or system + user data, and to track those relationships.

Note that you should change the prefix from org.zsys to com.ubuntu.zsys (the final 19.10 release has com.ubuntu.zsys, earlier builds had org.zsys).
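
So the zfs set line quoted earlier in the thread would presumably become, with the final prefix (same placeholder variables as there):

# same property as before, just with the com.ubuntu.zsys prefix
zfs set com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_"${zfs_system_suffix}" \
    rpool/USERDATA/"${user}"_"${zfs_user_suffix}"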

That should work by default though (I tested this locally a week ago, I’ll retry). But any locally imported pools should be reimported on reboot.

I had my ZFS user account script setting canmount=noauto on the user datasets, which made them not come up at all at boot. And I couldn’t get a revert to canmount=on to stick. I ended up reinstalling and just not setting canmount at all on the USERDATA datasets I was creating.

I figured I was messing around with zfs systemd services and must have messed something up myself.

Disabling my zpool-local service (which had imported my local non-bpool/rpool zpools) and rebooting resulted in none of my local non-bpool/rpool zpools importing at boot. I have zsys installed.

Do you mind opening a bug against zsys please (https://github.com/ubuntu/zsys)? I’ll give it another try.

Bug opened here: https://bugs.launchpad.net/ubuntu/+source/zsys/+bug/1849522
