ZFS focus on Ubuntu 20.04 LTS blog posts

If renaming a manual save to autozsys_(some id) works, the feature request would be to have a zsysctl command that does this for me.
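
For what it's worth, a sketch of what that rename might look like today at the raw ZFS level (purely placeholder names, and assuming the rename trick works at all):

    # Purely illustrative: turn a manual save into an autozsys_-prefixed one so
    # the GC policy manages it. Dataset and snapshot names are placeholders; the
    # real naming on your system will differ.
    zfs rename -r rpool/ROOT/ubuntu_abc123@mysave_e2b1f3 rpool/ROOT/ubuntu_abc123@autozsys_e2b1f3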

Note that in the example I gave, the save should not be removed yet, as it is one of the three saves the policy says to keep in that time interval, and so gc --all would not remove it either. Removing it manually would be bad because it would leave a large time gap. So I want to say, “I have no special interest in this snapshot anymore, keep it as long as the GC policy says to keep it”.

For “pin”/“promote”, I would only revert in grub if the machine itself had become broken. If instead I messed up something in a project I was working on, I would much rather keep the machine as is, and repair this by hand based on the last known good state. But if that happened near the end of the day I do need to pin the last known good state, so I can work on that tomorrow.

I think this is something that could be added (even if useful only for a niche case); mind filing a bug against the upstream repo? It seems an independent enough feature to open up to external contributors if you are interested :slight_smile:

I have no experience with Go, but I created an issue.

Today is Tuesday, meaning it's OpenZFS on Ubuntu 20.04 LTS blog post day! This time we cover ZSys for system administrators. Hope you will enjoy it! :slight_smile:

The series of blog posts has been useful, keep it up.

It looks like zsysctl service dump is the entry point for tools that want to process and display data from zsys. I’m thinking of perhaps a cal -y that shows where (when) your saves are, or something that can quickly list older versions of a file and perhaps diff with them.

Would it be possible to have data use information in there as well? It would of course be possible to correlate with the output of zfs list -o space -t all, but it would be quite convenient if that data was already in the JSON.
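
For now, a minimal sketch of the manual correlation I mean, using only those two commands:

    # Dump ZSys state data as JSON for external tooling, and grab per-dataset
    # space figures from ZFS itself; correlating the two is left to your script.
    zsysctl service dump > zsys-dump.json
    zfs list -o space -t all -r rpool > rpool-space.txt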

Thanks a lot (next one coming in minutes :slight_smile:)

We have to balance the number of properties we fetch from ZFS, because the Go → C → syscall layers are slow (one call per property per dataset) and create performance issues. However, this is something we definitely keep in mind for once that performance part is nailed.

Calculating the exact space taken by a snapshot is challenging (removing a snapshot may simply make the next one the retainer of its blocks, and there is no libzfs API to reflect that). We will also tackle this work when implementing a zsysctl state reclaim command, which will offer interesting ranges of states to remove if the user wants to win back some disk space.
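
Until that lands, one way to estimate how much a state (or a contiguous range of snapshots) would give back is a dry-run destroy; a rough sketch with placeholder names:

    # -n (dry run) plus -v prints a "would reclaim ..." estimate without
    # destroying anything; dataset and snapshot names are placeholders.
    zfs destroy -nv rpool/ROOT/ubuntu_abc123@autozsys_aaaa
    # The % syntax estimates a whole range of snapshots at once.
    zfs destroy -nv rpool/ROOT/ubuntu_abc123@autozsys_aaaa%autozsys_bbbb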

New article on our OpenZFS implementation on Ubuntu 20.04 LTS: today, let's explore the ZSys partition layout!

The ZFS layout has several datasets that are containers outside the regular file system hierarchy, and are not meant to ever be mounted. Some of these have their mountpoint property set to “none” while others do have a path set.

Is there a reason for this, or is it just a cosmetic difference?

Is it possible to just set the mountpoint for every container dataset to “none”? This would help communicate that they are in fact purely containers.

Currently, these have “none”:

  • bpool/BOOT
  • rpool/ROOT

And these have a path, but I think they should also be “none”:

  • bpool has /boot
  • rpool has /
  • rpool/USERDATA has /

(Source of info: zfs list)
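
For reference, the listing that shows this (just zfs list with explicit columns):

    # Inspect mountpoint and canmount on the container datasets listed above.
    zfs list -o name,canmount,mountpoint bpool bpool/BOOT rpool rpool/ROOT rpool/USERDATA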

This is precisely what the next blog post is all about :slight_smile: See you on Tuesday!
(Note: I think we should set rpool/USERDATA to none though; it was set to /home before, but with the _uuid suffixes that's no longer a good choice.)

An issue I have been having, and I keep looking at and trying standard ZFS directions for, is this: I install with ZFS to my NVMe drive, but since it is small, I want to move USERDATA (i.e. /home) to another zpool, a pair of HDDs I have configured as a mirror. Call that zpool hpool, for example. What I want to do is move rpool/USERDATA to hpool/USERDATA so that all users have their home on the mirrored zpool hpool instead of on the smaller NVMe drive.

If that is next week's blog post, great.

There will be no step-by-step guide for this, but you will understand the mechanism linking user datasets to system ones.
Basically, the idea is to move all datasets in rpool/USERDATA to hpool/USERDATA, keeping the same properties and user properties. I think some zfs send/recv can help you with that. Good luck :slight_smile:
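
A rough sketch of what that could look like, assuming hpool already exists and the users are logged out during the migration (dataset and snapshot names are illustrative; verify the ZSys user properties came across before removing anything):

    # Recursive snapshot of everything under USERDATA, then replicate it.
    # -R keeps descendant datasets and their (user) properties; -u avoids
    # mounting the copies on top of the live /home during the transfer.
    zfs snapshot -r rpool/USERDATA@migrate
    zfs send -R rpool/USERDATA@migrate | zfs recv -u hpool/USERDATA
    # After checking the copy, keep the old per-user datasets from mounting
    # (placeholder name) before rebooting; destroy them once you are happy.
    zfs set canmount=off rpool/USERDATA/user1_abc123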

Today's blog post is dedicated to the reasoning behind the ZSys dataset layout! Enjoy :slight_smile:

Hi Didier, first of all thanks for the blog posts. I was eagerly awaiting the post about the datasets; however, after reading it I'm still a bit confused about how to deal with my particular situation. I have replaced the regular dataset for my user with an encrypted equivalent and created a new encrypted dataset mounted as /home/myuser/Downloads. I would now like to make the Downloads dataset persistent. Do I have to create a new dataset and nest it directly under rpool, or can I somehow move an existing dataset from under rpool/USERDATA/myuser/downloads directly onto rpool?

Thanks a lot for the feedback!

I would now like to make the Downloads dataset persistent. Do I have to create a new dataset and nest it directly under rpool, or can I somehow move an existing dataset from under rpool/USERDATA/myuser/downloads directly onto rpool?

In theory, you just move your dataset under rpool as rpool/downloads. You should be able to use zfs rename to do it directly; then ensure the mountpoint property (zfs set mountpoint=) is set to the correct path.
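
Concretely, a minimal sketch with the names from your question (snapshot or back up first, just in case):

    # Move the dataset out of USERDATA so ZSys no longer treats it as user state,
    # then make sure it still mounts at the same place.
    zfs rename rpool/USERDATA/myuser/downloads rpool/downloads
    zfs set mountpoint=/home/myuser/Downloads rpool/downloads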

However, I'm wondering whether the upstream ZFS generator won't create an issue with this:

  • your home dataset is encrypted (until you log into your session, I think, correct?)
  • you want the downloads subdirectory to be mounted only once the home dataset itself is mounted.

I’m pretty sure this case isn’t dealt with, but I would be glad for you to try this out and report the results if you don’t mind :slight_smile:

I noticed that ZFS doesn't show remaining disk space. Is this normal behavior or should I file a bug?

zfs list should show it dataset by dataset, but there are indeed some discrepancies with df. I think that's worth an upstream bug.
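
If you want to compare the two views side by side, something like:

    # ZFS's own accounting for the root dataset (placeholder name) vs. what df
    # reports for the same filesystem.
    zfs list -o name,used,avail,refer rpool/ROOT/ubuntu_abc123
    df -h /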

I really appreciate your reply Didier but I’m afraid that I can’t test the rename capability anymore. I’ve created a new rpool/DOWNLOADS/myuser dataset while waiting for your message and sure enough I’ve stumbled upon the overlay issue :grin:

Yes you are correct, my dataset gets decrypted upon login. I followed this post: Linux homedir encryption while setting up the encryption, and I fixed the overlay issue by piping the zfs get canmount -s local -H -o name,value command in mount-zfs-homedir through a reversed sort (it works with my layout; it might not work for others).
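
For reference, the change boils down to this one line inside mount-zfs-homedir (only the sort is added; as said, it works with my layout but may not with others):

    # Feed the locally-set canmount list through a reversed sort before the
    # script mounts the datasets, so the order avoids the overlay error.
    zfs get canmount -s local -H -o name,value | sort -r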

Thanks again for the blog posts and all the work you guys are doing to get the most out of ZFS on Linux!

Any plan to support (encrypted) swap & ZFS off-the-shelf? I managed to set it up on my end but would love to see that become standard. Happy to contribute (modestly).
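
In case it helps others, one common manual approach (one way to do it; the details may differ from my setup, and this is not necessarily what ZSys will ship) is a throwaway-key dm-crypt swap partition; a sketch, assuming /dev/nvme0n1p2 is that partition:

    # Append a crypttab entry that re-encrypts swap with a random key each boot,
    # and an fstab entry for the resulting mapped device. The partition path is a
    # placeholder; adapt it to your disk layout.
    echo 'swap  /dev/nvme0n1p2  /dev/urandom  swap,cipher=aes-xts-plain64,size=512' | sudo tee -a /etc/crypttab
    echo '/dev/mapper/swap  none  swap  sw  0  0' | sudo tee -a /etc/fstab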

Thanks a lot! We will try to find a generic way to handle the overlay issue. Can you paste a diff of your changes so that we can look at how to (probably with some modification) upstream it?

Yes! Encryption, including swap, is planned for groovy. As a bonus, we are looking at per-user encryption capability as well.
The solution is a mix of LUKS + native ZFS encryption, so that any integration with TPM, certificates and so on works out of the box! We are currently facing challenges with per-user encryption and sudo, su, and ssh with an SSH key (no password). You will note that most wikis don't cover this (even if we are starting to have some ideas; any directions appreciated!).
Our goal, once this has landed and been battle-tested in groovy, is to backport it all in a 20.04.x update for new installations.

I can see it with df, but I can't see it with GParted or Disks, which are the tools a normal user would use.