ZFS focus on Ubuntu 20.04 LTS blog posts

You don’t use gparted with ZFS because you basically don’t repartition your ZFS space once it’s allocated. ZFS comes with its own set of tools to manage pools, datasets, volumes and quotas.
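
For instance, here is a minimal sketch of the equivalent day-to-day operations with the ZFS tooling (the pool and dataset names below are just placeholders):

    # list pools and the datasets they contain
    zpool list
    zfs list -r rpool
    # create a dataset and put a quota on it
    sudo zfs create rpool/projects
    sudo zfs set quota=50G rpool/projects
    # create a 10 GiB volume (zvol), e.g. for a VM disk
    sudo zfs create -V 10G rpool/vm-disk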

For gnome-disks there is an upstream bug.

2 Likes

And if hibernation is not enabled by default, it would be great to be able to customize the size of the swap…

And here we are with the last, but not least, blog post presenting our OpenZFS work on Ubuntu 20.04 LTS: ZSys properties on ZFS datasets.

We hope that this whole series was informative and fun to read! We tried to answer most of the questions raised by the community, in the Ars Technica article by @jrssnet, and on various forums.

Of course, this is only the beginning and the conversation can go on here. :slight_smile:

Hi Didier,
Thank you for your posts about zsys; they’re very useful and interesting. I just wanted to ask: I have installed Ubuntu 20.04 with ZFS on my Dell XPS-L702X. Is it possible with zsys to revert the machine to a previous state (a complete state, of course, like the ones you can find under “History” in the boot menu) from the command line, without rebooting and without going through the GRUB menu?

The problem is that I used an application (Grub Customizer) and now my GRUB doesn’t show the history any more. Everything else, in GRUB and in the system, works with no problem, but it was not possible to restore the history.

I tried:
a) to rollback bpool/BOOT/ubuntu to a snapshot
b) to reinstall zsys
c) to reinstall grub2
d) to remove grub customizer
e) zsysctl boot prepare
f) zsysctl service reload
g) zsysctl service refresh
h) zsysctl boot update-menu
i) to launch Boot-Repair (it doesn’t work, I don’t know why; it says “LegacyWindows detected. Please enable BIOS-compatibility/CSM/Legacy mode in your UEFI firmware, and use this software from a live-CD (or live-USB)”, but I set the compatibility mode and it still says so).

I’ve also tried from live-USB.

So I thought: if I could restore a full machine state, everything should be resolved…

thank you very much

1 Like

Thanks for the great work on ZSys! I’ve been using it since 20.04 and also actively experimenting :wink: One rabbit hole I found myself in is GRUB. Like you mentioned:

Thus, GRUB should be installed on a fixed partition: there is no need to version its content (save its state), boot from an older GRUB menu, or flip-flop between multiple, parallel GRUB instances installed on your system in the case of multiple machines. Another issue is when doing a revert, which thus creates a new clone filesystem dataset with the updated GRUB: the machine would still be pointing at the older GRUB filesystem dataset it was installed to (no grub-install call, still pointing to the stale old GRUB place)!

Experimenting with multiple GRUB installations in parallel and trying revert scenarios, I can testify that this is just a headache: “Oh, I reverted my machine A, but machine B’s rpool still holds the installed bootloader, and is thus not listing this additional entry” or “Why doesn’t this reference the latest entry there?”… So the safest and most predictable solution was to install GRUB on a small regular partition (GRUB itself is 8 MiB).

True, I used grub-customizer (apt install) and that totally messed up my GRUB config. So I figured I’d just revert, but GRUB is not part of the system state in ZSys :wink:

As ZSys does heavy GRUB modifications to store history entries etc., the vfat partition has become the ‘single point of failure’. I was wondering what the best practice would be to keep a ‘working’ copy of /boot/grub, and whether ZSys should manage it or not. And if not, how to recover from a “corrupt” grub.cfg in a sane way.

Regarding SSH and SSH keys, you are trying to wrap your head around how to successfully log in with SSH, I presume? Would the way ecryptfs handled it be an option? ecryptfs had a folder named .ecryptfs in /home where unencrypted user data was stored. I put my authorized_keys file there, which solved my problem back then. Is this mechanism too hackish?

Hi Didier, first of all, thank you (and your team, of course) very much for this amazing work!!
I have a question though: I understood that if you delete a system state, the connected user states are also deleted. On my machine (Ubuntu 20.04, started with ZFS on 19.10, zsys installed later) only the system state is deleted; the respective user states remain. I have 3 system states and their respective user states. When I delete the oldest system state, kejfpx, it gets deleted but its user states remain:

majestix@ElTibetano:~$ zsysctl show 
Name:           rpool/ROOT/ubuntu_e1z32f
ZSys:           true
Last Used:      current
History:        
  - Name:       rpool/ROOT/ubuntu_e1z32f@autozsys_jrsvcc
    Created on: 2020-06-25 02:05:59
  - Name:       rpool/ROOT/ubuntu_e1z32f@autozsys_hb1h6o
    Created on: 2020-06-24 22:00:08
  - Name:       rpool/ROOT/ubuntu_e1z32f@autozsys_kejfpx
    Created on: 2020-06-23 16:25:17
Users:
  - Name:    majestix
    History: 
     - rpool/USERDATA/majestix_n8mt93@autozsys_jrsvcc (2020-06-25 02:06:01)
     - rpool/USERDATA/majestix_n8mt93@autozsys_hb1h6o (2020-06-24 22:00:09)
     - rpool/USERDATA/majestix_n8mt93@autozsys_kejfpx (2020-06-23 16:25:18)
  - Name:    root
    History: 
     - rpool/USERDATA/root_n8mt93@autozsys_jrsvcc (2020-06-25 02:06:01)
     - rpool/USERDATA/root_n8mt93@autozsys_hb1h6o (2020-06-24 22:00:09)
     - rpool/USERDATA/root_n8mt93@autozsys_kejfpx (2020-06-23 16:25:18)
  - Name:    tb
    History: 
     - rpool/USERDATA/tb_2gfwzx@autozsys_jrsvcc (2020-06-25 02:06:01)
     - rpool/USERDATA/tb_2gfwzx@autozsys_hb1h6o (2020-06-24 22:00:09)
     - rpool/USERDATA/tb_2gfwzx@autozsys_kejfpx (2020-06-23 16:25:18)
majestix@ElTibetano:~$ 
majestix@ElTibetano:~$ sudo zsysctl state remove -s kejfpx
[sudo] password for majestix: 
INFO Updating GRUB menu                           
majestix@ElTibetano:~$ zsysctl show 
Name:           rpool/ROOT/ubuntu_e1z32f
ZSys:           true
Last Used:      current
History:        
  - Name:       rpool/ROOT/ubuntu_e1z32f@autozsys_jrsvcc
    Created on: 2020-06-25 02:05:59
  - Name:       rpool/ROOT/ubuntu_e1z32f@autozsys_hb1h6o
    Created on: 2020-06-24 22:00:08
Users:
  - Name:    majestix
    History: 
     - rpool/USERDATA/majestix_n8mt93@autozsys_jrsvcc (2020-06-25 02:06:01)
     - rpool/USERDATA/majestix_n8mt93@autozsys_hb1h6o (2020-06-24 22:00:09)
     - rpool/USERDATA/majestix_n8mt93@autozsys_kejfpx (2020-06-23 16:25:18)
  - Name:    root
    History: 
     - rpool/USERDATA/root_n8mt93@autozsys_jrsvcc (2020-06-25 02:06:01)
     - rpool/USERDATA/root_n8mt93@autozsys_hb1h6o (2020-06-24 22:00:09)
     - rpool/USERDATA/root_n8mt93@autozsys_kejfpx (2020-06-23 16:25:18)
  - Name:    tb
    History: 
     - rpool/USERDATA/tb_2gfwzx@autozsys_jrsvcc (2020-06-25 02:06:01)
     - rpool/USERDATA/tb_2gfwzx@autozsys_hb1h6o (2020-06-24 22:00:09)
     - rpool/USERDATA/tb_2gfwzx@autozsys_kejfpx (2020-06-23 16:25:18)
majestix@ElTibetano:~$

What am I doing wrong?


It seems like GC isn’t working well in my case: I get into a low-storage state (with 21 snapshots) on updates, and it suggests manually picking states to delete. Is that really the goal? I would have assumed that old states would either be deleted automatically, or that apt would show me a “to make space, run zsysctl remove old” hint or some such. All I really care about is the last one. If I boot up and things work, then I am fine ditching that snapshot the next time I update.

Here is what I am seeing -

Requesting to save current system state
ERROR couldn’t save system state: Minimum free space to take a snapshot and preserve ZFS performance is 20%.
Free space on pool “rpool” is 12%.
Please remove some states manually to free up space.

Thanks for any advice on how to clear all 21 snapshots (sorry, ‘states’) without having to copy each one into a remove command manually,
Jason

1 Like

The constraint on space left is not yet implemented in the GC rules. For the moment, the only constraints are time-based, and we display this message to indicate that the system is running low on disk space.

It’s on the roadmap to add a disk-space-based constraint and some kind of reclaim command to show which snapshots would free the most space.
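
In the meantime, a rough manual way is to look at which snapshots hold the most space and then remove the corresponding states. This is just a sketch using the standard tooling, and the state id at the end is only an example:

    # list snapshots sorted by space used (largest at the bottom)
    zfs list -t snapshot -o name,used -s used -r rpool
    # remove the matching ZSys state by its id suffix (example id)
    sudo zsysctl state remove -s kejfpx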

I installed Ubuntu 18.04 root on ZFS on Sun Jul 21 2019 following the instructions provided by the upstream HOWTO on root on ZFS for Ubuntu.

I am wondering if the configuration I have is compatible with zsysctl?

Will the fact that my user state datasets live at /home/ rather than /USERDATA/ be a problem?

Here is my layout:

root@ubuntuzfs:/home/mike# zfs list -r rpool
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool                          26.9G   196G    96K  /
rpool/ROOT                     8.73G   196G    96K  none
rpool/ROOT/ubuntu              8.73G   196G  6.58G  /
rpool/home                     14.0G   196G    96K  /home
rpool/home/mike                14.0G   196G  10.7G  /home/mike
rpool/home/root                9.20M   196G  8.05M  /root
rpool/opt                      39.8M   196G  39.3M  /opt
rpool/srv                       328K   196G    96K  /srv
rpool/usr                      5.83M   196G    96K  /usr
rpool/usr/local                5.73M   196G  5.12M  /usr/local
rpool/var                      4.02G   196G    96K  /var
rpool/var/cache                1.32G   196G   385M  /var/cache
rpool/var/games                 160K   196G    96K  /var/games
rpool/var/lib                   388K   196G    96K  /var/lib
rpool/var/lib/AccountsService   292K   196G   116K  /var/lib/AccountsService
rpool/var/log                  1.47G   196G  1.10G  legacy
rpool/var/mail                  172K   196G   100K  /var/mail
rpool/var/snap                 1.31M   196G   176K  /var/snap
rpool/var/spool                4.28M   196G  1.27M  legacy
rpool/var/test                 37.0M   196G  36.9M  /var/test
rpool/var/tmp                  1.54M   196G   144K  legacy
rpool/var/www                  1.17G   196G   485M  /var/www

root@ubuntuzfs:/home/mike# zfs list -r bpool
NAME               USED  AVAIL  REFER  MOUNTPOINT
bpool              277M  90.6M    96K  /
bpool/BOOT         275M  90.6M    96K  none
bpool/BOOT/ubuntu  275M  90.6M   275M  legacy

First of all thank you so much for giving us invaluable information in your blog.

I have one question. I’m running 20.04 with zfs and everything works great. But I have another set of disks which I want to create a pool on. Is there a way to put that pool under zsysd management?

Thanks again.

I figured it out. It may be useful to others so I’ll explain the process.

I have a pool called ‘rocket’ mounted under ‘/rocket’. I want to create a file system which will be mounted under ‘/rocket/managed’ and will be under zsysd control. This is what you do:

  1. sudo zfs create -o canmount=off -o mountpoint=/ rocket/USERDATA
  2. sudo zfs create -o com.ubuntu.zsys:last-used=$(date +%s) \
       -o com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_pde7kt \
       -o mountpoint=/rocket/managed rocket/USERDATA/rocket_no5zuf

You need to use UUIDs from your particular installation.
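
To double-check that zsysd picks the dataset up, something like this should work (reusing the names from the example above):

    # verify the ZSys user properties landed on the new hierarchy
    zfs get -r com.ubuntu.zsys:bootfs-datasets,com.ubuntu.zsys:last-used rocket/USERDATA
    # the dataset should now appear among the user states
    zsysctl show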

Hope this helps someone.

I think I spoke too soon. With my setup, zsysd takes snapshots when I do “apt install”, but it’s not taking periodic snapshots. What governs how often these snapshots are taken? I see that a snapshot of my home directory is taken every hour, but the root home directory is sometimes only snapshotted once in several days.

I’m somewhat familiar with ZFS, much less so with zsys. If I create a new dataset inside the rpool using the traditional zfs tools, will zsys figure out what to do with it, or do I risk breaking zsys?

Hello,

First of all, thank you for the great work in bringing ZFS to Ubuntu as a first-class citizen, and for the amazing series of blog posts. They were very enlightening.

I have a couple of questions/comments:

  1. When I first installed 19.10 with ZFS on root, I created a new rpool/encrypted dataset with my home directory in it (my idea was to add other datasets that need to be encrypted later). I tried enrolling it into zsys by setting the org.zsys:bootfs-datasets, com.ubuntu.zsys:last-used and com.ubuntu.zsys:bootfs-datasets properties appropriately (I copied the values from rpool/USERDATA/root_rlz0uc). However, zsysctl still showed them as persistent datasets. The only way I could get it to work was by doing zfs rename rpool/encrypted rpool/USERDATA/encrypted, at which point zsysctl showed them as user datasets. Is this to be expected?

  2. I would really like to encrypt my whole rpool (or at the very least, /etc). While 20.04 actually asks you for the password for canmount=on filesystems on boot, it’s quite late in the boot process. Is there a way to do this manually? (I do know quite a bit of ZFS administration)

Thanks!

Thanks for the kind words! It’s true that this is the only point of failure; however, it’s really similar to other systems where you only have a single grub.cfg on ext4, and that wasn’t really an issue there. I suggest that you open a bug on grub-customizer upstream so that they fix the regression they are creating with the grub scripts. I don’t see any other way as of today to avoid a single copy of GRUB (apart from regular backups, which a future backup tool integrated into ZSys may deal with?).

Ecryptfs used a global key to encrypt your SSH private keys; the issue is mostly decrypting the user keystore, which is needed for accessing those keys. We will figure something out (we have a similar issue with sudo, where the password used isn’t the target user’s password).

Thanks for the feedback! I was probably unclear in the blog posts: the system datasets are immediately destroyed. This is not the case for user datasets. They are simply unlinked from the system state and are then treated like any other user state. Once the GC policy elects to delete them (based on its retention targets), they will be destroyed, but until then you can consider those datasets as any other user datasets.
Note that if you really want to delete them right away, you can always run zsysctl state remove <name_of_user_state>.

It’s quite complex to transition from one layout to the other. We worked with ZFS upstream so that the new HOWTO is closer to our needs. The only way (but it will take time) to transition to a ZSys-compatible layout would be to run some zfs rename operations and tag some user properties, as sketched below. If anyone from the community wants to experiment and write a wiki page on this, we are happy to host it on the ZSys upstream GitHub repo!
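
To give an idea only, here is an untested sketch of what this could look like for a single home dataset; the mike_xxxxxx suffix is a placeholder to adapt to your system, and this should be attempted from a live environment with backups at hand:

    # create the USERDATA container, as the installer does
    sudo zfs create -o canmount=off -o mountpoint=/ rpool/USERDATA
    # move the existing home dataset under it and keep its mountpoint
    sudo zfs rename rpool/home/mike rpool/USERDATA/mike_xxxxxx
    sudo zfs set mountpoint=/home/mike rpool/USERDATA/mike_xxxxxx
    # tag it with the ZSys user properties, pointing at your root dataset
    # (rpool/ROOT/ubuntu on the old HOWTO layout)
    sudo zfs set com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu rpool/USERDATA/mike_xxxxxx
    sudo zfs set com.ubuntu.zsys:last-used=$(date +%s) rpool/USERDATA/mike_xxxxxx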

This is a combination of 3 things:

  • We are only managing real users, meaning id > 1000 (or id == 0), so basically not system users. The other users are considered part of the system, and their home should be under /ROOT/ubuntu_xxx (which only has one hierarchy) to be taken into account.
  • Only logged-in users have a systemd user session, which is what is used to take hourly snapshots.
  • All users (logged in or not) get a snapshot taken whenever a system snapshot is taken.
    This explains why more snapshots are taken of your user home than of the root home.
    Do you mind explaining your specific use case in a little more detail? If the idea is to move all user datasets to a different pool, under rocket/USERDATA, this is supported (we stop at the first USERDATA we find). (Do not forget about the user properties as well: https://didrocks.fr/2020/06/19/zfs-focus-on-ubuntu-20.04-lts-zsys-properties-on-zfs-datasets/)

Yes! Be aware though that there are different types of datasets, depending on where you create them (look at https://didrocks.fr/2020/06/16/zfs-focus-on-ubuntu-20.04-lts-zsys-dataset-layout/). Some will get hourly snapshots, others will be considered persistent, and others will be part of the system state.

Thanks for the feedback :slightly_smiling_face:

Yes! Sorry if the blog was unclear (or maybe too verbose :slight_smile:), but user states are made of datasets that are:

  • under a /USERDATA/ hierarchy
  • AND carry the bootfs-datasets metadata (which we added in ZSys 0.4.6 to avoid the GC collecting manually created datasets)

We are going to add encryption support for new installs in a future 20.04.X release. We will use per-dataset encryption (relying on ZFS property inheritance) and per-user encryption. So, for your system, the passphrase request will be in the initramfs, before calling pivot_root, which is early in the boot process (just after importing the pool). Would that cover your use case?
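
For those who want to experiment manually right away (you mention knowing ZFS administration), the ZFS side of per-dataset encryption looks roughly like the following; this is only a sketch, not the layout the installer will use, and the dataset name is just an example:

    # create an encrypted container; child datasets inherit the encryption root
    sudo zfs create -o encryption=aes-256-gcm -o keyformat=passphrase \
         -o keylocation=prompt rpool/USERDATA/encrypted_home
    # after an export/reboot, load the key before mounting
    sudo zfs load-key rpool/USERDATA/encrypted_home
    sudo zfs mount rpool/USERDATA/encrypted_home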

1 Like

We are going to add encryption support for new installs in a future 20.04.X release. We will use per-dataset encryption (relying on ZFS property inheritance) and per-user encryption. So, for your system, the passphrase request will be in the initramfs, before calling pivot_root, which is early in the boot process (just after importing the pool). Would that cover your use case?

It does! It sounds really thought-through. My main issue will be migrating my existing system to that layout. I’ll still have to create new datasets and copy things over, but I’d rather not have to install from scratch.

Regarding periodic snapshots, your answer to @vfridman94065 clears things up a bit. I am having the same problem: no periodic snapshots for my user account. I created the encrypted dataset as rpool/encrypted (with my user dataset at rpool/encrypted/mario) and then did zfs rename rpool/encrypted rpool/USERDATA/encrypted, which means that zsys thinks it belongs to a user named encrypted instead of user mario. I’ll move things around to get it to work.

Again, thank you very much!

1 Like

Hello! I’m very excited about ZFS becoming 1st class in Ubuntu. But I am primarily running headless servers. When can we expect all this great work to make its way into the server install?

Hi,

Like many others, I too am very much excited about Ubuntu’s Root-on-ZFS support. Many thanks to everyone who has been working on this so far!

Already last year, I had installed 19.10 with Root-on-ZFS on a family laptop, and haven’t had any particular issues so far.

Mind you, that particular laptop isn’t being subjected to any complicated workflows; and I had made sure to go mostly with the defaults during installation (a safe-bet strategy that I often try to apply for freshly introduced stuff).

One of the pain points early on was the scarcity of public information on the seemingly complicated design choices involved (e.g. dataset layout) and the related zsys behaviour, apart from a Discourse thread similar to this one plus a few succinct blog posts.

Luckily, the chat between @didrocks and rlaager was also there for looking up some of the hairy details and rationale. Thanks to both of you for making that chat public, and thanks to rlaager in general for authoring and consistently maintaining the related ZOL wikis.

With the release of Ubuntu 20.04 LTS, and especially thanks to the new blog post series by @didrocks (who also appears to be very much hands-on with zsys and other ZFS-related stuff), there have been very noticeable improvements in that regard. Please keep it up, folks!

Now, with my confidence building up, I am quite tempted to try 20.04 LTS with Root-on-ZFS on my home NAS, as a next step working my way up to more serious stuff.

Granted, Ubuntu still marks it as experimental and only rolls it out for desktop installations at this point. However, I will still give it a try since ZOL itself is known to be quite stable. Besides, it may even be desirable to have the Desktop GUI on a home NAS for some occasions where it can be turned on/off as needed.

Below are my humble thoughts so far about Ubuntu’s approach to Root-on-ZFS, written up in a quite high-level/coarse-grained manner at this time.

I do intend to post separate, more concrete (and succinct) notes for some of those, though.

Things that make me ROCK

(We already see some of the corollaries/consequences of this approach in the design choices and current implementation; and that is simply great!)

  1. The goal to embrace manual tweaking and ultimately cater to various scenarios (desktop, server, …), as well as different levels of automation/wizardry (e.g. with or without zsys, …).

    And, as a corollary:

    • “persistent” datasets (that can be manually maintained by the system admin)
    • the choice of zfs properties for zsys metadata and settings (and not a separate DB of any sort)
  2. The goal to ultimately cater to multiple machine personalities (machinalities?) side by side, even with other Linux distros (and who knows, perhaps with some other OSes, like FreeBSD, later on?)

    If I have understood this correctly, this one is like multi-boot on zteroids :slight_smile:

    If so, I really like the idea; but I guess it probably still needs a bit more elaboration/work before it becomes a practical reality.

  3. Ability to revert the reverts & perform bisection

    IMHO, anyone who has bitten themselves with a revert that erased the present and anything in between cannot but appreciate this.

  4. Automatic snapshots and garbage collection (gc), obviously.

    Just wish those were more configurable (I will comment on that later).

  5. Last (but not least… as clear enablers for most of the points above…):

    1. Manifest attention given to making "simple things easy and complicated things possible".

    2. Manifest desire for working with different communities and projects (including upstreams and other distros, …), listening to and evaluating their feedback (as well as that of the user base), and proposing/submitting upstream features/fixes.

      You would expect this one to be the standard way of doing things in the FOSS world. Well, it is… on paper… The real world is another matter…

      Just a small caveat here: I just wish there were more code submitted to upstream GRUB, which could ultimately help that project catch up with ZFS-related functionality; this could perhaps also help do away with the need for a bpool in the more common scenarios (as far as ZFS feature support is concerned), while still keeping it as an option.

Things that make me IMPATIENT

  1. Some sort of configurability during installation

    As a starting point, even a bare minimum of configurability, like the ability to reserve some space at the START and END of the target disk, could help many users out there.

    And, if possible, naturally:

    • the ability to configure partition sizes is also very much welcome.
    • the ability to use existing partitions is even better.

    I am aware that most of this is now possible by modifying the zsys-setup script, thanks to @didrocks mentioning it in one of his recent blog posts.

    However, let’s face it: it’s an unnecessary pain… multiplied by the number of users who would need this kind of thing (who may even form the majority of early adopters, as things stand…)

    BTW, I understand that a major goal of the project is to ultimately enable non-technical users to enjoy the benefits of ZFS, but early adopters are necessarily going to be those who are already aware of its benefits… And those tend to be tweakers…

    Granted, that population is also probably well capable of modifying a shell script… but, still, who wants to do that for each install? Or maintain their own separate version, which risks falling out of sync with the next version of Ubuntu or Ubiquity? Again, multiplied by that many users…

  2. encryption (preferably native ZFS) during installation

    It’s great that this one seems to be already slotted for an upcoming 20.04.x maintenance release, as noted by @didrocks.

  3. server installs

    (as mentioned by a few others)

    I very much understand the desire to ensure a certain level of quality and stability before rolling out support for server installs. That’s partly why I have separated out the items above.

Things that make me NERVOUS

I won’t get into the licensing and related subjects here, which are obviously still on a lot of people’s minds, including mine. Yet, this is probably not the right place to discuss those…

Instead, here are some more technical aspects that make me nervous:

  1. A single copy of GRUB kept on the ESP (vfat)

    I do understand the reasons, but it still makes me uncomfortable (especially when the rest of the system can be mirrored).

    Things standing as they are now, I will probably opt for an automated backup of all files on the ESP, perhaps as part of a successful boot sequence, for starters.

    A complementary option that I have been tinkering with is setting up a git repo at the root of the ESP (/boot/efi), or at least on /boot/grub, mimicking the way etckeeper works (which may even be usable as is, to be explored); see the sketch after this list.

    Obviously, a git repo won’t protect against a corrupt disk/partition (so a backup would still be needed), but it would still be useful when individual files get corrupted, as in the case reported by another user.

    BTW, any information on the available hooks suitable for triggering such tasks (backup and/or git commit) would be valuable, going forward.

    There are probably other options for mitigating the risk, some of which I will try to enumerate on another post later and also report on my experience with the above.

    Meanwhile, any pointers or shared experience would be very much welcome.

  2. Unreliable bus-based device naming scheme (e.g. /dev/sda3 ) used by zsys-setup when setting up zfs pools

    Unless there are good reasons to do otherwise, I would expect the installer to refer to underlying block devices by more persistent/reliable names when setting up the zfs pools.

    Common advice seems to be to use /dev/disk/by-id/<xyz> (as also preferred by the ZOL wiki by rlaager). In fact, even those names are a bit problematic, but they still appear to be preferable to bus-based identifiers such as /dev/sda3.

    Granted, upstream ZOL is said to be more and more apt at handling unreliable device ordering, but still…

  3. Cherry-picked/distro-patched ZOL

    Well, I guess those would eventually fade out as things start stabilizing…
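
As mentioned in point 1 above, here is the rough, untested sketch of the backup + git idea I have been tinkering with. The paths are the standard /boot/efi and /boot/grub mounts, /var/backups/efi is just an arbitrary destination of mine, and where exactly to hook these commands (e.g. on a successful boot) is still an open question:

    # plain file backup of the ESP content onto the ZFS pool
    sudo rsync -a --delete /boot/efi/ /var/backups/efi/
    # track /boot/grub content in git, etckeeper-style (vfat does not keep
    # POSIX permissions, so this only protects the file contents)
    cd /boot/grub
    sudo git init
    sudo git add -A
    sudo git commit -m "grub state on $(date -Is)"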

Things that make me WISH for more (aka “FEATURE REQUESTS”)

  1. More configurability of automatic-snapshots

    The current auto-snapshotting behaviour appears to be solely based on dataset hierarchy (BOOT, ROOT, USERDATA) as well as some high-level properties (i.e. ...bootfs) which appear to also serve other purposes.

    While that current behaviour could be thought of as a sensible default, it’s a barrier for some interesting capabilities, such as:

    • Ability to mark some persistent datasets (via zfs props) as candidates for automatic snapshotting by zsys (optionally with a separate inheritable snapshot prefix/tag that is not necessarily autozsys_)

    • Ability to disable automatic snapshotting for some datasets

    I can expand on this later, if the rationale/purpose is not clear.

  2. More snapshot-gc strategies + some configurability

    I am glad a gc constraint on space left is on the roadmap, as noted by @jibel.

    My ask would be for some configurability of those time/space constraints (unless I am mistaken, there isn’t much configurability as of today).

    Unlike the settings for auto-snapshotting (which would best be implemented via inheritable ZFS properties), the policy configuration for gc could well be kept in classical config file(s) somewhere under /etc.

    What would really be great is the ability to configure separate policies depending on the snapshot prefix/tag.

    Combined with the request listed just above (related to more configurability of the actual auto-snapshots), these would satisfy a great deal of common needs related to ZFS snapshots, without resorting to other tools.

My apologies for this rather long post.

If needed, I can open tickets for some of the above items on the zsys repo, and perhaps even try to contribute, as much as permitted by my available time and abilities.

And in any case, I will probably come back with more questions/notes (where I hope to be more succinct).

Again, many thanks to everyone who has made this possible.

3 Likes