Server installer plans for 20.04 LTS

With 20.04 LTS, we will be completing the transition to the live server installer and discontinuing the classic server installer based on debian-installer (d-i), allowing us to focus our engineering efforts on a single codebase. The next-generation subiquity server installer brings the comfortable live session and speedy install of Ubuntu Desktop to server users.

If you have use cases for which you rely on d-i and that are not addressed by subiquity today, please let us know, by early January, what those are so we can incorporate that feedback into our plans for the 20.04 LTS development cycle.

The features we have committed to completing for 20.04 LTS in April are:

  • Implement the autoinstall specification as previously discussed
  • Guided resilient install option
  • Enable SSH into an installer session
  • Support vtoc partition tables as used by DASD disks on s390x
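
For those curious about the first item, an autoinstall config is a cloud-init style YAML file. A minimal sketch, with all values illustrative and field names per the draft specification as previously discussed (treat specifics as subject to change):

```yaml
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: example-host          # illustrative values throughout
    username: ubuntu
    password: "<crypted-password-hash>"
  ssh:
    install-server: true
```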

We will announce new versions of subiquity on Discourse as they land in the stable channel.

If you are an Ubuntu Server user and you have not tried the live server installer lately, check it out! You can get the latest version by trying a daily Focal Fossa build, or by downloading a released image and updating to the latest version of the installer when asked.

Features that have landed since the initial 18.04 LTS release include:

  • Advanced storage support with RAID and LVM, including encrypted LVM
  • Support for offline installs
  • Configuration of network bonds and VLANs
  • Support for arm64, ppc64el, and s390x architectures
  • Reuse of existing partitions, retaining their data
  • Switch to a shell for debugging purposes
  • The ability for the installer to self-update (to get fixes since the media was created)
  • Installation of latest updates during the install
  • Netboot support
  • Integrated error reporting

We have also identified certain d-i features that we have concluded are not requirements for the live server installer before d-i-based installation is retired:

  • A dedicated recovery mode as the debug shell feature can be used instead
  • OEM config
  • Support for mounting iSCSI volumes

I have a LAN router with no graphics (PC Engines APU series) which requires a text installer capable of being used over an RS-232 line. Will I still be able to install Ubuntu on this somehow (e.g. boot.img.gz or mini.iso)?

Your use case is discussed here: https://bugs.launchpad.net/subiquity/+bug/1770962


Thanks. That looks interesting.

I’ve marked myself as affected, and subscribed.

I’m looking forward to more localization (CJKV and others) support.

https://bugs.launchpad.net/subiquity/+bug/1765374

debian-installer achieved this with bterm and unifont (a subset font for the installer?).

https://salsa.debian.org/installer-team/bterm-unifont

The live installer creates a /boot partition when using the automatic LVM option, whereas the debian installer just creates /boot within the root LV.

Does the software raid install path correctly configure both halves of a mirror to be bootable in the event of a faulted raid? I never understood what exactly was missing but I’ve heard several times that our software raid configuration doesn’t allow booting from the second drive.

Thanks

This is currently a known deficit that is on the roadmap for fixing in 20.04 (what’s referred to as “guided resilient install”).


1. Please definitely include the ability to see, in the installer, previously created LUKS containers and LVM groups (whether standalone or created under LUKS).

With the possibility to open them first, either via the installer GUI or at least in a terminal on Alt+F2 (this was possible in the old alternate installer, at least in expert mode or something like that).

2. Also please allow creating LUKS containers and LVM groups (standalone or manually placed under LUKS) inside any other container - a partition, a whole disk, a LUKS container, an LVM volume, a RAID array; these are the most important, and perhaps anything else you can think of.

So this would mean, e.g., LUKS within LUKS, or partition -> LVM group -> LVM volume -> LUKS -> something.

This is very important because sometimes you need to nest one container under another to achieve important server goals (e.g. a custom storage layout, or security - even simple things like creating a LUKS volume with custom options such as the cipher, among many other reasons). Without support for that in the installer, you cannot install Ubuntu the way you need to! For now we have the alternate installer and can do it (I did just that a few minutes ago with the 18.04.3 Server alternate installer: created custom LUKS and LVM volumes manually in the shell, opened/activated them, and the installer saw them and installed Ubuntu Server the way I wanted).
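
For what it's worth, curtin (which the new installer uses underneath) can already express layered storage in its config, so in principle the installer could expose this. A rough, unverified sketch of a partition -> LUKS -> LVM stack, with every id, size, and the key purely illustrative:

```yaml
storage:
  config:
    - {type: disk, id: disk0, path: /dev/sda, ptable: gpt}
    - {type: partition, id: part1, device: disk0, size: 50G}
    - {type: dm_crypt, id: crypt0, volume: part1, key: "changeme"}   # illustrative key
    - {type: lvm_volgroup, id: vg0, name: vg0, devices: [crypt0]}
    - {type: lvm_partition, id: lv_root, volgroup: vg0, name: root, size: 40G}
    - {type: format, id: fmt_root, volume: lv_root, fstype: ext4}
    - {type: mount, id: mnt_root, device: fmt_root, path: /}
```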

And if the majority of users would be confused by this container-within-container placement, you could add an “expert” option or something like that.

Alas, it’s too late now, but I’d also like ZFS on root in the installer.

I had just assumed that was going to be a feature, so I didn’t think to mention it here.

Thanks

Does this mean there won’t be a “light” image provided (like mini.iso) which gets the packages during the installation? Only the “big” (~900MiB) images?

Thanks.

I was shocked by this announcement of phasing out the d-i installer completely in favor of subiquity, which is terrible and broken, and in an LTS release no less! Its limitations are very well known, and this was a highly irresponsible decision by the release team.

I have been using d-i based mini.iso for my VPS setups that involve pre-existing luks-encrypted partition hosting LVM volumes. I tried the subiquity installer with one of the pre-beta daily builds and it doesn’t even come close to getting the job done.

Fortunately (and to my surprise), I noticed that when the Focal beta was announced, a reference to the d-i image was also included (I hope that was not a mistake; perhaps the release team realized that subiquity isn’t quite ready for prime time and they can’t get rid of d-i just yet).

The beta announcement also included a netboot URL, but it throws a 404 and they didn’t bother to fix it even though I reported it. Nevertheless, I was able to locate the d-i based mini.iso for the focal release here:
http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/current/legacy-images/netboot/

Please note that this location tends to get moved around a bit - it was “classic-images” a few days ago and is now renamed to “legacy-images”.

I hope they keep the d-i based mini.iso around until such time they are able to build all the functionality of the legacy installer into subiquity, although it’s just wishful thinking on my part.


There are a number of concerns with this. The “live” installer no longer creates a swap partition. This is understandable for cloud deployments, but what about local installs?


This topic was a request for input into the roadmap for the live installer in 20.04, which has now been released. For additional future requests, it would be a good idea for you to open a bug report at https://bugs.launchpad.net/subiquity instead.

I don’t believe the lack of a swap file by default is an intentional difference from the previous installer. However, the new installer has been around for two years and I believe this is the first time anyone has mentioned this particular issue, which suggests it’s not having a major impact on users of Ubuntu Server.


@vorlon You are correct that the new installer has been around for 2 years. Likely no one has mentioned this tiny detail precisely because there has always been an alternate install. Now there is not - hence the point made. This is not a bug, hence I’m not reporting it as a bug. It is a concern about the direction taken. There is no longer a choice of installer, so all I am saying is that maybe this should be considered.


This also breaks my setup to build vagrant boxes for vmware. I’m using packer and some scripts for this purpose, which don’t work with the “live” install images and rely on things like preseed.cfg and the option to install directly from the boot menu :frowning_face:

There are no official vagrant boxes for vmware and running ubuntu in virtualbox is a pain for a different reason: https://askubuntu.com/questions/1035410/ubuntu-18-04-gnome-hangs-on-virtualbox-with-3d-acceleration-enabled

I also came here because of packer / vagrant base boxes for Ubuntu 20.04: it seems that the sudden disappearance of the classic .iso images now breaks many of the established packer templates for Ubuntu, which all rely on the d-i installer (at least I haven’t found a single one that uses subiquity yet).

In fact, I would actually like to use the official base boxes from https://app.vagrantup.com/ubuntu/boxes/focal64, but these are published for virtualbox only, not for vmware_desktop and other providers.

Wondering: are the above official ubuntu boxes built with packer as well, and if so, where can we find the packer templates for these?
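
For anyone experimenting in the meantime: the subiquity route with packer seems to be serving an autoinstall user-data/meta-data pair from packer’s built-in HTTP server and pointing the kernel command line at it. A rough, untested sketch of the relevant template pieces - all paths, names, and keystrokes are illustrative, not verified:

```hcl
source "vmware-iso" "focal" {
  iso_url        = "ubuntu-20.04-live-server-amd64.iso"  # illustrative path
  http_directory = "http"   # directory containing user-data and meta-data
  boot_command = [
    # interrupt the bootloader, then boot with an autoinstall NoCloud seed
    "<esc><wait>",
    "linux /casper/vmlinuz autoinstall ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/<enter>",
    "initrd /casper/initrd<enter>",
    "boot<enter>"
  ]
}
```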

On second thought, it has already been reported as a bug here:

https://bugs.launchpad.net/subiquity/+bug/1785321

…mind you, a bug reported in 2018. Why did no one do anything? Because there were alternate installers. So I suppose we can expect a resurgence of interest in it.

Came here because installing via PXE/preseed is my primary method of installing to bare metal. It seems the kernel and ram disk used for the installer have been moved to a folder called ‘legacy-images’.

http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/current/legacy-images/

I download initrd.gz and linux from ‘/ubuntu/dists/focal/main/installer-amd64/current/netboot/ubuntu-installer/amd64’ and provide them to my pxe install infrastructure (The Foreman).

The Foreman then generates a preseed.cfg file and provides it via HTTP. The pxe boot line specifies filename=http://example.com/preseed.cfg

How does this process map using subiquity? Can I build my pxe files in the same way? Where do I get the installer kernel and RAM disk? Can I provide a url to the yaml?
I assume there are docs for all of this. Can someone point me to them?
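
For anyone else landing here, what I’ve pieced together so far (unverified) is: you netboot the kernel and initrd from the live-server ISO, have casper fetch the ISO itself over HTTP via the url= parameter, and supply the autoinstall YAML as a cloud-init NoCloud seed on the kernel command line. All filenames and URLs below are illustrative:

```
# pxelinux.cfg fragment (illustrative)
LABEL focal-live-server
  KERNEL casper/vmlinuz
  INITRD casper/initrd
  APPEND ip=dhcp url=http://example.com/ubuntu-20.04-live-server-amd64.iso autoinstall ds=nocloud-net;s=http://example.com/autoinstall/
# where http://example.com/autoinstall/ serves 'user-data' (the YAML) and an empty 'meta-data'
```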

The new installer is perfectly capable of creating a swap partition, but it is true that the guided options do not create one; a swapfile is created by default instead. It’s possible they should create a partition - we haven’t thought extremely hard about this - but a swapfile is much easier to reconfigure after install.