Server installer plans for 20.04 LTS

I also came here because of packer / vagrant baseboxes for Ubuntu 20.04: it seems that the sudden disappearance of the classic .iso images now breaks many of the established packer templates for Ubuntu, all of which rely on the d-i installer (at least I haven’t found a single one that uses subiquity yet).

In fact, I would actually like to use the official baseboxes from, but these are published for virtualbox only, not for vmware_desktop and other providers.

Wondering: are the above official ubuntu boxes built with packer as well, and if so, where can we find the packer templates for these?

On second thought, it has already been reported as a bug here:

…mind you, a bug reported in 2018. Why did no one do anything? Because there were alternate installers. So I suppose we can expect a resurgence of interest in this.

Came here because installing via PXE/preseed is my primary method of installing to bare metal. It seems the kernel and ram disk used for the installer have been moved to a folder called ‘legacy-images’.

I download initrd.gz and linux from ‘/ubuntu/dists/focal/main/installer-amd64/current/netboot/ubuntu-installer/amd64’ and provide them to my pxe install infrastructure (The Foreman).

The Foreman then generates a preseed.cfg file and provides it via HTTP. The pxe boot line specifies filename=

How does this process map using subiquity? Can I build my pxe files in the same way? Where do I get the installer kernel and RAM disk? Can I provide a url to the yaml?
I assume there are docs for all of this. Can someone point me to them?
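For reference, the d-i flow described above boils down to a PXE entry along these lines (a sketch only; the label, file paths, and Foreman URL are hypothetical placeholders):

```
# pxelinux.cfg entry for a d-i/preseed netboot install (illustrative paths/URLs)
LABEL focal-d-i
  KERNEL ubuntu-installer/amd64/linux
  INITRD ubuntu-installer/amd64/initrd.gz
  APPEND auto=true priority=critical url=http://foreman.example.com/unattended/provision
```

Here auto=true and priority=critical suppress the early installer questions so that the preseed fetched from url= can answer the rest.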

The new installer is perfectly capable of creating a swap partition but the guided options do not create one, it is true. Instead a swapfile is created by default. It’s possible they should – we haven’t thought extremely hard about this – but a swapfile is much easier to reconfigure after install.
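To illustrate how easy that post-install reconfiguration is, here is a sketch, assuming the default swapfile lives at /swap.img (the size is just an example, and everything must run as root):

```shell
# Grow the default swapfile to 4G (run as root; size and path are examples)
swapoff /swap.img
fallocate -l 4G /swap.img
chmod 600 /swap.img
mkswap /swap.img
swapon /swap.img    # the existing /etc/fstab entry keeps working
```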

So it seems we need to find a solution for this. The server installer is not really intended to be used to make images to run in VMs. I don’t know much about packer or vagrant unfortunately. Perhaps someone (maybe even me but I’m not going to get to it today) should start a new thread to work out what to do here.

I think in broad outline it can be roughly the same. Lots of details will differ of course.

Currently you have to download the ISO and fish the kernel and initrd out of it (we’ll fix this and publish them separately soon, I hope).


Heh well… currently the best docs are at and “Netbooting the live server installer”. The docs will get better over time, too.
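To sketch how the process maps (illustrative only; defer to the netbooting doc mentioned above, and treat every path and URL here as a placeholder): you boot the kernel/initrd shipped on the live-server ISO, point url= at the ISO itself, and serve the answers as a cloud-init NoCloud seed rather than a preseed.cfg:

```
# pxelinux.cfg entry for the live-server (subiquity) installer (illustrative)
LABEL focal-live
  KERNEL casper/vmlinuz
  INITRD casper/initrd
  APPEND ip=dhcp url=http://mirror.example.com/focal-live-server-amd64.iso autoinstall ds=nocloud-net;s=http://foreman.example.com/autoinstall/
```

The seed URL serves user-data/meta-data files; a minimal user-data might look like (keys per the autoinstall docs; values are placeholders):

```
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: node01
    username: ubuntu
    password: "<crypted password hash>"
```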

+1 on publishing them separately. Fishing them out of the ISO is less than ideal. I wonder if it would be reasonable to put them in the same general location as the netboot files? They would then just show up in the mirror I have set up for my org.

Is there any word on Debian adopting Subiquity?

It is unlikely that we will publish these as part of the archive mirror. The d-i netboot images were published there because they were artifacts produced by a .deb package build. The initramfs for the new live installer is not; reinjecting it onto mirrors would just generally slow down development. We will most likely be publishing these alongside the image downloads.

The mini.iso allowed users to netboot low-memory systems lacking a CD-ROM drive and install a minimal server with ease (it needed just ~64 MB of RAM, if I remember correctly).
On the other hand, the current live installer downloads the entire ISO file and loads it into memory.
Have you decided to just take this functionality away from existing server users?

If the plan is to discontinue d-i in favor of the live-installer, it would be a reasonable user expectation that equivalent functionality will be offered by the new installer (plus other improvements). Is there a plan to redesign the live installer netboot along the lines of how d-i used to work (before it’s dropped), so that the users are not impacted by this change?

So I found out that with the new installer, there’s no option to unlock the luks partition hosting existing LVM volumes.
Nor can I manually create a similar setup afresh, if I wanted to. If I try to partition my disk manually by launching a shell, the installer wouldn’t even detect the changes.
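For context, the kind of setup being described, which d-i could drive, looks roughly like this when done by hand from a shell (a sketch; the device, VG, and LV names are all examples):

```shell
# LUKS on a partition with LVM on top (run as root; /dev/sda2 is an example)
cryptsetup luksFormat /dev/sda2
cryptsetup open /dev/sda2 cryptroot
pvcreate /dev/mapper/cryptroot
vgcreate vg0 /dev/mapper/cryptroot
lvcreate -L 20G -n root vg0
mkfs.ext4 /dev/vg0/root
```

The point above is that volumes created this way from a shell are not picked up by the new installer.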
I can’t even install the new release in an existing primary partition - the installer just crashes! That’s the most basic functionality one can expect from an installer!

D-I could easily handle all this with no issues at all. The new live installer is so limited in functionality and so buggy - I find it mind boggling that this half-baked piece of software has been approved to be the default installer in an LTS release! Would the users still have d-i available until the new live-installer has been developed enough to cover most of its functionality?

When we talk about server-grade hardware we mean beefy machines with 128 cores, terabytes of RAM, and petabytes of storage.

A 64 MB RAM machine is a minimal system that is borderline a container. To set up a CLI system on such machines we provide ubuntu-base tarballs as well as preinstalled images for select targets, for example the preinstalled images we provide for the Raspberry Pi at. On small machines it is a far better experience to simply boot an in-place minimal Ubuntu Server CLI installation, instead of booting an installer and wasting a lot of time and network bandwidth slowly installing a system while relying on swap to complete the install.

Our steady state / minimum RAM requirements have not changed significantly post install. And we have appropriately sized installers (features & ease of use) for appropriate targets.

We provide cloud images, server squashfs with kernel & initrd pairs, a server-grade installer, a desktop-grade installer, preinstalled server images, Ubuntu Core images, and minimal ubuntu-base tarballs. These cater to everything from the smallest autonomous devices, to deployments at scale, to the very largest mainframes. Why do you even want to run an installer, instead of booting an Ubuntu installation in place?

The installer offers guided LUKS encrypted LVM option.
When using manual partitioning, in the create an LVM volume group dialog, one can select to encrypt it too.

Is that not the setup you have? Can you find the above options, now, with this hint? Are there any UI / UX issues that prevented you from finding these options?

The new installer is perfectly capable of creating a swap partition but the guided options do not create one, it is true. Instead a swapfile is created by default. It’s possible they should – we haven’t thought extremely hard about this – but a swapfile is much easier to reconfigure after install.

Really, the two things the installer does by default with the “guided” approach (only when using LVM, that is) are: no swap file, and a root logical volume of only 4GB, which sometimes causes install crashes and/or “disk full” errors for those who are used to turn-key guided installations. I have since gone and customized the LVM specs from the “guided” approach to add a swapfile and specify a bigger root logical volume, and the installs have been successful.

I’m confused, you’re replying to a post where I explain that a swapfile is created by default. Was one not created for you? (we don’t create a swapfile on btrfs root, but you said you followed the default options…)

We do need to do something better by default here, I agree. Let’s continue that conversation here:

I’m confused, you’re replying to a post where I explain that a swapfile is created by default. Was one not created for you? (we don’t create a swapfile on btrfs root, but you said you followed the default options…)

That is correct. It is “not” created by default when using the LVM guided approach. In contrast, the classic installer “did” create a swapfile lv of around 975-980MB. So maybe I am not understanding this correctly. You are saying that the default LVM options “are” creating a swapfile (like a file-based swapfile, which I have seen in some approaches), but it is no longer a swapfile partition??

This would be good to know, as I have started creating swapfile lv partitions for my deployments.
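For what it’s worth, a dedicated swap logical volume (as distinct from a swap file) is created along these lines (a sketch; the VG name and size are examples):

```shell
# Create a ~1 GB swap LV in volume group "vgubuntu" (run as root; names are examples)
lvcreate -L 980M -n swap vgubuntu
mkswap /dev/vgubuntu/swap
swapon /dev/vgubuntu/swap
# and add to /etc/fstab:
#   /dev/vgubuntu/swap  none  swap  sw  0  0
```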


BTW, talking about “swapfile partitions” is confusing. There are swap files (what you call a “file-based swapfile”) and “swap partitions” (what you seem to be expecting). subiquity creates the former, by default.

Thanks for the clarification and differentiation between terms, that is extremely helpful. Seems I will have to go and remove the swap partition on my 2 deployments.

Thank you for responding to my concerns. I was just quoting the minimum system requirements for the traditional d-i mini installer, not implying that I’m working with a 64 MB system. But indeed, VPSes with even less than 512 MB of RAM aren’t uncommon for use cases like personal cloud, VPN, etc., and they are fully capable of running the text-based Ubuntu Server edition (which is how I have it set up currently). The documentation makes a distinction between this and the base CLI:

Note : the Server Install CD provides a simple command line system, but it is not the same as “install a command-line system”

I’m looking for the former, not the latter - which is what you seem to be suggesting by the CLI base image (which is just a 30 MB download). However, I totally agree with you that a pre-installed server image would be a far better experience in this scenario - especially if users are looking for more control over the installation process instead of using guided partitioning. Are there any images I can use that would be equivalent to the server setup created by a d-i install from mini.iso (for amd64)? If yes, that would totally solve my problem without having to deal with the installer (d-i or live) at all.

You mentioned server squashfs with kernel&initrd pairs… Are you referring to the ones under: or the ones within the ISO? Is there more documentation available on what these various images are and how to use them?

Would it be an equivalent setup (to base server install) if I just extract squashfs image into my root and boot the accompanying kernel/initrd? This is looking very promising - thanks for the pointers!

I think you might be talking past each other here. A “preinstalled server image” is one that you boot in place; it is definitely not fit for purpose if you want something other than default guided partitioning. It will also certainly not help you with the use case you’ve outlined that involves installing to a pre-existing encrypted volume, except to the extent that if you could attach the cloud image to your VPS as an additional disk, you could conceivably boot from that, log in, unlock your volume, and manually copy your root filesystem to the target volume and reconfigure the bootloader.

Effectively, for your use case you DO need an installer; the only question is whether you’re going to use an Ubuntu-provided one or if you’re going to do the installation steps manually yourself.

I’m glad you’ve been able to get mileage out of the debian-installer-based installer images as long as you have. And these are made available for 20.04 LTS in the legacy-images path that you’ve already identified. But there are no plans to make subiquity work in such lower-memory environments, and new d-i images will not be provided beyond 20.04 GA.

I don’t think there’s any confusion here - the way you described it is more or less how I envision it. I usually have a miniature live Linux environment residing right in the boot partition (such as gparted/alpine/etc.; even mini.iso can do the job here, albeit fulfilling a slightly different purpose). I can boot into it and set up disk partitions, LUKS, LVM, etc. as needed, then download the pre-installed root-fs image and extract it into the target volume, followed by update-initramfs and bootloader configuration if needed.
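Sketching that workflow end to end (every device name, URL, and file name below is a placeholder, and which image to use is exactly the open question):

```shell
# From the live environment, after partitions/LUKS/LVM are prepared (run as root)
mount /dev/vg0/root /mnt
curl -LO http://cdimage.example.com/focal-base-amd64.tar.gz   # placeholder URL
tar -xzpf focal-base-amd64.tar.gz -C /mnt
# or, if using a squashfs image instead:
#   unsquashfs -f -d /mnt focal-server-amd64.squashfs
for fs in dev proc sys; do mount --rbind /$fs /mnt/$fs; done
chroot /mnt update-initramfs -u
chroot /mnt grub-install /dev/sda
chroot /mnt update-grub
```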

I’m just trying to figure out which image would be most appropriate for this purpose and what additional setup might be required before booting into the target environment (which is otherwise taken care of by cloud-init in the regular boot-in-place scenario). I see a tar.gz, a squashfs, a -root.tar.xz image, etc. - these are obviously not boot-in-place images. Is there any help/documentation available on how these are supposed to be used?