Using autoinstall with ubuntu-20.04.4-live-server-amd64.iso here, and I cannot understand why it consistently installs a GNOME-based desktop environment.
Some context, in case anything here helps explain the issue:
Net booting environment
apt-cacher-ng to save my bandwidth and speed up repeated installs
I’ve added a GRUB option in my netboot to boot the exact same Ubuntu 20.04.4 Server image without the autoinstall config: I can install manually, and no GNOME-based desktop environment is ever installed (which is totally expected).
If I take the /var/log/autoinstall-user-data from that manual install and apply it, the autoinstall proceeds but it will, again, install a GNOME-based desktop environment.
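For illustration, this is roughly the shape of a minimal autoinstall user-data of the kind involved here (identity values and layout are placeholders, not the actual file); nothing in it requests a desktop task, which is what makes the behavior surprising:

```yaml
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: testhost          # placeholder
    username: ubuntu            # placeholder
    # crypted password, e.g. output of: mkpasswd --method=SHA-512
    password: "$6$examplesalt$examplehash"
  storage:
    layout:
      name: lvm                 # guided LVM layout; no desktop packages anywhere
```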
The documentation page does not mention (or at least, I didn’t find it) where and how to report bugs in the autoinstaller. It only explains how to report bugs for the regular, interactive installer, which is not applicable here.
What I wanted to report:
The automated server install supports configuring a proxy in the autoinstallation file as a URL.
Nowadays, proxies should use TLS and have an https URL. Many programs complain about plain http, for good reason.
But typically, proxies are part of the LAN infrastructure and not publicly available. Therefore, they quite often have TLS certificates issued by some local CA rather than one of the official CAs covered by the standard CA certificate bundle.
It is possible to install certificates through the user-data file via cloud-init, but since that usually runs after installation, this probably comes too late or is not guaranteed. The documentation should at least mention how this is expected to be achieved.
The next guess would be to use early-commands or late-commands, but the former seems to be too early and the latter too late.
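For reference, one approach worth trying is to install the local CA certificate via early-commands before the proxy is first used. The hostnames and paths below are placeholders, and whether this runs early enough for all of the installer’s own downloads is exactly the open question raised above:

```yaml
#cloud-config
autoinstall:
  version: 1
  # Hypothetical https proxy fronted by a certificate from a local CA
  proxy: https://proxy.example.lan:3128
  early-commands:
    # Fetch the local CA cert over plain http (it's public anyway) and
    # add it to the installer environment's trust store
    - wget -qO /usr/local/share/ca-certificates/local-ca.crt http://netboot.example.lan/local-ca.crt
    - update-ca-certificates
```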
Install on USB works but Boot from USB doesn’t work on HP Microserver Gen8
I’ve tried everything I can think of, but an Ubuntu 20.04 or 22.04 installation on a USB drive results in a system that installs perfectly but on reboot never boots from USB; it just falls back to PXE booting.
I don’t have this problem with Debian or Ubuntu 18.04 installations. That has worked fine for years.
I’ve even tried different USB enclosures for my SATA SSD. I’ve tried different SATA SSDs in different enclosures. The only thing that worked was a powered docking station, but that’s not a solution as it can’t fit in my Microserver.
Although it could be a quirk of the Microserver (I don’t have other hardware to test on), since other operating systems work fine I’m skeptical, and I suspect an issue with the autoinstaller.
In my own personal experience, the autoinstaller is pain for zero gain. And that includes the scattered documentation and the undiscoverable way to configure storage (RAID and such).
I want to be mindful and respect all the work people have put into Autoinstaller, for some people it has probably been their full-time work. But I’m not happy about it at all, I’m sorry to say.
Currently this does not seem to be possible; however, a fix is in progress. For updates, I found the issue on Launchpad and an active pull request on GitHub.
This probably isn’t the correct place to ask this question but for sure if someone can point me in a direction I’d be more than grateful.
I’m new to a lot of this stuff, so please bear with me if I don’t use the correct terminology, but basically my question boils down to wondering if I’m correct in discovering (the hard way) that supported cloud-init modules != supported autoinstall modules?
To be a bit more specific, I have a working PXE boot solution for booting both the 20.04 and 22.04 live server ISOs that boots and reads my user-data & meta-data files via http without any issues.
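For context, the boot entries look roughly like this (the server IP, ISO name, and paths are placeholders for our setup); note the escaped `;` in the `ds=` datasource argument, which GRUB would otherwise treat as a command separator:

```
menuentry "Ubuntu 22.04 live server autoinstall" {
    # casper fetches the ISO over http; cloud-init's NoCloud datasource
    # reads user-data and meta-data from the trailing-slash URL
    linux /casper/vmlinuz ip=dhcp url=http://10.0.0.1/jammy-live-server-amd64.iso autoinstall ds=nocloud-net\;s=http://10.0.0.1/autoinstall/ ---
    initrd /casper/initrd
}
```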
One example I ran into was wanting to use the cloud-init ansible module. This module isn’t listed as supported in the autoinstall-reference, so even if it’s listed in the user-data file, it appears to me that it’s being ignored and not processed (i.e. skipped), even though it’s a valid cloud-init module.
The workaround we came up with was to use a late-command to set up a firstrun.service that executes a bash script launching ansible-pull… it works but isn’t ideal. What I’m wondering is if someone who has a bit more experience with all this can confirm for me that supported cloud-init modules != supported autoinstall modules.
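That workaround looks roughly like this in the autoinstall file (the unit name, script contents, and repo URL are placeholders standing in for our setup, not a tested recipe):

```yaml
autoinstall:
  late-commands:
    # Drop a one-shot systemd unit into the installed system that runs
    # ansible-pull on first boot, then disables itself
    - |
      cat > /target/etc/systemd/system/firstrun.service <<'EOF'
      [Unit]
      Description=Run ansible-pull once on first boot
      After=network-online.target
      Wants=network-online.target

      [Service]
      Type=oneshot
      ExecStart=/usr/bin/ansible-pull -U https://git.example.com/site.git local.yml
      ExecStartPost=/usr/bin/systemctl disable firstrun.service

      [Install]
      WantedBy=multi-user.target
      EOF
    - curtin in-target --target=/target -- systemctl enable firstrun.service
```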
Also, if anyone has any ideas or feedback on how we might make better use of all of the cloud-init modules when installing from the ISO media onto bare metal, I’d be all ears! Maybe I’ve missed something along the way. It seems that the problem cloud-init was developed to solve might not map so well to using the ISO to install on bare-metal systems.
@dbungert did you just hear a loud bang? Because you just blew my mind!
All joking aside… this clears things up for sure. After all of the research, and hours of trial and error I don’t know why it didn’t occur to me to try this! I’ll echo some of the other comments in this thread in that information to tie all this together is a bit spread out but for sure it always amazes me how this community is always more than willing to help.
I can’t promise I won’t have more questions, but your pseudocode has for sure opened my eyes. I can’t wait to get back to work and give this all a try.
Thanks for the very positive feedback @zero0ne. I have incorporated this information in the documentation improvements we already have started for Subiquity.
I’m wondering if it would be possible to leverage autoinstall and possibly cloud-init for some more custom install configurations … Pretty much everything I build is a root-on-zfs setup, leveraging zfsbootmenu for the system boot. That then finds zfs pools with datasets with kernels and presents a menu for booting. zfsbootmenu is Very Slick.
The build process involves typical disk partitioning etc. zpool creates the zfs pool from the disk(s), and the datasets are mounted to /mnt/builder. debootstrap drops a basic install there, then a chroot process does the rest.
Given an appropriate config, I think autoinstall could prep the disk(s), but it would then need specific commands to create the ZFS pool/datasets and so on. Once the system is bootable, I think cloud-init might be able to do the rest.
Right now that ZFS-root.sh script is interactive, but can use an answer config file. In fact there is a packer setup to enable building bootable disk images too. Somehow, this should be able to be translated into the autoinstall/cloud-init world …
23.10 will have ZFS improvements. A guided option will be available for ZFS root in the Desktop installer (no encrypted ZFS yet, next on the list). Also, it will be possible to trigger ZFS root with guided partitioning on server with autoinstall. This produces a similar result to the ZFS setup Ubiquity produces.
```yaml
storage:
  layout:
    name: zfs
```
For further customization, it will be possible to use curtin storage actions for ZFS/Zpool objects on any of the 23.10 Subiquity installers (or older ISOs with snap refresh). Curtin storage actions are a bit verbose, so if that’s something you’re interested in, you may find it convenient to first do a guided install to get a template, copy the template from /var/log/installer/autoinstall-user-data, and make further tweaks.
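As a rough illustration of the shape of those actions (field names follow curtin’s storage documentation, but I haven’t validated this exact fragment; the guided template from /var/log/installer/autoinstall-user-data is the authoritative reference):

```yaml
storage:
  config:
    - type: disk
      id: disk0
      match:
        size: largest
      ptable: gpt
      wipe: superblock
    - type: partition
      id: part0
      device: disk0
      size: -1              # use the rest of the disk
    - type: zpool           # pool built on the partition above
      id: zpool0
      pool: rpool
      vdevs: [part0]
      mountpoint: /
    - type: zfs             # a dataset within that pool (name is illustrative)
      id: zfs0
      pool: zpool0
      volume: /ROOT/ubuntu
```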
Yes. You can start today if desired with the following in autoinstall:
```yaml
refresh-installer:
  update: yes
  channel: beta
```
Now, 22.10 specifically hasn’t been tested since that one is EOL, but 23.04/22.04/20.04 should be in good shape (I’ve tested guided ZFS installs via autoinstall on Focal and Jammy).