I also came here because of Packer/Vagrant. To work with Packer, the installer has to be controllable from the keyboard, which is what the classic installer offered. That is, if you pressed Esc twice, the classic installer would drop to a command prompt. This is my specific Packer command for Ubuntu 18.
You can always access the command line from the live system in one of two ways:

- sending either F2 or Ctrl-Z to subiquity (chosen because of its parallel to "background" in a shell; but note that to get back to subiquity you have to send Ctrl-D rather than typing `fg`)
- navigating to the "Help" menu and choosing "Enter shell".
If you are installing using a video console rather than a serial console, you can additionally get a shell by sending Alt-F2 to switch VTs.
Hi, but why would you want to run the installer at all?
We provide preinstalled Vagrant boxes as of the Ubuntu 20.04 LTS (Focal Fossa) release [20240220]. If you need any modifications (user names, hostnames, extra packages, etc.) you can provide all of that with suitable cloud-config metadata, which will apply those customizations on first boot, skipping the whole "boot installer, preseed installer, run installer, reboot, do first boot" dance.
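A minimal cloud-config along these lines might look like the following sketch; the hostname, user name, SSH key and package list are illustrative assumptions, not defaults shipped with the boxes:

```yaml
#cloud-config
# Example first-boot customization for a preinstalled box.
# All values below are placeholders; substitute your own.
hostname: builder-01
users:
  - name: deploy
    groups: sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... deploy@example
packages:
  - htop
  - git
```

cloud-init picks this up on first boot and applies the hostname, creates the user, and installs the listed packages, so no installer run is needed at all.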
These require VirtualBox, don't they? I would prefer to use Vagrant with the libvirt backend, which requires me to use alternative boxes. In the past I used the "generic/ubuntuXXYY" boxes provided by https://roboxes.org/, and these are built using Packer.
As you can tell, I'm not familiar with Vagrant. Normally, for pure libvirt one uses the QCOW2 (.img) image we provide, which one can import to initialise a new VM. But I don't know if that is compatible with how Vagrant uses libvirt.
I use Packer to build VMware vCenter templates. While I'd love to skip some of the scripts involved with d-i, the provided Vagrant boxes are not something I can use. It would be unfortunate if these server builds fell by the wayside in favour of a focus on only cloud-based or preinstalled Vagrant boxes.
I am currently using ubuntu-20.04-legacy-server-amd64.iso, as that provides the legacy d-i boot environment; I just wanted to put the use case out there for consideration.
It seems there is a gap in our offerings here. We should figure out how to support this use case better (it's not one I personally really understand at all, though).
The installer always installs the full Ubuntu Server. You can always remove bits of it in an autoinstall late-command, I guess…
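As a sketch of that approach (the purged package names are placeholder examples, not a recommendation):

```yaml
#cloud-config
autoinstall:
  version: 1
  late-commands:
    # late-commands run in the live environment after the install
    # finishes; the target system is mounted at /target, so use
    # "curtin in-target" to run commands inside it.
    - curtin in-target --target=/target -- apt-get -y purge snapd
    - curtin in-target --target=/target -- apt-get -y autoremove
```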
The guided resilient install option was supposed to be done in April. I tried to install the 20.04 LTS server edition this time; it downloaded the newest installer, and I didn't see this option. Can you advise on the timeline?
I hope I never promised to get the guided resilient option done in April, did I? Anyway, no, that's not done and I currently don't have a timeline for it. Hopefully not too long though.
Over the years the preseeding using PXE was improved, and the netboot files needed (linux & initrd.gz) totalled 66 MB. Now just the two replacements (vmlinuz & initrd) total over 100 MB, and that is just to load and run the new "live server" image, which is 952 MB.
The documentation on this "replacement" is weak, and the YAML replacement for the preseed files appears to be lacking as well. For example, the documentation does not appear to say where the actual auto-installation configuration is collected from.
After using the netboot installation successfully since 10.04, I was confronted with a completely new and definitely not better replacement that is slow and buggy, while the working solution has been labelled "legacy".
Normally with preseeding, the modifications to the preseed file can be derived from the generated files, but the new installer doesn't appear to cope with aspects of multiple-drive configurations.
I'm wondering, was there any reason, like some fundamental missing functionality that could not be provided by the old installer and was absolutely required by the paying market? This new installer looks like another "Unity" experiment: do something new for the sake of doing it differently, then never do it properly and deploy it broken.
The new installer no longer gives me the option to configure automatic security updates. It does not allow me to set the time zone. It forces me to use a lower-case host name, which I then rename later. It does not allow me to select the standard services in a simple and nice way, and the cherry on top is the fully broken support for software RAID over Intel RST, which worked fine in 14.04 and 16.04 (and also the 18.04 alternate). Many thanks, though, to the person who decided to still provide the "Legacy" installer. And maybe the same person is wise enough to kill the new installer and put the developers to real work.
Since I got a friendly yet unreasonable warning from the moderators, I'll rephrase my message:
The new installer is missing the following features:
- ability to set the updates policy
- set the timezone
- set upper-case host names
- set up bootable software RAID on top of software RAID already defined in the BIOS (RST)
- set up bootable software RAID on top of individual SSDs without software RAID defined in the BIOS (the server motherboard is an X10DRi, SSDs configured in AHCI mode, EFI boot, in case this is a bug and not a missing feature).
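For what it's worth, at least the first two items can be expressed non-interactively via the autoinstall `updates` and `timezone` keys (availability depends on the subiquity version); a minimal sketch with example values:

```yaml
#cloud-config
autoinstall:
  version: 1
  timezone: Europe/Bucharest   # example value, not a default
  updates: security            # or "all"
```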
And if someone can share, I believe I am entitled to know the reason for such a change, since it has already led to many man-days lost (and implicitly money lost) at the company I work for. I have to justify my lost time, and furthermore such decisions affect my company's political decision whether to strengthen its ecosystem around Ubuntu and buy paid features, or to choose another business partner that is more stable, more reliable and leads to less time lost.
I strongly believe my previous message complies with the "Constructive criticism is welcome, but criticize ideas, not people" rule. Ideas can be criticised in a sarcastic way; there is no rule against sarcasm. If the moderators believe otherwise, feel free to moderate my post. I publicly disagree with the judgement.
My feedback is honest, not discouraging. It is negative because a bad job was done; I cannot give positive feedback for negative work. People should be mature, take responsibility for their actions and accept such feedback. And I would be extremely happy if management saw the honest feedback and prevented something like this in the future. If responsibility had been taken, none of this would have happened. Invest in quality control, or accept the negative feedback you get and change fast.
ability to set the updates policy - if a sane policy is set by default, then please make that clear during installation. The update policy is one of the most critical parts of an installation. If the team that handles installation delivers a buggy installer, I cannot trust it to do the right job in setting what I believe is a sane default; I have to dig deeper, check whatever the default is and maybe change it. If this takes 2 hours for 10,000 system admins, that means 20,000 hours of work, or 2,500 man-days lost. The goal should be to minimize work, not create extra.
timezone - the ability to set it during installation saves me one extra step afterwards. For my case it is highly relevant; it's a loss of functionality. The installer could just as well say "Using this timezone" and give me the ability to click next or change it. That makes for a better experience.
I broke the RST RAID and switched the SSDs to AHCI mode, but kept the server in EFI mode. The installer can create a software RAID setup, but it requires me to set a boot drive and does not even allow me to format a partition as an EFI System Partition (or just plain FAT). Basically it's stuck. What's now even worse is that after all this mess, both the legacy installer and the previous 18.04 alternate installer are stuck with an "Unable to install GRUB in dummy" error ("Executing 'grub-install dummy' failed."). This is a perfectly working server and I have already completely erased the EFI partition a few times. The server ran Ubuntu 18.04 before, and for some reason I now get this error even with the 18.04 alternate.
Note that after re-enabling it, depending on the controller, there are 5 different Ctrl+(letter) key combinations that invoke the pre-OS UI to set up RST again from scratch.
Once you have RST resynced from firmware you should be good to go.
There is no Linux userspace tooling to unbreak any of this; it has to be done from the firmware settings.
I've run into a few issues with the ISO installation. The first is present on the 5/29 live server ISO: the user-data file it creates has two errors in it that cause it to fail when used for an autoinstall.
- No version is written in the file, so it immediately fails to load.
- Under the keyboard section, `toggle: null` is added, which causes an exception during autoinstall.
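Until that is fixed, a generated user-data file can be repaired by hand along these lines (a sketch; the keyboard layout is an assumed example value):

```yaml
#cloud-config
autoinstall:
  version: 1        # must be present, or loading fails immediately
  keyboard:         # no "toggle: null" entry, which trips the parser
    layout: us
```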
Another issue I've run into is trying to add extra packages to the ISO. With 18.04 and 16.04 I could use a GPG signature to rewrite the Packages files and add an extras folder with additional packages. This doesn't work with 20.04; are there additional steps needed beyond creating Packages and Release files for the extra packages?