Netbooting the live server installer

On Ubuntu 20.04.3, the directory is /etc/dnsmasq.d

5.11.0-43-generic #47~20.04.2-Ubuntu SMP

I am having issues with 21.04 and 21.10 as well. I am using NOC-PS to do a physical install, but it still loads the language selection and so on and won't do an automated install. Here is my PXE code:

#!gpxe
kernel http://$server/proxy.php/casper/vmlinuz ks=http://$server/kickstart.php/ubuntu?netplan=1 netcfg/choose_interface=$mac netcfg/no_default_route=1 vga=normal ip=dhcp url=http://www.releases.ubuntu.com/21.04/ubuntu-21.04-live-server-amd64.iso auto=true priority=critical debian-installer/locale=en_US keyboard-configuration/layoutcode=us ubiquity/reboot=true languagechooser/language-name=English countrychooser/shortlist=US localechooser/supported-locales=en_US.UTF-8 boot=casper automatic-ubiquity autoinstall quiet splash noprompt noshell
initrd http://$server/proxy.php/casper/initrd
boot
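
A note on the kernel arguments: the live-server installer (subiquity) ignores debian-installer preseed options such as auto=true, priority=critical, and the netcfg/* settings, which is why the language selection still appears; automation is driven by the autoinstall kernel argument plus a cloud-init seed instead. A hedged sketch, reusing the $server paths above and assuming a hypothetical NoCloud seed directory at http://$server/autoinstall/ serving user-data and meta-data (the semicolon in the ds= argument may need quoting or escaping depending on the boot loader):

```text
#!gpxe
kernel http://$server/proxy.php/casper/vmlinuz ip=dhcp url=http://www.releases.ubuntu.com/21.04/ubuntu-21.04-live-server-amd64.iso boot=casper autoinstall "ds=nocloud-net;s=http://$server/autoinstall/" quiet
initrd http://$server/proxy.php/casper/initrd
boot
```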

So I was able to get this working with Cobbler. However, I'm having issues with the preseed/cloud-init autoinstall file. I think it has everything to do with the storage layout.

#cloud-config
autoinstall:
  apt:
    geoip: true
    preserve_sources_list: false
    primary:
    - arches:
      - amd64
      - i386
      uri: http://us.archive.ubuntu.com/ubuntu
    - arches:
      - default
      uri: http://ports.ubuntu.com/ubuntu-ports
  identity:
    hostname: fdsfsadf
    password: 
    realname: username1
    username: username1
  kernel:
    package: linux-generic
  keyboard:
    layout: us
    toggle: null
    variant: ''
  locale: en_US.UTF-8
  network:
    ethernets:
      ens33:
        dhcp4: true
    version: 2
  ssh:
    allow-pw: true
    authorized-keys: []
    install-server: true
  storage:
    config:
    - ptable: gpt
      path: /dev/sda
      wipe: superblock
      preserve: false
      name: ''
      grub_device: true
      type: disk
      id: disk-sda
    - device: disk-sda
      size: 1048576
      flag: bios_grub
      number: 1
      preserve: false
      grub_device: false
      type: partition
      id: partition-3
    - device: disk-sda
      size: 1073741824
      wipe: superblock
      flag: ''
      number: 2
      preserve: false
      grub_device: false
      type: partition
      id: partition-4
    - fstype: ext4
      volume: partition-4
      preserve: false
      type: format
      id: format-2
    - device: disk-sda
      size: 52610203648
      wipe: superblock
      flag: ''
      number: 3
      preserve: false
      grub_device: false
      type: partition
      id: partition-5
    - name: vg_root
      devices:
      - partition-5
      preserve: false
      type: lvm_volgroup
      id: lvm_volgroup-1
    - name: lv_root
      volgroup: lvm_volgroup-1
      size: 3221225472B
      wipe: superblock
      preserve: false
      type: lvm_partition
      id: lvm_partition-1
    - fstype: ext4
      volume: lvm_partition-1
      preserve: false
      type: format
      id: format-0
    - path: /
      device: format-0
      type: mount
      id: mount-0
    - name: lv_usr
      volgroup: lvm_volgroup-1
      size: 8589934592B
      wipe: superblock
      preserve: false
      type: lvm_partition
      id: lvm_partition-6
    - fstype: ext4
      volume: lvm_partition-6
      preserve: false
      type: format
      id: format-1
    - path: /usr
      device: format-1
      type: mount
      id: mount-1
    - name: lv_var
      volgroup: lvm_volgroup-1
      size: 8589934592B
      wipe: superblock
      preserve: false
      type: lvm_partition
      id: lvm_partition-2
    - fstype: ext4
      volume: lvm_partition-2
      preserve: false
      type: format
      id: format-6
    - path: /var
      device: format-6
      type: mount
      id: mount-2
    - name: lv_home
      volgroup: lvm_volgroup-1
      size: 10737418240B
      wipe: superblock
      preserve: false
      type: lvm_partition
      id: lvm_partition-3
    - fstype: ext4
      volume: lvm_partition-3
      preserve: false
      type: format
      id: format-3
    - path: /home
      device: format-3
      type: mount
      id: mount-3
    - name: lv_tmp
      volgroup: lvm_volgroup-1
      size: 5368709120B
      wipe: superblock
      preserve: false
      type: lvm_partition
      id: lvm_partition-4
    - fstype: ext4
      volume: lvm_partition-4
      preserve: false
      type: format
      id: format-4
    - path: /tmp
      device: format-4
      type: mount
      id: mount-4
    - name: lv_opt
      volgroup: lvm_volgroup-1
      size: 5368709120B
      wipe: superblock
      preserve: false
      type: lvm_partition
      id: lvm_partition-5
    - fstype: ext4
      volume: lvm_partition-5
      preserve: false
      type: format
      id: format-5
    - path: /opt
      device: format-5
      type: mount
      id: mount-5
    - path: /boot
      device: format-2
      type: mount
      id: mount-6
  updates: security
  version: 1
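
Reused ids in an autoinstall storage section (for example, the same lvm_partition id for two logical volumes, or one format id referenced by two mounts) make curtin's references ambiguous and are a common cause of storage-layout failures. A minimal sketch of a sanity check you could run before kicking off a test install, using a trimmed, hand-written config rather than the real file:

```python
# Sketch: detect duplicate ids and dangling references in an autoinstall
# storage config. The entries below are a small, hypothetical config that
# deliberately reuses "lvm_partition-1" and "format-2" to show the check.
config = [
    {"type": "disk", "id": "disk-sda"},
    {"type": "partition", "id": "partition-4", "device": "disk-sda"},
    {"type": "format", "id": "format-2", "volume": "partition-4"},
    {"type": "lvm_volgroup", "id": "lvm_volgroup-1", "devices": ["partition-4"]},
    {"type": "lvm_partition", "id": "lvm_partition-1", "volgroup": "lvm_volgroup-1"},
    {"type": "lvm_partition", "id": "lvm_partition-1", "volgroup": "lvm_volgroup-1"},  # duplicate id
    {"type": "format", "id": "format-2", "volume": "lvm_partition-1"},  # duplicate id
    {"type": "mount", "id": "mount-2", "device": "format-2", "path": "/var"},
]

def check_storage(actions):
    """Return a list of problems: duplicate ids and unresolvable references."""
    problems, seen = [], set()
    for action in actions:
        if action["id"] in seen:
            problems.append("duplicate id: " + action["id"])
        seen.add(action["id"])
    for action in actions:
        # These keys reference other actions by id in curtin-style configs.
        for ref_key in ("device", "volume", "volgroup"):
            ref = action.get(ref_key)
            if ref is not None and ref not in seen:
                problems.append(action["id"] + ": " + ref_key + " -> " + ref + " not defined")
    return problems

for problem in check_storage(config):
    print(problem)  # prints the two duplicate-id problems
```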

https://imgur.com/a/C8AUOdM

Hey, I am from the theforeman.org community (systems management open source software), and users come to us reporting that provisioning via PXE no longer works with modern Ubuntu releases. After investigating this, it looks like Ubuntu/Canonical no longer provides PXE files via HTTP, instead asking users to download a huge ISO just to extract those two small files.

Is there any plan to start re-publishing the kernel and initramfs of the new installer/live CD somewhere? Pretty much all Linux distributions publish PXE files in a plain way, and we have architected our system based on this - Foreman is able to download and prepare a PXE environment on request.

Changing the workflow just for Ubuntu would not be ideal - our software supports around 30 different distributions, and they all work fine. Before we commit to any changes, I just want to be sure that those PXE files are not planned to be published.

Thanks, and stay safe!

Yes, we plan to start doing this again before the release of 22.04.

Thanks so much for replying! @lzap I hope this helps answer your concerns - and thank you for coming here with your questions. :blush:

Many thanks for the information. Just to confirm: there are no plans to publish them for 21.10, I guess?

Can we expect the files in the same directories with the same names, or are changes planned? That would be:

http://archive.ubuntu.com/ubuntu/ubuntu/dists/XXX/main/installer-amd64/current/legacy-images/netboot/ubuntu-installer/amd64/

I wonder how MaaS does it :thinking:

There will be significant changes here. The previous netboot images published to the archive were artifacts from the debian-installer source package that no longer exists. The new netboot images will be published from our ISO publishing infrastructure.

With a completely separate initramfs unrelated to our server image build, because the functionality is different.

Yeah, understood. That's all clear; we are working on updating our project. We just need the new autoinstall (cloud-init based) PXE files to be available at some path, which will probably be different.

Okay thanks.

For me the following worked:
nano /etc/default/grub
where I replaced
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
with:
GRUB_CMDLINE_LINUX_DEFAULT=""
saved, then:
update-grub
reboot
works :slight_smile:

Hi, our corporation uses the legacy netboot image for Focal. Recently something changed, because the computers refuse to boot via PXE with Secure Boot enabled.

After updating grubx64.efi and bootx64.efi, the computers now boot into the PXE menu, but our entries have stopped working:

menuentry 'Deploy Ubuntu 20.04 LTS Desktop' {
                gfxmode $linux_gfx_mode
                linux /images/ubuntu/focal-20211013/amd64/linux $vt_handoff toram netcfg/choose_interface=enp0s31f6 url=http://pxe01.example.com/dev/pxe-boot/preseed/focal.seed auto=true hostname=unassigned nomodeset ---
                initrd /images/ubuntu/focal-20211013/amd64/initrd.gz
        }

We are using the initrd.gz & linux from the legacy netboot image. Are these no longer being updated? Is that why we get this error message:

error: /images/ubuntu/focal-20211013/amd64/linux has invalid signature.
error: you need to load the kernel first.

Any help would be appreciated. I'm not sure how we can use the live server image with a preseed file; I can't find any documentation on that. I tried creating an entry for the live server image without a seed file, but then it wouldn't download the 900 MB ISO; it stalled at 405 MB.

menuentry 'Ubuntu 20.04 Liveserver' --id ubuntu-server_1804 {
 linuxefi /images/ubuntu/focal-live-20220317/amd64/vmlinuz  ethdevice-timeout=30 ip=dhcp url=http://pxe01.example.com/dev/pxe-boot/focal-live-server-amd64.iso boot=casper ide=nodma debian-installer/language=de console-setup/layoutcode?=de
  initrdefi /images/ubuntu/focal-live-20220317/amd64/initrd
}

Error message:

(initramfs) focal-live-server-am 31% ####### | 405M 0:00:14 ETA
wget: read error: Connection reset by peer

In one of the most frequent use cases, IT does not allow running anything that does PXE/DHCP, or the server is the first server in a remote datacenter, shipped without an OS, so DHCP/netboot is out of the question. I am halfway across the globe from the server I need to set up. What works for me is a small netboot image that boots up and then downloads and installs everything from the internet.

Have you guys ever tried to mount a 900 MB ISO over a 250 ms latency link via iDRAC just to get to the loading screen and select the nearest mirror?

> Have you guys ever tried to mount a 900 MB ISO over a 250 ms latency link via iDRAC just to get to the loading screen and select the nearest mirror?

I’m pretty sure they haven’t :frowning: I’m currently trying to boot the 22.04 server ISO via iLO over a link with 20 ms latency. The boot started about an hour ago and it’s still scanning the ISO (it’s at the step “A start job is running for casper-md5check Verify Live ISO checksums”) :man_facepalming:

I did this just this week (booting the ISO over a BMC using virtual media) with 22.04 on a couple of machines (not a production DC, but in a lab, remotely). Because my connection is so bad (ADSL, 20 Mb/s down and 1.5 Mb/s up), I have to set up an NFS or HTTP source inside the lab, pull the ISOs there, and then add them as virtual media in the BMC to boot from.

Interestingly, I did that with a Lenovo XClarity controller and it was fairly fast and painless, but then did it with a Cisco CIMC and it mirrored what marianrh experienced: over an hour, maybe closer to two hours, to boot, and then a lot longer to get through the installer due to the latency issues.

That was done on different days, though, so it could well be that I just happened to hit a bad day when I did the Cisco one. And to be fair, almost all (pretty much every one) of my installs are done using MAAS; I was doing this to test a couple of subiquity features on actual server hardware that I don't have local to me.

Hi @vorlon!
Thank you for explaining why and how the new netboot images setup will differ from the previous method.
Do you know if the new netboot images for 22.04 are now released (and where)?

Hi, I’m about to start work on publishing the netboot artefacts more sensibly and just pasted the spec for the work here: [spec] Publishing netboot artifacts (after a bit of raging at discourse syntax). Please comment there with any thoughts about the contents!

So, the basic idea is that now with 22.04…

  • every time we PXE, we have to download a 1.4GB file
  • we lost the well known preseed format for the debian installer
  • …so we now need to re-adapt/re-write every preseed file we ever used; edit: oh, and it behaves completely differently with regard to the interactive parts
  • …and if we have systems that automatically generate preseeds, we’ll have to re-write everything

This seems such a downgrade.

I’m struggling to understand how subiquity is in any way better than the old legacy installer.

Hi @vorlon,

We are trying to install Ubuntu 22.04 desktop on thousands of computers.

We are testing PXE to boot the live ISO for Ubuntu desktop. It works ok with a virtual machine that has 5GB of RAM. We boot the live ISO and install Ubuntu manually without problems.

But when we tried the same virtual machine with 4 GB of RAM, it fails with the message:
"Out of memory: Killed process 179 (systemd-udevd) total-vm:12144kB, anon-rss:1360kB, file-rss:2672kB, shmem-rss:0kB, UID:0 pgtables:56kB oom_score_adj:0"

It seems that with the Ubuntu 22.04 desktop ISO being 3.5GB in size, there is no RAM available to boot and load the ISO with just 4GB of RAM.

We can boot the server ISO with 4GB of RAM and then install ‘ubuntu-desktop’ as a package, but I suppose that this is not supported.

Is there any solution to install Ubuntu desktop 22.04 using PXE in computers with 4GB of RAM?

The approach we’re taking is to use the Ubuntu Server ISO to automate the install and, at the end of the cloud-init file, run the following late command:

  • curtin in-target --target=/target -- apt-get install -y ubuntu-desktop plymouth-theme-ubuntu-logo grub-gfxpayload-lists

The Ubuntu Server ISO is much smaller.
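
For reference, in an autoinstall file that command goes under the late-commands key; a minimal sketch (only late-commands is meaningful here, the surrounding keys stand in for a normal autoinstall file):

```yaml
#cloud-config
autoinstall:
  version: 1
  # ...identity, storage, network, etc. as in a normal autoinstall file...
  late-commands:
    # Runs in the installer environment after the target system is unpacked;
    # the package list mirrors the curtin command above.
    - curtin in-target --target=/target -- apt-get install -y ubuntu-desktop plymouth-theme-ubuntu-logo grub-gfxpayload-lists
```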

The minimum requirements for Ubuntu Desktop Edition are:
2 GHz dual-core processor
4 GiB RAM (system memory)
25 GB (8.6 GB for minimal) of hard-drive space (or a USB stick, memory card or external drive, but see LiveCD for an alternative approach)
VGA capable of 1024x768 screen resolution
Source: https://help.ubuntu.com/community/Installation/SystemRequirements