Netbooting the live server installer

Can you send the link for the instructions? Thanks in advance.

You are literally replying to the page that has these instructions at the top. If you have trouble following these instructions, please ask specific questions so we can help.

Are the above instructions applicable to Ubuntu 21 and the newer versions that will be released?

Thank you so much for finding that out! Thanks to that, I found a solution here. You can just add cloud-config-url=/dev/null to the boot parameters (beside autoinstall ds=nocloud-net;s=http://....). This stops cloud-init from re-downloading the ISO, and the installation succeeded on my 3 GB RAM VM.


Is anyone here using cobbler to netboot/pxeboot? Have you gotten it to work with Ubuntu’s cloud-init?

We manage about 60K computers across 800 remote schools.

We had about 9K Ubuntu desktops (mainly 16.04 and 18.04). All were installed using iPXE, booting with linux.bin & initrd.gz from the netboot images and then using d-i preseed to install packages from our internal mirror.

We even support installing Windows 7, 8.1 and 10 over the network with iPXE and WindowsPE.

The number of Ubuntu computers has declined in recent years, but we would like to repurpose our old Windows 7 and 8.1 machines (about 20K computers) using Ubuntu 22.04.

But as I understand it, the new network installer procedure requires downloading and storing the full install ISO in memory; it then boots from this ‘ramdisk’, loading the new installer.

Am I right? Is this the new way of installing over the network?

There is also another new way: Switch to Debian and use their mini.iso. That’s what I have done.

It’s becoming more and more clear that Canonical is not interested in a use case dealing with recycling old computers.

On Ubuntu 20.04.3 (kernel 5.11.0-43-generic #47~20.04.2-Ubuntu SMP), the directory is /etc/dnsmasq.d

I am having issues with 21.04 and 21.10 as well. I am using NOC-PS to do a physical install, but it still loads the language selection and so on and won’t do an autoinstall. Here is my PXE code:

#!gpxe
kernel http://$server/proxy.php/casper/vmlinuz ks=http://$server/kickstart.php/ubuntu?netplan=1 netcfg/choose_interface=$mac netcfg/no_default_route=1 vga=normal ip=dhcp url=http://www.releases.ubuntu.com/21.04/ubuntu-21.04-live-server-amd64.iso auto=true priority=critical debian-installer/locale=en_US keyboard-configuration/layoutcode=us ubiquity/reboot=true languagechooser/language-name=English countrychooser/shortlist=US localechooser/supported-locales=en_US.UTF-8 boot=casper automatic-ubiquity autoinstall quiet splash noprompt noshell
initrd http://$server/proxy.php/casper/initrd
boot
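For comparison: most of the parameters above (auto=true, priority=critical, the netcfg/* and *chooser/* keys) are debian-installer/preseed options, which the live-server installer (casper + subiquity) does not use. A minimal autoinstall stanza for the live-server ISO looks more like the sketch below — the $server paths, the ISO name, and the NoCloud seed URL are placeholders, and cloud-config-url=/dev/null is the workaround mentioned earlier in the thread:

```text
#!gpxe
kernel http://$server/casper/vmlinuz ip=dhcp url=http://$server/ubuntu-21.04-live-server-amd64.iso boot=casper autoinstall cloud-config-url=/dev/null ds=nocloud-net;s=http://$server/autoinstall/
initrd http://$server/casper/initrd
boot
```

The directory behind s= needs to serve NoCloud user-data and meta-data files, and depending on the boot loader the semicolon in ds=nocloud-net;s=… may need quoting or escaping.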

So I was able to get this working with Cobbler. However, I’m having issues with the preseed/cloud-init autoinstall file. I think it has everything to do with the storage layout.

#cloud-config
autoinstall:
  apt:
    geoip: true
    preserve_sources_list: false
    primary:
    - arches:
      - amd64
      - i386
      uri: http://us.archive.ubuntu.com/ubuntu
    - arches:
      - default
      uri: http://ports.ubuntu.com/ubuntu-ports
  identity:
    hostname: fdsfsadf
    password: 
    realname: username1
    username: username1
  kernel:
    package: linux-generic
  keyboard:
    layout: us
    toggle: null
    variant: ''
  locale: en_US.UTF-8
  network:
    ethernets:
      ens33:
        dhcp4: true
    version: 2
  ssh:
    allow-pw: true
    authorized-keys: []
    install-server: true
  storage:
    config:
    - ptable: gpt
      path: /dev/sda
      wipe: superblock
      preserve: false
      name: ''
      grub_device: true
      type: disk
      id: disk-sda
    - device: disk-sda
      size: 1048576
      flag: bios_grub
      number: 1
      preserve: false
      grub_device: false
      type: partition
      id: partition-3
    - device: disk-sda
      size: 1073741824
      wipe: superblock
      flag: ''
      number: 2
      preserve: false
      grub_device: false
      type: partition
      id: partition-4
    - fstype: ext4
      volume: partition-4
      preserve: false
      type: format
      id: format-2
    - device: disk-sda
      size: 52610203648
      wipe: superblock
      flag: ''
      number: 3
      preserve: false
      grub_device: false
      type: partition
      id: partition-5
    - name: vg_root
      devices:
      - partition-5
      preserve: false
      type: lvm_volgroup
      id: lvm_volgroup-1
    - name: lv_root
      volgroup: lvm_volgroup-1
      size: 3221225472B
      wipe: superblock
      preserve: false
      type: lvm_partition
      id: lvm_partition-1
    - fstype: ext4
      volume: lvm_partition-1
      preserve: false
      type: format
      id: format-0
    - path: /
      device: format-0
      type: mount
      id: mount-0
    - name: lv_usr
      volgroup: lvm_volgroup-1
      size: 8589934592B
      wipe: superblock
      preserve: false
      type: lvm_partition
      id: lvm_partition-6
    - fstype: ext4
      volume: lvm_partition-6
      preserve: false
      type: format
      id: format-1
    - path: /usr
      device: format-1
      type: mount
      id: mount-1
    - name: lv_var
      volgroup: lvm_volgroup-1
      size: 8589934592B
      wipe: superblock
      preserve: false
      type: lvm_partition
      id: lvm_partition-2
    - fstype: ext4
      volume: lvm_partition-2
      preserve: false
      type: format
      id: format-6
    - path: /var
      device: format-6
      type: mount
      id: mount-6
    - name: lv_home
      volgroup: lvm_volgroup-1
      size: 10737418240B
      wipe: superblock
      preserve: false
      type: lvm_partition
      id: lvm_partition-3
    - fstype: ext4
      volume: lvm_partition-3
      preserve: false
      type: format
      id: format-3
    - path: /home
      device: format-3
      type: mount
      id: mount-3
    - name: lv_tmp
      volgroup: lvm_volgroup-1
      size: 5368709120B
      wipe: superblock
      preserve: false
      type: lvm_partition
      id: lvm_partition-4
    - fstype: ext4
      volume: lvm_partition-4
      preserve: false
      type: format
      id: format-4
    - path: /tmp
      device: format-4
      type: mount
      id: mount-4
    - name: lv_opt
      volgroup: lvm_volgroup-1
      size: 5368709120B
      wipe: superblock
      preserve: false
      type: lvm_partition
      id: lvm_partition-5
    - fstype: ext4
      volume: lvm_partition-5
      preserve: false
      type: format
      id: format-5
    - path: /opt
      device: format-5
      type: mount
      id: mount-5
    - path: /boot
      device: format-2
      type: mount
      id: mount-2
  updates: security
  version: 1
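Mistakes like duplicate or dangling id references in a curtin-style storage list are easy to make and hard to spot by eye. A quick sanity check can be scripted; this is my own sketch (check_storage and the demo data are made up for illustration, not an official curtin tool):

```python
def check_storage(config):
    """Report problems in a curtin-style storage config (a list of dicts):
    duplicate "id" values and references to ids that are never defined."""
    problems = []
    seen = set()
    # Pass 1: collect ids and flag duplicates.
    for item in config:
        iid = item.get("id")
        if iid in seen:
            problems.append(f"duplicate id: {iid}")
        seen.add(iid)
    # Pass 2: every reference must point at a defined id.
    for item in config:
        for ref_key in ("device", "volume", "volgroup"):
            ref = item.get(ref_key)
            if ref is not None and ref not in seen:
                problems.append(f"{item['id']}: {ref_key} -> undefined id {ref}")
    return problems

# Two LVs accidentally sharing one id, plus a format pointing nowhere:
demo = [
    {"type": "lvm_partition", "name": "lv_root", "id": "lvm_partition-1"},
    {"type": "lvm_partition", "name": "lv_usr", "id": "lvm_partition-1"},
    {"type": "format", "volume": "lvm_partition-2", "id": "format-0"},
]
print(check_storage(demo))
# → ['duplicate id: lvm_partition-1', 'format-0: volume -> undefined id lvm_partition-2']
```

Feeding it the real storage: config list (e.g. parsed with a YAML library) surfaces exactly the kind of layout problem the installer chokes on.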

https://imgur.com/a/C8AUOdM

Hey, I am from the theforeman.org community (systems management open source software), and users come to us telling us that provisioning via PXE no longer works with modern Ubuntu releases. After I investigated this, it looks like Ubuntu/Canonical no longer provides PXE files via HTTP, instead asking users to download a huge ISO just to extract those two small files.

Is there any plan to start re-publishing the kernel and initramfs of the new installer/live CD somewhere? Pretty much all Linux distributions publish PXE files in a plain way, and we have architected our system based on this - Foreman is able to download and prepare a PXE environment on request.

Changing the workflow just because of Ubuntu would not be ideal - our software supports around 30 distributions and all work fine. Before we commit to any changes, I just want to be sure that those PXE files are not planned to be published.

Thanks and stay safe!

Yes, we plan to start doing this again before the release of 22.04.


Thanks so much for replying! @lzap I hope this helps answer your concerns - and thank you for coming here with your questions. :blush:


Many thanks for the information. Just to confirm, there are no plans to publish them for version 21.10, I guess?

Can we expect the files in the same directories with the same names, or are changes planned? That would be:

http://archive.ubuntu.com/ubuntu/ubuntu/dists/XXX/main/installer-amd64/current/legacy-images/netboot/ubuntu-installer/amd64/

I wonder how MaaS does it :thinking:


There will be significant changes here. The previous netboot images published to the archive were artifacts from the debian-installer source package that no longer exists. The new netboot images will be published from our ISO publishing infrastructure.

With a completely separate initramfs unrelated to our server image build, because the functionality is different.


Yeah, understood. That’s all clear; we are working on updating our project. We just need the new autoinstall (cloud-init based) PXE files to be available at some path, which will probably be different.

Okay thanks.

For me the following worked:
nano /etc/default/grub
where I replaced
GRUB_CMDLINE_LINUX_DEFAULT="splash quiet"
with:
GRUB_CMDLINE_LINUX_DEFAULT=""
saved, then:
update-grub
reboot
works :slight_smile:
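The same edit can also be done non-interactively. A sketch, demonstrated here on a scratch copy; on a real system you would run the same sed (as root) against /etc/default/grub and then update-grub:

```shell
# Stand-in for /etc/default/grub so the commands are safe to try:
printf 'GRUB_CMDLINE_LINUX_DEFAULT="splash quiet"\n' > /tmp/grub.demo
# Blank out the kernel command-line defaults, keeping a .bak backup:
sed -i.bak 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT=""/' /tmp/grub.demo
grep GRUB_CMDLINE_LINUX_DEFAULT /tmp/grub.demo   # prints GRUB_CMDLINE_LINUX_DEFAULT=""
```

The sed rewrites the whole GRUB_CMDLINE_LINUX_DEFAULT line, so check the result before rebooting.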

Hi, our corporation uses the legacy netboot image for Focal. Recently something changed, because the computers refuse to boot via PXE with Secure Boot enabled.

After updating grubx64.efi & bootx64.efi, the computers now boot into the PXE menu, but our entries have stopped working:

menuentry 'Deploy Ubuntu 20.04 LTS Desktop' {
                gfxmode $linux_gfx_mode
                linux /images/ubuntu/focal-20211013/amd64/linux $vt_handoff toram netcfg/choose_interface=enp0s31f6 url=http://pxe01.example.com/dev/pxe-boot/preseed/focal.seed auto=true hostname=unassigned nomodeset ---
                initrd /images/ubuntu/focal-20211013/amd64/initrd.gz
        }

We are using the initrd.gz & linux files from the legacy netboot image. Are these no longer being updated? Is that why we get this error message:

error: /images/ubuntu/focal-20211013/amd64/linux has invalid signature.
error: you need to load the kernel first.

Any help would be appreciated. I’m not sure how we can use the live server image with a preseed file; I can’t find any documentation on that. I tried creating an entry for the live server image without a seed file, but then it wouldn’t download the 900 MB ISO - it stalled at 405 MB.

menuentry 'Ubuntu 20.04 Liveserver' --id ubuntu-server_1804 {
 linuxefi /images/ubuntu/focal-live-20220317/amd64/vmlinuz  ethdevice-timeout=30 ip=dhcp url=http://pxe01.example.com/dev/pxe-boot/focal-live-server-amd64.iso boot=casper ide=nodma debian-installer/language=de console-setup/layoutcode?=de
  initrdefi /images/ubuntu/focal-live-20220317/amd64/initrd
}

Error message:

(initramfs) focal-live-server-am 31% ####### | 405M 0:00:14 ETA
wget: read error: Connection reset by peer

In one of the most frequent use cases, IT does not allow running anything that does PXE/DHCP, or the server is the first server in a remote datacenter and is shipped without an OS, so DHCP/netboot is out of the question. I am halfway across the globe from the server I need to set up. What works for me is a small netboot image that boots up and then downloads and installs everything from the internet.

Have you guys ever tried to mount a 900 MB ISO over a 250 ms latency link via iDRAC just to get to the loading screen and select the nearest mirror?


Have you guys ever tried to mount a 900 MB ISO over a 250 ms latency link via iDRAC just to get to the loading screen and select the nearest mirror?

I’m pretty sure they haven’t :frowning: I’m currently trying to boot the 22.04 server ISO via iLO over a link with 20 ms latency. The boot started about an hour ago and it’s still scanning the ISO (it’s at the step “A start job is running for casper-md5check Verify Live ISO checksums”) :man_facepalming:
