Automated Server installation

Using autoinstall with ubuntu-20.04.4-live-server-amd64.iso here, and I cannot understand why it consistently installs a GNOME-based desktop environment :upside_down_face:

Some context, in case any of it helps explain the issue:

  • Net booting environment
  • apt-cacher-ng to save bandwidth and speed up repeated installs

I’ve added a GRUB option to my netboot menu to boot the exact same Ubuntu 20.04.4 Server without the autoinstall config: I can install manually, and no GNOME-based desktop environment is ever installed (as expected).

If I take the /var/log/installer/autoinstall-user-data from that manual install and apply it, the autoinstall will proceed, but it will, again, add a GNOME-based desktop environment :exploding_head:

Here’s the full user-data file I’m using:

#cloud-config
autoinstall:
  version: 1
  network:
    version: 2
    ethernets:
      id0:
        match:
          name: "*"  # quoted: a bare * is not valid YAML
        dhcp4: true
        critical: true
    wifis:
      id1:
        match:
          name: "*"  # quoted: a bare * is not valid YAML
        optional: true
  refresh-installer:
    update: yes
  identity:
    hostname: whateverhostname
    realname: whatever
    username: whatever
    password: "$6$exDY1mhS4KUYCE/2$zmn9ToZwTKLhCw.b4/b.ZRTIZM30JZ4QrOQ2aOXJ8yk96xpcCof0kxKwuX1kqLG/ygbJ1f8wxED22bTL4F46P0"
  timezone: Europe/Paris
  locale: fr_FR.UTF-8
  keyboard:
    layout: "fr"
    toggle: null
    variant: ''
  ssh:
    install-server: true
    allow-pw: false
    authorized-keys:
      - ssh-ed25519 MyPublicSSHKey my@email.com
  proxy: http://192.168.1.50:3142/
  updates: security
  kernel:
    package: linux-generic
  drivers:
    install: false
  apt:
    disable_components: []
    geoip: true
    preserve_sources_list: false
    primary:
    - arches:
      - amd64
      - i386
      uri: http://fr.archive.ubuntu.com/ubuntu
    - arches:
      - default
      uri: http://ports.ubuntu.com/ubuntu-ports
  storage:
    swap:
      size: 0
    config:
      # the disk
      - id: sda
        type: disk
        ptable: gpt
        match:
          size: largest
        wipe: superblock-recursive
        preserve: false
        grub_device: true
      # /dev/sda1
      - id: sda1
        type: partition
        number: 1
        size: 1GB
        device: sda
        flag: boot
        # for a description why this shouldn't be there, but is needed anyways
        # see https://discourse.ubuntu.com/t/please-test-autoinstalls-for-20-04/15250/340
        grub_device: true
      - id: sda1-format
        type: format
        fstype: fat32
        volume: sda1
      - id: sda1-mount
        type: mount
        path: /boot/efi
        device: sda1-format
      # /dev/sda2
      - id: sda2
        type: partition
        number: 2
        size: 10GB
        device: sda
      - id: sda2-format
        type: format
        fstype: ext4
        volume: sda2
      - id: sda2-mount
        type: mount
        path: /
        device: sda2-format
      # /dev/sda3
      - id: sda3
        type: partition
        number: 3
        size: 10GB
        device: sda
        flag: home
      - id: sda3-format
        type: format
        fstype: ext4
        volume: sda3
      - id: sda3-mount
        type: mount
        path: /home
        device: sda3-format
      # /dev/sda4
      - id: sda4
        type: partition
        number: 4
        size: -1
        device: sda
      - id: sda4-format
        type: format
        fstype: ext4
        volume: sda4
      - id: sda4-mount
        type: mount
        path: /data
        device: sda4-format

Did I miss something? That sounds so weird to me… Any tip or pointer is welcome :nerd_face:

Hi,

The documentation page does not mention (or at least, I didn’t find) where and how to report bugs in the automated installation. It only explains how to report bugs for the regular, interactive installer, which is not applicable here.

What I wanted to report:

The automated server install supports configuring a proxy in the autoinstallation file as a URL.

Nowadays, proxies should use TLS and have an https URL. Many programs complain about plain http, for good reason.

But typically, proxies are part of the LAN infrastructure and not publicly available. Therefore, they quite often have TLS certificates issued by some local CA, not an official one covered by the standard CA certificate bundle.

It is possible to install certificates through the user-data file via cloud-init, but since that usually runs after installation, it probably comes too late, or at least is not guaranteed to run in time. The page should at least mention how this is expected to be achieved.

The next guess would be to use early-commands or late-commands, but the former seems to be too early, the latter too late.
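For what it’s worth, one approach that might work is adding the local CA to the live installer environment from early-commands; whether that runs early enough for all installer traffic is exactly the open question above. A minimal sketch, where the certificate body, file name, and proxy URL are all placeholders (assumptions, not values from this thread):

#cloud-config
autoinstall:
  version: 1
  early-commands:
    # Sketch only: install a local CA certificate into the installer
    # environment's trust store so an https proxy with a locally issued
    # certificate can be verified.
    - |
      cat > /usr/local/share/ca-certificates/local-ca.crt <<'EOF'
      -----BEGIN CERTIFICATE-----
      (your local CA certificate here)
      -----END CERTIFICATE-----
      EOF
    - update-ca-certificates
  proxy: https://proxy.example.lan:3142/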

Install on USB works but Boot from USB doesn’t work on HP Microserver Gen8

I’ve tried everything I can think of, but an Ubuntu 20.04 or 22.04 installation on a USB drive results in a system that installs perfectly yet on reboot never boots from USB; it just falls back to PXE booting.

I don’t have this problem with Debian or Ubuntu 18.04 installations. That has worked fine for years.

I’ve even tried different USB enclosures for my SATA SSD. I’ve tried different SATA SSDs in different enclosures. The only thing that worked was a powered docking station, but that’s not a solution as it can’t fit in my Microserver.

Although it could be a quirk of the Microserver (I don’t have other hardware to test on), the fact that other operating systems work fine makes me skeptical, and I suspect an issue with the autoinstaller.

In my personal experience, the autoinstaller is pain for zero gain, and that includes the scattered documentation and the undiscoverable way to configure storage (RAID and such).

I want to be mindful of and respect all the work people have put into the autoinstaller; for some, it has probably been their full-time job. But I’m not happy with it at all, I’m sorry to say.

So, what about an automated install of 22.04 Desktop then?

I’m having issues setting up swap.

My current config is:

storage:
    swap:
      filename: /swap.img
      size: 3GB
    layout:
      name: direct

But no matter what values I put in (removing filename and keeping only size, adding maxsize, etc.), no swap file is created.

This is utilising ubuntu-22.04.1-live-server-amd64 via Packer.

The full config is:

#cloud-config
autoinstall:
  version: 1
  refresh-installer:
    update: true
    channel: stable
  locale: en_GB
  keyboard:
    layout: gb
  apt:
    geoip: true
  ssh:
    install-server: true
    allow-pw: true
    disable_root: false
    ssh_quiet_keygen: true
    allow_public_ssh_keys: true
  network:
    version: 2
    ethernets:
      ens18:
        dhcp4: true
  package_update: true
  package_upgrade: true
  package_reboot_if_required: true
  updates: all
  packages:
    - qemu-guest-agent
    - sudo
    - bash-completion
    - cloud-init
    - cloud-utils
    - cloud-guest-utils
    - git
    - curl
    - mlocate
    - resolvconf
    - htop
    - net-tools
    - dnsutils
    - aptitude
    - unzip
    - tuned
    - tuned-utils
    - tuned-utils-systemtap
    - tldr
    - needrestart
    - acl
    - libsasl2-modules
  storage:
    swap:
      filename: /swap.img
      size: 3GB
    layout:
      name: direct
  user-data:
    timezone: geoip
    users:
      - name: packer
        gecos: Packer User
        no_user_group: true
        groups: [adm, sudo]
        lock-passwd: true
        homedir: /tmp/packer
        sudo: ALL=(ALL) NOPASSWD:ALL
        shell: /bin/bash
        ssh_authorized_keys:
         - ${ssh_key}

Under storage, if you have a layout section then the swap (and grub) sections do not get used.

My notes from the last time I looked into this. The current source code still appears to behave the same way.


Ah, that’s a bit pants.

Thanks for the reply, it’s certainly stopped me blaming my config!

At least with no swap I can create a custom one.

Thanks again!!
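For anyone else landing here, a minimal sketch of that workaround, assuming the 3 GB size from the config above (the file name and fstab entry are conventional choices, not mandated): since swap: is ignored when layout: is present, the swap file can instead be created on the installed system from late-commands.

#cloud-config
autoinstall:
  version: 1
  storage:
    layout:
      name: direct
  late-commands:
    # Sketch: create and register a 3 GB swap file on the target system.
    # late-commands run in the installer environment with the target
    # mounted, so curtin in-target executes these inside the new system.
    - curtin in-target --target=/target -- fallocate -l 3G /swap.img
    - curtin in-target --target=/target -- chmod 600 /swap.img
    - curtin in-target --target=/target -- mkswap /swap.img
    - curtin in-target --target=/target -- sh -c 'echo "/swap.img none swap sw 0 0" >> /etc/fstab'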

Suggestion to improve the document

Location

  • “Introduction > Providing the autoinstall config”

Improvement suggestion

In the line:

In most scenarios the easiest way will be to provide user-data via the nocloud data source.

Replace the dead link with https://cloudinit.readthedocs.io/en/latest/reference/datasources/nocloud.html.

Thank you for catching that! I’ve updated the link now.

I want to use both layout and grub under storage, so what should I do?

Currently this doesn’t seem to be possible; however, a fix is in progress. For updates, I found the issue in Launchpad and an active pull request on GitHub.
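In the meantime, a possible workaround (a sketch only, mirroring the explicit example earlier in the thread; IDs, sizes, and the grub option shown are illustrative) is to drop layout: and spell the disk out as config: actions, since the grub section is honored when layout is absent:

#cloud-config
autoinstall:
  version: 1
  storage:
    grub:
      reorder_uefi: false  # illustrative grub option; honored without layout
    config:
      - id: disk0
        type: disk
        ptable: gpt
        match:
          size: largest
        wipe: superblock-recursive
        preserve: false
        grub_device: true
      - id: disk0p1
        type: partition
        number: 1
        size: 1GB
        device: disk0
        flag: boot
        grub_device: true
      - id: disk0p1-format
        type: format
        fstype: fat32
        volume: disk0p1
      - id: disk0p1-mount
        type: mount
        path: /boot/efi
        device: disk0p1-format
      - id: disk0p2
        type: partition
        number: 2
        size: -1
        device: disk0
      - id: disk0p2-format
        type: format
        fstype: ext4
        volume: disk0p2
      - id: disk0p2-mount
        type: mount
        path: /
        device: disk0p2-format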

This probably isn’t the correct place to ask this question, but if someone can point me in the right direction I’d be more than grateful.

I’m new to a lot of this stuff, so please bear with me if I don’t use the correct terminology, but basically my question boils down to wondering if I’m correct in discovering (the hard way) that supported cloud-init modules != supported autoinstall modules?

To be a bit more specific, I have a working PXE boot solution for booting both the 20.04 and 22.04 live server ISOs, which boots and reads my user-data and meta-data files via http without any issues.

I started building my user-data file assuming (classic mistake) that all the modules specified at https://cloudinit.readthedocs.io/en/latest/reference/modules.html would work and be supported. What I’ve come to realize is that only the modules specified at https://ubuntu.com/server/docs/install/autoinstall-reference are actually supported, because behind the scenes it reads the data in the user-data file and creates an autoinstall.yml file for subiquity to use as its answer file.

One example I ran into was wanting to use the cloud-init ansible module. This module isn’t listed as supported in the autoinstall reference, so even if it’s present in the user-data file, it appears to be ignored and not processed (i.e. skipped), even though it’s a valid cloud-init module.

The workaround we came up with was to use a late-command to set up a firstrun.service that executes a bash script launching ansible-pull… it works, but it isn’t ideal. What I’m wondering is whether someone with a bit more experience with all this can confirm that supported cloud-init modules != supported autoinstall modules.

Also, if anyone has ideas or feedback on how we might make better use of all of the cloud-init modules when installing from the ISO media onto bare metal, I’d be all ears! Maybe I’ve missed something along the way. It seems the problem cloud-init was developed to solve might not map so well onto installing bare-metal systems from the ISO.

Thoughts?

This bit is confusing. I hope the pseudo-yaml below clarifies the situation.

#cloud-config
<cloud-init stuff affecting the install environment>
autoinstall:
    <autoinstall stuff affecting the target system>
    user-data:
        <cloud-init stuff affecting the target system>
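To make that concrete, here is a minimal hypothetical illustration of the three layers; write_files and runcmd are standard cloud-init modules, but every value below is a placeholder:

#cloud-config
# Layer 1: plain cloud-init keys at the top level run in the live installer
# environment (hypothetical example: drop a marker file there).
write_files:
  - path: /run/installer-note
    content: "visible only in the install environment\n"

autoinstall:
  version: 1
  # Layer 2: autoinstall keys configure the installation itself.
  identity:
    hostname: example-host
    username: example
    password: "$6$placeholder"  # placeholder crypted hash, not a real one
  # Layer 3: everything under user-data is handed to cloud-init on the
  # installed system and runs there on first boot.
  user-data:
    timezone: Europe/Paris
    runcmd:
      - echo "first boot of the target" > /var/log/firstboot-note.log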

@dbungert did you just hear a loud bang? Because you just blew my mind! :stuck_out_tongue:

All joking aside… this clears things up for sure. After all the research and hours of trial and error, I don’t know why it didn’t occur to me to try this! I’ll echo some of the other comments in this thread: the information needed to tie all this together is a bit spread out, but it always amazes me how willing this community is to help.

I can’t promise I won’t have more questions, but your pseudo-code has for sure opened my eyes. I can’t wait to get back to work and give this all a try.

My sincere thanks!

Thanks for the very positive feedback @zero0ne. I have incorporated this information into the documentation improvements we have already started for Subiquity.

Hello,
I am using the new automated installation procedure quite successfully, but I have a problem with the proxy settings.

If the proxy is in the interactive section, that info is not saved on disk, so after reboot I have no proxy set up for APT.

The result is, of course, different when I do a standard install: the proxy specified during the installation is then also saved on disk.

Is there any bug open about that?

thanks,
Fausto
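Until that behaviour changes, a workaround sketch, assuming the apt-cacher-ng proxy from earlier in the thread (the file name under apt.conf.d is arbitrary): persist the chosen proxy into the target from late-commands so it survives the reboot.

#cloud-config
autoinstall:
  version: 1
  late-commands:
    # Sketch: the target system is mounted at /target while late-commands
    # run, so writing there persists the APT proxy past the reboot. Proxy
    # URL and file name are illustrative.
    - echo 'Acquire::http::Proxy "http://192.168.1.50:3142/";' > /target/etc/apt/apt.conf.d/95local-proxy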

I’m wondering if it would be possible to leverage autoinstall, and possibly cloud-init, for some more custom install configurations… Pretty much everything I build is a root-on-ZFS setup, using zfsbootmenu for system boot. That finds ZFS pools with datasets containing kernels and presents a menu for booting. zfsbootmenu is very slick.

This is my root-on-ZFS builder: https://github.com/Halfwalker/ZFS-root

The build process involves typical disk partitioning etc.; zpool create builds the ZFS pool from the disk(s), and the datasets are mounted at /mnt/builder. debootstrap drops a basic install there, then a chroot process does the rest.

Given an appropriate config, I think autoinstall could prep the disk(s), but would then need specific commands to create the zfs pool/datasets and so on. Once the system is bootable, I think cloud-init might be able to do the rest.

Right now the ZFS-root.sh script is interactive, but it can use an answer config file. In fact, there is a Packer setup for building bootable disk images too. Somehow this should translate into the autoinstall/cloud-init world…

23.10 will have ZFS improvements. A guided option for ZFS root will be available in the Desktop installer (no encrypted ZFS yet; that’s next on the list). It will also be possible to trigger ZFS root with guided partitioning on Server via autoinstall. The result is similar to the ZFS setup Ubiquity produces.

storage:
  layout:
    name: zfs

For further customization, it will be possible to use curtin storage actions for ZFS/zpool objects on any of the 23.10 subiquity installers (or on older ISOs with a snap refresh). Curtin storage actions are a bit verbose, so if that’s something you’re interested in, you may find it convenient to first do a guided install to get a template, copy the template from /var/log/installer/autoinstall-user-data, and make further tweaks.
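For a rough flavor of what those actions could look like, here is a sketch; the field names follow curtin’s zpool/zfs storage actions as I understand them, but treat the exact schema as an assumption and prefer a generated template as suggested above:

#cloud-config
autoinstall:
  version: 1
  storage:
    config:
      - id: disk0
        type: disk
        ptable: gpt
        match:
          size: largest
        wipe: superblock-recursive
        preserve: false
        grub_device: true
      # Sketch: a pool on the whole disk, then a root dataset. IDs, the
      # pool name, and properties are illustrative.
      - id: zpool0
        type: zpool
        pool: rpool
        vdevs:
          - disk0
        mountpoint: /
      - id: zfs0
        type: zfs
        pool: zpool0
        volume: /ROOT/ubuntu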


That looks interesting, thanks. So it’s possible to play with the new installer on a 22.04/22.10 server ISO via a snap refresh?

Yes. You can start today if desired with the following in autoinstall:

refresh-installer:
  update: yes
  channel: beta

Now specifically 22.10 hasn’t been tested since that one is EOL, but 23.04/22.04/20.04 should be good shape (I’ve tested guided ZFS installs via autoinstall on Focal and Jammy).