Please test autoinstalls for 20.04!

And the newest of the undocumented features: if you modify /autoinstall.yaml in early-commands, nothing warns you that this file is not an exact copy of your usual user-data. It's mostly the same, except that you have to REMOVE the "autoinstall:" line, otherwise it fails the schema check. This is with 20.04.1 and "refresh-installer". But hey, I've finally solved it; it only took another 20 reboots and recording the screen to catch all the errors.

In case anyone wonders what I’m trying here, if you use this in your user-data during PXE boot:

  early-commands:
    - curl -G -o /autoinstall.yaml http<myserver>/user-data -d "mac=$(ip a | grep ether | cut -d ' ' -f6)"

You can fetch the contents of your new autoinstall.yaml mid-install and overwrite the one that subiquity uses. It just needs to look like this (note that I DO NOT have autoinstall: here):

  version: 1
  refresh-installer:
    update: yes
  apt:
    geoip: true
    preserve_sources_list: false
    primary:
    - arches: [amd64, i386]
      uri: http<://hr.archive.ubuntu.com>/ubuntu
    - arches: [default]
      uri: http<://ports.ubuntu.com>/ubuntu-ports
  identity:
    hostname: php-client
    password: xxxencryptedpassxxx
    realname: php
    username: php
  keyboard: {layout: hr, toggle: toggle, variant: ""}
  locale: en_US
  network:
    network:
      version: 2
      ethernets:
        eth0:
          dhcp4: yes
          dhcp6: no
  ssh:
    allow-pw: true
    install-server: true
  late-commands:
    - poweroff

Now with all of that I can finally identify my servers individually (through their MAC) and serve each of them its own custom-tailored autoinstall.yaml via some PHP smarts.

If only I could now solve the other 20 issues in the autoinstall structure… I guess the first step is tracking down that /snap/subiquity/...../python3/.../jsonschema/validator.py file the errors mention; maybe then I'll be able to validate my files before failing to boot them 20-30 times.
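For what it's worth, a rough pre-flight check along these lines should be possible without digging into the snap at all, assuming you can grab a copy of the autoinstall JSON schema (the subiquity source tree ships one as autoinstall-schema.json; that file name, and whether your file has the top-level autoinstall: wrapper, are the parts to double-check):

# Hypothetical pre-flight validation; assumes autoinstall-schema.json was copied
# from the subiquity source and python3-yaml / python3-jsonschema are installed.
python3 -c '
import json, yaml, jsonschema
schema = json.load(open("autoinstall-schema.json"))
data = yaml.safe_load(open("user-data"))
# user-data wraps everything in "autoinstall:", while the file fetched into
# /autoinstall.yaml does not, so accept either shape here.
jsonschema.validate(data.get("autoinstall", data), schema)
print("schema check passed")
'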

Edit: for those interested I just answered my own question on AskUbuntu here: https://askubuntu.com/a/1292607/1080682

Hi,

When 20.04 appeared, my automated build worked.

But when I try to build 20.04.1 it fails because, for some reason, it does not read the identity section of the user-data, so it does not create the user and does not set the hostname.
Then, when it installs openssh, the process fails because it tries to ssh in to do other things, and no user exists (root access is disabled).

Could it be an installer or cloud-init bug?
Is there a workaround for this issue?

I can provide my build process for testing if needed.

Regards,
Cesar Jorge

Yeah, I am having no shortage of storage-related issues that could have been solved easily with the old installation method, but now have to be mapped to curtin's storage format, which I have yet to get working correctly with two disks, partitioning them properly. Trying to match them by path doesn't seem to work, or gives me the "they are busy" udev error. Not to mention it fails to flag the bootable partition correctly about half the time. The final issue is that curtin ignores my "fill to 100%" section, again about half the time. So half of my servers have full disks and the others have a default 100G partition.

I'll post logs if I get a chance. I cannot seem to make a setup that just works for all of our systems. I just want sda to have 3 partitions, two for LVM and one for UEFI/GRUB boot, and I want the / filesystem to fill the disk. The defaults from curtin don't work, giving me the udev busy error, and the defaults in the autoinstaller give me the 100G partition half the time. I can't use layout for some reason because the match feature does not recognize the path variable that's listed in the subiquity documentation.

Anyone have a working solution?
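For comparison, here is the kind of explicit storage section I would try instead of layout, with an ESP, a /boot and an LVM PV that takes the rest of the disk. This is only a sketch I have not verified on 20.04.1: the names and sizes are examples, and the -1 sizes are what, as I read the docs, make the last partition and the root LV fill whatever space is left.

storage:
  config:
    - id: disk0
      type: disk
      path: /dev/sda
      ptable: gpt
      wipe: superblock-recursive
      preserve: false
    - id: part-esp
      type: partition
      device: disk0
      size: 512M
      flag: boot
      grub_device: true      # boot loader target for UEFI
      preserve: false
    - id: part-boot
      type: partition
      device: disk0
      size: 1G
      preserve: false
    - id: part-pv
      type: partition
      device: disk0
      size: -1               # take everything left on the disk
      preserve: false
    - id: vg0
      type: lvm_volgroup
      name: ubuntu-vg
      devices: [part-pv]
      preserve: false
    - id: lv-root
      type: lvm_partition
      volgroup: vg0
      name: ubuntu-lv
      size: -1               # fill the volume group
      preserve: false
    - id: fs-esp
      type: format
      volume: part-esp
      fstype: fat32
      preserve: false
    - id: fs-boot
      type: format
      volume: part-boot
      fstype: ext4
      preserve: false
    - id: fs-root
      type: format
      volume: lv-root
      fstype: ext4
      preserve: false
    - id: mount-esp
      type: mount
      device: fs-esp
      path: /boot/efi
    - id: mount-boot
      type: mount
      device: fs-boot
      path: /boot
    - id: mount-root
      type: mount
      device: fs-root
      path: /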

EDIT:
ui.crash file


    layout:
      name: lvm
      match:
        path: /dev/sda

This is the storage section I am trying to get to function. If I just leave it at the default, I sometimes get the final filesystem at 100G instead of the filled disk, so I have to add curtin commands that I would prefer not to…

# Storage Expansion for Apphosts

    - curtin in-target --target=/target -- pvresize /dev/sda3
    - curtin in-target --target=/target -- lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
    - curtin in-target --target=/target -- resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

I have finally gotten the build to work, but with a pretty ugly workaround. I still have to review the post-install script process, where for some reason it is dangerous to upgrade packages:

In short, the workaround consists of manually creating the user that subiquity / cloud-init does not create. At least with that I can get to the next screen.

Any news about the resolution of this issue (or the others) would be really appreciated.

Regards,
Cesar Jorge

Is there anything we need to add to accommodate multiple disks? (VM install)

With 1 disk configured it works as expected; with 2, the install completes but the VM fails to boot (no OS found).

I have not been able to get the autoinstaller to work with no proxy and a local apt mirror. Has anyone? Any suggestions?

Errors and firewall logs are consistent with attempts to hit Canonical geoip and snapcraft servers, even when geoip is turned off and refresh-installer update is false or the section is absent. The snaps are mounted, the packages are available, and the internal apt repos are configured, but the installer fails if it can't reach the internet. Rev 1772.

ERROR subiquity.controllers.snaplist:74 loading list of snaps failed
ERROR subiquity.controllers.mirror:98 geoip lookup failed

Is it possible to re-use the autoinstall-user-data file resulting from a manual install "as is", without any changes, and feed it back into the installer to create more installations? I see the discussions about passing kernel options to read the user-data off a webserver; should I be able to just place the autoinstall-user-data file somewhere on the ISO and have the installer read it automatically?

Am I right in thinking that an ISO labeled "cidata", containing an empty meta-data file and a user-data file with my autoinstall-user-data contents, should do the trick?
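That matches my understanding of cloud-init's NoCloud source. A sketch of what I would try (the file and volume names come from the NoCloud documentation, the rest is up to you):

cp autoinstall-user-data user-data
touch meta-data
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
# attach seed.iso as a second CD-ROM next to the live-server ISO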

I have similar issues to yours. In Packer, in the boot_command, I had to use ds=nocloud with a floppy to seed user-data and meta-data; HTTP (ds=nocloud-net) did not work at all. My problem is that the UI installer still pops up asking for the language/locale. It looks like user-data is not picked up to answer these questions for some reason. Can you spot any problems in my Packer config? I have tried your user-data file to rule out potential problems in mine.
I use Ubuntu 20.04.1 / Packer 1.6.6 / vCenter Server 7.

  ....
  iso_paths = [
    "[qnap-vm-datastore-1] /iso-images/ubuntu-20.04.1-live-server-amd64.iso"
  ]

  floppy_files = ["ubuntu-20.04/meta-data","ubuntu-20.04/user-data"]
  floppy_label= "CIDATA"
  boot_wait = "5s"
  boot_command = [
    "<enter><enter>",
    "<f6><esc>",
    "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
    "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
    "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
    "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
    "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
    "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
    "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
    "<bs><bs><bs><bs><bs><bs><bs><bs><bs><bs>",
    "<bs><bs><bs>",
    "initrd=/casper/initrd ",
    "autoinstall ds=nocloud;s=/",
    "<enter>"  ]

PS: Information about the nocloud settings is from here: https://cloudinit.readthedocs.io/en/latest/topics/datasources/nocloud.html

Hello,

I already have a working configuration. But I would like to keep the device names instead of the UUIDs of the disks in my fstab. Is there any option to tell the autoinstaller this?

root@ubuntu2004-xauto:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use ‘blkid’ to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
# / was on /dev/vg_sys/lv_root during curtin installation
/dev/disk/by-id/dm-uuid-LVM-JfUrrBVjcnnYyBb3JCweiRPRUOdYR9vRU03vvSs1olhv9n0F9xjUTfwHkzl8YQXJ / xfs defaults 0 0
# /opt was on /dev/vg_sys/lv_opt during curtin installation
/dev/disk/by-id/dm-uuid-LVM-JfUrrBVjcnnYyBb3JCweiRPRUOdYR9vRb2cggZRF26O66GAo7QwxrdKLbvKwSa3V /opt xfs defaults 0 0
# /var was on /dev/vg_sys/lv_var during curtin installation
/dev/disk/by-id/dm-uuid-LVM-JfUrrBVjcnnYyBb3JCweiRPRUOdYR9vRgUPpYFlRyfwo13aIVxVxfa4EEsZkSS8U /var xfs defaults 0 0
# /var/log was on /dev/vg_sys/lv_var_log during curtin installation
/dev/disk/by-id/dm-uuid-LVM-JfUrrBVjcnnYyBb3JCweiRPRUOdYR9vRw6rwUhrZnFT462sxyxZzLYEkntl2NQ0t /var/log xfs defaults 0 0
# /tmp was on /dev/vg_sys/lv_tmp during curtin installation
/dev/disk/by-id/dm-uuid-LVM-JfUrrBVjcnnYyBb3JCweiRPRUOdYR9vRm4AxMfrd7uBdzpDQeMlB0ZgswggyTCEK /tmp xfs defaults 0 0
/dev/disk/by-id/dm-uuid-LVM-h7WXG5hDG7G1J1juwSEbM9nq3MZEIlngv7hNFxLI3gCOd1uxZrWpJP4pLLg3GQBV none swap sw 0 0
# /data was on /dev/vg_data/lv_data during curtin installation
/dev/disk/by-id/dm-uuid-LVM-Uv3v1uF9bSzm8tsfoG6XTHXk3YTDrV022uV5003n0ewebHNXi1yS9tec7x6UvZf5 /data xfs defaults 0 0
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/97f734a1-a9dd-4e03-8d9d-66a199fff0d2 /boot ext4 defaults 0 0

I got this to work by setting boot_wait = "1s".

Hi everyone.

Based on some of the really useful information in this discussion I have created a script to automate the creation of ISO images that can be used for fully-unattended installations, including baking the autoinstall data into the ISO if you want. I’m using this myself already but as it has saved me a lot of time, I want to share it with the community: https://github.com/covertsh/ubuntu-autoinstall-generator

Happy holidays.


So I created a user-data file that works great. The only problem is that I need my drive to be LUKS-encrypted, and sometimes this needs to run on systems with NVMe drives and sometimes on systems with regular hard drives. In order to do the LUKS encryption, it seems I'll need the storage section to specify the drive with path: /dev/nvme0n1 or path: /dev/sda. Is there any way to tell it to use whatever hard drive it detects? These machines will only ever have one drive.

Example:

...
storage:
  config:
    - ptable: gpt
      path: /dev/nvme0n1
      wipe: superblock
      preserve: false
      type: disk
      id: disk1
...

Regarding:

you can put a user config with ssh keys into the user-data you give the live session.

Could you give an example kernel command line?

The autoinstall reference says that you can supply a match key for pretty much any section when doing selection. I’ve had success using it in the storage section like so,

storage:
  version: 1
  config:
    - id: disk0
      match: {}
      type: disk
      ptable: gpt
      name: main_disk

which matches the largest disk in the system. Assuming, as you said, there's only ever one disk in your systems, you can just use this match spec on the type: disk entry, and then reference the assigned id in the dm_crypt section.
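To make that concrete, here is a sketch of how I would wire the two together. It only shows the match/dm_crypt plumbing (keep the ESP, /boot and boot loader settings from your working config), and I have not tested this exact snippet:

storage:
  version: 1
  config:
    - id: disk0
      type: disk
      match: {}              # picks a disk regardless of its path
      ptable: gpt
      wipe: superblock
      preserve: false
    # ... ESP / /boot partitions and grub_device from your working config ...
    - id: part-root
      type: partition
      device: disk0
      size: -1
      preserve: false
    - id: crypt-root
      type: dm_crypt
      volume: part-root
      key: "changeme"        # or a keyfile, depending on your setup
      dm_name: croot
      preserve: false
    - id: fs-root
      type: format
      volume: crypt-root
      fstype: ext4
      preserve: false
    - id: mount-root
      type: mount
      device: fs-root
      path: /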


These are (or should be) non-fatal errors. How is your install actually failing?

Yes, that's the intent. You might find the script covertsh posted useful.

No. The spec we wrote for this (FSTAB - Ubuntu Wiki) tries to use the "most unique" possible specification for the device. Is there a reason you want to differ from this?

I think you just need to put

ssh_authorized_keys:
 - <ssh key text>

into your user-data, but I haven’t tried this.

Yes, this is how to do it. You don't even need the match: {} section; a disk action without a path or serial will get matched to an arbitrary disk.


Is there any way to use the autoinstall to prevent installation of snaps? It seems the 20.04 installer includes lxd and core18 snaps by default, neither of which I need or want.

I know I am not the original poster (mdalling), but I wanted to share my input, as I was intending to ask about this myself. Is there a reason the current behavior cannot be kept, while still carrying the spec through to curtin when it is specified explicitly? For instance, I prefer the UUID= notation for entries like /boot and /boot/efi, but the /dev/mapper/VG-LV notation for LVM. I am now forced to do a bunch of sed replacements in the late-commands because the spec cannot simply be given explicitly in the autoinstall file. While I understand the reasoning behind the current behavior, existing environments may have tooling in place that expects consistency with existing servers and workstations. There should be some way to specify the spec as part of the build, with the person making that change accepting the risk of "floating disks" and the like if they don't know what they are doing. It is much easier and safer to specify the desired spec up front than to hack on the fstab during the late-commands to get it to look the way it needs to look.
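For anyone in the same situation, this is roughly the kind of late-commands sed replacement I mean; the vg_sys/lv_opt names are just examples from my own builds, so adjust the pattern to your volumes before using it:

late-commands:
  # example only: rewrite the /opt entry from the dm-uuid path to /dev/mapper notation
  - curtin in-target --target=/target -- sed -i 's|^/dev/disk/by-id/dm-uuid-LVM-\S\+\s\+/opt |/dev/mapper/vg_sys-lv_opt /opt |' /etc/fstab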

The same goes for specifying the uuid. That actually seems to carry through for partitions, but not in all other scenarios. I could probably live without specifying uuids at build time, but it is one more level of control, as uuids can easily be generated and inserted by the custom Python script I use to generate a server's or workstation's build template. It would also be useful when coupled with the ability to pass the spec during the build, since you could easily specify a UUID= or /dev/disk/by-uuid spec if you know the UUID ahead of time. Of course, that would only work if the ability to provide the spec is added as a feature.


I do not believe there is a way to do this at the moment. Currently I remove any unneeded snaps from server and workstation builds with a post-install script run shortly after the first login, once all snaps have fully activated.
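The removal itself is nothing fancy; something along these lines in the post-install script (the snap names are whatever you happen not to need, and bases like core18 can only be removed once nothing depends on them):

# run after first boot, once snap seeding has finished
snap list
sudo snap remove lxd
sudo snap remove core18   # only succeeds after everything using core18 is gone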

No. In some sense the question you ask gets its premise wrong: you can’t prevent the snaps being installed because they are already installed in the filesystem image that the installer copies.

Now that I'm a bit more awake I see the point, and it should be easy to implement. Someone filing a bug asking for it will make it more likely that I don't forget…

This worked out for me. Thank you so much!
