Please test autoinstalls for 20.04!

Would you mind sharing that script?
Why do you run it after login rather than during install using late-commands, for example?

The cloud-init source in the Ubuntu 20.04 LTS server ISO differs (imho arbitrarily) from the focal server ISO on the cdimage subdomain. I assumed, incorrectly it seems, that these images would be the same since they’re both 20.04 LTS images. I guess not. If you’re interested, here’s what happened…

I inserted some logging calls into cloudinit/sources/DataSourceNoCloud.py to debug, and when I copied them over to the cdimage version of the ISO, lo and behold, cloud-init failed because one ISO uses log.DEBUG(str) and the other LOG.debug(str). What’s up with that sort of change in the cloud-init source? My peers would never let me pass that through an MR.

Another issue I encountered: the cloud-init source in the subiquity snap for the Ubuntu 20.04 server ISO is different from the Python source code in the very same ISO, viz., the cloudinit/distros/networking.py module is missing from the source contained in the read-only filesystem of the subiquity snap. So of course my install fails when subiquity can’t load a pickled cache from cloud-init NoCloud. Damn shame; I’ve been struggling with this all week and I still haven’t managed to get an automated install working satisfactorily.

That’s exactly my issue. We have automation in place which works for all of our Linux systems, no matter which distribution we’re using, and replacing everything manually in the late-commands would be another source of failure.

Can anyone tell me how I’m supposed to create a new file inside the home directory of a user created in the identity section as part of the install? Even when late-commands are run, /target/home/ is completely empty.

The identity user is not created until first boot. The installer writes cloud-init configuration to /target/var/lib/cloud/seed/nocloud-net/user-data, which cloud-init reads during first boot to do things like create the user.

You can create the directory that will exist, place the files, then set the proper permissions, e.g.:

late-commands:
  - mkdir /target/home/user
  - touch /target/home/user/file.txt
  - chown -R 1000: /target/home/user
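
(Note that the 1000: in the chown assumes the identity user will be the first user created on the system, which on Ubuntu gets UID and GID 1000.)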

Thanks, this seems to be the most straightforward solution.

/target/var/lib/cloud/seed/nocloud-net/user-data does not exist for Ubuntu 20.04 server installs, and I also got failures when attempting to run cloud-init-per invocations via curtin in-target and via chroot.

Sorry, I double-checked my run. I actually did this to create the user’s home directory:

late-commands:
  - install -o 1000 -g 1000 -d /target/home/user
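
If you also want to place a file with content and the right ownership in one step, install can do that too. A sketch, where /cdrom/file.txt stands in for a hypothetical file you ship on the install media:

late-commands:
  - install -o 1000 -g 1000 -d /target/home/user
  - install -o 1000 -g 1000 -m 0644 /cdrom/file.txt /target/home/user/file.txt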

I just got stung by this exact issue when re-provisioning a fileserver and ended up installing to a data disk by mistake.

Somehow through the docs I’d ended up with a user-data storage config which looked like:

  storage:
    layout:
      name: direct
    config:
      - type: disk
        match:
          serial: SERIALNUMBERHERE

I had hoped that specifying the disk serial number directly would avoid any incidents where somehow the wrong disk might get chosen during installation.

Unfortunately, what ended up happening when deploying an autoinstall was that the layout: section presumably took precedence, and the installer picked a seemingly random data disk from the fileserver to install to.

I’m still not quite sure of the logic behind how it picked the disk it did, as the disk that was ultimately chosen usually shows up as /dev/sdk, so it’s hardly the first in the system.

After finding the post quoted above, I’ve updated the config to be more in keeping with the correct syntax:

  storage:
    layout:
      name: direct
      match:
        ssd: yes
        serial: SERIALNUMBERHERE
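
For contrast, if you actually do want the full config: form instead of layout:, it has to describe the whole disk layout itself. A minimal sketch of what I believe that looks like, assuming BIOS/MBR boot and a single ext4 root (a UEFI machine would also need an ESP):

  storage:
    config:
      - type: disk
        id: disk0
        match:
          serial: SERIALNUMBERHERE   # match by serial only; no layout: section to conflict with it
        ptable: msdos
        wipe: superblock
        grub_device: true
      - type: partition
        id: part0
        device: disk0
        size: -1                     # fill the remaining space on the disk
      - type: format
        id: fs0
        volume: part0
        fstype: ext4
      - type: mount
        id: mount0
        device: fs0
        path: /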

Fortunately the disk that got clobbered by my mistake was part of a ZFS pool, so it’s just a case of resilvering. But I was wondering: if the first syntax is “mixing two types of configuration” and might cause unexpected or damaging results, would it be sensible for the autoinstall to have halted and refused to continue when faced with the broken config in my first example?

Whoops, we should make things complain loudly if both config and layout are present there.

After some pains, I got autoinstall working satisfactorily. I reported some issues I had earlier. I just want to comment on what I find very nice.

  • It’s really cool that I can put my user-data and meta-data on a separate volume and just change those for the NoCloud datasource

  • It’s cool that most of the tooling is in familiar, easy-to-read, high-level, general-purpose languages (e.g. Python and YAML). This helps with debugging

  • The defaults are mostly very sensible for things like storage and users and whatnot

  • Despite my initial frustration, it really is a very flexible toolchain. I think I can get very far with this.

Thanks for the hard work.

Thanks, I always wanted to automate my installation, but d-i and remastering an image looked like a great time sink. Over the past few days I created an autoinstall file which works quite well with 20.04, but it doesn’t work with 18.04, even though the versions of subiquity on 20.04.1 and 18.04.5 are the same. I can see in the mounted squashfs image that cloud-init, for example, has been disabled.

During my research on this topic and others related to cloud-init I gained some knowledge of how things work, but after spending a lot of time reading outdated (remastering the old installer, kickseed and others) or inaccurate articles, I would really appreciate it if someone could lend me a hand. Please tell me what needs to, or can, be done to boot the 18.04 installer with a provided autoinstall file (which works for 20.04!) generated with cloud-localds. I’ve written a few shell scripts over the past 10 years, and I think I know some areas of Ubuntu quite well, but at the moment I am stuck. If there is a process or a public build configuration file where I can see how these images are created and what differs from 18.04 to 20.04, that would help me a lot; so far, none of the search terms and results that came up, which I have researched for hours, could solve my problem.

Thanks for the update. I know moving to a new system is always painful but it’s good to hear it’s working out well for some people at least!

This can’t be done at the moment, at least not without backporting the significant modifications to the installer ISO to bionic, something that is not currently planned, I’m afraid.

Thanks for your quick response. I can understand that there will probably never be enough time to work on backporting this delta for a two-year-old product. But I’m still interested in how the images are built and how one would view and compare the changes. I think I can at least learn a few more things about the complexity of such software, and also explain it better to management folks.

The following commits are (I think) most of what implemented this:

For those trying to use autoinstall with Packer: I came up with the following solution to sshd answering too early:

  early-commands:
    - 'systemctl stop sshd'

That allows Packer to wait for SSH after the instance reboots.
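
(The idea being that Packer’s SSH communicator would otherwise connect to sshd in the live installer session; with it stopped early, the first successful connection is to the installed system after it reboots.)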

I am trying to install with autoinstall in VirtualBox for testing purposes. Installation starts with the following grub config:

set gfxpayload=keep
linux /casper/vmlinuz "ds=nocloud-net;s=http://10.0.2.2:8028/" quiet autoinstall ---
initrd /casper/initrd
boot

Installation gets stuck at “Reached target Host and Network Name Lookups.”

I am using Ubuntu Server 20.10 with the following user-data:

autoinstall:
  version: 1
  identity:
    hostname: ubuntu-server
    password: "$6$exDY1mhS4KUYCE/2$zmn9ToZwTKLhCw.b4/b.ZRTIZM30JZ4QrOQ2aOXJ8yk96xpcCof0kxKwuX1kqLG/ygbJ1f8wxED22bTL4F46P0"
    username: ubuntu
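
Two things worth double-checking with ds=nocloud-net, since both are easy to miss: the user-data file should start with a #cloud-config line, and cloud-init will also try to fetch meta-data from the seed URL, so an empty meta-data file needs to exist next to user-data. A minimal sketch of serving both (assuming both files sit in the current directory):

  touch meta-data                # NoCloud wants meta-data to exist, even if empty
  python3 -m http.server 8028    # port matching the s= URL on the kernel command line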

Hi,
Has anyone tried to use “early-commands” to format and/or mount the target filesystem?

Is there a way to get the system to shut down after the installation instead of rebooting?

Since I am doing the install from a USB stick, a fresh installation starts if I am not quick enough to remove the stick when the system is about to reboot, resulting in an installation loop.

Put poweroff in late-commands? This probably should have a more declarative option though, indeed.
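
I.e. something along the lines of:

late-commands:
  - poweroff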