Please test autoinstalls for 20.04!

So after debugging this a whole bunch more, it is now working, just not how I expected. It seems the installer doesn't actually create the user; instead it creates a cloud-init config to be applied at first boot. Is this accurate? I had a late-command in my autoinstall config that was disabling cloud-init, which is why the user wasn't being created.

Maybe this was a bug with the 20.04 iso and it got fixed in 20.04.1 because I can’t seem to reproduce it any more.

My only issue now is that I can't get the installer to NOT create swap. I have the following in my config. Is layout: direct conflicting with swap: size: 0? Is there a way to specify, effectively: fill the entire disk, don't use LVM, and no swap?

  storage:
    swap:
      size: 0
    layout:
      name: direct
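In case layout: and swap: simply don't combine, here is the explicit storage config I would expect to achieve the same thing (whole disk, no LVM, no swap). The device path, ids and the 1M bios_grub size are illustrative guesses, not something I've confirmed works:

```yaml
storage:
  swap:
    size: 0                     # ask curtin not to create a swap file
  config:
    - type: disk
      id: disk0
      path: /dev/sda            # illustrative; adjust to your hardware
      ptable: gpt
      wipe: superblock-recursive
      grub_device: true
      preserve: false
    - type: partition
      id: part-bios
      device: disk0
      size: 1M
      flag: bios_grub
      preserve: false
    - type: partition
      id: part-root
      device: disk0
      size: -1                  # fill the rest of the disk
      preserve: false
    - type: format
      id: fs-root
      volume: part-root
      fstype: ext4
      preserve: false
    - type: mount
      id: mount-root
      device: fs-root
      path: /
```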

Having some issues getting this working:

According to my cloud-init.log:

DataSourceNoCloud.py[DEBUG]: Seed from http://192.168.57.4/ubuntu_2004/ not supported by DataSourceNoCloud [seed=None][dsmode=net]

http://192.168.57.4/ubuntu_2004/ contains both user-data and meta-data
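For reference, my understanding is that NoCloud only treats the seed as valid if user-data starts with #cloud-config; mine begins roughly like this (the identity values here are placeholders, not my real file):

```yaml
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: demo-host              # placeholder
    username: ubuntu                 # placeholder
    password: "$6$placeholder-hash"  # placeholder crypt(3) hash
```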

IPXE config:

#!ipxe

set base-url http://192.168.57.4/ubuntu_2004
kernel ${base-url}/vmlinuz 
initrd ${base-url}/initrd
imgargs vmlinuz quiet autoinstall ds=nocloud-net;s=${base-url}/ console=ttyS0,115200n8 ip=dhcp url=${base-url}/ubuntu-20.04.1-live-server-amd64.iso
boot

I assume the url kernel param points to the ISO image, but looking at the logs:

2020-09-24 09:28:42,405 - main.py[INFO]: contents of 'http://192.168.57.4/ubuntu_2004/ubuntu-20.04.1-live-server-amd64.iso' did not start with b'#cloud-config'
2020-09-24 09:28:47,201 - main.py[INFO]: contents of 'http://192.168.57.4/ubuntu_2004/ubuntu-20.04.1-live-server-amd64.iso' did not start with b'#cloud-config'

So it looks like it’s looking at the url param for cloud-config?
Anyone got any pointers? Thanks :slight_smile:

Does anybody know how I can change the default filesystem type? It is possible via the guided installer (which supports xfs, ext4 and btrfs), so I believe there must be a key for it. All I want is to install Ubuntu with LVM (the default layout: name: lvm) but with xfs as the filesystem type.
Please don't tell me I need to do it manually, i.e. define all the partitions as mentioned above (see the EFI customization). The docs are silent about changing the fs type. All I have found so far is the cloud-init documentation here: https://cloudinit.readthedocs.io/en/latest/topics/examples.html.

Any ideas are appreciated.
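For reference, the manual route I'm trying to avoid would, as far as I can tell from the curtin storage docs, look roughly like this (device path, ids and sizes are all illustrative):

```yaml
storage:
  config:
    - type: disk
      id: disk0
      path: /dev/sda
      ptable: gpt
      wipe: superblock-recursive
      grub_device: true
      preserve: false
    - type: partition
      id: part-bios
      device: disk0
      size: 1M
      flag: bios_grub
      preserve: false
    - type: partition
      id: part-boot
      device: disk0
      size: 1G
      preserve: false
    - type: partition
      id: part-pv
      device: disk0
      size: -1                 # rest of the disk becomes the LVM PV
      preserve: false
    - type: lvm_volgroup
      id: vg0
      name: ubuntu-vg
      devices: [part-pv]
    - type: lvm_partition
      id: lv-root
      name: ubuntu-lv
      volgroup: vg0
      size: 20G                # illustrative size
    - type: format
      id: fs-boot
      volume: part-boot
      fstype: ext4
      preserve: false
    - type: format
      id: fs-root
      volume: lv-root
      fstype: xfs              # the whole point: xfs on the root LV
      preserve: false
    - type: mount
      id: mount-boot
      device: fs-boot
      path: /boot
    - type: mount
      id: mount-root
      device: fs-root
      path: /
```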

My latest bug…

Which is just stuck at connecting forever, when stopped with ctrl + c it displays the message below and starts over.

Yes.

Yep, that would do it.

Possibly, I don’t think anything changed around this but oh well. Get me logs if it starts happening again please!

Ah, looks like the swap key is ignored if you use a layout option. That would be a bug, sorry.

Sigh, it does look like cloud-init looks at url= on the kernel command line (I didn't know this!), but the s= URL should still be consulted as well. Can you post any more logs?

Currently the layout options do hardcode ext4 filesystem I’m afraid. This should be easy to fix though, can you file a bug on Launchpad?

You must be using edge? I guess you should stop doing that for the moment, some disruptive changes are landing. It should still be working so I’ll try and fix it, but if you use edge it’s going to break from time to time, realistically.

I plan on going back to stable, but it doesn't work either. Also, with the snap package management, I cannot go back to using 20.07.1 like I want; it's disabled in the stable channel for some reason.

This happens with the stable branch.

@mwhudson when using another volume to provide the autoinstall config, why am I forced to extract the server iso, write in autoinstall to the kernel command options then repack for a truly hands-off install?

I understand the desire for caution but there ought to be a better solution than this.

Did you end up with an iPXE config that works? I’m having an issue similar to what you’re describing and haven’t been able to figure it out.

How can I disable automatic updates ("run_unattended_upgrades") of packages at the end of the install while the network is available?
I can't find any option for the user-data file to prevent this.

refresh-installer:
  update: no

Is set.
Many thanks!

You can’t currently. Why do you want to install a system with known security vulnerabilities?

You can probably work around this by not configuring the network during the install and dumping netplan into place with a late-command.
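A sketch of that workaround, assuming a late-command that writes straight into /target (the interface name and addresses are purely illustrative):

```yaml
autoinstall:
  version: 1
  # no "network:" section, so the installer never brings the NIC up
  late-commands:
    # drop a static netplan config into the installed system instead
    - |
      cat > /target/etc/netplan/01-static.yaml << 'EOF'
      network:
        version: 2
        ethernets:
          ens160:
            addresses: [192.168.1.10/24]
            gateway4: 192.168.1.1
            nameservers:
              addresses: [192.168.1.1]
      EOF
```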

Please put a lot more effort into the interactive partitioner and let the auto-install wait a bit. The installer is currently a mess, with far less capability than d-i had: it does not support FAT32 formatting, does not recognize existing RAID/LVM devices, and does not allow mounting partitions without formatting them. It wants to be smart, but really it's quite dumb. Please don't take partitioning out of the admin's hands, because the installer knows a lot less about what could be needed than a human does. Thanks.

I have tested autoinstall a lot lately with Packer, building VMware templates.

All the basic stuff, getting a template installed and running, went smoothly.

However, as soon as I try to partition differently, it just does not do what it is asked. For instance, a single disk with a /boot partition as ext4 and a / partition as xfs is not possible at the moment.
I have tried with this config in autoinstall (extracted from curtin-install-cfg.yaml and modified to add xfs as fstype):

storage:
    layout:
      name: direct
    config:
      - grub_device: true
        id: disk-sda
        name: ''
        path: /dev/sda
        preserve: false
        ptable: gpt
        type: disk
        wipe: superblock-recursive
      - device: disk-sda
        flag: bios_grub
        id: partition-0
        number: 1
        preserve: false
        size: 1048576
        type: partition
      - device: disk-sda
        flag: ''
        id: partition-1
        number: 2
        preserve: false
        size: 34356592640
        type: partition
        wipe: superblock
      - fstype: ext4
        id: format-0
        preserve: false
        type: format
        volume: partition-0
      - device: format-0
        id: mount-0
        path: /boot
        type: mount
      - fstype: xfs
        id: format-1
        preserve: false
        type: format
        volume: partition-1
      - device: format-1
        id: mount-1
        path: /
        type: mount

This seems to work at first:

2020-11-05 21:41:31,210 DEBUG root:39 start: subiquity/Filesystem/load_autoinstall_data: 
2020-11-05 21:41:31,210 DEBUG subiquitycore.controller.filesystem:93 load_autoinstall_data {'config': [{'grub_device': True, 'id': 'disk-sda', 'name': '', 'path': '/dev/sda', 'preserve': False, 'ptable': 'gpt', 'type': 'disk', 'wipe': 'superblock-recursive'}, {'device': 'disk-sda', 'flag': 'bios_grub', 'id': 'partition-0', 'number': 1, 'preserve': False, 'size': 1048576, 'type': 'partition'}, {'device': 'disk-sda', 'flag': '', 'id': 'partition-1', 'number': 2, 'preserve': False, 'size': 34356592640, 'type': 'partition', 'wipe': 'superblock'}, {'fstype': 'ext4', 'id': 'format-0', 'preserve': False, 'type': 'format', 'volume': 'partition-0'}, {'device': 'format-0', 'id': 'mount-0', 'path': '/boot', 'type': 'mount'}, {'fstype': 'xfs', 'id': 'format-1', 'preserve': False, 'type': 'format', 'volume': 'partition-1'}, {'device': 'format-1', 'id': 'mount-1', 'path': '/', 'type': 'mount'}], 'layout': {'name': 'direct'}}
2020-11-05 21:41:31,211 DEBUG subiquitycore.controller.filesystem:103 self.ai_data = {'config': [{'grub_device': True, 'id': 'disk-sda', 'name': '', 'path': '/dev/sda', 'preserve': False, 'ptable': 'gpt', 'type': 'disk', 'wipe': 'superblock-recursive'}, {'device': 'disk-sda', 'flag': 'bios_grub', 'id': 'partition-0', 'number': 1, 'preserve': False, 'size': 1048576, 'type': 'partition'}, {'device': 'disk-sda', 'flag': '', 'id': 'partition-1', 'number': 2, 'preserve': False, 'size': 34356592640, 'type': 'partition', 'wipe': 'superblock'}, {'fstype': 'ext4', 'id': 'format-0', 'preserve': False, 'type': 'format', 'volume': 'partition-0'}, {'device': 'format-0', 'id': 'mount-0', 'path': '/boot', 'type': 'mount'}, {'fstype': 'xfs', 'id': 'format-1', 'preserve': False, 'type': 'format', 'volume': 'partition-1'}, {'device': 'format-1', 'id': 'mount-1', 'path': '/', 'type': 'mount'}], 'layout': {'name': 'direct'}}
2020-11-05 21:41:31,211 DEBUG root:39 finish: subiquity/Filesystem/load_autoinstall_data: SUCCESS: 

However a bit later in the log:

2020-11-05 21:41:33,563 DEBUG root:39 start: subiquity/Filesystem/apply_autoinstall_config: 
2020-11-05 21:41:33,564 DEBUG root:39 start: subiquity/Filesystem/apply_autoinstall_config/convert_autoinstall_config: 
2020-11-05 21:41:33,564 DEBUG subiquitycore.controller.filesystem:171 self.ai_data = {'config': [{'grub_device': True, 'id': 'disk-sda', 'name': '', 'path': '/dev/sda', 'preserve': False, 'ptable': 'gpt', 'type': 'disk', 'wipe': 'superblock-recursive'}, {'device': 'disk-sda', 'flag': 'bios_grub', 'id': 'partition-0', 'number': 1, 'preserve': False, 'size': 1048576, 'type': 'partition'}, {'device': 'disk-sda', 'flag': '', 'id': 'partition-1', 'number': 2, 'preserve': False, 'size': 34356592640, 'type': 'partition', 'wipe': 'superblock'}, {'fstype': 'ext4', 'id': 'format-0', 'preserve': False, 'type': 'format', 'volume': 'partition-0'}, {'device': 'format-0', 'id': 'mount-0', 'path': '/boot', 'type': 'mount'}, {'fstype': 'xfs', 'id': 'format-1', 'preserve': False, 'type': 'format', 'volume': 'partition-1'}, {'device': 'format-1', 'id': 'mount-1', 'path': '/', 'type': 'mount'}], 'layout': {'name': 'direct'}}
2020-11-05 21:41:33,564 DEBUG subiquitycore.controller.filesystem:549 partition_disk_handler: Disk(path='/dev/sda', wipe='superblock', type='disk', id='disk-sda') None {'size': 34357641216, 'fstype': 'ext4', 'mount': '/'}
2020-11-05 21:41:33,564 DEBUG subiquitycore.controller.filesystem:550 disk.freespace: 34357641216
2020-11-05 21:41:33,564 DEBUG subiquitycore.controller.filesystem:567 model needs a bootloader partition? True
2020-11-05 21:41:33,565 DEBUG subiquitycore.controller.filesystem:467 _create_boot_partition - adding bios_grub partition
2020-11-05 21:41:33,565 DEBUG subiquity.models.filesystem:1690 add_partition: rounded size from 1048576 to 1048576
2020-11-05 21:41:33,565 DEBUG subiquitycore.controller.filesystem:577 Adjusting request down: 34357641216 - 1048576 = 34356592640
2020-11-05 21:41:33,565 DEBUG subiquity.models.filesystem:1690 add_partition: rounded size from 34356592640 to 34356592640
2020-11-05 21:41:33,565 DEBUG subiquity.models.filesystem:1759 adding ext4 to Partition(device=disk-sda, size=34356592640, wipe='superblock', flag='', grub_device=None, id='partition-1')
2020-11-05 21:41:33,565 DEBUG subiquitycore.controller.filesystem:582 Successfully added partition
2020-11-05 21:41:33,565 DEBUG root:39 finish: subiquity/Filesystem/apply_autoinstall_config/convert_autoinstall_config: SUCCESS: 

So I end up with two partitions on the disk, but only one of them is formatted (as ext4) and used as the root partition. It still creates two partitions and marks the first one as bios_grub, but no filesystem is created on it.

I don't know if this is a bug, and if so, whether it is in curtin or subiquity. The documentation is vague about this kind of config. The next level for us would be to split onto separate disks and use full disks instead of partitions, but right now it looks impossible to get there. (This is possible when done manually.)

I tried the simplest form of partitioning I could think of:

storage:
    layout:
      name: direct
    config:
      - type: disk
        id: disk0
        match:
          size: largest
      - type: partition
        id: boot-partition
        device: disk0
        size: 500M
      - type: partition
        id: root-partition
        device: disk0
        size: -1

But then I end up with:

Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  1076MB  1074MB  ext4
 3      1076MB  34.4GB  33.3GB

And the following mounts:

# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-19fIZkacmGtVeXaeSECUjTlfT4MSr6B91WlLhhxEv0eFJWETSoOW9GP27rMVJ6VU / ext4 defaults 0 0
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/50206c78-3092-4036-949f-90088542781e /boot ext4 defaults 0 0
/swap.img	none	swap	sw	0	0

So I started asking for direct and ended up with LVM ???

You are mixing two types of configuration.
You provide either:
a) layout: and get the automatic partitioning associated with the selected layout, or
b) config: and then provide the custom configuration you want.
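For example, option b) written out in full with format and mount actions (the ids and fstypes here are illustrative, based on the curtin storage docs):

```yaml
storage:
  config:
    - type: disk
      id: disk0
      match:
        size: largest
      ptable: gpt
      wipe: superblock-recursive
      grub_device: true
    - type: partition
      id: boot-partition
      device: disk0
      size: 500M
    - type: partition
      id: root-partition
      device: disk0
      size: -1
    - type: format
      id: boot-fs
      volume: boot-partition
      fstype: ext4
    - type: format
      id: root-fs
      volume: root-partition
      fstype: xfs
    - type: mount
      id: boot-mount
      device: boot-fs
      path: /boot
    - type: mount
      id: root-mount
      device: root-fs
      path: /
```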

That sounds like it makes sense. However, using this example:
https://curtin.readthedocs.io/en/latest/topics/storage.html#basic-layout
and just changing the device path and removing the model entry, I get:

Model: VMware Virtual disk (scsi)
Disk /dev/sda: 34.4GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  1076MB  1074MB  ext4
 3      1076MB  34.4GB  33.3GB

And filesystems created and mounted is as follows from etc/fstab:

# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-KikzkotFAtg4IEA6u2jDkfpmSUrUXE59O9QaAt3ij5ImadFdEI586Q6eOidC3487 / ext4 defaults 0 0
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/bb7d5d9a-1eb4-413f-b273-90730d58dce9 /boot ext4 defaults 0 0
/swap.img	none	swap	sw	0	0

So the partition table is not honored; in fact, none of the config is honored.

Can you post your user-data file somewhere?
In your first post it looks like your storage key has the wrong indentation. It should be indented two spaces, to be a key in the autoinstall dict.

Yeah, I got it working with the correct indentation; I'd missed that during the many rounds of trial and error.

So layout: and config: are mutually exclusive (layout: always wins). This should be explained better in the reference here: https://ubuntu.com/server/docs/install/autoinstall-reference
Especially since layout: is not mentioned in the curtin docs.
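For anyone hitting the same thing, the shape that finally worked for me, abbreviated: the crucial part is that storage sits two spaces under autoinstall in the user-data file:

```yaml
#cloud-config
autoinstall:
  version: 1
  storage:              # a key of the autoinstall mapping, not top-level
    config:
      - type: disk
        id: disk0
        match:
          size: largest
        ptable: gpt
        wipe: superblock-recursive
        grub_device: true
```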

It's not about installing a system with outstanding updates; it's about how long it takes, and then you still have to install all the other updates afterwards. Using something like Packer, we do all the updates after the install phase and then hand the machine over to Puppet for the BAU stuff.

Edit: I can see how this is useful "in a cloud", but we are more of a corporate setup; we don't hand the servers over to the general public.

Please reconsider this.

I'm new to the discussion, but even after 20.10 and many updates, autoinstall seems quite rough around the edges.

First of all, we really need an autoinstall generator. The next tool we're missing is a simple autoinstall checker or linter. While we can use yamllint and similar, these only check spacing and throw a bunch of false positives on files the live installer happily accepts. We also lack good examples for the storage section in general; we understand it's a complicated topic, but that is exactly why a multitude of examples would help. We need a tool to encrypt the password, or a link to one in the docs, or the ability for autoinstall to accept an unencrypted value; otherwise these are hard to handle.

Next, we need exact numbers for minimum specs. Autoinstall just crashes when RAM is too low, and I find it unacceptable for autoinstall to fail with 1 GB of RAM: MANY small servers ship with 1 GB of RAM in the micro-server world. It should also check the disk/CPU/memory early in the install phase, confirm those specs will work, and print a human-readable error if they don't. Getting stuck in some random section doesn't help you diagnose that memory was too low (you get there by trial and error through 20+ reboots, if you are lucky enough to accidentally stumble upon it).

Autoinstall is altogether still unstable, which makes it hard to recommend for production use. Setting refresh-installer to yes hardly helps. Using the auto-generated user-data from /var/log of a manual install doesn't help much either: it is riddled with errors and wrong settings. If 20.04 didn't catch that, why does 20.04.1 still produce an unusable file, and why doesn't 20.10 warn about it at installation start?

Why doesn't the new installer downloaded via refresh-installer incorporate a way to feed in these error-filled configs and do a config check: either warn about errors (in text a human can understand), auto-fix them if they are a known bug, or switch to interactive for that section (if supported)? If you know that network: should be network: network: and someone feeds you the wrong data, and the installer auto-refreshes, then fix it on the new installer's side! If it feeds you toggle=null, then fix it with a warning or by asking for a supported value. The whole purpose of auto-update and new versions and iterations is to catch issues and incorporate the fixes into the new product. I mean, sorry, but if I install 20.04 GA, take /var/log/installer/user-data-autoinstall, and feed it to 20.10 with refresh-installer enabled, then it should just swallow that config, auto-fix the known issues, or AT LEAST tell me it is an unsupported file (possibly noting the sections that threw errors).

I'm not even going to rant about YAML issues like spacing, and how this error-prone format is suddenly being used for virtually everything despite its obvious issues. Sorry if this post seems like a rant, but booting autoinstall 100 times to fix all the bugs in a file is annoying and literally wastes more time than anyone would spend manually installing 100 servers. That "snap described here (that) does not yet exist" can't come soon enough; really, it should have been here in the 18.04 era, not still missing while the old installer is being deprecated. I'm slightly disappointed with how this is being handled, and I hope the points I've made can be taken as constructive suggestions. They'd make the lives of numerous sysadmins and advanced users much easier than just telling them "exit code 3".
