Correct usage of LXD's cloud-init scripts_user when launching LXD container?


Good evening.

How does one make use of scripts_user when launching a container with LXD? The documentation is limited.

Currently, I pass a cloud-init yaml file upon launch:

lxc launch \
    ubuntu:jammy tst-01 \
    --config=user.user-data="$(cat lxd/user-data.yaml)"

inside of which is the following scripts_user stanza:

# ... other stanzas here ...
scripts_user:
  - |
    echo "1" >> /srv/progress
  - |
    echo "2" >> /srv/progress

These test commands do not result in any data being present in /srv/progress.

Ultimately I will need to reload systemd, enable services, and reload NGINX configuration after all of my other instructions have completed, hence I am experimenting with scripts_user.

Best regards


Hi @dwd-daniel, I don’t know much about the user scripts, but can’t you use runcmd to achieve the same? See this example:

runcmd:
  - echo "abc" > /tmp/hello

Hi @jpelizaeus

Thank you for your input. My understanding is that runcmd is executed prior to the package_update_upgrade_install phase, per the ordering in /etc/cloud/cloud.cfg (see the listing further down). I need to perform the following actions, in roughly this order:

  1. install NGINX
  2. deploy NGINX config to /etc/nginx/sites-available/
  3. deploy systemd unit and timer to /etc/systemd/system
  4. symlink the NGINX config into /etc/nginx/sites-enabled/
  5. run systemctl daemon-reload
  6. run systemctl restart nginx
  7. run systemctl enable --now myservice.timer

I believe cloud-init executes these steps in the following order:

  • cloud_init_modules: write_files
  • cloud_config_modules: runcmd
  • cloud_final_modules: package_update_upgrade_install

and that scripts_user, also in cloud_final_modules, runs after these; that is where I intend to reload, restart, and enable the systemd units.

Listing: /etc/cloud/cloud.cfg

This is the configuration file I pulled out of an ubuntu:jammy container moments ago.

# The top level settings are used as module
# and base configuration.

# A set of users which may be applied and/or used by various modules
# when a 'default' entry is found it will reference the 'default_user'
# from the distro configuration specified below
users:
  - default

# If this is set, 'root' will not be able to ssh in and they
# will get a message to login instead as the default $user
disable_root: true

# This will cause the set+update hostname module to not operate (if true)
preserve_hostname: false

# If you use datasource_list array, keep array items in a single line.
# If you use multi line array, ds-identify script won't read array items.
# Example datasource config
# datasource:
#   Ec2:
#     metadata_urls: [ '' ]
#     timeout: 5 # (defaults to 50 seconds)
#     max_wait: 10 # (defaults to 120 seconds)

# The modules that run in the 'init' stage
cloud_init_modules:
  - migrator
  - seed_random
  - bootcmd
  - write_files
  - growpart
  - resizefs
  - disk_setup
  - mounts
  - set_hostname
  - update_hostname
  - update_etc_hosts
  - ca_certs
  - rsyslog
  - users_groups
  - ssh

# The modules that run in the 'config' stage
cloud_config_modules:
  - wireguard
  - snap
  - ubuntu_autoinstall
  - ssh_import_id
  - keyboard
  - locale
  - set_passwords
  - grub_dpkg
  - apt_pipelining
  - apt_configure
  - ubuntu_advantage
  - ntp
  - timezone
  - disable_ec2_metadata
  - runcmd
  - byobu

# The modules that run in the 'final' stage
cloud_final_modules:
  - package_update_upgrade_install
  - fan
  - landscape
  - lxd
  - ubuntu_drivers
  - write_files_deferred
  - puppet
  - chef
  - ansible
  - mcollective
  - salt_minion
  - reset_rmc
  - rightscale_userdata
  - scripts_vendor
  - scripts_per_once
  - scripts_per_boot
  - scripts_per_instance
  - scripts_user
  - ssh_authkey_fingerprints
  - keys_to_console
  - install_hotplug
  - phone_home
  - final_message
  - power_state_change

# System and/or distro specific settings
# (not accessible to handlers/transforms)
system_info:
  # This will affect which distro class gets used
  distro: ubuntu
  # Default user name + that default users groups (if added/used)
  default_user:
    name: ubuntu
    lock_passwd: True
    gecos: Ubuntu
    groups: [adm, audio, cdrom, dialout, dip, floppy, lxd, netdev, plugdev, sudo, video]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
  network:
    renderers: ['netplan', 'eni', 'sysconfig']
    activators: ['netplan', 'eni', 'network-manager', 'networkd']
  # Automatically discover the best ntp_client
  ntp_client: auto
  # Other config here will be given to the distro class and/or path classes
  paths:
    cloud_dir: /var/lib/cloud/
    templates_dir: /etc/cloud/templates/
  package_mirrors:
    - arches: [i386, amd64]
      search:
        primary:
          - http://%(ec2_region)
          - http://%(availability_zone)
          - http://%(region)
        security: []
    - arches: [arm64, armel, armhf]
      search:
        primary:
          - http://%(ec2_region)
          - http://%(availability_zone)
          - http://%(region)
        security: []
    - arches: [default]
  ssh_svcname: ssh

There are dedicated directories under /var/lib/cloud/scripts/(per-once|per-boot|per-instance). Those scripts get executed in the final stage. You can place your custom scripts there and cloud-init picks them up:

write_files:
  - content: |
      echo "hello" > /tmp/world
    path: /var/lib/cloud/scripts/per-once/
    permissions: 0755
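The per-once mechanism can be mimicked locally to see why the permissions matter; this is just a sketch with illustrative temp paths, not cloud-init itself:

```shell
# Sketch: scripts placed under per-once/ are executed in the final stage
# much like run-parts entries, so each one needs a shebang line and the
# executable bit. Paths here are illustrative.
tmpdir=$(mktemp -d)
cat > "$tmpdir/10-hello" <<'EOF'
#!/bin/sh
echo "hello" > "${0%/*}/world"
EOF
chmod 0755 "$tmpdir/10-hello"   # without the exec bit the script is skipped
"$tmpdir/10-hello"              # cloud-init runs each script in turn
cat "$tmpdir/world"             # prints: hello
```

Note that the script content therefore needs its own shebang line, and the write_files path needs to end in an actual file name inside the per-once directory.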

Thank you @jpelizaeus. Your suggestion, in and of itself, worked.

I’ve continued to research the original question, and now understand that the entries under runcmd are stored for execution at the point when cloud_final_modules: scripts_user is invoked. Therefore, your initial suggestion of using runcmd was ultimately correct.
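For what it's worth, the whole sequence from my list above can then live in a single user-data file. A minimal sketch, with illustrative names (myapp and myservice are placeholders, not from this thread); the NGINX site file uses write_files' defer flag so that it is written after package_update_upgrade_install has installed NGINX but before scripts_user executes the runcmd entries:

```yaml
#cloud-config
packages:
  - nginx
write_files:
  # /etc/systemd/system exists in the base image, so no defer needed
  - path: /etc/systemd/system/myservice.timer
    content: |
      # ... timer unit here ...
  # /etc/nginx only exists once the nginx package is installed
  - path: /etc/nginx/sites-available/myapp
    defer: true
    content: |
      # ... server block here ...
runcmd:
  - ln -sf /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp
  - systemctl daemon-reload
  - systemctl restart nginx
  - systemctl enable --now myservice.timer
```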

Thank you for your help