Starting an LXC container with an NFS-server profile causes the host to reboot

I created a k8s cluster on my laptop (QEMU/KVM based) and, in order to make use of ReadWriteMany (RWX) volumes, I created an Ubuntu 24.04 LXC container to act as the NFS server. To this end, the container has the following profile attached, which I gleaned from this GitHub repo:

$ > lxc profile show nfs-server 
name: nfs-server
description: LXC profile for nfs nodes
config:
  boot.autostart: "true"
  linux.kernel_modules: bridge,br_netfilter,x_tables,ip_tables,ip_vs,ip_set,ipip,xt_mark,xt_multiport,ip_tunnel,tunnel4,netlink_diag,nf_conntrack,nfnetlink,overlay
  raw.lxc: |-
    lxc.apparmor.profile = unconfined
    lxc.cgroup.devices.allow = a
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    #lxc.mount.entry=/media/lxcshare share none rw,bind 0.0
    lxc.cap.drop =
  security.nesting: "true"
  security.privileged: "true"
devices:
  aadisable:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /dev/null
    type: disk
  aadisable1:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: disk
  nfsdir:
    path: /opt/pv
    source: /opt/pv
    type: disk
used_by:
- /1.0/instances/nfs-server

The container also has the default profile attached which looks like:

$ > lxc profile show default
name: default
description: Default LXD profile
config: {}
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
used_by:
- /1.0/instances/nfs-server
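
For completeness, the container was created with both profiles attached, roughly like this (a sketch from memory; the image alias may differ):

$ > lxc init ubuntu:24.04 nfs-server -p default -p nfs-server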

After setting this up, I proceeded to start the container, which caused the laptop to reboot. However, when it came back up, the container was running, I could access the NFS shares just fine, and the jobs in the cluster could access and write to them as expected.
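
For context, the share is exported from the container in the usual way and the cluster consumes it through a static NFS PersistentVolume, roughly along these lines (a sketch; the server address is a placeholder for the container's IP, and the object names are arbitrary):

# Inside the nfs-server container, /etc/exports contains a line like:
#   /opt/pv *(rw,sync,no_subtree_check,no_root_squash)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.x.x.x   # placeholder: the nfs-server container's address
    path: /opt/pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi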

When I stop the cluster and the container, the next time I start the container back up it causes the laptop to reboot again, so I'm now 2 for 2, meaning it is 100% repeatable.

I found absolutely nothing in the logs. When I logged back in, I checked my shell history for the time I ran the lxc start command, and there is absolutely nothing logged after that timestamp other than the usual kernel boot entries.
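
For reference, this is roughly what I checked (a sketch; the previous boot's journal simply ends at that timestamp):

$ > journalctl --list-boots              # confirm the unexpected reboot
$ > journalctl -b -1 -e                  # jump to the end of the previous boot's journal
$ > journalctl -b -1 -u snap.lxd.daemon  # LXD daemon log from the previous boot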

Is there maybe something extra I need to add to (or remove from) the profile to make the container start up without causing a host reboot?

EDIT: I’d better include the laptop and LXD details:

Laptop details
$ > sudo inxi -Mz
Machine:
  Type: Laptop System: LENOVO product: 21DE000KAU v: ThinkPad X1 Extreme Gen 5
    serial: <filter>
  Mobo: LENOVO model: 21DE000KAU v: SDK0T76538 WIN serial: <filter>
    UEFI: LENOVO v: N3JET42W (1.26 ) date: 09/25/2024
$ > lxc info
config:
  core.https_address: 10.2.20.29:18443
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_devlxd
- migration_pre_copy
- infiniband
- maas_network
- devlxd_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- devlxd_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- storage_api_remote_volume_snapshot_copy
- zfs_delegate
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- init_preseed_storage_volumes
- metrics_instances_count
- server_instance_type_info
- resources_disk_mounted
- server_version_lts
- oidc_groups_claim
- loki_config_instance
- storage_volatile_uuid
- import_instance_devices
- instances_uefi_vars
- instances_migration_stateful
- container_syscall_filtering_allow_deny_syntax
- access_management
- vm_disk_io_limits
- storage_volumes_all
- instances_files_modify_permissions
- image_restriction_nesting
- container_syscall_intercept_finit_module
- device_usb_serial
- network_allocate_external_ips
- explicit_trust_token
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: akk
auth_user_method: unix
environment:
  addresses:
  - 10.2.20.29:18443
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    REDACTED
    -----END CERTIFICATE-----
  certificate_fingerprint: REDACTED
  driver: lxc | qemu
  driver_version: 6.0.0 | 8.2.1
  instance_types:
  - container
  - virtual-machine
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 6.8.0-51-generic
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "24.04"
  project: default
  server: lxd
  server_clustered: false
  server_event_mode: full-mesh
  server_name: akstpx1g5.anroet.com
  server_pid: 3474
  server_version: 5.21.2
  server_lts: true
  storage: dir
  storage_version: "1"
  storage_supported_drivers:
  - name: cephobject
    version: 17.2.7
    remote: true
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.11(2) (2021-01-08) / 1.02.175 (2021-01-08) / 4.48.0
    remote: false
  - name: powerflex
    version: 1.16 (nvme-cli)
    remote: true
  - name: zfs
    version: 2.2.2-0ubuntu9.1
    remote: false
  - name: btrfs
    version: 5.16.2
    remote: false
  - name: ceph
    version: 17.2.7
    remote: true
  - name: cephfs
    version: 17.2.7
    remote: true

Thought I’d also mention the LXD version I’m running:

$ > snap info lxd
name:      lxd
summary:   LXD - container and VM manager
publisher: Canonical✓
store-url: https://snapcraft.io/lxd
contact:   lxd@lists.canonical.com
license:   AGPL-3.0
description: |
  LXD is a system container and virtual machine manager.
  
  It offers a simple CLI and REST API to manage local or remote instances,
  uses an image based workflow and support for a variety of advanced features.
  
  Images are available for all Ubuntu releases and architectures as well
  as for a wide number of other Linux distributions. Existing
  integrations with many deployment and operation tools, makes it work
  just like a public cloud, except everything is under your control.
  
  LXD containers are lightweight, secure by default and a great
  alternative to virtual machines when running Linux on Linux.
  
  LXD virtual machines are modern and secure, using UEFI and secure-boot
  by default and a great choice when a different kernel or operating
  system is needed.
  
  With clustering, up to 50 LXD servers can be easily joined and managed
  together with the same tools and APIs and without needing any external
  dependencies.
  
  
  Supported configuration options for the snap (snap set lxd [<key>=<value>...]):
  
    - apparmor.unprivileged-restrictions-disable: Whether to disable restrictions on unprivileged
    user namespaces [default=true]
    - ceph.builtin: Use snap-specific Ceph configuration [default=false]
    - ceph.external: Use the system's ceph tools (ignores ceph.builtin) [default=false]
    - criu.enable: Enable experimental live-migration support [default=false]
    - daemon.debug: Increase logging to debug level [default=false]
    - daemon.group: Set group of users that have full control over LXD [default=lxd]
    - daemon.user.group: Set group of users that have restricted LXD access [default=lxd]
    - daemon.preseed: Pass a YAML configuration to `lxd init` on initial start
    - daemon.syslog: Send LXD log events to syslog [default=false]
    - daemon.verbose: Increase logging to verbose level [default=false]
    - lvm.external: Use the system's LVM tools [default=false]
    - lxcfs.pidfd: Start per-container process tracking [default=false]
    - lxcfs.loadavg: Start tracking per-container load average [default=false]
    - lxcfs.cfs: Consider CPU shares for CPU usage [default=false]
    - lxcfs.debug: Increase logging to debug level [default=false]
    - minio.path: Path to the directory containing the minio and mc binaries to use with LXD
    [default=""]
    - openvswitch.builtin: Run a snap-specific OVS daemon [default=false]
    - openvswitch.external: Use the system's OVS tools (ignores openvswitch.builtin) [default=false]
    - ovn.builtin: Use snap-specific OVN configuration [default=false]
    - ui.enable: Enable the web interface [default=true]
  
  For system-wide configuration of the CLI, place your configuration in
  /var/snap/lxd/common/global-conf/ (config.yml and servercerts)
commands:
  - lxd.buginfo
  - lxd.check-kernel
  - lxd.lxc
  - lxd
services:
  lxd.activate:    oneshot, enabled, inactive
  lxd.daemon:      simple, enabled, active
  lxd.user-daemon: simple, enabled, inactive
snap-id:      J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking:     5.21/stable
refresh-date: 50 days ago, at 22:40 AEDT
channels:
  5.21/stable:      5.21.2-084c8c8 2024-11-20 (31214) 109MB -
  5.21/candidate:   5.21.2-45d4bdf 2025-01-06 (31888) 117MB -
  5.21/beta:        ↑                                       
  5.21/edge:        git-50b920f    2025-01-06 (31895) 117MB -
  latest/stable:    6.2-bde4d03    2025-01-06 (31820) 115MB -
  latest/candidate: 6.2-bde4d03    2024-12-18 (31820) 115MB -
  latest/beta:      ↑                                       
  latest/edge:      git-72baca0    2025-01-07 (31911) 120MB -
  6.1/stable:       6.1-78a3d8f    2024-09-03 (30130) 109MB -
  6.1/candidate:    ↑                                       
  6.1/beta:         ↑                                       
  6.1/edge:         ↑                                       
  6/stable:         6.2-bde4d03    2025-01-06 (31820) 115MB -
  6/candidate:      6.2-bde4d03    2024-12-18 (31820) 115MB -
  6/beta:           ↑                                       
  6/edge:           git-72baca0    2025-01-07 (31911) 120MB -
  5.0/stable:       5.0.4-497fe1e  2024-11-26 (31333)  93MB -
  5.0/candidate:    5.0.4-ee7e9fc  2024-12-18 (31787)  93MB -
  5.0/beta:         ↑                                       
  5.0/edge:         git-584857b    2024-12-17 (31732) 118MB -
  4.0/stable:       4.0.10-e664786 2024-07-31 (29619)  96MB -
  4.0/candidate:    4.0.10-3e98e62 2024-12-18 (31808)  96MB -
  4.0/beta:         ↑                                       
  4.0/edge:         git-4eded33    2024-12-18 (31802)  96MB -
  3.0/stable:       3.0.4          2019-10-10 (11348)  55MB -
  3.0/candidate:    3.0.4          2019-10-10 (11348)  55MB -
  3.0/beta:         ↑                                       
  3.0/edge:         git-81b81b9    2019-10-10 (11362)  55MB -
installed:          5.21.2-084c8c8            (31214) 109MB -