Installation of Anbox Cloud Appliance (version 1.22/stable) failed

Hi,
I have a problem during the installation of Anbox Cloud Appliance version 1.22 on my physical server. After running the anbox-cloud-appliance init command, I receive this error:

This is the result of the command anbox-cloud-appliance status:
status: error
error: Failed to initialize LXD
update-available: false
reboot-needed: false
version: 1.22.1

An error occurred while bootstrapping the Anbox Cloud Appliance.

You can find more information about the failure in the log file at
/var/snap/anbox-cloud-appliance/common/logs/bootstrap.log

You can either file a bug report at the Anbox Cloud project on Launchpad
or ask for help in the Anbox Cloud category of the Ubuntu Community Hub

This is the content of the file /var/snap/anbox-cloud-appliance/common/logs/bootstrap.log:

2024-11-12 16:15:48 Public location: 10.10.47.8
2024-11-12 16:15:48 Public address: 10.10.47.8
2024-11-12 16:15:48 Private address: 10.10.47.8
2024-11-12 16:15:48 Private subnet: 10.10.47.0/24
Since Juju 2 is being run for the first time, it has downloaded the latest public cloud information.
Only clouds with registered credentials are shown.
There are more clouds, use --all to see them.
2024-11-12 16:15:49 Using UA subscription from host
2024-11-12 16:15:49 Successfully extracted credentials from UA subscription
2024-11-12 16:15:49 Starting installation of dependencies

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Hit:1 http://ports.ubuntu.com/ubuntu-ports noble InRelease
Hit:2 http://ports.ubuntu.com/ubuntu-ports noble-updates InRelease
Hit:3 http://ports.ubuntu.com/ubuntu-ports noble-backports InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports noble-security InRelease
Get:5 https://archive.anbox-cloud.io/stable noble InRelease [2,483 B]
Get:6 https://archive.anbox-cloud.io/stable noble/main arm64 Packages [2,472 B]
Hit:7 https://esm.ubuntu.com/apps/ubuntu noble-apps-security InRelease
Hit:8 https://esm.ubuntu.com/apps/ubuntu noble-apps-updates InRelease
Hit:9 https://esm.ubuntu.com/infra/ubuntu noble-infra-security InRelease
Hit:10 https://esm.ubuntu.com/infra/ubuntu noble-infra-updates InRelease
Fetched 4,955 B in 1s (5,224 B/s)
Reading package lists...
Building dependency tree...
Reading state information...
2 packages can be upgraded. Run 'apt list --upgradable' to see them.

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Reading package lists...
Building dependency tree...
Reading state information...
expect is already the newest version (5.45.4-3).
cpufrequtils is already the newest version (008-2build2).
linux-headers-6.8.0-48-generic is already the newest version (6.8.0-48.48).
linux-modules-extra-6.8.0-48-generic is already the newest version (6.8.0-48.48).
linux-image-generic is already the newest version (6.8.0-48.48).
linux-headers-generic is already the newest version (6.8.0-48.48).
The following packages were automatically installed and are no longer required:
libgbm1 libnvidia-egl-wayland1 libpciaccess0 libwayland-client0
libwayland-server0 libxcb-randr0 nvidia-firmware-550-server-550.127.05
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
Packages installation done in parallel
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
kernel.dmesg_restrict = 1
kernel.pid_max = 4194304
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv6.neigh.default.gc_thresh3 = 8192
kernel.keys.maxkeys = 2000
kernel.keys.maxbytes = 2000000
fs.aio-max-nr = 524288
net.ipv4.ping_group_range = 0 2147483647
abi.swp = 1
2024-11-12 16:18:02 LXD is ready, continuing with its initialization
2024-11-12 16:18:02 Using the following preseed configuration:
cluster:
  enabled: true
  server_name: lxd0
config:
  cluster.https_address: 10.10.47.8:8443
  core.https_address: 10.10.47.8:8443
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.nat: true
    ipv4.dhcp.expiry: infinite
    ipv4.address: 10.0.0.1/23
    ipv6.address: none
profiles:
- name: default
  devices:
    root:
      path: /
      pool: data
      type: disk
    eth0:
      type: nic
      nictype: bridged
      parent: lxdbr0
storage_pools:
- name: data
  driver: zfs
  config:
    size: 32212254720
/usr/sbin/zfs
2024-11-12 16:18:04 Tuning LXD ZFS based storage pool
Storage volume backups created
Storage volume images created
error: cannot perform the following tasks:
- Run service command "reload-or-restart" for services ["daemon"] of snap "lxd" (systemctl command [reload-or-restart snap.lxd.daemon.unix.socket snap.lxd.daemon.service] failed with exit status 1: stderr:
Job failed. See "journalctl -xe" for details.)

Log for lxd

Log for lxc

config: {}
api_extensions:

  • storage_zfs_remove_snapshots
  • container_host_shutdown_timeout
  • container_stop_priority
  • container_syscall_filtering
  • auth_pki
  • container_last_used_at
  • etag
  • patch
  • usb_devices
  • https_allowed_credentials
  • image_compression_algorithm
  • directory_manipulation
  • container_cpu_time
  • storage_zfs_use_refquota
  • storage_lvm_mount_options
  • network
  • profile_usedby
  • container_push
  • container_exec_recording
  • certificate_update
  • container_exec_signal_handling
  • gpu_devices
  • container_image_properties
  • migration_progress
  • id_map
  • network_firewall_filtering
  • network_routes
  • storage
  • file_delete
  • file_append
  • network_dhcp_expiry
  • storage_lvm_vg_rename
  • storage_lvm_thinpool_rename
  • network_vlan
  • image_create_aliases
  • container_stateless_copy
  • container_only_migration
  • storage_zfs_clone_copy
  • unix_device_rename
  • storage_lvm_use_thinpool
  • storage_rsync_bwlimit
  • network_vxlan_interface
  • storage_btrfs_mount_options
  • entity_description
  • image_force_refresh
  • storage_lvm_lv_resizing
  • id_map_base
  • file_symlinks
  • container_push_target
  • network_vlan_physical
  • storage_images_delete
  • container_edit_metadata
  • container_snapshot_stateful_migration
  • storage_driver_ceph
  • storage_ceph_user_name
  • resource_limits
  • storage_volatile_initial_source
  • storage_ceph_force_osd_reuse
  • storage_block_filesystem_btrfs
  • resources
  • kernel_limits
  • storage_api_volume_rename
  • macaroon_authentication
  • network_sriov
  • console
  • restrict_devlxd
  • migration_pre_copy
  • infiniband
  • maas_network
  • devlxd_events
  • proxy
  • network_dhcp_gateway
  • file_get_symlink
  • network_leases
  • unix_device_hotplug
  • storage_api_local_volume_handling
  • operation_description
  • clustering
  • event_lifecycle
  • storage_api_remote_volume_handling
  • nvidia_runtime
  • container_mount_propagation
  • container_backup
  • devlxd_images
  • container_local_cross_pool_handling
  • proxy_unix
  • proxy_udp
  • clustering_join
  • proxy_tcp_udp_multi_port_handling
  • network_state
  • proxy_unix_dac_properties
  • container_protection_delete
  • unix_priv_drop
  • pprof_http
  • proxy_haproxy_protocol
  • network_hwaddr
  • proxy_nat
  • network_nat_order
  • container_full
  • candid_authentication
  • backup_compression
  • candid_config
  • nvidia_runtime_config
  • storage_api_volume_snapshots
  • storage_unmapped
  • projects
  • candid_config_key
  • network_vxlan_ttl
  • container_incremental_copy
  • usb_optional_vendorid
  • snapshot_scheduling
  • snapshot_schedule_aliases
  • container_copy_project
  • clustering_server_address
  • clustering_image_replication
  • container_protection_shift
  • snapshot_expiry
  • container_backup_override_pool
  • snapshot_expiry_creation
  • network_leases_location
  • resources_cpu_socket
  • resources_gpu
  • resources_numa
  • kernel_features
  • id_map_current
  • event_location
  • storage_api_remote_volume_snapshots
  • network_nat_address
  • container_nic_routes
  • rbac
  • cluster_internal_copy
  • seccomp_notify
  • lxc_features
  • container_nic_ipvlan
  • network_vlan_sriov
  • storage_cephfs
  • container_nic_ipfilter
  • resources_v2
  • container_exec_user_group_cwd
  • container_syscall_intercept
  • container_disk_shift
  • storage_shifted
  • resources_infiniband
  • daemon_storage
  • instances
  • image_types
  • resources_disk_sata
  • clustering_roles
  • images_expiry
  • resources_network_firmware
  • backup_compression_algorithm
  • ceph_data_pool_name
  • container_syscall_intercept_mount
  • compression_squashfs
  • container_raw_mount
  • container_nic_routed
  • container_syscall_intercept_mount_fuse
  • container_disk_ceph
  • virtual-machines
  • image_profiles
  • clustering_architecture
  • resources_disk_id
  • storage_lvm_stripes
  • vm_boot_priority
  • unix_hotplug_devices
  • api_filtering
  • instance_nic_network
  • clustering_sizing
  • firewall_driver
  • projects_limits
  • container_syscall_intercept_hugetlbfs
  • limits_hugepages
  • container_nic_routed_gateway
  • projects_restrictions
  • custom_volume_snapshot_expiry
  • volume_snapshot_scheduling
  • trust_ca_certificates
  • snapshot_disk_usage
  • clustering_edit_roles
  • container_nic_routed_host_address
  • container_nic_ipvlan_gateway
  • resources_usb_pci
  • resources_cpu_threads_numa
  • resources_cpu_core_die
  • api_os
  • container_nic_routed_host_table
  • container_nic_ipvlan_host_table
  • container_nic_ipvlan_mode
  • resources_system
  • images_push_relay
  • network_dns_search
  • container_nic_routed_limits
  • instance_nic_bridged_vlan
  • network_state_bond_bridge
  • usedby_consistency
  • custom_block_volumes
  • clustering_failure_domains
  • resources_gpu_mdev
  • console_vga_type
  • projects_limits_disk
  • network_type_macvlan
  • network_type_sriov
  • container_syscall_intercept_bpf_devices
  • network_type_ovn
  • projects_networks
  • projects_networks_restricted_uplinks
  • custom_volume_backup
  • backup_override_name
  • storage_rsync_compression
  • network_type_physical
  • network_ovn_external_subnets
  • network_ovn_nat
  • network_ovn_external_routes_remove
  • tpm_device_type
  • storage_zfs_clone_copy_rebase
  • gpu_mdev
  • resources_pci_iommu
  • resources_network_usb
  • resources_disk_address
  • network_physical_ovn_ingress_mode
  • network_ovn_dhcp
  • network_physical_routes_anycast
  • projects_limits_instances
  • network_state_vlan
  • instance_nic_bridged_port_isolation
  • instance_bulk_state_change
  • network_gvrp
  • instance_pool_move
  • gpu_sriov
  • pci_device_type
  • storage_volume_state
  • network_acl
  • migration_stateful
  • disk_state_quota
  • storage_ceph_features
  • projects_compression
  • projects_images_remote_cache_expiry
  • certificate_project
  • network_ovn_acl
  • projects_images_auto_update
  • projects_restricted_cluster_target
  • images_default_architecture
  • network_ovn_acl_defaults
  • gpu_mig
  • project_usage
  • network_bridge_acl
  • warnings
  • projects_restricted_backups_and_snapshots
  • clustering_join_token
  • clustering_description
  • server_trusted_proxy
  • clustering_update_cert
  • storage_api_project
  • server_instance_driver_operational
  • server_supported_storage_drivers
  • event_lifecycle_requestor_address
  • resources_gpu_usb
  • clustering_evacuation
  • network_ovn_nat_address
  • network_bgp
  • network_forward
  • custom_volume_refresh
  • network_counters_errors_dropped
  • metrics
  • image_source_project
  • clustering_config
  • network_peer
  • linux_sysctl
  • network_dns
  • ovn_nic_acceleration
  • certificate_self_renewal
  • instance_project_move
  • storage_volume_project_move
  • cloud_init
  • network_dns_nat
  • database_leader
  • instance_all_projects
  • clustering_groups
  • ceph_rbd_du
  • instance_get_full
  • qemu_metrics
  • gpu_mig_uuid
  • event_project
  • clustering_evacuation_live
  • instance_allow_inconsistent_copy
  • network_state_ovn
  • storage_volume_api_filtering
  • image_restrictions
  • storage_zfs_export
  • network_dns_records
  • storage_zfs_reserve_space
  • network_acl_log
  • storage_zfs_blocksize
  • metrics_cpu_seconds
  • instance_snapshot_never
  • certificate_token
  • instance_nic_routed_neighbor_probe
  • event_hub
  • agent_nic_config
  • projects_restricted_intercept
  • metrics_authentication
  • images_target_project
  • cluster_migration_inconsistent_copy
  • cluster_ovn_chassis
  • container_syscall_intercept_sched_setscheduler
  • storage_lvm_thinpool_metadata_size
  • storage_volume_state_total
  • instance_file_head
  • resources_pci_vpd
  • qemu_raw_conf
  • storage_cephfs_fscache
  • vsock_api
  • storage_volumes_all_projects
  • projects_networks_restricted_access
  • cluster_join_token_expiry
  • remote_token_expiry
  • init_preseed
  • cpu_hotplug
  • storage_pool_source_wipe
  • zfs_block_mode
  • instance_generation_id
  • disk_io_cache
  • storage_pool_loop_resize
  • migration_vm_live
  • auth_user
  • instances_state_total
  • numa_cpu_placement
  • network_allocations
  • storage_api_remote_volume_snapshot_copy
  • zfs_delegate
  • operations_get_query_all_projects
  • event_lifecycle_name_and_project
  • instances_nic_limits_priority
  • operation_wait
  • cluster_internal_custom_volume_copy
  • instance_move_config
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: user
auth_user_method: unix
environment:
  addresses:
  architectures:
  - aarch64
  - armv6l
  - armv7l
  - armv8l
  certificate: |
    -----BEGIN CERTIFICATE-----
    MIIB7TCCAXOgAwIBAgIRANhHEClLem6LIfhEHgpvnMwwCgYIKoZIzj0EAwMwJzEM
    MAoGA1UEChMDTFhEMRcwFQYDVQQDDA5yb290QGFuYm94LWRldjAeFw0yNDExMTIx
    NjE1MjdaFw0zNDExMTAxNjE1MjdaMCcxDDAKBgNVBAoTA0xYRDEXMBUGA1UEAwwO
    cm9vdEBhbmJveC1kZXYwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAARXs2OEPqTHm3nK
    TspVTlJGyJgd2ZCcN7y5Gbc55jGYe2/CPf3eMtbeiA5APupPuS6bLIkK4R294ZI7
    qSJxOQWM6usCCNkxwuoxeOOUN2DPIe1UqbDg3c0tU7qVksJPO/2jYzBhMA4GA1Ud
    DwEB/wQEAwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMCwG
    A1UdEQQlMCOCCWFuYm94LWRldocEfwAAAYcQAAAAAAAAAAAAAAAAAAAAATAKBggq
    hkjOPQQDAwNoADBlAjEA4dhlo9Vx4RXsJkBqvtdXjPGZdxSj/pUE0h6bM+sLk9wd
    6jkvCmfi8znZuIxC4VkWAjBUUavyWmADHICXyR/FSBRd9UZ9WopITA+9vzcpAdip
    KXEs1SZTaMVrUqN63CyY8BA=
    -----END CERTIFICATE-----
  certificate_fingerprint: 45327bd12182cc3bdec46b4eae4474d2b7cae27f9392f26fe0ed6c02bc0b34bb
  driver: lxc | qemu
  driver_version: 5.0.3 | 8.0.5
  firewall: nftables
  kernel: Linux
  kernel_architecture: aarch64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    shiftfs: "false"
    uevent_injection: "true"
    unpriv_fscaps: "true"
  kernel_version: 6.8.0-48-generic
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Ubuntu
  os_version: "24.04"
  project: default
  server: lxd
  server_clustered: false
  server_event_mode: full-mesh
  server_name: anbox-dev
  server_pid: 53964
  server_version: 5.0.3
  storage: ""
  storage_version: ""
  storage_supported_drivers:
  - name: btrfs
    version: 5.4.1
    remote: false
  - name: ceph
    version: 15.2.17
    remote: true
  - name: cephfs
    version: 15.2.17
    remote: true
  - name: cephobject
    version: 15.2.17
    remote: true
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.07(2) (2019-11-30) / 1.02.167 (2019-11-30) / 4.48.0
    remote: false
  - name: zfs
    version: 2.2.2-0ubuntu9
    remote: false
bond0,bond,NO,0,
enP4p1s0f0np0,physical,NO,0,
enP4p1s0f1np1,physical,NO,0,
eno1np0,physical,NO,0,
eno2np1,physical,NO,0,
enxbe3af2b6059f,physical,NO,0,
Storage volume backups created
Storage volume images created

What is the problem?
Do you need any additional info?

thanks a lot,
Demis.

Hi!

Please note that 1.22 is out of support; see our Release and support policy for more information.
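
If you would rather move to a supported release instead, you can list the available channels with snap info and refresh to one of them. The channel below is a placeholder; pick a supported track from the command's output:

snap info anbox-cloud-appliance
sudo snap refresh anbox-cloud-appliance --channel=<supported-track>/stable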

Going back to your problem: you unfortunately got hit by Bug #2083961 "snapd fails to reload LXD snap" in snapd, as we can see in the logs:

Run service command "reload-or-restart" for services ["daemon"] of snap "lxd" (systemctl command [reload-or-restart snap.lxd.daemon.unix.socket snap.lxd.daemon.service] failed with exit status 1: stderr:
Job failed. See "journalctl -xe" for details.)
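
To see why the reload failed on your particular machine, you can inspect the failing units directly; these are plain systemd commands, nothing appliance-specific:

journalctl -xeu snap.lxd.daemon.service
systemctl status snap.lxd.daemon.service snap.lxd.daemon.unix.socket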

If you want to keep using 1.22, you can find a workaround below (a consolidated sketch of all the steps follows the list):

1. Copy the appliance bootstrap script to /var/snap/anbox-cloud-appliance/common:
cp /snap/anbox-cloud-appliance/current/bin/bootstrap.sh /var/snap/anbox-cloud-appliance/common/bootstrap.debug.sh
2. Edit the copied script at /var/snap/anbox-cloud-appliance/common/bootstrap.debug.sh and replace this:
snap restart --reload lxd
with this:
systemctl reload-or-restart snap.lxd.daemon.service
3. Purge any former failed deployment via the following command:
sudo anbox-cloud-appliance destroy --force
4. Finally, run sudo anbox-cloud-appliance init again.
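
For reference, here is the whole workaround condensed into one shell session. This is only a sketch: the sed substitution assumes the copied script contains the exact text snap restart --reload lxd quoted in step 2, so verify the edit before re-running the bootstrap:

sudo cp /snap/anbox-cloud-appliance/current/bin/bootstrap.sh /var/snap/anbox-cloud-appliance/common/bootstrap.debug.sh
# step 2: swap the snapd-mediated reload for a direct systemctl call
sudo sed -i 's|snap restart --reload lxd|systemctl reload-or-restart snap.lxd.daemon.service|' /var/snap/anbox-cloud-appliance/common/bootstrap.debug.sh
sudo anbox-cloud-appliance destroy --force
sudo anbox-cloud-appliance init

Once init completes, anbox-cloud-appliance status should report a healthy status instead of the LXD initialization error above.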

Hope this helps!

Alexis