Versions:
What error are you getting now?
Oh no, it has happened again…
andrei@quasarapp:~$ lxc list
Error: LXD unix socket "/var/snap/lxd/common/lxd-user/unix.socket" not accessible: Get "http://unix.socket/1.0": dial unix /var/snap/lxd/common/lxd-user/unix.socket: connect: connection refused
andrei@quasarapp:~$ lxd version
5.21.1 LTS
log
=> Preparing the system (27049)
==> Loading snap configuration
==> Setting up mntns symlink (mnt:[4026532367])
==> Setting up kmod wrapper
==> Preparing /boot
==> Preparing a clean copy of /run
==> Preparing /run/bin
==> Preparing a clean copy of /etc
==> Preparing a clean copy of /usr/share/misc
==> Setting up ceph configuration
==> Setting up LVM configuration
==> Setting up OVN configuration
==> Rotating logs
==> Setting up ZFS (2.1)
==> Escaping the systemd cgroups
====> Detected cgroup V2
==> Escaping the systemd process resource limits
==> Exposing LXD documentation
Closed liblxcfs.so
Running destructor lxcfs_exit
Running constructor lxcfs_init to reload liblxcfs
mount namespace: 6
hierarchies:
0: fd: 8: cpuset,cpu,io,memory,hugetlb,pids,rdma,misc
Kernel supports pidfds
Kernel does not support swap accounting
api_extensions:
- cgroups
- sys_cpu_online
- proc_cpuinfo
- proc_diskstats
- proc_loadavg
- proc_meminfo
- proc_stat
- proc_swaps
- proc_uptime
- proc_slabinfo
- shared_pidns
- cpuview_daemon
- loadavg_daemon
- pidfds
Reloaded LXCFS
=> Re-using existing LXCFS
==> Reloading LXCFS
=> Starting LXD
time="2024-04-16T19:24:42Z" level=warning msg=" - Couldn't find the CGroup network priority controller, per-instance network priority will be ignored. Please use per-device limits.priority instead"
time="2024-04-16T19:24:42Z" level=error msg="Failed to start the daemon" err="Failed to start dqlite server: raft_start(): io: closed segment 0000000000182044-0000000000182065 is past last snapshot snapshot-1-181248-10500816582"
Error: Failed to start dqlite server: raft_start(): io: closed segment 0000000000182044-0000000000182065 is past last snapshot snapshot-1-181248-10500816582
Killed
=> LXD failed to start
snap.lxd.daemon.service: Main process exited, code=exited, status=1/FAILURE
snap.lxd.daemon.service: Failed with result 'exit-code'.
snap.lxd.daemon.service: Scheduled restart job, restart counter is at 4.
Stopped Service for snap application lxd.daemon.
Started Service for snap application lxd.daemon.
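That raft_start error means the dqlite log on disk has a gap: the last snapshot covers entries up to index 181248, but the first closed segment on disk starts at 182044, so the entries in between are missing and raft refuses to start. A quick sanity check of the numbers (a sketch only; the firstIndex-lastIndex segment naming and snapshot-term-index-timestamp naming are my reading of dqlite's on-disk layout):

```shell
# Indices copied from the error message above.
snapshot="snapshot-1-181248-10500816582"      # snapshot-<term>-<index>-<timestamp>
segment="0000000000182044-0000000000182065"   # <firstIndex>-<lastIndex>

snap_index=$(echo "$snapshot" | cut -d- -f3)
seg_first=$(echo "$segment" | cut -d- -f1)

# 10# forces base 10 so the leading zeros are not parsed as octal.
echo "entries missing between snapshot and first segment: $(( 10#$seg_first - snap_index - 1 ))"
```

This prints a gap of 795 entries, which is why dqlite considers the segment "past" the last snapshot.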
Looks like my database is broken again. Why does this happen so often?
Has your computer/server been stopped/restarted?
Please can you show the output of sudo snap changes as well as sudo snap info lxd too?
What's the contents of your database directory?
$ sudo ls -lah /var/snap/lxd/common/lxd/database/global
Has your computer/server been stopped/restarted?
No, it's a "DigitalOcean" instance.
sudo snap changes
I just tried to refresh lxd to fix my issues, but it hasn't helped.
[sudo] password for andrei:
ID Status Spawn Ready Summary
336 Done today at 09:07 UTC today at 09:07 UTC Refresh "lxd" snap from "latest/stable" channel
337 Done today at 09:49 UTC today at 09:49 UTC Refresh "lxd" snap from "5.20/stable" channel
338 Done today at 09:49 UTC today at 09:49 UTC Refresh "lxd" snap from "5.21/stable" channel
snap info lxd
andrei@quasarapp:~$ sudo snap info lxd
name: lxd
summary: LXD - container and VM manager
publisher: Canonical✓
store-url: https://snapcraft.io/lxd
contact: https://github.com/canonical/lxd/issues
license: AGPL-3.0
description: |
LXD is a system container and virtual machine manager.
It offers a simple CLI and REST API to manage local or remote instances,
uses an image based workflow and support for a variety of advanced features.
Images are available for all Ubuntu releases and architectures as well
as for a wide number of other Linux distributions. Existing
integrations with many deployment and operation tools, makes it work
just like a public cloud, except everything is under your control.
LXD containers are lightweight, secure by default and a great
alternative to virtual machines when running Linux on Linux.
LXD virtual machines are modern and secure, using UEFI and secure-boot
by default and a great choice when a different kernel or operating
system is needed.
With clustering, up to 50 LXD servers can be easily joined and managed
together with the same tools and APIs and without needing any external
dependencies.
Supported configuration options for the snap (snap set lxd [<key>=<value>...]):
- ceph.builtin: Use snap-specific Ceph configuration [default=false]
- ceph.external: Use the system's ceph tools (ignores ceph.builtin) [default=false]
- criu.enable: Enable experimental live-migration support [default=false]
- daemon.debug: Increase logging to debug level [default=false]
- daemon.group: Set group of users that have full control over LXD [default=lxd]
- daemon.user.group: Set group of users that have restricted LXD access [default=lxd]
- daemon.preseed: Pass a YAML configuration to `lxd init` on initial start
- daemon.syslog: Send LXD log events to syslog [default=false]
- daemon.verbose: Increase logging to verbose level [default=false]
- lvm.external: Use the system's LVM tools [default=false]
- lxcfs.pidfd: Start per-container process tracking [default=false]
- lxcfs.loadavg: Start tracking per-container load average [default=false]
- lxcfs.cfs: Consider CPU shares for CPU usage [default=false]
- lxcfs.debug: Increase logging to debug level [default=false]
- openvswitch.builtin: Run a snap-specific OVS daemon [default=false]
- openvswitch.external: Use the system's OVS tools (ignores openvswitch.builtin) [default=false]
- ovn.builtin: Use snap-specific OVN configuration [default=false]
- ui.enable: Enable the web interface [default=false]
For system-wide configuration of the CLI, place your configuration in
/var/snap/lxd/common/global-conf/ (config.yml and servercerts)
commands:
- lxd.buginfo
- lxd.check-kernel
- lxd.lxc
- lxd
services:
lxd.activate: oneshot, disabled, inactive
lxd.daemon: simple, disabled, inactive
lxd.user-daemon: simple, disabled, inactive
snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
tracking: 5.21/stable
refresh-date: today at 09:49 UTC
channels:
5.21/stable: 5.21.1-d46c406 2024-04-29 (28460) 108MB -
5.21/candidate: 5.21.1-d46c406 2024-04-26 (28460) 108MB -
5.21/beta: ↑
5.21/edge: git-f1fea03 2024-04-29 (28503) 108MB -
latest/stable: 5.21.1-2d13beb 2024-04-30 (28463) 107MB -
latest/candidate: 5.21.1-2d13beb 2024-04-26 (28463) 107MB -
latest/beta: ↑
latest/edge: git-89828eb 2024-04-30 (28526) 107MB -
5.20/stable: 5.20-f3dd836 2024-02-09 (27049) 155MB -
5.20/candidate: ↑
5.20/beta: ↑
5.20/edge: ↑
5.19/stable: 5.19-8635f82 2024-01-29 (26200) 159MB -
5.19/candidate: ↑
5.19/beta: ↑
5.19/edge: ↑
5.0/stable: 5.0.3-d921d2e 2024-04-23 (28373) 91MB -
5.0/candidate: 5.0.3-5e9b586 2024-04-26 (28461) 91MB -
5.0/beta: ↑
5.0/edge: git-8cd0db9 2024-04-24 (28440) 117MB -
4.0/stable: 4.0.9-a29c6f1 2022-12-04 (24061) 96MB -
4.0/candidate: 4.0.9-a29c6f1 2022-12-02 (24061) 96MB -
4.0/beta: ↑
4.0/edge: git-407205d 2022-11-22 (23988) 96MB -
3.0/stable: 3.0.4 2019-10-10 (11348) 55MB -
3.0/candidate: 3.0.4 2019-10-10 (11348) 55MB -
3.0/beta: ↑
3.0/edge: git-81b81b9 2019-10-10 (11362) 55MB -
installed: 5.21.1-d46c406 (28460) 108MB -
ls -lah /var/snap/lxd/common/lxd/database/global
log available here
Which snap channel were you tracking when the issue occurred?
I'd suggest staying on 5.21/stable now, though.
Thanks! That listing doesn't seem to correspond to the error you reported before (there is no 0000000000182044-0000000000182065 nor snapshot-1-181248-10500816582). Did you already delete some files? If you try to start now, what is the error message?
I detected this issue today on the latest/stable channel, but the containers continue to work fine, so I don't know when it happened…
I am not sure, because I still receive the same error message. Where can I check the configuration? Maybe LXD tried to load a different database.
Error: LXD unix socket "/var/snap/lxd/common/lxd-user/unix.socket" not accessible: Get "http://unix.socket/1.0": dial unix /var/snap/lxd/common/lxd-user/unix.socket: connect: connection refused
Sorry, I meant the end of the LXD log (where it mentions dqlite and raft_start).
Please provide contents of /var/snap/lxd/common/lxd/logs/lxd.log
Also, please advise what kernel (uname -a) and what filesystem (findmnt) you are running?
Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-102-generic x86_64)
log
andrei@quasarapp:~$ sudo cat /var/snap/lxd/common/lxd/logs/lxd.log
[sudo] password for andrei:
time="2024-04-16T20:48:34Z" level=warning msg=" - Couldn't find the CGroup network priority controller, per-instance network priority will be ignored. Please use per-device limits.priority instead"
So is LXD running now? Is sudo lxc list working?
And findmnt, @endrii?
My server, which runs inside an LXC container, works fine, but any lxd or lxc command returns an error message. lxc list fails too.
andrei@quasarapp:~$ findmnt
TARGET SOURCE FSTYPE OPTIONS
/ /dev/vda1 ext4 rw,relatime
├─/sys sysfs sysfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security securityfs security rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup cgroup2 cgroup2 rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/pstore pstore pstore rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/bpf bpf bpf rw,nosuid,nodev,noexec,relatime,mo
│ ├─/sys/kernel/debug debugfs debugfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/tracing tracefs tracefs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/fuse/connections fusectl fusectl rw,nosuid,nodev,noexec,relatime
│ └─/sys/kernel/config configfs configfs rw,nosuid,nodev,noexec,relatime
├─/proc proc proc rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=29,pgrp=1,timeout=0
│   └─/proc/sys/fs/binfmt_misc binfmt_misc binfmt_m rw,nosuid,nodev,noexec,relatime
├─/dev udev devtmpfs rw,nosuid,relatime,size=4005976k,n
│ ├─/dev/pts devpts devpts rw,nosuid,noexec,relatime,gid=5,mo
│ ├─/dev/shm tmpfs tmpfs rw,nosuid,nodev,inode64
│ ├─/dev/hugepages hugetlbfs hugetlbf rw,relatime,pagesize=2M
│ └─/dev/mqueue mqueue mqueue rw,nosuid,nodev,noexec,relatime
├─/run tmpfs tmpfs rw,nosuid,nodev,noexec,relatime,si
│ ├─/run/lock tmpfs tmpfs rw,nosuid,nodev,noexec,relatime,si
│ ├─/run/credentials/systemd-sysusers.service
│ │   none ramfs ro,nosuid,nodev,noexec,relatime,mo
│ ├─/run/user/1000 tmpfs tmpfs rw,nosuid,nodev,relatime,size=8127
│ └─/run/snapd/ns tmpfs[/snapd/ns] tmpfs rw,nosuid,nodev,noexec,relatime,si
│   └─/run/snapd/ns/lxd.mnt nsfs[mnt:[4026532528]]
│     nsfs rw
├─/boot/efi /dev/vda15 vfat rw,relatime,fmask=0022,dmask=0022,
├─/mnt/cloud /dev/sda ext4 rw,noatime,discard
├─/snap/core/16574 /dev/loop0 squashfs ro,nodev,relatime,errors=continue
├─/snap/core/16928 /dev/loop1 squashfs ro,nodev,relatime,errors=continue
├─/snap/core20/2182 /dev/loop4 squashfs ro,nodev,relatime,errors=continue
├─/snap/core18/2812 /dev/loop3 squashfs ro,nodev,relatime,errors=continue
├─/snap/core20/2264 /dev/loop5 squashfs ro,nodev,relatime,errors=continue
├─/snap/core22/1122 /dev/loop7 squashfs ro,nodev,relatime,errors=continue
├─/snap/mc-server-installer/937 /dev/loop11 squashfs ro,nodev,relatime,errors=continue
├─/snap/snapd/21184 /dev/loop12 squashfs ro,nodev,relatime,errors=continue
├─/snap/snapd/21465 /dev/loop13 squashfs ro,nodev,relatime,errors=continue
├─/snap/speed-test/31 /dev/loop14 squashfs ro,nodev,relatime,errors=continue
├─/snap/mc-server-installer/951 /dev/loop6 squashfs ro,nodev,relatime,errors=continue
├─/snap/core22/1380 /dev/loop17 squashfs ro,nodev,relatime,errors=continue
├─/var/snap/lxd/common/ns tmpfs tmpfs rw,relatime,size=1024k,mode=700,in
│ ├─/var/snap/lxd/common/ns/mntns nsfs[mnt:[4026532368]]
│ │   nsfs rw
│ └─/var/snap/lxd/common/ns/shmounts nsfs[mnt:[4026532371]]
│   nsfs rw
├─/snap/lxd/27049 /dev/loop8 squashfs ro,nodev,relatime,errors=continue
├─/snap/lxd/28460 /dev/loop10 squashfs ro,nodev,relatime,errors=continue
└─/snap/core18/2823 /dev/loop18 squashfs ro,nodev,relatime,errors=continue
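In case it helps: the commonly suggested recovery for this class of dqlite error is to quarantine the orphaned closed segment so raft can start from the snapshot again. A sketch, assuming the snap's default database path from the logs above; back up first, move rather than delete, and double-check against the current LXD documentation before running anything:

```shell
# Stop the daemon and take a full backup of the database first.
sudo snap stop lxd
sudo cp -a /var/snap/lxd/common/lxd/database /var/snap/lxd/common/lxd/database.bak

# Move (do NOT delete) the closed segment named in the error out of global/.
sudo mkdir -p /root/lxd-segment-quarantine
sudo mv /var/snap/lxd/common/lxd/database/global/0000000000182044-0000000000182065 \
    /root/lxd-segment-quarantine/

# Restart and check whether the daemon comes back.
sudo snap start lxd
sudo lxc list
```

If LXD still refuses to start, restore the backup before trying anything else.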