Out of memory, identify LXD container


I have several LXD containers including ‘docker1’ and ‘voip2’. I am seeing an out-of-memory error like so:

[Thu Apr 18 05:36:08 2024] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=lxc.payload.voip2,mems_allowed=0,global_oom,task_memcg=/lxc.payload.docker1/system.slice/docker-816c3cb5297917b685bfc2fff9c4ac60fadcd6be3a108703ee4657cf289d8291.scope,task=python3,pid=1312687,uid=1000000
[Thu Apr 18 05:36:08 2024] Out of memory: Killed process 1312687 (python3) total-vm:68766072kB, anon-rss:35771300kB, file-rss:256kB, shmem-rss:0kB, UID:1000000 pgtables:130124kB oom_score_adj:0
[Thu Apr 18 05:36:22 2024] oom_reaper: reaped process 1312687 (python3), now anon-rss:104kB, file-rss:256kB, shmem-rss:0kB

# free -m
               total        used        free      shared  buff/cache   available
Mem:           47981       14156        7218         195       27376       33825
Swap:          32767       10498       22269

The oom-kill line references both the voip2 and docker1 LXD instances. However, the subsequent line shows python3 was killed. Is it possible to find out which LXD instance this python3 process was part of?
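As an aside, the `task_memcg=` field in the oom-kill line names the memory cgroup of the *killed* task, while `cpuset=` names the cgroup of the task whose allocation triggered the OOM. A minimal sketch of extracting the instance name, assuming the `lxc.payload.<name>` cgroup naming that LXD uses:

```python
import re

# The dmesg oom-kill line from the post. task_memcg identifies the
# memory cgroup of the killed process (python3 here), nested under
# docker1's payload cgroup and a Docker scope inside it.
line = ("oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),"
        "cpuset=lxc.payload.voip2,mems_allowed=0,global_oom,"
        "task_memcg=/lxc.payload.docker1/system.slice/"
        "docker-816c3cb5297917b685bfc2fff9c4ac60fadcd6be3a108703ee4657cf289d8291.scope,"
        "task=python3,pid=1312687,uid=1000000")

# Pull the LXD instance name out of the victim's memcg path.
m = re.search(r"task_memcg=/lxc\.payload\.([^/,]+)", line)
if m:
    print("killed task's LXD instance:", m.group(1))  # docker1
```

So by this reading the killed python3 was running inside docker1 (in a Docker container within it), while the allocation that tripped the global OOM came from voip2's cpuset.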


What does snap list show?

Did this just start happening?

If you’re running LXD 5.0/stable you may be seeing changes from the LXD 5.0.3 LTS interim snap release (5.0.3-d921d2e).


# snap list
Name    Version         Rev    Tracking       Publisher   Notes
core    16-2.61.2       16928  latest/stable  canonical✓  core
core18  20231027        2812   latest/stable  canonical✓  base
core20  20240227        2264   latest/stable  canonical✓  base
core22  20240111        1122   latest/stable  canonical✓  base
lxd     5.21.1-10f4115  28322  latest/stable  canonical✓  -

I have not set any memory limit on the containers. Everything should be default.


That total-vm is roughly 64 GiB, which is pushing the envelope a little considering the host has ~48 GiB of RAM plus some swap.
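A quick sanity check of the figures (values taken from the dmesg line and `free -m` output above; kB to GiB is a division by 1024²):

```python
# Numbers from the OOM report and free -m in this thread.
total_vm_kb = 68766072   # total-vm of the killed python3 process
anon_rss_kb = 35771300   # anon-rss actually resident at kill time
host_ram_mib = 47981     # "Mem: total" column from free -m

print(f"total-vm : {total_vm_kb / 1024**2:.1f} GiB")  # ~65.6 GiB
print(f"anon-rss : {anon_rss_kb / 1024**2:.1f} GiB")  # ~34.1 GiB
print(f"host RAM : {host_ram_mib / 1024:.1f} GiB")    # ~46.9 GiB
```

So the one python3 process had mapped more virtual memory than the host physically has, and was already holding ~34 GiB resident when the kernel killed it.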
