After a recent update, isolated host CPU cores are no longer used by the LXD VM they have been pinned to.

I make sure the host doesn't schedule any work on specific cores by updating GRUB and passing the following kernel parameters:

sudo vi /etc/default/grub

Then update the following line:

GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory systemd.unified_cgroup_hierarchy=false isolcpus=3,8"

Update GRUB and reboot to load the new kernel parameters that isolate those specific cores:

sudo update-grub
sudo reboot
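Once the host is back up, the applied isolation can be confirmed by parsing the kernel command line (the kernel also exposes the isolated set at /sys/devices/system/cpu/isolated). A minimal sketch, shown here against a sample string; on a real host you would substitute cmdline=$(cat /proc/cmdline):

```shell
# Extract the isolcpus list from a kernel command line.
# Sample string shown; on a real host use: cmdline=$(cat /proc/cmdline)
cmdline="BOOT_IMAGE=/vmlinuz-5.15.0-119-generic root=/dev/sda1 isolcpus=3,8 quiet"
# Split the command line into one word per line, then keep the isolcpus value
isolated=$(printf '%s\n' $cmdline | sed -n 's/^isolcpus=//p')
echo "isolated cores: $isolated"
```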

After rebooting, I verify that the host doesn't schedule any jobs on those cores by looking at the per-core workload using:

htop
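As a non-interactive alternative to htop, ps can print the CPU each thread last ran on, so an empty result for the isolated cores (3 and 8 from the GRUB line above) suggests the host scheduler is leaving them alone, barring the per-CPU kernel threads that are always bound to each core:

```shell
# List any threads whose last-run CPU (psr) is one of the isolated cores;
# on a correctly isolated host this prints little beyond pinned kernel threads
ps -eLo psr=,comm= | awk '$1 == 3 || $1 == 8'
```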

Starting an LXD VM pinned to those cores, the workload no longer seems to be scheduled on them, as it was before the latest LXD release/kernel update.

lxc launch ubuntu:24.04 test --vm -c limits.cpu=3,8
lxc shell test
root@test:~# for i in $(seq $(getconf _NPROCESSORS_ONLN)); do yes > /dev/null & done
# Output the following
#[1] 1232
#[2] 1233
#[3] 1234
#[4] 1235
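For reference, LXD interprets limits.cpu in two ways: a bare number means "give the instance that many vCPUs and let LXD choose their placement", while a comma-separated list or range pins the vCPUs to exactly those host cores. A small sketch of that distinction (the classify helper is hypothetical, for illustration only):

```shell
# Classify a limits.cpu value the way LXD interprets it (sketch):
#   "2"    -> 2 vCPUs, placement chosen by LXD
#   "3,8"  -> vCPUs pinned to host cores 3 and 8
#   "0-3"  -> vCPUs pinned to host cores 0 through 3
classify() {
  case "$1" in
    *[,-]*) echo "pinned to host cores: $1" ;;
    *)      echo "$1 auto-placed vCPUs" ;;
  esac
}
classify "3,8"   # -> pinned to host cores: 3,8
classify "2"     # -> 2 auto-placed vCPUs
```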

htop inside the VM shows both cores at 100%.
htop on the host shows cores other than 3 and 8 spiking to 100%. It looks like LXD is now scheduling the VM's workload on the cores available to the host instead of the isolated ones reserved exclusively for the VM.
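The per-core view htop shows ultimately reflects the "processor" field, field 39 of each task's /proc/&lt;pid&gt;/stat (see proc(5)); reading it for the QEMU vCPU threads under /proc/&lt;qemu-pid&gt;/task/ is another way to see which host cores they land on. Demonstrated here on the current process, since the QEMU PID depends on your setup:

```shell
# Field 39 of /proc/<pid>/stat is the CPU the task last ran on (proc(5)).
# Shown for this process itself; for a VM, apply it to the QEMU vCPU thread IDs.
awk '{ print "last ran on CPU " $39 }' /proc/self/stat
```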

This was working fine earlier in 2024. Any ideas how I can make LXD schedule the VM workload on the isolated cores?

Can you show the output of snap list?

What LXD version were you running previously?

@amikhalitsyn is going to take a look at this next week.

I experienced it on 6.1-78a3d8f.
I then rolled back to the latest 5.21 release (5.21.2-2f4ba6b), and it seems to show the same issue. In the meantime, I removed the isolcpus parameter from kernel 5.15.0-119-generic running Ubuntu 22.04.

Great, thanks @tomp and @amikhalitsyn !


Hi,

See also VM CPU auto pinning causes slowdowns and stealtime · Issue #14133 · canonical/lxd · GitHub, which may be the cause of the issue you're experiencing.
