All CPUs busy when no user processes

Ubuntu Version:

Ubuntu 24.04.3 LTS

Desktop Environment (if applicable):

GNOME

Problem Description:

Over the last week or so, my system has become unbearably slow for the first several minutes after boot-up. With no processes of my own running, all 4 CPUs are running at nearly 100%. After a few minutes, responsiveness increases but is still well below normal, and all 4 CPUs are running at 25-50%, as shown in the 1st screenshot, below.

Despite the busy CPUs, the monitor’s resources page (2nd screenshot) shows very little going on.

It does seem strange that the file-systems page shows /usr/snap/firefox/common/host-hunspell mounted on /dev/sda2 (3rd screenshot) and occupying 154 GB, though I don’t see what that would have to do with the slowness, since I’m not running Firefox at the moment.

Relevant System Information:

ASUS Q550LF

Screenshots or Error Messages:


NON-COMMENT LINES IN /etc/fstab:

UUID=3221e8c2-eb9e-4cc7-8d7e-5f522b044bea /               ext4    errors=remount-ro 0       1
UUID=80CE-E957  /boot/efi       vfat    umask=0077      0       1
UUID=141e5e5c-d48d-4cab-9809-1a6d5dd791fc none            swap    sw              0       0
UUID=60f76407-8e2c-4daf-a298-cd3ce8473d1e  /mnt/mhd  ext4  noatime,lazytime,rw,nofail,noauto,x-systemd.automount
/dev/disk/by-id/usb-Flash_USB_Disk_3.0_3727123CE35C6C1189942-0:0-part1 /mnt/usb-Flash_USB_Disk_3.0_3727123CE35C6C1189942-0:0-part1 auto nosuid,nodev,nofail,x-gvfs-show 0 0

What I’ve Tried:

I’m afraid I haven’t a clue where to begin… I’d be grateful for any ideas!


See if you have any processes stuck in a loop:
journalctl -n 100 > n100.txt
and then look at the result.
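If one service dominates that dump, counting messages per unit makes it obvious at a glance. A minimal sketch, assuming the default short journalctl output format (where the fifth field is the unit name):

```shell
# Count journal lines per unit in the saved dump, busiest first.
# Assumes n100.txt was produced by "journalctl -n 100 > n100.txt" as above.
awk '{print $5}' n100.txt | sort | uniq -c | sort -rn | head
```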

Here’s the result, though it does not mean much to me:

(base) ~ 25> cat n100.txt 
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q log-flush-frequency /var/snap/microk8s/8474/args/kube-scheduler
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q logtostderr /var/snap/microk8s/8474/args/kube-scheduler
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q alsologtostderr /var/snap/microk8s/8474/args/kube-scheduler
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q one-output /var/snap/microk8s/8474/args/kube-scheduler
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q stderrthreshold /var/snap/microk8s/8474/args/kube-scheduler
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q log-file-max-size /var/snap/microk8s/8474/args/kube-scheduler
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q skip-log-headers /var/snap/microk8s/8474/args/kube-scheduler
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q add-dir-header /var/snap/microk8s/8474/args/kube-scheduler
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q skip-headers /var/snap/microk8s/8474/args/kube-scheduler
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q log-backtrace-at /var/snap/microk8s/8474/args/kube-scheduler
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q address /var/snap/microk8s/8474/args/kube-scheduler
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q port /var/snap/microk8s/8474/args/kube-scheduler
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + sanitise_args_kube_controller_manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + args=('log-dir' 'log-file' 'log-flush-frequency' 'logtostderr' 'alsologtostderr' 'one-output' 'stderrthreshold' 'log-file-max-size' 'skip-log-headers' 'add-dir-header' 'skip-headers' 'log-backtrace-at' 'address' 'port' 'experimental-cluster-signing-duration')
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + local args
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + remove_args kube-controller-manager log-dir log-file log-flush-frequency logtostderr alsologtostderr one-output stderrthreshold log-file-max-size skip-log-headers add-dir-header skip-headers log-backtrace-at address port experimental-cluster-signing-duration
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + local service_name=kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + shift
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + args=('log-dir' 'log-file' 'log-flush-frequency' 'logtostderr' 'alsologtostderr' 'one-output' 'stderrthreshold' 'log-file-max-size' 'skip-log-headers' 'add-dir-header' 'skip-headers' 'log-backtrace-at' 'address' 'port' 'experimental-cluster-signing-duration')
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + local args
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q log-dir /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q log-file /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q log-flush-frequency /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q logtostderr /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q alsologtostderr /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q one-output /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q stderrthreshold /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q log-file-max-size /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q skip-log-headers /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q add-dir-header /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q skip-headers /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q log-backtrace-at /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q address /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q port /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + for arg in "${args[@]}"
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + /snap/microk8s/8474/bin/grep -q experimental-cluster-signing-duration /var/snap/microk8s/8474/args/kube-controller-manager
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237564]: ++ cat /var/snap/microk8s/8474/args/kube-proxy
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237565]: ++ grep cluster-cidr
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237566]: ++ tr = ' '
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237567]: ++ gawk '{print $2}'
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + pod_cidr=10.1.0.0/16
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + '[' -z 10.1.0.0/16 ']'
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + '[' -z 10.1.0.0/16 ']'
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + iptables -C FORWARD -s 10.1.0.0/16 -m comment --comment 'generated for MicroK8s pods' -j ACCEPT
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237570]: ++ cat /var/snap/microk8s/8474/args/kubelet
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237571]: ++ grep -- --resolv-conf
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237572]: ++ tr = ' '
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237573]: ++ gawk '{print $2}'
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + resolv_conf=/run/systemd/resolve/resolv.conf
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + '[' -z /run/systemd/resolve/resolv.conf ']'
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + is_strict
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237574]: + /snap/microk8s/8474/bin/cat /snap/microk8s/8474/meta/snap.yaml
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237575]: + /snap/microk8s/8474/bin/grep confinement
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237576]: + /snap/microk8s/8474/bin/grep -q strict
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + return 1
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237578]: + ufw version
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-etcd[237513]: {"level":"warn","ts":"2025-09-30T09:35:05.556666+0100","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-etcd[237513]: {"level":"warn","ts":"2025-09-30T09:35:05.556836+0100","caller":"embed/config.go:1320","msg":"it isn't recommended to use default name, please set a value for --name. Note that etcd might run into issue when multiple members have the same default name","name":"default"}
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-etcd[237513]: {"level":"info","ts":"2025-09-30T09:35:05.556870+0100","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["/snap/microk8s/8474/etcd","--data-dir=/var/snap/microk8s/common/var/run/etcd","--advertise-client-urls=https://192.168.68.110:12379","--listen-client-urls=https://0.0.0.0:12379","--client-cert-auth=true","--trusted-ca-file=/var/snap/microk8s/8474/certs/ca.crt","--cert-file=/var/snap/microk8s/8474/certs/server.crt","--key-file=/var/snap/microk8s/8474/certs/server.key"]}
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-etcd[237513]: {"level":"info","ts":"2025-09-30T09:35:05.556987+0100","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/snap/microk8s/common/var/run/etcd","dir-type":"member"}
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-etcd[237513]: {"level":"warn","ts":"2025-09-30T09:35:05.557023+0100","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-etcd[237513]: {"level":"warn","ts":"2025-09-30T09:35:05.557047+0100","caller":"embed/config.go:1320","msg":"it isn't recommended to use default name, please set a value for --name. Note that etcd might run into issue when multiple members have the same default name","name":"default"}
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-etcd[237513]: {"level":"info","ts":"2025-09-30T09:35:05.557075+0100","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["http://localhost:2380"]}
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-etcd[237513]: {"level":"info","ts":"2025-09-30T09:35:05.557765+0100","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://0.0.0.0:12379"]}
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-etcd[237513]: {"level":"info","ts":"2025-09-30T09:35:05.558274+0100","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.4","git-sha":"5400cdc","go-version":"go1.23.11","go-os":"linux","go-arch":"amd64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":true,"name":"default","data-dir":"/var/snap/microk8s/common/var/run/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/snap/microk8s/common/var/run/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["https://192.168.68.110:12379"],"listen-client-urls":["https://0.0.0.0:12379"],"listen-metrics-urls":[],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-etcd[237513]: {"level":"info","ts":"2025-09-30T09:35:05.558987+0100","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/snap/microk8s/common/var/run/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc0000d01f8}"}
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237612]: + ufw status
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237613]: + grep -q 'Status: active'
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + '[' -e /var/snap/microk8s/8474/var/lock/skip.ufw ']'
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + echo 'Found enabled UFW: adding rules to allow in/out traffic on '\''cali+'\'' and '\''vxlan.calico'\'' devices'
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: Found enabled UFW: adding rules to allow in/out traffic on 'cali+' and 'vxlan.calico' devices
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + ufw allow in on vxlan.calico
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237621]: Skipping adding existing rule
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237621]: Skipping adding existing rule (v6)
Sep 30 09:35:05 dude-Q550LF microk8s.daemon-kubelite[237407]: + ufw allow out on vxlan.calico

Your system partition /dev/sda2 is almost full
154.0GB total
138.3GB used
7.8GB available

First of all, examine the directories and identify whether you can safely remove anything.
This terminal command will list the 20 largest files and directories:

sudo du -ahx / | sort -rh | head -20
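A variant of the same idea that can be easier to digest is to summarize one directory level at a time and drill down into whichever entry is largest. A sketch, demonstrated on a throwaway directory so it is safe to run anywhere (point du at /, /var, etc. on the real system):

```shell
# Build a small tree to demonstrate on.
demo=$(mktemp -d)
mkdir -p "$demo/big" "$demo/small"
dd if=/dev/zero of="$demo/big/blob.bin" bs=1024 count=500 2>/dev/null
dd if=/dev/zero of="$demo/small/note.txt" bs=1024 count=5 2>/dev/null

# One entry per immediate subdirectory, largest first;
# -x keeps du on one filesystem, just like the command above.
du -xh --max-depth=1 "$demo" | sort -rh

rm -rf "$demo"
```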

Using

top -d 1

(top with a 1-second refresh interval) is a better way to judge CPU usage during a high-usage event.

Transient processes that could potentially cause such spikes include:

  • updatedb
  • tracker-miner-fs

If either of those shows up at those “peak” times, you could look at how often they are run, or limit their CPU usage by configuring a “cgroup” quota.
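For the cgroup route, a systemd drop-in is the usual mechanism. A sketch of such a drop-in; the unit name plocate-updatedb.service is an assumption (check what the service is actually called on your system with systemctl list-units --all | grep -i updatedb):

```ini
# /etc/systemd/system/plocate-updatedb.service.d/override.conf
# (unit name is a guess; adjust to the service you actually observe)
[Service]
# Cap the service at 25% of one CPU core.
CPUQuota=25%
```

Creating the file with sudo systemctl edit plocate-updatedb.service takes care of the path for you; otherwise run sudo systemctl daemon-reload after saving it.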



As for disk usage levels, I don’t experience CPU spikes the way you do, even though my usage is as follows:

(screenshot: disk usage)


Your screenshot of gnome-system-monitor’s Processes tab is showing only processes of one user. Under gnome-system-monitor’s “hamburger menu” (the triple-horizontal-bar to the left of the Minimize button), was “All processes” selected at the time that screenshot was taken?

If your question is why you’re not seeing anything using a lot of CPU in gnome-system-monitor’s Processes tab while high CPU usage is occurring, make sure the selection is set to “All processes”.

If you’re seeking to find out and evaluate what is using the CPU, starting points for investigation would be:

  • While the high CPU is ongoing, run the Terminal command suggested by @ericmarceau and note which process(es) it shows are using high CPU
  • Page through journalctl -b output to the time when the high CPU usage started (or slightly before) and see whether there are any related messages. (journalctl presents its output in a pager, which you can navigate with the arrow keys, PageUp/PageDown, and Home/End; type q to quit.)
  • If the high CPU usage is caused by a system service (as opposed to a user service or cron job), you can check systemctl list-timers output to see if any of the listed timers correlate with the times of the high CPU usage.

Thank you, everybody, for your replies.

Here’s the latest… When the only process that I’m running is the system monitor, all 4 CPUs are reasonably active:


For the entire minute of history shown, only the monitor was running. Note the occasional peaks near 100% usage on all CPUs, despite the lack of user activity – not even moving the cursor. Am I correct in believing that is an excessive CPU load under such conditions?

Below you see all processes that were running at the end of the minute’s history.


I don’t see anything that would keep the machine so busy.

Your new screenshot shows a bash process, owned by root, that is taking high CPU usage. I’m not sure whether the “24.37%” shown is a percentage of a single CPU core or of your entire CPU; if the latter, then since you have a 4-core CPU it would explain the CPU usage graph.

The first and simplest step to investigate further is to mouse over the entry for that bash process, to see its full command line in the tooltip.

If that doesn’t provide any obvious clues, check whether its parent process is meaningful: in the “hamburger menu”, enable “Show Dependencies”, find this bash process in that view, and see which process it’s listed immediately under. If you have trouble finding it by eye, click the magnifying glass to the left of the hamburger menu to search the process list; you can search by command line or type its exact PID into the search field.
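The same parent lookup can also be done from a terminal with ps. A small sketch; as written it inspects the current shell ($$) so it is safe to run as-is, substitute the PID of the mystery bash process on your system:

```shell
# Find a process's parent PID, then show the parent's PID and command name.
pid=$$                                        # stand-in for the PID you are investigating
ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')     # tr strips ps's padding
ps -o pid=,comm= -p "$ppid"
```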

The monitor is far from the only thing you’re running: there’s a root-owned bash process in your process list using 24% CPU.

There are also all the processes that make the actual desktop run. They’re not idle.

In my humble opinion, if you want to see the idle state of your system, a graphical environment running an animated graph is the absolute worst way to do it.

I use other tools (which will also consume some CPU). My favourite is something like btop or dool in a terminal.

Here’s my ThinkPad T450, with “nothing” running, while logged into the desktop. I used SSH to log in over the network, so some CPU will be used there, but it won’t be using CPU time to draw those graphical charts.

This is btop, which I started after booting up my laptop and then SSH’ing in from another machine. I have zero desktop applications running. The main things to note are the CPU graph at the top and the little graphs next to each process (between “MemB” and “Cpu%”, in the bottom right). Loads of things wake up and use a little CPU.

Notice that after 5 minutes, snapd woke up and looked for updates to snaps that I have installed. The Firmware updater also woke up, and a significant amount of CPU utilization occurred as a result.

Here’s dool running. What’s notable here is that, although the system is tranquil (the CPU idle column is a high 90’s percentage), there are still things waking up now and then to do stuff. This can be seen in the “most expensive CPU process” column on the right-hand side.

Now, Tailscale makes sense because I am doing this over SSH through Tailscale VPN. Udisks, accounts-daemon, upowerd, and others were not my doing.

I think what I’m getting at is that there will always be background activity on a modern OS, unless you remove everything and decide you don’t need a graphical user interface, power management, disk management, logging, network monitoring, and so on.


Consider the load of the system-monitor itself.

Agreed.

Example: I have Ubuntu Desktop running as a guest VM on an otherwise incredibly idle Ubuntu 24.04 server. It has only system-monitor running, plus the GUI and such. Using turbostat on the host server, observe the load, then terminate system-monitor on the guest VM after a while:

doug@s19:~$ sudo turbostat --quiet --Summary --show Busy%,Bzy_MHz,IRQ,PkgWatt,PkgTmp --interval 20
Busy%   Bzy_MHz IRQ     PkgTmp  PkgWatt
4.08    3433    23381   34      10.29  << system-monitor running on guest VM
3.98    3407    21702   34      10.29  << 4%, but 12 CPUs, so up to 48% of one CPU.
4.49    3439    66573   34      10.01
0.36    2008    5096    33      1.78   << system-monitor on VM terminated.
0.19    1389    3359    33      1.46   << Load is not very different than with no VM at all.
0.18    1258    3245    33      1.39
0.21    1641    3187    32      1.46
0.18    1415    3053    33      1.47
0.15    816     2248    32      1.35
0.18    1911    3349    32      1.44
0.21    1637    3165    32      1.41
0.22    1490    3273    32      1.45
0.25    2077    4075    32      1.49