Gathering Baseline And Workload Facts Using Bash

Lubuntu 24.04 LTS

How might I be able to gather baseline/workload facts using Bash?
I need to gather facts for…

I’ve used Monitorix before, but I’ve never been able to configure it per my personal needs.
I need help in getting that done.

Yes, I think this is doable using Bash :+1:
I don’t have long hands-on experience using Bash for monitoring, but based on what’s available in Lubuntu, what you’re asking for should be possible with standard system tools.

Here’s a simple starting script that gathers basic CPU, per-core usage, RAM, disk, and network stats. You can customize it as you like:

#!/bin/bash

echo "=== CPU (overall load) ==="
uptime

echo "=== CPU per core ==="
# mpstat comes from the sysstat package (sudo apt install sysstat)
mpstat -P ALL 1 1

echo "=== RAM ==="
free -h

echo "=== Storage ==="
df -h

echo "=== Network ==="
ip -s link

You can modify this script freely or extend it.
Also, if you have a specific way you want the data stored (plain text, CSV, periodic logging, etc.), let me know and we can try to implement it that way.
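For instance, a CSV variant could look like the sketch below. The file path and the choice of fields (timestamp, 1-minute load average, used/total RAM) are just examples to adapt:

```shell
#!/bin/bash
# Sketch: append one CSV row per run with a timestamp, the 1-minute
# load average, and used/total RAM in MiB. Path and fields are examples.
CSVFILE="$HOME/Desktop/system-logs/baseline.csv"
mkdir -p "$(dirname "$CSVFILE")"

# Write the header only once, when the file is first created
if [ ! -f "$CSVFILE" ]; then
  echo "timestamp,load1,mem_used_mib,mem_total_mib" > "$CSVFILE"
fi

load1=$(awk '{print $1}' /proc/loadavg)
mem=$(free -m | awk '/^Mem:/ {print $3 "," $2}')   # used,total in MiB

echo "$(date +%FT%T),$load1,$mem" >> "$CSVFILE"
```

A spreadsheet or `gnuplot` can then read the file directly, which makes trends easier to spot than in free-form text.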


Basic needs are plain text with periodic logging written to a directory on the Desktop.
Please guide me through setting that up; then we can expand it into logging historical data, alerts, etc. on an hourly, daily, and weekly basis.
I want to start simple and then expand.

Here’s a very basic example to get you started:

#!/bin/bash

LOGDIR="$HOME/Desktop/system-logs"
LOGFILE="$LOGDIR/baseline.log"

mkdir -p "$LOGDIR"

{
  echo "===== $(date) ====="
  echo "CPU load:"
  uptime

  echo
  echo "RAM:"
  free -h

  echo
  echo "Disk:"
  df -h

  echo
  echo "Network:"
  ip -s link
  echo
} >> "$LOGFILE"

Make it executable:

chmod +x baseline.sh

Run it manually first to confirm it works:

./baseline.sh

Periodic logging (simple & reliable)

Use cron to run it automatically, for example every 10 minutes:

crontab -e

Add:

*/10 * * * * /home/youruser/baseline.sh

(Replace youruser with your actual username, and point the path at wherever you saved the script.)

This will continuously append data to a plain-text log on the Desktop.
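To keep a single file from growing forever, one small expansion is to put the date in the log name, giving one file per day. A sketch (the 14-day retention is an example choice):

```shell
#!/bin/bash
# Sketch: one log file per day (baseline-YYYY-MM-DD.log), plus pruning
# of files older than 14 days. Retention period is an example value.
LOGDIR="$HOME/Desktop/system-logs"
mkdir -p "$LOGDIR"
LOGFILE="$LOGDIR/baseline-$(date +%F).log"

{
  echo "===== $(date) ====="
  uptime
  free -h
} >> "$LOGFILE"

# Delete day-files last modified more than 14 days ago
find "$LOGDIR" -name 'baseline-*.log' -mtime +14 -delete
```

This also makes the "daily/weekly" history you mentioned trivial: each day is its own file.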


Once this is working, you can easily expand it.
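Alerts can also start simple. Here is a sketch that flags a high 1-minute load average; the 2.0 threshold is an arbitrary example you should tune to your core count:

```shell
#!/bin/bash
# Sketch: append a warning line when the 1-minute load average exceeds
# a limit. LIMIT=2.0 is an example value, not a recommendation.
LIMIT="2.0"
ALERTFILE="$HOME/Desktop/system-logs/alerts.log"
mkdir -p "$(dirname "$ALERTFILE")"

load1=$(awk '{print $1}' /proc/loadavg)

# awk handles the floating-point comparison that bash cannot do natively
if awk -v l="$load1" -v lim="$LIMIT" 'BEGIN {exit !(l > lim)}'; then
  echo "$(date +%FT%T) ALERT: load $load1 exceeds $LIMIT" >> "$ALERTFILE"
fi
```

Run it from the same cron job (or a second one) and check alerts.log when the machine feels slow.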


Using Bash for this might not be such a good idea, because you’ll be starting lots of processes to collect the data you want, which itself produces a noticeable load. Using sar, sadc, and sadf from the sysstat package might be better.

sysstat…hmmm

Can we please see a simple info-gathering script and its results?

From sysstat manpage…

sysstat already runs in the background (sadc, via the sa1/sa2 scripts) and continuously collects system statistics with very low overhead. Note that on Ubuntu/Debian, collection may need to be switched on first by setting ENABLED="true" in /etc/default/sysstat.
What we do with Bash is not collecting new data, but simply reading what sysstat has already stored and generating a report from it.

Below is a simple Bash script that gathers a plain-text report from sysstat’s stored data and saves it to the Desktop.

#!/bin/bash

LOGDIR="$HOME/Desktop/sysstat-reports"
DAY=$(date +%d)
SRC="/var/log/sysstat/sa$DAY"
OUT="$LOGDIR/report-$(date +%F).txt"

mkdir -p "$LOGDIR"

# Stop early if today's data file doesn't exist yet
if [ ! -f "$SRC" ]; then
  echo "No sysstat data at $SRC (is collection enabled?)" >&2
  exit 1
fi

{
  echo "===== System Report: $(date) ====="
  echo

  echo "== CPU (overall) =="
  sar -u -f "$SRC"
  echo

  echo "== CPU per core =="
  sar -P ALL -f "$SRC"
  echo

  echo "== Memory =="
  sar -r -f "$SRC"
  echo

  echo "== Disk I/O =="
  sar -d -f "$SRC"
  echo

  echo "== Network =="
  sar -n DEV -f "$SRC"
  echo

} > "$OUT"

This script:

  • :heavy_check_mark: uses sysstat’s background-collected data
  • :heavy_check_mark: does not introduce extra system load
  • :heavy_check_mark: produces a plain-text report
  • :heavy_check_mark: can be run manually or via cron (hourly/daily/weekly)

So the workflow is:

sysstat collects → Bash formats & stores reports

Once this basic setup is working, it’s easy to expand it with historical reports, CSV export, or alerts.
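As one example of that expansion, the CSV export can lean on sadf’s -d output (semicolon-separated, one line per sample). This sketch assumes sysstat is installed and has collected data today; the output path is an example:

```shell
#!/bin/bash
# Sketch: dump today's CPU history in sadf's semicolon-separated
# "database" format (-d), which imports cleanly into spreadsheets.
SRC="/var/log/sysstat/sa$(date +%d)"
OUT="$HOME/Desktop/sysstat-reports/cpu-$(date +%F).csv"
mkdir -p "$(dirname "$OUT")"

if [ -f "$SRC" ]; then
  sadf -d "$SRC" -- -u > "$OUT"
  echo "Wrote $OUT"
else
  echo "No sysstat data at $SRC (is collection enabled?)" >&2
fi
```

Swapping `-- -u` for `-- -r` or `-- -n DEV` exports the memory or network history the same way.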
