Ubuntu HPC Meeting Notes: 2024/2/14

Greetings comrades! The Ubuntu HPC community team meets every Wednesday at 17:30 UTC via Jitsi to discuss current initiatives within our team. We’ve been doing this since January 2023, but not many people know about these calls. Therefore, to help raise awareness of what the team is doing, we’re going to start publishing our meeting notes after every community call so that anyone who missed it can catch up. Thank you @aaronprisk for creating the meeting notes template and @ilvipero for the suggestion to publish our notes publicly.

Meeting participants:

@nuccitheboss, @billy-olsen, @jamesbeedy, Tucker Beck, @arif-ali, Jeffery Borck

Spack snap

  • The Spack snap is now part of the upstream Spack project.
    • Attended the Spack community call to initiate the developer verification process so that new users of Spack will know that the snap is coming from a trusted source.
    • Going to re-license the Spack snap to Apache-2.0 + MIT at the request of the upstream project. Dual licensing makes the snap compatible with GPL-2.

Slurm snap

  • Jason is now a collaborator on the private Slurm snap on the Snap Store.
    • Going to transfer the snap to Jason while we figure out who it should officially be published under on the store.
    • Going to request a 23.11 track for the latest release of Slurm.
    • Going to request snap aliases for the MUNGE and Slurm command line utilities.
    • The Slurm snap will be transferred from Jason’s GitHub to the Charmed HPC organisation.
    • Call for testing on the Slurm snap is currently open. Please report all issues on the forum thread.


  • Jason to transfer the slurmutils library to the Charmed HPC organisation.
    • slurmutils is a Python library for orchestrating various bits of the Slurm workload manager’s lifecycle. It currently includes editors for modifying slurm.conf, include, and slurmdbd.conf files. https://pypi.org/project/slurmutils/
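To give a feel for the kind of editing slurmutils enables, here is a minimal, hypothetical sketch of round-tripping simple Key=Value lines in a slurm.conf-style file. This is not the slurmutils API (see the PyPI page above for the real editors); the function names here are illustrative only.

```python
# Hypothetical sketch of slurm.conf-style editing. NOT the slurmutils API;
# see https://pypi.org/project/slurmutils/ for the real editors.

def parse_slurm_conf(text: str) -> dict:
    """Parse simple Key=Value lines, skipping blanks and comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore blank lines and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

def render_slurm_conf(config: dict) -> str:
    """Render the mapping back to Key=Value lines."""
    return "\n".join(f"{key}={value}" for key, value in config.items())

conf = parse_slurm_conf("ClusterName=charmed-hpc\n# comment\nSlurmctldPort=6817")
conf["SlurmctldPort"] = "6818"  # tweak one setting, then write it back out
print(render_slurm_conf(conf))
# prints:
# ClusterName=charmed-hpc
# SlurmctldPort=6818
```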


  • Brief discussion about potentially transferring the Pluto demo program to the Charmed HPC organisation on GitHub. Need further discussion about what the envisioned future of Pluto is. Is it a good program for quickly spinning up demo HPC clusters, or do we want it to be a larger program?

Including Apptainer in slurmd-operator

  • Discussed adding Apptainer, an HPC container runtime, to the slurmd-operator of Charmed HPC.
    • Investigating creating a subordinate operator that adds Apptainer to our compute nodes. A couple of approaches were proposed, such as adding a routine to our installation hook or using cloud-init to set model configuration.

Set configuration per node on slurmd-operator

  • Discussed adding a set-node-config action to slurmd-operator so that we can set configuration per unit rather than per application.
    • Something like juju run slurmd/0 set-node-config="weight=1".
    • James to look more into implementation.
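As a rough illustration of what such an action might do, here is a hypothetical sketch of parsing an action string like "weight=1" into per-node attributes that a charm could then write into the unit's slurm.conf node entry. The action name, parameter names, and handler shape are all assumptions; the actual implementation is still to be worked out.

```python
# Hypothetical sketch: turn a set-node-config action string such as
# "weight=1 gres=gpu:1" into a dict of node attributes. The parameter
# names and the action itself are illustrative assumptions, not the
# real slurmd-operator implementation.

def parse_node_config(raw: str) -> dict:
    """Split space-separated key=value pairs into node attributes."""
    attrs = {}
    for pair in raw.split():
        key, _, value = pair.partition("=")
        if not value:
            raise ValueError(f"expected key=value, got {pair!r}")
        attrs[key.lower()] = value
    return attrs

print(parse_node_config("weight=1 gres=gpu:1"))
# prints: {'weight': '1', 'gres': 'gpu:1'}
```

A per-unit action like this would let operators tune individual compute nodes (e.g. scheduling weight) without changing configuration for the whole slurmd application.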

Discussion on publishing meeting notes

  • Brief discussion on publishing meeting notes going forward. The Not-so-Ancient Elders attending the call were in agreement that we should publish our notes, so here we are :wink:

Getting Involved

The next Ubuntu HPC community call is next Wednesday, February 21st, at 17:30 UTC over Jitsi. Want to get involved, or just generally interested in our community? Join our Matrix server!