AI tutorials on running ROCm, PyTorch, llama.cpp, Ollama, Stable Diffusion and LM Studio in Incus / LXD containers

I’ve written four AI-related tutorials that you might be interested in.

Quick Notes:

  • The tutorials are written for Incus, but you can simply replace incus commands with lxc (see the example after these notes).
  • I’m using an AMD 5600G APU, but most of what you’ll see in the tutorials also applies to discrete GPUs. Whenever something is APU-specific, I have marked it as such.
  • Even though I use ROCm in my containers, Nvidia CUDA users should also find these guides helpful.
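For example, creating a container and passing the host GPU through is a one-for-one swap between the two CLIs (a minimal sketch; the container name ai and the images are placeholders):

 # Incus: create a container and pass the host GPU through
 incus launch images:ubuntu/22.04 ai
 incus config device add ai gpu gpu
 # The LXD equivalent, with lxc substituted for incus
 lxc launch ubuntu:22.04 ai
 lxc config device add ai gpu gpu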

Tutorials:

  1. AI tutorial: ROCm and PyTorch on AMD APU or GPU
  2. AI tutorial: llama.cpp and Ollama servers + plugins for VS Code / VS Codium and IntelliJ
  3. AI tutorial: Stable Diffusion SDXL with Fooocus
  4. AI tutorial: LLMs in LM Studio
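Once a container from tutorial 1 is up, a quick way to confirm PyTorch actually sees the GPU (a minimal sketch, assuming the ROCm build of PyTorch from the tutorial is installed):

 # ROCm builds of PyTorch reuse the torch.cuda API, so this also reports AMD GPUs
 python3 -c "import torch; print(torch.cuda.is_available())"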

There’s also another AppImage-packaged AI tool here:

And you might like this about Coqui:
https://gist.github.com/suoko/1adb865bb0635bccd2153156c17cda28

By the way, did you try to run OpenDevin with Ollama?
I’m struggling to do it, following this guide:

I ended up with this command:

 export WORKSPACE_BASE=$(pwd)/workspace
 sudo chmod 777 /var/run/docker.sock
 docker run --rm \
     -e SANDBOX_USER_ID=$(id -u) \
     --add-host host.docker.internal=host-gateway \
     -e LLM_API_KEY="ollama" \
     -e LLM_BASE_URL="http://host.docker.internal:11434" \
     -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
     -v $WORKSPACE_BASE:/opt/workspace_base \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -p 3000:3000 \
     ghcr.io/opendevin/opendevin:0.5
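One thing worth checking first is whether containers can reach Ollama at all (a quick sanity check, assuming Ollama is on its default port 11434; curlimages/curl is just a throwaway image with curl as its entrypoint):

 # From the host: list the models Ollama is serving
 curl http://localhost:11434/api/tags
 # The same endpoint from inside a container, via the host gateway
 docker run --rm --add-host host.docker.internal=host-gateway \
     curlimages/curl http://host.docker.internal:11434/api/tags

Note that Ollama binds to 127.0.0.1 by default, so the in-container check only succeeds if Ollama was started with OLLAMA_HOST=0.0.0.0.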

But I still have pexpect issues.

Thanks for the tip about EverythingLLM and Coqui. As for OpenDevin, I haven’t tried it yet, but it’s on my TODO list.

Good. Please spread the word; we need more testers for OpenDevin.

A similar project is here:
