qkiel
April 22, 2024, 3:48pm
1
I’ve written four AI-related tutorials that you might be interested in.
Quick Notes:
The tutorials are written for Incus, but you can simply replace `incus` commands with `lxc` (see the sketch after these notes).
I’m using an AMD 5600G APU, but most of what you’ll see in the tutorials also applies to discrete GPUs. Whenever something is APU-specific, I’ve marked it as such.
Even though I use ROCm in my containers, Nvidia CUDA users should also find these guides helpful.
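For example, here is a minimal sketch of what that command swap looks like when launching a container and passing through the host GPU (the container and image names below are just placeholders):

```bash
# Incus: launch a container and give it access to the host GPU
incus launch images:ubuntu/22.04 rocm-box
incus config device add rocm-box gpu gpu

# The same steps with the lxc client
lxc launch ubuntu:22.04 rocm-box
lxc config device add rocm-box gpu gpu
```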
Tutorials:
AI tutorial: ROCm and PyTorch on AMD APU or GPU
AI tutorial: llama.cpp and Ollama servers + plugins for VS Code / VS Codium and IntelliJ
AI tutorial: Stable Diffusion SDXL with Fooocus
AI tutorial: LLMs in LM Studio
6 Likes
suoko
May 9, 2024, 7:24pm
2
There’s also another AppImage-packaged AI tool here:
AnythingLLM is the ultimate enterprise-ready business intelligence tool made for your organization. With unlimited control for your LLM, multi-user support, internal and external facing tooling, and 100% privacy-focused.
And you might like this about Coqui:
https://gist.github.com/suoko/1adb865bb0635bccd2153156c17cda28
By the way, did you try to run OpenDevin with Ollama?
I’m struggling to do it, following this guide:
# Local LLM with Ollama
Ensure that you have the Ollama server up and running.
For detailed startup instructions, refer to the documentation [here](https://github.com/ollama/ollama).
This guide assumes you've started Ollama with `ollama serve`. If you're running Ollama differently (e.g. inside Docker), the instructions might need to be modified. Please note that if you're running WSL, the default Ollama configuration blocks requests from Docker containers. See [here](#4-configuring-the-ollama-service-wsl).
## Pull Models
Ollama model names can be found [here](https://ollama.com/library). For a small example, you can use
the `codellama:7b` model. Bigger models will generally perform better.
```bash
ollama pull codellama:7b
```
You can check which models you have downloaded like this:
```bash
~$ ollama list
```
(This file has been truncated.)
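The WSL note in that guide is about Ollama only listening on 127.0.0.1 by default. Assuming that's what blocks requests from the Docker container (I haven't confirmed it on my setup), one workaround is to start the server bound to all interfaces:

```bash
# Assumption: expose Ollama on all interfaces so Docker containers can reach it
OLLAMA_HOST=0.0.0.0 ollama serve
```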
I ended up with these commands:
export WORKSPACE_BASE=$(pwd)/workspace
sudo chmod 777 /var/run/docker.sock
docker run --rm -e SANDBOX_USER_ID=$(id -u) --add-host host.docker.internal=host-gateway -e LLM_API_KEY="ollama" -e LLM_BASE_URL="http://host.docker.internal:11434" -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE -v $WORKSPACE_BASE:/opt/workspace_base -v /var/run/docker.sock:/var/run/docker.sock -p 3000:3000 ghcr.io/opendevin/opendevin:0.5
But I still have pexpect issues.
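One sanity check worth doing (assuming the default Ollama port 11434 and using a throwaway curlimages/curl container, both assumptions on my part) is whether a container can reach the Ollama server the same way OpenDevin is told to:

```bash
# From the host: confirm Ollama answers on its default port
curl http://localhost:11434/api/tags

# From a throwaway container with the same host-gateway mapping
docker run --rm --add-host host.docker.internal=host-gateway \
  curlimages/curl http://host.docker.internal:11434/api/tags
```

If the second call fails while the first succeeds, the problem is container-to-host networking rather than OpenDevin itself.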
qkiel
May 9, 2024, 8:56pm
3
Thanks for the tip about AnythingLLM and Coqui. As for OpenDevin, I haven’t tried it yet, but it’s on my TODO list.
suoko
May 10, 2024, 5:10am
4
Good. Please spread the word, we need more testers for OpenDevin.
A similar project is here:
Devika is an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective...
1 Like