So… with all this OpenClaw stuff, I was wondering: what’s the FOSS status for something like it that runs locally? Can I get my own locally run agent that I can ask to perform simple tasks (go and find this, download that, summarize an article), things like that? I’m just kinda curious about all of this.
Thanks!


https://wiki.archlinux.org/title/Ollama
Ollama is an application that lets you run large language models locally, fully offline.
Ollama is a VC-backed copy/paste of llama.cpp.
They have a history of using llama.cpp’s code (and bugs) without supporting the project or giving credit. llama.cpp is easy to use, more performant, and truly open source.
Ollama is in the Arch Linux package repository, whereas llama.cpp is in the AUR. Both options are available.
https://wiki.archlinux.org/title/Llama.cpp
Also, it looks like Ollama is mostly written in Go and C, versus llama.cpp’s C and C++.
VC backed or not, both packages are under the MIT license.
Thanks! I understand how to run these models as an LLM you can chat with, using tools like Ollama or GPT4All. My question is: how do I go from that to having it actually do things for me, like handling files? As it stands, if I run any of these locally, it can answer questions offline, and that’s about it. How about these “skills”, where it can go fetch files, find a specific URL, or summarize what a YouTube video is about based on what’s being said in it?
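The piece you’re missing is usually called “tool calling”: you describe some tools to the model in the prompt, the model emits structured JSON naming a tool and its arguments, and a small wrapper program actually executes the tool and feeds the result back. A minimal sketch of the dispatch side (the tool name and the stubbed model reply here are made up; a real setup would call a local server like Ollama or llama-server instead of `fake_model`):

```python
import json

# A hypothetical tool the agent may call. In a real agent this would
# do an HTTP GET, read a file, etc.; stubbed here for illustration.
def fetch_url(url: str) -> str:
    return f"<contents of {url}>"

TOOLS = {"fetch_url": fetch_url}

def fake_model(prompt: str) -> str:
    # Stand-in for a local LLM call. A real model, prompted with tool
    # descriptions, would emit JSON like this when it wants a tool.
    return json.dumps({"tool": "fetch_url",
                       "arguments": {"url": "https://example.com"}})

def run_agent(user_request: str) -> str:
    reply = fake_model(user_request)
    try:
        call = json.loads(reply)
    except json.JSONDecodeError:
        return reply  # plain chat answer, no tool requested
    tool = TOOLS[call["tool"]]
    result = tool(**call["arguments"])
    # A real agent would feed `result` back to the model so it can
    # write a final answer; we just return the raw tool output.
    return result

print(run_agent("Fetch https://example.com for me"))
```

That loop (model → JSON tool call → execute → feed back) is basically what agent frameworks like Open Interpreter or opencode automate for you.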
Sorry, wish I was able to share more. I honestly JUST started diving into this stuff after your post. Learning a lot from the various other comments though. Hopefully some of the other commenters can help you get the answers you’re looking for.
There’s also a community for it here on the fediverse, to those interested: !Ollama@lemmy.world
Also, from my tests, it works decently enough even in Android’s Termux, though you seem to need a fairly powerful phone.
I’ve had better luck with llama.cpp for opencode. I’m guessing it does formatting better for tool use.
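For anyone curious what that “formatting for tool use” looks like: both llama.cpp’s llama-server and Ollama expose an OpenAI-compatible chat endpoint that takes a `tools` array in the request body. A sketch of such a payload (the tool name and schema here are made up; the endpoint path is llama-server’s default, which may differ in your setup):

```python
import json

# OpenAI-style chat request with a declared tool. The model can respond
# with a tool_call naming "fetch_url" instead of plain text.
payload = {
    "model": "local-model",
    "messages": [
        {"role": "user", "content": "What's on https://example.com?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "fetch_url",
            "description": "Download a web page and return its text",
            "parameters": {
                "type": "object",
                "properties": {"url": {"type": "string"}},
                "required": ["url"],
            },
        },
    }],
}

# You'd POST this JSON to something like
# http://localhost:8080/v1/chat/completions (llama-server's default);
# here we just print it to show the shape.
print(json.dumps(payload, indent=2))
```

How well a given model fills in that tool-call JSON correctly is exactly where the quality differences between runners and models show up.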