this post was submitted on 10 Jan 2026

LocalLLaMA


The ollama 0.14-rc2 release is available today and introduces a new experimental mode: ollama run --experimental starts an agent loop so that LLMs can use tools like Bash and web search on your system. Letting ollama/LLMs run Bash on your local machine is opt-in, and there are at least some safeguards in place.

Sure to rub some people the wrong way, the new experimental agent loop in ollama 0.14 lets LLMs drive tools like Bash. LLMs can issue Bash commands, but to keep them from going too wild there is an interactive approval interface. There is also an auto-allowlist for "safe" commands and a deny list that blocks potentially dangerous commands like "sudo" and "rm -rf". Among the auto-allowed commands are npm run, pwd, git status, and others. The agent loop also shows a warning box for commands that operate on paths outside the project directory.
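The gating behavior described above can be sketched roughly like this. This is a hypothetical illustration of the allow/deny/ask pattern, not ollama's actual implementation; the specific commands are the examples mentioned in the post:

```shell
# Hypothetical sketch of the approval flow -- not ollama's real code.
# Commands fall into three buckets: auto-denied, auto-allowed, or
# deferred to the interactive approval prompt.
is_allowed() {
  case "$1" in
    sudo*|"rm -rf"*)             echo deny  ;;  # deny list: dangerous commands
    pwd|"git status"|"npm run"*) echo allow ;;  # auto-allowlist: "safe" commands
    *)                           echo ask   ;;  # everything else: ask the user
  esac
}

is_allowed "pwd"               # -> allow
is_allowed "sudo reboot"       # -> deny
is_allowed "curl example.com"  # -> ask
```

A real implementation would also need the path check the post mentions (warning when a command touches files outside the project directory), which prefix matching alone cannot provide.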
