Ollama + Claude Code making coding free.
-
Wow!
More to play with! I use Ollama desktop with their cloud qwen3-coder 480b. I'm hoping that connecting Claude Code to Ollama can make use of that. Will give it a shot.
-
I had a go with that recently on my local setup with a GPU. Ollama works well with OpenWebUI for the models I can run, but I never managed to get anything out of Claude Code. If anyone succeeds, I would be very happy to hear the steps. Using qwen3-coder, Claude Code was endlessly spinning the GPU even on small tasks, eventually giving up without ever producing anything.
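One thing I still need to rule out: whether the model actually fits in VRAM once Claude Code's large prompts inflate the KV cache. If part of the model spills to CPU, Ollama's throughput collapses, which would explain the endless spinning. Quick checks with the standard ollama CLI (the context-length variable is the one recent Ollama docs describe; adjust the value to your card):

```
# While a request is running: anything other than "100% GPU" in the
# PROCESSOR column means the model is partially offloaded to CPU.
ollama ps

# Ollama's default context window is small; agents like Claude Code need a
# much bigger one, but a bigger context also needs more VRAM.
OLLAMA_CONTEXT_LENGTH=32768 ollama serve
```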

-
I got an Intel Arc Battlemage B580 to test local LLMs for real. Nvidia is quite trivial to play with, since one can rent a VPS with an Nvidia card where GPU support is mostly set up and ready to use (it can get very expensive, though!)
So it may well be that this card is woefully underpowered for that use case, though other similarly sized general-purpose models work just fine with Ollama. But maybe I'm missing something obvious with Claude Code here.
-
@nebulon said in Ollama + Claude Code making coding free.:
VPS with some nvidia card
I use Koyeb for this kind of thing.
It's a good proposition, but I'm not needing it much, so I may discontinue. Others might find it helpful.
Generally I'm happy with my TRAE and desktop Ollama client using one of their cloud models (to avoid pulling it locally).
But everyone raves about Claude Code, so will check it out with Ollama.
My quick look suggested you still need a subscription for Claude Code even if you hook up an Ollama model. Did I misunderstand this?
-
@timconsidine said in Ollama + Claude Code making coding free.:
My quick look suggested you still need a subscription for Claude Code even if you hook up an Ollama model. Did I misunderstand this?
Yes, you misunderstood; only the Anthropic API needs a sub.
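To be clear, the CLI itself installs free; the subscription or API key is only needed if you use Anthropic's hosted models:

```
# The Claude Code CLI is a free npm package; no Anthropic account is
# needed just to install it.
npm install -g @anthropic-ai/claude-code
```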
-
@robi cool!
Reluctant to switch dev setup mid-project, but will try it out on the next one.
Hoping to get Appflowy and ZeroNet out the door soon.
-
@timconsidine don't switch. Add another instance to test.
I would like to see AirLLM performance with medium-size models for Ralph Wiggum usage.
-
Got Claude Code working with Ollama
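For anyone else trying this, here is a minimal sketch of the usual recipe from community guides. It assumes a recent Ollama build that exposes an Anthropic-compatible API on its default port (11434), and that you have already pulled the model. Claude Code reads these standard environment variables:

```
# Point Claude Code at the local Ollama server instead of Anthropic's API.
# Assumes a recent Ollama with an Anthropic-compatible endpoint on port 11434.
export ANTHROPIC_BASE_URL=http://localhost:11434
export ANTHROPIC_AUTH_TOKEN=ollama   # any non-empty value; a local server ignores it
export ANTHROPIC_MODEL=qwen3-coder   # whatever model you have pulled
claude
```

If your Ollama build does not speak the Anthropic API, the usual fallback is a translating proxy (e.g. LiteLLM) in front of Ollama's OpenAI-compatible endpoint, with ANTHROPIC_BASE_URL pointed at the proxy instead.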

Thank you for the information @robi
-
You are welcome @timconsidine
Last night I experimented with having Grok manage an agent I installed in a LAMP container.
The setup was a custom PHP script that could run a custom Python script, which in turn could invoke an agent via its API.
The idea is to have it be autonomous and manage itself.
Ran out of steam due to Grok limitations and complications with Google libraries.
A useful new Cloudron app could be all of this packaged up with Ollama, AirLLM, and a local model, so it cannot run out of steam so easily.
Then just give it a goal and let it Ralph out.
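For reference, the "Ralph" pattern mentioned above is basically a dumb outer loop that keeps re-feeding the agent the same goal until it reports done. A minimal sketch with hypothetical file names (PROMPT.md holds the goal and would instruct the agent to write DONE into status.txt when finished):

```
# Hypothetical Ralph loop: re-run the agent on the same goal until it
# declares itself done by writing DONE into status.txt.
while ! grep -q "DONE" status.txt 2>/dev/null; do
  claude -p "$(cat PROMPT.md)" >> ralph.log
done
```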