Ollama + Claude Code making coding free.
-
Ollama + Claude Code / OpenCode
And it runs completely local.
No subscriptions. No limits. No data leaving your machine.
Claude Code used to cost $3-$15 per million tokens in Anthropic API fees.
Now you can run it with free open-source models on your computer.
January 16th, 2026: Ollama version 0.14.0 became compatible with Anthropic's Messages API.
That means Claude Code now works with any Ollama model.
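For example, once Ollama is running, a raw request to the local endpoint looks roughly like this (a sketch: it assumes Ollama serves the same /v1/messages path as Anthropic and ignores the version and auth headers, which is what the compatibility claim implies):

curl http://localhost:11434/v1/messages \
  -H "content-type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "qwen3-coder",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Write a hello world in Go."}]
  }'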
Zero ongoing costs. Your code never leaves your computer.
Here's the setup:
Download Ollama.
Run: ollama pull qwen3-coder
Install Claude Code.
Set two environment variables to connect them (full example below).
Then run: claude --model qwen3-coder
Give it tasks in plain English:
"Write a function to process images." "Debug why my API calls are failing."
It reads your files. Makes changes. Tests the code. Shows you what it did.
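On Linux or macOS the whole connection is a handful of shell commands. A minimal sketch, assuming the two variables from Ollama's announcement (ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN; the token is an arbitrary placeholder since Ollama doesn't validate it, and 11434 is Ollama's default port):

# one-time: fetch the model
ollama pull qwen3-coder

# point Claude Code at the local Ollama server instead of Anthropic's API
export ANTHROPIC_BASE_URL=http://localhost:11434
export ANTHROPIC_AUTH_TOKEN=ollama

# start Claude Code against the local model
claude --model qwen3-coder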
Top models: Qwen3-Coder for everyday tasks. GPT-OSS 20B for complex projects.
Have an older GPU or low VRAM? Use AirLLM to load only the layers needed at each step and run bigger models on the same hardware.
This is completely open and free.
Explicitly supported by both tools.
AnyLLM works too.
-
Wow! More to play with!
I use Ollama desktop with their cloud qwen3-coder 480B. I'm hoping that connecting Claude Code to Ollama can make use of that. Will give it a shot.
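If the Anthropic-compatible endpoint treats cloud models like any local tag, the same setup should carry over. A sketch, assuming the :480b-cloud tag and the ollama signin step from Ollama's cloud-model docs (both may differ on your setup):

# authenticate the local client with Ollama's cloud
ollama signin

# same two variables as for a local model
export ANTHROPIC_BASE_URL=http://localhost:11434
export ANTHROPIC_AUTH_TOKEN=ollama

# run Claude Code against the hosted 480B model
claude --model qwen3-coder:480b-cloud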
-
I had a go with that lately as well, on my local setup with a GPU. Ollama works well with OpenWebUI for the models I can run, but I never managed to get anything out of Claude Code. If anyone succeeds, I would be very happy to hear the steps. With qwen3-coder, Claude Code was endlessly spinning the GPU even on small tasks, eventually giving up without ever producing anything.
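For anyone else stuck here, a few checks that narrow down where it hangs (a sketch assuming a recent Ollama release; Claude Code sends very large system prompts, so a model that answers small prompts fine can still truncate or thrash once the context overflows):

# sanity check: does the model answer at all outside Claude Code?
ollama run qwen3-coder "write a hello world in python"

# is the model fully on the GPU, or partially spilled to CPU?
ollama ps

# raise the server's default context window before retrying (uses more memory)
export OLLAMA_CONTEXT_LENGTH=32768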

-
I got an Intel Arc Battlemage B580 to test local LLMs for real. Nvidia is quite trivial to play with, since one can get a VPS with an Nvidia card and things are mostly set up ready to use regarding GPU support (it can get very expensive though!).
So it may well be that this card is woefully underpowered for this use case, though other similarly sized general-purpose models work just fine with Ollama. But maybe I'm missing something obvious with Claude Code here.
-
@nebulon said in Ollama + Claude Code making coding free.:
VPS with some nvidia card
I use Koyeb for this kind of thing.
It's a good proposition but I don't need it much, so I may discontinue. Others might find it helpful.
Generally I am happy with TRAE and the desktop Ollama client using one of their cloud models (to avoid pulling it locally).
But everyone raves about Claude Code, so I will check it out with an Ollama model.
My quick look suggested you still need a subscription for Claude Code even if you hook up an Ollama model. Did I misunderstand this?
-
@timconsidine said in Ollama + Claude Code making coding free.:
My quick look suggested you still need a subscription for Claude Code even if you hook up an Ollama model. Did I misunderstand this?
Yes, that's a misunderstanding: only Anthropic's API needs a sub. Claude Code itself is a free install, and with an Ollama model you never call that API.