Ollama can interact with VSCodium; both are open source.
Setting Up Ollama
Install Ollama (free local AI) and download a free model
curl -fsSL https://ollama.com/install.sh | sh
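If the install succeeded, the binary should be on your PATH:
ollama --version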
Install the llama3.2 AI model (or another) in its smallest size
ollama pull llama3.2:1b
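You can confirm the download finished with Ollama's list command:
ollama list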
Verify the model appears in the API
curl http://localhost:11434/api/tags
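If jq is installed, you can filter the response down to just the model names (this assumes the default response shape: a models array whose entries have a name field):
curl -s http://localhost:11434/api/tags | jq '.models[].name'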
Test a simple chat
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2:1b",
  "messages": [{"role": "user", "content": "Hello, are you working?"}],
  "stream": false
}'
or
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2:1b",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
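For a one-shot prompt without the chat message format, Ollama also has a generate endpoint:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:1b",
  "prompt": "Write one sentence about local AI",
  "stream": false
}'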
Quick Interactive Test
ollama run llama3.2:1b
Connect to VSCodium
You need to install the extension called Continue. Then Ctrl+L opens the AI sidebar. From there you can open Models, and Continue should detect your locally running models.
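If you prefer the command line, the extension can also be installed from a terminal (this assumes the Open VSX extension ID is Continue.continue; check the Extensions view if it differs):
codium --install-extension Continue.continue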
There are a few ways to set this up; manual configuration is the simplest and gives you full control.
In VSCodium, click the Continue sidebar icon (🧠).
At the top of the Continue panel, click the gear icon ⚙️ → Open config.json (or sometimes it says "Configure" / "Open Settings").
This opens ~/.continue/config.json (or config.yaml in newer versions — either works; many guides now use YAML).
Replace or add to the file with something like this (minimal setup for your model):
Using JSON format (most common):
{
  "models": [
    {
      "title": "Llama 3.2 1B (local)",
      "provider": "ollama",
      "model": "llama3.2:1b",
      "apiBase": "http://localhost:11434"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Llama 3.2 1B Autocomplete",
    "provider": "ollama",
    "model": "llama3.2:1b",
    "apiBase": "http://localhost:11434"
  }
}
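A quick way to catch JSON syntax mistakes (trailing commas, missing quotes) before reloading is Python's built-in json.tool, assuming Python 3 is installed:
python3 -m json.tool ~/.continue/config.json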
Or using YAML format (newer/recommended in some docs):
name: Local Setup
version: 0.0.1
schema: v1
models:
  - name: Llama 3.2 1B Chat
    provider: ollama
    model: llama3.2:1b
    apiBase: http://localhost:11434
tabAutocompleteModel:
  name: Llama 3.2 1B Complete
  provider: ollama
  model: llama3.2:1b
  apiBase: http://localhost:11434
Save the file.
Continue may prompt you to reload / restart the extension — do so (or press Ctrl+Shift+P → "Developer: Reload Window").
Chat: In the Continue sidebar, select your model from the dropdown at the top. Highlight code → right-click → "Ask Continue" (or press Ctrl+L / Cmd+L). Or just type in the chat box.
Autocomplete: Start typing code → suggestions appear inline (accept with Tab). You can toggle it in settings if needed.
Switch models: If you pull more (e.g. ollama pull qwen2.5-coder:3b), add them to the models array the same way (see the example after these tips); they'll appear in the dropdown.
Test it: Ask something simple like "Explain this function" on selected code, or "Write a Python function to sort a list".
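For example, a second entry in the JSON config's models array could look like this (assuming you pulled qwen2.5-coder:3b as above):
"models": [
  {
    "title": "Llama 3.2 1B (local)",
    "provider": "ollama",
    "model": "llama3.2:1b",
    "apiBase": "http://localhost:11434"
  },
  {
    "title": "Qwen2.5 Coder 3B (local)",
    "provider": "ollama",
    "model": "qwen2.5-coder:3b",
    "apiBase": "http://localhost:11434"
  }
]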
Model not appearing / connection error?
Check Ollama is running: curl http://localhost:11434/api/tags should list models.
Restart VSCodium.
In the Continue config, set "apiBase" to "http://127.0.0.1:11434" if localhost has DNS resolution issues (rare).
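If the API does not respond at all, check the Ollama server itself. On a Linux install done with the script above, Ollama usually runs as a systemd service (assuming a systemd-based distro); otherwise start the server manually:
# check whether the service is active
systemctl status ollama
# restart it if it is stuck
sudo systemctl restart ollama
# or, with no service installed, run the server in the foreground
ollama serve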