# LLM Providers

All supported AI model providers and how to configure them.

## Supported providers
| Provider | Type | Notes |
|---|---|---|
| Ollama | Local | Free, private. Any model you pull — Llama, DeepSeek, Qwen, Mistral, CodeGemma, Phi, etc. |
| OpenAI | Cloud | Any OpenAI model — GPT-4o, GPT-4.1, o3-mini, etc. |
| Claude | Cloud | Any Anthropic model — Claude Sonnet, Opus, Haiku |
| Gemini | Cloud | Any Google model — Gemini Pro, Flash, etc. |
| Grok | Cloud | Any xAI model — Grok 2, Grok 3, etc. |
| Qwen | Cloud | Any Alibaba model — Qwen 2.5, Qwen-Max, etc. |
| Fireworks | Cloud | Any model on Fireworks — Llama, Mixtral, DeepSeek, etc. |
| Together | Cloud | Any model on Together — Llama, Code Llama, etc. |
> **Tip:** You type the model name yourself in Settings — Codeteel doesn't restrict which models you can use. Any model available from the provider will work, as long as it supports the OpenAI-compatible chat completions API.
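Because every provider exposes the same OpenAI-compatible chat completions API, switching providers is mostly a matter of changing the base URL and model name while the request body stays identical. A minimal sketch (the base URLs, model name, and helper below are illustrative assumptions, not Codeteel internals):

```python
# Illustrative base URLs for two providers; the request body is the same.
BASE_URLS = {
    "openai": "https://api.openai.com/v1",      # cloud, needs an API key
    "ollama": "http://localhost:11434/v1",      # local, no API key needed
}

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible /chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("gpt-4o", "Explain this diff.")
# Only the URL (and the Authorization header for cloud providers)
# changes between providers; `body` is sent unchanged.
```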
## Web vs Platform
Codeteel has two separate LLM configurations:
| | Web LLM | Platform LLM |
|---|---|---|
| Used by | Browser chat interface | Slack, Telegram, Discord |
| Ollama | Yes (direct browser → localhost) | No (server can't reach localhost) |
| Cloud providers | Yes (via SSE proxy) | Yes (via Lambda) |
| API key storage | Encrypted (AES-256-GCM) | Encrypted (AES-256-GCM) |
> **Note:** *Ollama for web:* When using Ollama, LLM calls go directly from your browser to `localhost:11434`. No proxy, no API key, no server involved. Your code and prompts never leave your machine.

## Setting up Ollama
Install Ollama from ollama.com, then:
```shell
# Start the server
ollama serve

# Pull a model (example)
ollama pull deepseek-coder-v2:16b

# Codeteel auto-discovers available models in Settings
```
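Before opening Settings, you can confirm the server is actually up by probing Ollama's local API (its `/api/tags` endpoint lists the models you've pulled). A small sketch, assuming the default port:

```python
import json
import urllib.request
import urllib.error

def ollama_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if a local Ollama server answers on /api/tags."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            models = json.load(resp).get("models", [])
            print("Ollama is up; pulled models:", [m["name"] for m in models])
            return True
    except (urllib.error.URLError, OSError):
        print("Ollama is not reachable. Did you run `ollama serve`?")
        return False

ollama_running()
```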
> **Warning:** *CORS required for web app.* If you're accessing Codeteel from a deployed URL (not localhost), you must enable CORS on Ollama.

Windows:

```shell
set OLLAMA_ORIGINS=* && ollama serve
```

Mac/Linux:

```shell
OLLAMA_ORIGINS=* ollama serve
```

Without this, the browser cannot reach Ollama due to cross-origin restrictions.
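If Ollama runs as a background service rather than from a terminal, the inline variable above won't apply. Per Ollama's FAQ, the setting can be made persistent instead; the commands below assume a systemd install on Linux and the Ollama.app on macOS:

```shell
# Linux (systemd install): add the variable to the service unit, then restart.
sudo systemctl edit ollama.service
# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_ORIGINS=*"
sudo systemctl daemon-reload
sudo systemctl restart ollama

# macOS (Ollama.app): set the variable for launchd, then restart the app.
launchctl setenv OLLAMA_ORIGINS "*"
```

Note that `OLLAMA_ORIGINS=*` allows any origin; if you only need one deployed URL to reach Ollama, you can list that origin instead of `*`.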
## Setting up cloud providers
For any cloud provider:
1. Go to Settings
2. Click "Add Provider"
3. Select the provider
4. Paste your API key
5. Enter the model name (e.g., `gpt-4o`)
6. Set as active
> **Tip:** API keys are encrypted with AES-256-GCM before storage. The UI only shows the first 7 characters.
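That 7-character preview can be produced without ever handling the decrypted key again after entry. A sketch of such masking (the function name and format are illustrative, not Codeteel's code):

```python
def mask_api_key(key: str, visible: int = 7) -> str:
    """Show only the first `visible` characters of an API key."""
    if len(key) <= visible:
        return key
    return key[:visible] + "..."

print(mask_api_key("sk-proj-abcdef1234567890"))  # → sk-proj...
```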