# Settings & Configuration
Configure LLM providers, embeddings, and platform connections.
## Web LLM Provider
The AI model used for the web chat interface. Choose from 8 providers:

- Ollama (local)
- OpenAI
- Claude
- Gemini
- Grok
- Qwen
- Fireworks
- Together
Ollama runs locally, with zero API cost and full privacy: your code never leaves your computer. Install it from ollama.com and run `ollama serve`; Codeteel auto-discovers available models.

For cloud providers, enter your API key. Keys are encrypted with AES-256-GCM before storage and are never stored in plaintext. The UI shows only the first 7 characters of each key.
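The key-masking behavior described above can be sketched as follows. This is an illustration of the first-7-characters rule, not Codeteel's actual implementation; the function name and sample key are hypothetical.

```python
def mask_api_key(key: str, visible: int = 7) -> str:
    """Show only the first `visible` characters of an API key,
    masking the rest, as the settings UI does."""
    if len(key) <= visible:
        return key
    return key[:visible] + "*" * (len(key) - visible)

# A made-up key for demonstration:
print(mask_api_key("sk-proj-abcdef1234567890"))  # → sk-proj*****************
```

The full key is only ever decrypted server-side when a request is made to the provider.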
## Platform LLM Provider
A separate LLM configuration for Slack, Telegram, and Discord. Cloud providers only: platform messages are handled server-side, and the server cannot reach an Ollama instance running on your local machine.

Only one platform provider can be active at a time. A platform provider is required if you want to use platform integrations.
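These two constraints can be expressed as a small validation sketch. The provider names come from the list above; the function itself is hypothetical, not part of Codeteel's API.

```python
# Cloud providers from the Web LLM Provider list (Ollama excluded: local-only).
CLOUD_PROVIDERS = {"openai", "claude", "gemini", "grok", "qwen", "fireworks", "together"}

def validate_platform_provider(active: list[str]) -> str:
    """Enforce the platform rules: exactly one active provider, cloud-only."""
    if len(active) != 1:
        raise ValueError("exactly one platform provider must be active")
    provider = active[0]
    if provider not in CLOUD_PROVIDERS:
        raise ValueError(f"{provider!r} is not a cloud provider (Ollama is local-only)")
    return provider

print(validate_platform_provider(["openai"]))  # → openai
```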
## Embedding Provider
Required for code indexing and semantic search. All providers output 1536-dimension vectors.
| Provider | Model | Price (USD) |
|---|---|---|
| OpenAI | text-embedding-3-small | $0.02 / 1M tokens |
| Gemini | text-embedding-004 | $0.015 / 1M tokens |
| Mistral | mistral-embed | $0.10 / 1M tokens |
| Voyage | voyage-code-2 | $0.12 / 1M tokens |
| Cohere | embed-english-v3.0 | $0.10 / 1M tokens |
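To estimate what indexing a codebase will cost, multiply its token count by the per-million-token price from the table. A quick sketch (the token count is a made-up example):

```python
# Per-million-token prices (USD) from the table above.
PRICE_PER_1M = {
    "OpenAI": 0.02,
    "Gemini": 0.015,
    "Mistral": 0.10,
    "Voyage": 0.12,
    "Cohere": 0.10,
}

def embedding_cost(provider: str, tokens: int) -> float:
    """Cost in USD to embed `tokens` tokens with `provider`."""
    return PRICE_PER_1M[provider] / 1_000_000 * tokens

# Indexing a hypothetical 5M-token codebase with OpenAI: 5 x $0.02 = $0.10
print(f"${embedding_cost('OpenAI', 5_000_000):.2f}")  # → $0.10
```

Even large codebases are cheap to index at these rates; re-indexing after edits only embeds changed files.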