LLM Providers

All supported AI model providers and how to configure them.

Supported providers

| Provider | Type | Notes |
|---|---|---|
| Ollama | Local | Free, private. Any model you pull — Llama, DeepSeek, Qwen, Mistral, CodeGemma, Phi, etc. |
| OpenAI | Cloud | Any OpenAI model — GPT-4o, GPT-4.1, o3-mini, etc. |
| Claude | Cloud | Any Anthropic model — Claude Sonnet, Opus, Haiku |
| Gemini | Cloud | Any Google model — Gemini Pro, Flash, etc. |
| Grok | Cloud | Any xAI model — Grok 2, Grok 3, etc. |
| Qwen | Cloud | Any Alibaba model — Qwen 2.5, Qwen-Max, etc. |
| Fireworks | Cloud | Any model on Fireworks — Llama, Mixtral, DeepSeek, etc. |
| Together | Cloud | Any model on Together — Llama, Code Llama, etc. |
You type the model name yourself in Settings — Codeteel doesn't restrict which models you can use. Any model available from the provider will work, as long as it supports the OpenAI-compatible chat completions API.
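Because every provider here speaks the OpenAI-compatible chat completions API, the request shape is identical across providers; only the base URL, API key, and model name change. A minimal sketch of what such a request looks like (the URL, key, and model below are placeholder examples, not Codeteel internals):

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /chat/completions request.

    The same shape works against OpenAI, Fireworks, Together, or a local
    Ollama server (base_url "http://localhost:11434/v1"); only base_url,
    key, and model differ.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Not sent here; urllib.request.urlopen(req) would return the completion.
req = build_chat_request("https://api.openai.com/v1", "sk-...", "gpt-4o", "Hello")
```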

Web vs Platform

Codeteel has two separate LLM configurations:

|  | Web LLM | Platform LLM |
|---|---|---|
| Used by | Browser chat interface | Slack, Telegram, Discord |
| Ollama | Yes (direct browser → localhost) | No (server can't reach localhost) |
| Cloud providers | Yes (via SSE proxy) | Yes (via Lambda) |
| API key storage | Encrypted (AES-256-GCM) | Encrypted (AES-256-GCM) |
Ollama for web: When using Ollama, LLM calls go directly from your browser to localhost:11434. No proxy, no API key, no server involved. Your code and prompts never leave your machine.

Setting up Ollama

Install Ollama from ollama.com, then:

```shell
# Start the server
ollama serve

# Pull a model (example)
ollama pull deepseek-coder-v2:16b
```

Codeteel auto-discovers the available models in Settings.
CORS required for web app. If you're accessing Codeteel from a deployed URL (not localhost), you must enable CORS on Ollama:

```shell
# Windows
set OLLAMA_ORIGINS=* && ollama serve

# Mac/Linux
OLLAMA_ORIGINS=* ollama serve
```

Without this, the browser cannot reach Ollama due to cross-origin restrictions.
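Model auto-discovery works by querying Ollama's local HTTP API: `GET /api/tags` returns the models you have pulled. A sketch of how a client might list them (the response shape is trimmed to the one field used here, and `fetch_models` assumes a server started with `ollama serve`):

```python
import json
import urllib.request

def list_local_models(payload: dict) -> list[str]:
    """Extract model names from an Ollama GET /api/tags response body."""
    return [m["name"] for m in payload.get("models", [])]

def fetch_models(host: str = "http://localhost:11434") -> list[str]:
    """Ask a running Ollama server which models have been pulled."""
    with urllib.request.urlopen(f"{host}/api/tags") as resp:
        return list_local_models(json.load(resp))

# With a live server, fetch_models() would return names like the one
# pulled above. Parsing shown here on a sample response:
sample = {"models": [{"name": "deepseek-coder-v2:16b"}, {"name": "llama3:8b"}]}
print(list_local_models(sample))  # ['deepseek-coder-v2:16b', 'llama3:8b']
```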

Setting up cloud providers

For any cloud provider:

  1. Go to Settings
  2. Click "Add Provider"
  3. Select the provider
  4. Paste your API key
  5. Enter the model name (e.g., gpt-4o)
  6. Set as active
API keys are encrypted with AES-256-GCM before storage. The UI only shows the first 7 characters.
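The masking behavior described above (only the first 7 characters visible) can be sketched as follows; this illustrates the UI behavior, not Codeteel's actual code:

```python
def mask_api_key(key: str, visible: int = 7) -> str:
    """Show only the first `visible` characters, as the Settings UI does."""
    if len(key) <= visible:
        return key
    return key[:visible] + "*" * (len(key) - visible)

print(mask_api_key("sk-proj-abcdef123456"))  # sk-proj*************
```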