# AI Providers
Supported AI providers and how to configure them.
## Supported Providers
Transcribe works with any OpenAI-compatible API. Built-in presets:
| Provider | Endpoint | Key Required |
|---|---|---|
| Ollama | http://localhost:11434 | No |
| OpenAI | https://api.openai.com/v1 | Yes |
| OpenRouter | https://openrouter.ai/api/v1 | Yes |
| Mistral | https://api.mistral.ai/v1 | Yes |
| Gemini | https://generativelanguage.googleapis.com/v1beta/openai | Yes |
| Groq | https://api.groq.com/openai/v1 | Yes |
| Together AI | https://api.together.xyz/v1 | Yes |
| Custom | You specify | Configurable |
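All of the presets above speak the same chat-completions protocol, so a single request builder covers any of them. The sketch below is illustrative, not Transcribe's internal code; the function name is an assumption, and note that Ollama's OpenAI-compatible API lives under a `/v1` suffix even though the table lists the bare host:

```python
import json

def build_chat_request(base_url, model, prompt, api_key=None):
    """Assemble an OpenAI-compatible chat-completions request as
    (url, headers, json_body) for any provider in the table above."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {"Content-Type": "application/json"}
    if api_key is not None:  # Ollama needs no key; hosted providers do
        headers["Authorization"] = "Bearer " + api_key
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body
```

For example, `build_chat_request("http://localhost:11434/v1", "llama3", "Hi")` targets a local Ollama with no `Authorization` header, while passing an `api_key` produces the bearer header the hosted providers require.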
## Ollama (Local/Free)
Ollama runs AI models locally on your Mac. No API key needed, completely private.
- Install Ollama from ollama.com.
- Download a model (e.g. `ollama pull llama3`).
- Transcribe auto-detects when Ollama is running.
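The auto-detection step amounts to probing the default Ollama port. A minimal sketch of that idea (Transcribe's actual detection logic may differ; the function name and timeout are assumptions):

```python
import urllib.request
import urllib.error

def ollama_running(host="http://localhost:11434", timeout=0.5):
    """Return True if a server answers on Ollama's default port.
    A bare GET to the root returns 200 ("Ollama is running")."""
    try:
        with urllib.request.urlopen(host, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: Ollama is not up.
        return False
```

Because the check fails closed, it is safe to run at startup and periodically re-poll while the app is open.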
## OpenAI
Get an API key from platform.openai.com. Recommended models:
- `gpt-4o` — best quality
- `gpt-4o-mini` — fast and cheap
## Custom Providers
Select "Custom" as your provider. Enter the base URL for any OpenAI-compatible API. The chat completions endpoint should be at `{base_url}/v1/chat/completions` or `{base_url}/chat/completions`.
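Since the endpoint may sit at either location, a client can simply try both in order. A hedged sketch of that resolution (the function name is illustrative, not part of Transcribe):

```python
def candidate_chat_endpoints(base_url):
    """Return the two places an OpenAI-compatible server may expose
    chat completions, in the order listed above."""
    base = base_url.rstrip("/")
    return [base + "/v1/chat/completions", base + "/chat/completions"]
```

A client would issue a test request to each candidate and remember the first one that responds, so the probe only happens once per configured provider.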