# Supported AI Models

TranslateBot Django supports 100+ AI models through LiteLLM. This page covers the most popular options.
## OpenAI

OpenAI models provide excellent translation quality and are the default choice.

```python
import os

TRANSLATEBOT_API_KEY = os.getenv("OPENAI_API_KEY")
TRANSLATEBOT_MODEL = "gpt-4o-mini"  # Default, recommended
```
### Available Models

| Model | Quality | Speed | Cost |
|---|---|---|---|
| gpt-4o-mini | Good | Fast | Low |
| gpt-4o | Excellent | Medium | Medium |
| gpt-4-turbo | Excellent | Medium | Medium |

**Recommendation:** Start with `gpt-4o-mini` for most projects. It offers the best balance of quality, speed, and cost.
## Anthropic Claude

Claude models excel at nuanced translations and context understanding.

```python
import os

TRANSLATEBOT_API_KEY = os.getenv("ANTHROPIC_API_KEY")
TRANSLATEBOT_MODEL = "claude-sonnet-4-5-20250929"
```
### Available Models

| Model | Quality | Speed | Cost |
|---|---|---|---|
| claude-sonnet-4-5-20250929 | Excellent | Medium | Medium |
| claude-opus-4-5-20251101 | Best | Slow | High |
| claude-3-5-sonnet-20240620 | Excellent | Medium | Medium |
| claude-3-haiku-20240307 | Good | Fast | Low |

**Recommendation:** `claude-sonnet-4-5-20250929` offers the best quality-to-cost ratio for most translation tasks.
## Google Gemini

Google's Gemini models offer competitive translation quality, especially for Asian languages.

```python
import os

TRANSLATEBOT_API_KEY = os.getenv("GEMINI_API_KEY")
TRANSLATEBOT_MODEL = "gemini/gemini-2.5-flash"
```
### Available Models

| Model | Quality | Speed | Cost |
|---|---|---|---|
| gemini/gemini-2.5-flash | Good | Fast | Low |
| gemini/gemini-3-flash-preview | Excellent | Fast | Low |
## Azure OpenAI

Use Azure-hosted OpenAI models for enterprise deployments.

```python
import os

TRANSLATEBOT_API_KEY = os.getenv("AZURE_API_KEY")
TRANSLATEBOT_MODEL = "azure/gpt-4o-mini"

# Additional Azure configuration
os.environ["AZURE_API_BASE"] = "https://your-resource.openai.azure.com/"
os.environ["AZURE_API_VERSION"] = "2024-02-15-preview"
```
## Other Providers

LiteLLM supports many additional providers:

- **AWS Bedrock:** `bedrock/anthropic.claude-3-sonnet`
- **Cohere:** `command-r-plus`
- **Mistral:** `mistral/mistral-large-latest`
- **Ollama (local):** `ollama/llama3`

See the LiteLLM Providers Documentation for the complete list.
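As a rough sketch, a local Ollama setup could look like the snippet below. It assumes an Ollama server is running locally and that TranslateBot passes `TRANSLATEBOT_MODEL` straight through to LiteLLM; whether `TRANSLATEBOT_API_KEY` can simply be a placeholder for keyless local models depends on the package, so treat this as illustrative rather than definitive.

```python
# settings.py -- illustrative only; assumes a local Ollama server with the model already pulled
TRANSLATEBOT_MODEL = "ollama/llama3"   # any model available to your local Ollama install
TRANSLATEBOT_API_KEY = "not-needed"    # Ollama uses no API key; placeholder value (assumption)
```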
## Choosing a Model
### By Use Case
| Use Case | Recommended Model |
|---|---|
| General translation | gpt-4o-mini |
| High-quality production | gpt-4o or claude-sonnet-4-5-20250929 |
| Budget-conscious | gpt-4o-mini or claude-3-haiku-20240307 |
| Asian languages | gpt-4o or claude-sonnet-4-5-20250929 |
| European languages | Any model works well |
### By Priority
| Priority | Recommended Model |
|---|---|
| Quality | claude-opus-4-5-20251101 or gpt-4o |
| Speed | gpt-4o-mini or claude-3-haiku-20240307 |
| Cost | gpt-4o-mini |
| Balance | claude-sonnet-4-5-20250929 |
## Cost Estimates

Approximate costs per million input tokens:

| Model | Cost |
|---|---|
| gpt-4o-mini | ~$0.15 |
| claude-3-haiku | ~$0.80 |
| gpt-4o | ~$2.50 |
| claude-sonnet-4-5 | ~$3.00 |
| claude-opus-4-5 | ~$15.00 |
**Typical project cost:** A small-to-medium Django app with ~500 translatable strings (~10,000 words) typically costs under $0.01 per language with `gpt-4o-mini`.
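As a back-of-the-envelope sanity check (the words-to-tokens ratio is an assumption, and actual token counts depend on the languages involved and on prompt overhead):

```python
# Rough estimate for translating ~10,000 words of source text with gpt-4o-mini.
words = 10_000
tokens_per_word = 1.3            # rough English average; an assumption
price_per_million_input = 0.15   # USD per 1M input tokens, from the table above

input_tokens = words * tokens_per_word                      # ~13,000 tokens
cost = input_tokens / 1_000_000 * price_per_million_input
print(f"~${cost:.4f} per language (input only)")            # ~$0.0020
```

Output tokens add to this, but the total still stays comfortably under $0.01 per language.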
## Testing Different Models

Try different models without code changes:
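A minimal sketch of one way to do this, assuming your settings read the model name from an environment variable (reusing the name `TRANSLATEBOT_MODEL` for the variable is just a convention here, not necessarily something the package reads itself):

```python
# settings.py -- a sketch: default to gpt-4o-mini unless an environment variable overrides it
import os

TRANSLATEBOT_MODEL = os.getenv("TRANSLATEBOT_MODEL", "gpt-4o-mini")
```

With a pattern like this, switching models is just a matter of exporting a different `TRANSLATEBOT_MODEL` value before running your translation commands.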