Meta description: Translation quotes look cheap until they hit your Django release process. Here's how to budget translation costs without wrecking CI/CD.
You run django-admin makemessages, open locale/fr/LC_MESSAGES/django.po, and realize your app has a lot more text than you thought.
Not just buttons. Validation errors. Email templates. Onboarding copy. Admin labels. Plan names. Model verbose names you forgot were user-facing.
Then you send the file out for a quote, and the cost of translation services suddenly stops being an abstract ops problem. It becomes a release problem.
That First Translation Quote
The first quote often teaches that document pricing and software pricing are not the same thing.
A translator or agency sees word count. You see a living codebase that changes every sprint. Those are different cost models, and that mismatch is where teams bleed money.

What the quote usually looks like
For a typical 10,000-word project, professional translation at an industry-average rate of $0.15 per word costs $1,500, according to Localizera's translation cost breakdown.
That number isn't weird. It's normal in the traditional market.
What's weird is pretending a Django app behaves like a static handbook.
Your .po files don't arrive once, get translated, and disappear. They keep changing. A product manager edits a CTA. Support wants a clearer billing message. Legal updates consent text. You rename a setting in the admin. Now the old quote is already stale.
Why software teams hate this model
Per-word pricing sounds tidy until you put it next to a Git workflow.
You don't want to buy translation in large batches every time strings drift. You want to:
- Translate changed strings only: not re-buy the whole locale
- Review diffs in Git: not compare screenshots in a portal
- Preserve placeholders: not ship broken %(name)s or {0}
- Keep deploy cadence: not wait on a handoff queue
Practical rule: If translation can't follow the same change-based workflow as code, the invoice is only part of the cost.
The expensive part isn't only the quote. It's the stop-and-start process around it.
A static quote works for a brochure. It doesn't fit a project where makemessages runs again next week.
The Three Pricing Models for Translation
Most translation buying falls into three buckets. If you don't separate them, it's easy to compare the wrong things and end up with a tool that fights your workflow.
Per-word agency pricing
This is the classic model. You send files. Someone counts source words. You get billed on volume, language pair, and content type.
In the US, general professional translation rates hold at $0.15 to $0.30 per word, while AI-assisted services can drop to $0.01 to $0.05 per word, as noted in the earlier source. That's a real gap, but the workflow matters as much as the rate.
Per-word billing works best when content is stable:
- Handbooks: low change frequency
- Contracts: formal review path
- Marketing pages: fewer release cycles
- Help centers: batch updates
It works poorly for app strings because small edits still create coordination overhead.
Per-seat SaaS TMS pricing
A translation management system changes what you pay for. You're no longer buying only translated words. You're buying workflow, access control, review UI, screenshots, integrations, vendor routing, and usually a monthly bill.
That can be fine if your localization process is large enough to justify a dedicated platform.
It can also be overkill for a Django team that already has the pieces it trusts:
- Git
- pull requests
- CI
- .po files
- a glossary in the repo
- release automation
A TMS often solves organizational problems by adding another system to maintain. If your team is small, that monthly layer can cost more in attention than in money.
Per-token AI API pricing
This is the developer-native model. You send only the text that changed to an API, get translated output back, write it to locale/<lang>/LC_MESSAGES/django.po, review the diff, and ship.
The bill follows usage, not seats.
That matters because software translation is incremental. Most releases don't rewrite the whole app. They touch a subset of strings. A usage-based model maps to that reality much better than a bulk document quote.
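To make that difference concrete, here's a back-of-the-envelope sketch comparing the two models. All rates, word counts, and the words-per-token ratio are illustrative assumptions, not quotes from any vendor:

```python
def per_word_cost(total_words, rate_per_word):
    # Traditional model: every batch is billed on full source word count.
    return total_words * rate_per_word

def incremental_cost(changed_words, words_per_token, price_per_1k_tokens):
    # Usage-based model: only the changed strings are sent to the API.
    tokens = changed_words / words_per_token
    return tokens / 1000 * price_per_1k_tokens

# A 10,000-word app quoted once, versus a sprint that touches ~300 words:
full_quote = per_word_cost(10_000, 0.15)        # full-file quote: 1500.0
sprint_run = incremental_cost(300, 0.75, 0.02)  # one incremental run: ~0.008
print(full_quote, sprint_run)
```

The absolute numbers matter less than the shape: the incremental bill scales with what changed this sprint, not with the size of the whole app.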
If you want a concrete example of what this model looks like in product form, the pricing page at https://translatebot.dev/en/pricing/ shows the kind of developer-oriented packaging teams now expect instead of a portal-first plan.
Comparison by workflow fit
| Attribute | Per-Word (Agency) | Per-Seat (SaaS TMS) | Per-Token (AI API) |
|---|---|---|---|
| Billing unit | Source words | Seats, projects, subscription | API usage per run |
| Best fit | Static documents | Larger localization operations | Fast-moving app strings |
| Cost predictability | Good for one-off batches | Good monthly, weaker for tiny teams | Good if you translate incrementally |
| Git friendliness | Low | Medium, depends on integration | High |
| CI/CD fit | Weak | Mixed | Strong |
| Handles tiny changes well | No | Sometimes | Yes |
| Placeholder safety | Depends on vendor process | Depends on platform rules | Depends on your tooling and tests |
| Team overhead | Handoffs and PM cycles | Tool admin and process setup | Prompting, review, and automation setup |
Buy the pricing model that matches your change pattern, not the one that sounds most professional.
For Django work, the wrong model isn't always the most expensive invoice. It's the one that adds friction every time strings change.
Understanding Your Primary Cost Drivers
Two apps can have the same word count and very different translation bills.
The quote climbs when the content gets harder, the language pair gets rarer, or the workflow around the text gets messy.

Complexity changes the rate fast
SaaS teams often underestimate how much of their content counts as technical.
Technical translation for content like SaaS documentation can cost €0.15 to €0.35 per source word, which is a 50 to 100 percent premium over general translation rates because it needs subject matter expertise and tighter terminology control, according to Circle Translations on technical translation pricing.
That tracks with what developers already know. Translating this:
msgid "Save"
msgstr ""
is one job.
Translating this is another:
msgid "Your export is still processing. We'll email %(name)s when the CSV is ready."
msgstr ""
Now context matters. So do tone, placeholders, and product vocabulary.
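Placeholder safety, at least, is checkable by machine. A minimal sketch of such a check; the regex covers only the printf-style and brace-style forms shown above, and a real gate would also handle HTML and wider format-spec syntax:

```python
import re

# Matches %(name)s / %s / %d style and {0} / {name} style placeholders.
PLACEHOLDER_RE = re.compile(r"%\([^)]+\)[sd]|%[sd]|\{[^{}]*\}")

def missing_placeholders(msgid, msgstr):
    # Placeholders present in the English source but absent from the translation.
    return set(PLACEHOLDER_RE.findall(msgid)) - set(PLACEHOLDER_RE.findall(msgstr))

print(missing_placeholders(
    "We'll email %(name)s when the CSV is ready.",
    "Nous enverrons un e-mail à %(name)s quand le CSV sera prêt.",
))  # set(): translation is safe
print(missing_placeholders(
    "You have {0} unread notifications",
    "Vous avez des notifications non lues",
))  # {'{0}'}: the placeholder was dropped
```

A check like this belongs in CI, so a dropped placeholder blocks the merge instead of reaching users.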
The hidden driver is workflow friction
Most quotes focus on language work. Your team pays for process failure too.
Common sources of extra cost:
- Context switching: engineers stop feature work to answer translation questions
- Manual reconciliation: someone copies text from a portal back into .po files
- Rework: edited English strings invalidate fresh translations
- Review lag: untranslated strings miss the release window
- Formatting risk: placeholders or HTML come back damaged
A lot of teams don't notice this because those costs are spread across engineering, product, and release management instead of appearing on a translation invoice.
Rare languages and provider pricing volatility
Language pair still matters. Common pairs are easier to source. Rare or specialist pairs get more expensive and usually slower.
The same thing happens on the AI side, just with a different shape. You stop worrying about per-word quotes and start caring about model usage, prompt design, retries, and review policy. If you're comparing provider bills, Claude's code pricing is a useful reference point for understanding how usage-based pricing shifts cost analysis from people-hours to request patterns.
That shift doesn't remove cost discipline. It changes where discipline lives.
Developer tooling can reduce one kind of complexity and add another
A portal hides model prompts, retries, and file diffs behind a UI. A repo-based workflow exposes them.
That's usually a good trade for engineering teams, but only if the tooling respects localization constraints. It needs to preserve formatting, work with Django locale files, and avoid broad retranslations. If you're comparing machine-first providers at a high level, the DeepL and Google trade-offs in a Django context are covered well in https://translatebot.dev/en/blog/deepl-vs-google-translate/.
You don't control translation cost by shopping for the lowest rate alone. You control it by cutting unnecessary work out of the path from changed string to merged PR.
A Practical Cost Calculation for a Django App
Here's the part many organizations skip. They compare vendor promises instead of mapping the work to their repository.
Use the codebase as the unit of analysis.

Start with actual Django output
A normal extraction cycle looks like this:
django-admin makemessages --all
You end up with files like:
locale/fr/LC_MESSAGES/django.po
locale/de/LC_MESSAGES/django.po
locale/es/LC_MESSAGES/django.po
And your entries look more like this than like brochure copy:
msgid "Welcome back, %(name)s"
msgstr ""
msgid "You have {0} unread notifications"
msgstr ""
msgid "Billing address"
msgstr ""
The important detail isn't total text alone. It's that next sprint you may only add five new strings and edit three old ones.
Why per-word logic breaks down
Traditional per-word rates of $0.08 to $0.40 per word don't fit incremental .po maintenance well. Developer tools aimed at software localization handle only changed strings, preserve placeholders like %(name)s, and fit into CI/CD instead of forcing bulk file workflows, as described by Unbabel's discussion of in-house versus outsourced translation costs.
That difference matters more than people admit.
A document shop assumes every batch starts from scratch. Your repository already contains translated history, stable strings, and exact diffs. If the pricing model ignores that, you pay for repeated work.
Use a repo-first cost checklist
Before you ask for any estimate, count work in these buckets:
| Bucket | What to inspect in the repo | Why it affects cost |
|---|---|---|
| New strings | Fresh msgid entries | These need first-pass translation |
| Changed strings | Existing entries with edited English | These may need retranslation |
| Stable strings | Unchanged approved entries | These shouldn't be re-billed in a good workflow |
| Risky strings | Placeholders, HTML, plural forms, short UI labels | These need tighter review |
| High-visibility copy | Billing, auth, checkout, onboarding | These deserve stronger QA |
That gives you a budget shape the team can use.
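The first three buckets can be counted mechanically from a parsed .po file. A minimal sketch over already-parsed entries; in a real script you'd load them with a parser such as polib, and the dict shape here is an assumption for illustration:

```python
from collections import Counter

def bucket(entry):
    # New: never translated. Changed: flagged fuzzy after an English edit.
    # Stable: translated and not flagged, so it shouldn't be re-billed.
    if not entry["msgstr"]:
        return "new"
    if entry.get("fuzzy"):
        return "changed"
    return "stable"

entries = [
    {"msgid": "Billing address", "msgstr": "Adresse de facturation"},
    {"msgid": "Welcome back, %(name)s", "msgstr": ""},
    {"msgid": "Save", "msgstr": "Sauvegarder", "fuzzy": True},
]

counts = Counter(bucket(e) for e in entries)
print(dict(counts))  # {'stable': 1, 'new': 1, 'changed': 1}
```

Run per locale, this turns "how big is the job" from a guess into a count you can price against.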
What to run after translation
Once strings are translated, compile and test them like code:
django-admin compilemessages
Then verify the app code still references translated strings sanely:
from django.utils.translation import gettext_lazy as _

class BillingLabels:
    invoice_email = _("Invoice email")
    tax_id = _("Tax ID")
And in templates:
{% load i18n %}
<label>{% translate "Billing address" %}</label>
If your translation process ends outside the repo, the expensive part begins when someone has to put it back into the repo by hand.
The practical budget isn't "how much to translate the app." It's "how much to translate only what changed without creating release drag."
Developer Tactics to Radically Reduce Translation Costs
The biggest mistake teams make is turning translation into a parallel operation. Separate portal. Separate reviewer queue. Separate source of truth.
That setup looks organized. It usually creates stale files and manual cleanup.

Reuse old translations aggressively
Translation memory and CAT-style matching matter because app text repeats constantly.
For a 14,070-word project, using CAT tools reduced total cost from $2,814 to $2,053.75 by applying fuzzy matches and repetitions, according to the earlier Localizera data. The lesson isn't the spreadsheet. It's that reused language shouldn't be billed like brand-new language.
For Django teams, the equivalent is obvious:
- keep approved translations in version control
- don't wipe msgstr values unless source text changed
- detect fuzzy or untranslated entries only
- avoid full-file retranslations
If your process retranslates the whole locale every run, you're throwing away the main cost advantage of machine-assisted workflows.
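A minimal sketch of that reuse rule: fill exact matches from previously approved translations first, and send only the remainder anywhere. The memory dict and entry shape are assumptions for illustration:

```python
def apply_memory(entries, memory):
    # Fill exact matches from approved history; return only what still needs work.
    still_needed = []
    for entry in entries:
        if entry["msgstr"]:
            continue  # already translated and approved: never re-buy
        if entry["msgid"] in memory:
            entry["msgstr"] = memory[entry["msgid"]]
        else:
            still_needed.append(entry)
    return still_needed

memory = {"Save": "Enregistrer", "Cancel": "Annuler"}
entries = [
    {"msgid": "Save", "msgstr": ""},
    {"msgid": "Export as CSV", "msgstr": ""},
]
to_translate = apply_memory(entries, memory)
print([e["msgid"] for e in to_translate])  # ['Export as CSV']
```

The history lives in Git, so the "memory" is just your own approved locale files from previous runs.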
Put terminology in the repo
A portal glossary no one version-controls becomes tribal knowledge with a UI.
A repo glossary is reviewable.
A lightweight TRANSLATING.md or similar file can hold rules like:
- "Workspace" stays as the product term, not "office"
- "Billing" refers to invoices and payment settings
- Preserve placeholders like %(name)s, %s, and {0}
- Keep button labels short
- Don't translate plan names
That doesn't replace review. It reduces avoidable inconsistency.
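Rules like these can also be enforced mechanically. A hypothetical CI check for the do-not-translate terms; the term list and function name are illustrative, mirroring whatever your TRANSLATING.md declares:

```python
DO_NOT_TRANSLATE = ["Workspace", "Pro Plan"]  # keep in sync with TRANSLATING.md

def glossary_violations(msgid, msgstr):
    # A protected term in the English source must appear verbatim in the translation.
    return [t for t in DO_NOT_TRANSLATE if t in msgid and t not in msgstr]

print(glossary_violations(
    "Invite teammates to your Workspace",
    "Invitez des collègues dans votre espace de travail",  # "Workspace" was renamed
))  # ['Workspace']
```

A failed check becomes a review comment on the PR instead of a support ticket after release.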
Tier your review, don't flatten it
Human review still matters. Just not equally for every string.
Good candidates for faster machine-first handling:
- Routine UI copy: nav items, settings labels, empty states
- Low-risk support text: generic instructional copy
- Repeated strings: reused actions and statuses
Keep extra scrutiny for:
- Authentication flows: errors, passwords, account recovery
- Billing: invoices, taxes, payment failures
- Legal or compliance text: anything with regulatory impact
- Locale-sensitive grammar: plural-heavy and gendered UI
That review policy does more for cost control than arguing abstractly about "AI quality."
After you've got the basics in place, it's worth looking at a translation memory workflow built for software strings. This overview of a translation memory program is useful because it frames reuse as an engineering concern, not just a linguist feature.
Watch token spend like any other API bill
Usage-based translation is cheap when the requests are small and targeted.
It gets sloppy when teams send giant prompts, include unchanged content, or retry blindly. The same budgeting habits you use for model-backed product features apply here too. If you're tightening API spend across the board, OpenAI cost management is a good practical reference for setting limits, understanding request patterns, and avoiding surprise bills.
Here's where teams usually go wrong:
- Over-broad runs: translating every locale file every deploy
- No diffing: sending approved strings again
- Weak prompting: forcing retries because output isn't constrained
- No review gates: catching format issues late
A disciplined translation run should look boring. Small diff. Deterministic output. Clean PR.
Keep translation inside CI
At this point, software teams finally beat the old model. Translation becomes another build step, not a separate project.
A healthy flow looks like:
django-admin makemessages --all
python manage.py translate --locale fr
python manage.py translate --locale de
python manage.py compilemessages
Then CI runs tests, and the PR shows exactly what changed.
That framing is the whole point. The value isn't that AI can translate text. The value is that a string change can move through the same delivery path as any other code change.
Treat locale files like code artifacts. The moment they leave your normal review path, cost goes up and trust goes down.
How to Budget for Translation Before Your Next Sprint
Budgeting translation well has less to do with forecasting a big annual number and more to do with setting rules the team can repeat.
You want a model that survives normal product churn.
Audit string churn, not just total volume
Start with the repo you already have.
Check:
- How many locales are active: not aspirational, active
- Which files change often: app UI, templates, emails, docs
- Which strings are brittle: placeholders, HTML, plural forms
- Which paths are high-risk: signup, checkout, billing, auth
That gives you a release-oriented estimate instead of a brochure-style estimate.
Split locales by business value
Not every language needs the same spend or review depth.
Use tiers such as:
| Locale tier | Typical handling | Review expectation |
|---|---|---|
| Primary revenue locales | Machine-first plus human review on key flows | High |
| Secondary growth locales | Machine-first with targeted spot checks | Medium |
| Long-tail support locales | Incremental automation with issue-driven fixes | Focused |
That structure prevents the common failure mode where every locale inherits the most expensive process.
Budget by workflow, not vendor category
When you compare options, ask operational questions first.
| Decision area | What to ask |
|---|---|
| Change frequency | Are strings updated every sprint or occasionally? |
| Source of truth | Does the approved text live in Git or in a portal? |
| Review path | Can engineers and product review diffs in PRs? |
| Risk control | How are placeholders, HTML, and plural forms checked? |
| Ongoing maintenance | Can the team translate only changed strings? |
A low quoted rate with lots of manual handling is often more expensive in practice than a usage-based setup that stays inside the repo.
Write a policy before you need one
Teams typically discuss translation quality only after a bad release.
Write down the operating rules now:
- Use machine-first for routine strings: keep humans for riskier copy
- Require Git-visible diffs: no opaque portal-only edits
- Preserve formatting tokens: block merges on broken placeholders
- Translate incrementally: never reprocess stable strings without a reason
- Compile before merge: catch locale file issues early
That turns localization from ad hoc purchasing into maintenance work your team can reason about.
What to run before the next deploy
A practical pre-deploy loop looks like this:
django-admin makemessages --all
python manage.py translate --locale fr
python manage.py translate --locale de
python manage.py compilemessages
Then review the diff and manually inspect the sensitive paths in the app.
Keep an eye on:
- Short labels: easiest place for bad context
- Plural forms: common source of awkward output
- Gendered phrasing: often needs review in Romance languages
- CJK spacing and segmentation: easy to miss in code review
- Error messages: users see these under stress
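Plural forms, at least, can be screened mechanically before a human looks at them. A sketch assuming plural entries are parsed into an index-to-text mapping (the shape a parser like polib exposes for plural msgstr values, though the entry here is hand-built):

```python
def incomplete_plural_forms(entry):
    # Return indices of plural forms left empty; some locales need more than two.
    forms = entry.get("msgstr_plural", {})
    return [index for index, text in sorted(forms.items()) if not text.strip()]

entry = {
    "msgid": "You have {0} unread notification",
    "msgid_plural": "You have {0} unread notifications",
    "msgstr_plural": {0: "Vous avez {0} notification non lue", 1: ""},
}
print(incomplete_plural_forms(entry))  # [1]
```

Anything this flags goes into the manual review pile; everything else only needs a spot check.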
The best budgeting move for the cost of translation services is to stop treating translation as a periodic purchase. Treat it as a change-based engineering workflow with selective human review.
That's the part that scales.
If your team wants that workflow without a portal, TranslateBot is built for it. It translates Django .po files and model fields from your codebase, runs as a management command, preserves placeholders and HTML, and keeps outputs in Git so you can review them like any other change.