You know the part of Django i18n that nobody enjoys. Not makemessages. Not compilemessages. The bad part starts after extraction, when your .po files fill with empty msgstr entries and you become a human bridge between your editor and some translation tab.
That workflow is bad for the same reason manual deployment is bad. It depends on memory, patience, and not making one tiny syntax mistake at the end of a long day.
If you ever searched for how to translate a Django app, you probably noticed another problem: the internet keeps answering a different question. Many results cover geometric translation, not linguistic translation for software. The demand for automation is visible in developer discussions too. Stack Overflow questions about automating translation after django makemessages have collected significant upvotes, yet there is still no solid CLI-first answer that fits a normal developer workflow.
Django Translation Involves Repetitive Work

A familiar release goes like this.
You add a few strings to templates, forms, model admin messages, and maybe one management command. You run:
python manage.py makemessages -a
Django does its job. Your .po files update. Then you open locale/fr/LC_MESSAGES/django.po and see a wall of this:
msgid "Reset password"
msgstr ""
msgid "Invite sent successfully"
msgstr ""
msgid "%(count)s user selected"
msgid_plural "%(count)s users selected"
msgstr[0] ""
msgstr[1] ""
Nothing is broken yet. Nothing is solved either.
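If you want a quick measure of how much of that wall is left, you can count the empty entries yourself. A minimal stdlib sketch (the single-line-entry assumption is mine; real tooling should use a proper parser such as polib):

```python
import re

def count_untranslated(po_text: str) -> int:
    """Count msgstr (or msgstr[n]) slots that are still empty.

    Minimal sketch: assumes single-line entries, as in the example
    above. Multi-line msgstr blocks would need a real .po parser.
    """
    return len(re.findall(r'^msgstr(?:\[\d+\])?\s+""\s*$', po_text, re.MULTILINE))

po = '''msgid "Reset password"
msgstr ""

msgid "%(count)s user selected"
msgid_plural "%(count)s users selected"
msgstr[0] ""
msgstr[1] ""
'''
print(count_untranslated(po))  # 3 empty slots across 2 entries
```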
The bad manual loop
For a lot of teams, the next step is still copy, paste, translate, paste back, repeat.
That sounds manageable with five strings. It gets ugly with fifty. It gets worse when the strings contain placeholders, HTML, or product terms that shouldn't change. A generic translator often turns technical text into something that reads fine to a human and fails at runtime.
Common failure points look like this:
- Placeholders break: `%(name)s` becomes `% (name) s`, disappears, or moves into a sentence that no longer matches the format.
- HTML gets mangled: tags come back reordered, escaped, or translated when they should stay untouched.
- Terminology drifts: one release translates "workspace" one way, the next release picks another.
- You redo work: unchanged strings get reviewed again because nothing in the process tracks what changed.
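The first failure class, broken placeholders, is cheap to guard against mechanically. A hedged sketch of such a check (the regex and function name are my own, not part of any tool mentioned here):

```python
import re

# Matches python-format placeholders like %(name)s, %s, %d,
# and brace-style ones like {0} or {count}.
PLACEHOLDER = re.compile(r'%\([^)]+\)[sd]|%[sd]|\{[^}]*\}')

def placeholders_match(source: str, translation: str) -> bool:
    """True when both strings contain the same multiset of placeholders."""
    return sorted(PLACEHOLDER.findall(source)) == sorted(PLACEHOLDER.findall(translation))

print(placeholders_match("%(count)s users selected",
                         "%(count)s utilisateurs sélectionnés"))  # True
print(placeholders_match("%(name)s invited you",
                         "% (name) s vous a invité"))             # False
```

A check like this runs well in CI: it turns "someone eyeballed the diff" into a failing build.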
Portals solve one problem and create another
SaaS localization platforms can help, but for many Django projects they add friction.
You push strings to a web UI. Someone reviews there. Someone exports later. Somebody remembers to pull the result back into Git. The source of truth becomes split between your repo and a portal. That is fine for larger orgs with localization staff. It's annoying for a solo developer shipping from one branch on a Friday evening.
If your app already lives in Git, your translations should live there too.
The developer-native answer is a CLI flow that works with the files Django already uses. No copy-paste. No browser tab as the center of the process. No weird export step before deploy.
One command should detect changed strings, write back into locale/.../django.po, and leave you with a normal diff to review. That is the difference between "translation as a side task" and "translation as part of shipping code."
Your First Automated Translation in Under Two Minutes

If you want the fastest path from empty msgstr values to usable translations, keep it boring. Install the CLI, set your API key, run the command, inspect the diff.
Install the tool
pip install translatebot
If your project uses uv or Poetry, install it the same way you install any other dev dependency.
Then set your API key:
export OPENAI_API_KEY="your-api-key"
If you want the exact setup steps in one place, the TranslateBot quickstart is the shortest path.
Start with a normal Django extraction
Run your usual extraction first:
python manage.py makemessages -l fr
Now translate the file:
translate-po-files locale/fr/LC_MESSAGES/django.po --target-language fr
That keeps the workflow simple. Django extracts. The CLI fills translations. You review the diff. Then you compile:
python manage.py compilemessages
Before and after
A small .po file before translation:
#: templates/account/login.html:12
msgid "Sign in"
msgstr ""
#: templates/account/login.html:18
#, python-format
msgid "Welcome back, %(name)s"
msgstr ""
#: templates/billing/upgrade.html:22
msgid "Upgrade plan"
msgstr ""
After one command:
#: templates/account/login.html:12
msgid "Sign in"
msgstr "Se connecter"
#: templates/account/login.html:18
#, python-format
msgid "Welcome back, %(name)s"
msgstr "Bon retour, %(name)s"
#: templates/billing/upgrade.html:22
msgid "Upgrade plan"
msgstr "Mettre à niveau l’abonnement"
That is the whole point. No browser. No export. No manual patching in a spreadsheet.
What works well in practice
The nicest part of a CLI-first flow is that it matches how Django developers already think. You don't need a new source of truth. You just operate on django.po files directly.
A simple routine looks like this:
1. Extract strings:
python manage.py makemessages -a
2. Translate target locales:
translate-po-files locale/fr/LC_MESSAGES/django.po --target-language fr
translate-po-files locale/de/LC_MESSAGES/django.po --target-language de
3. Review the diff:
git diff locale/
4. Compile messages:
python manage.py compilemessages
What doesn't work
A few habits create more pain than they save.
| Approach | What happens |
|---|---|
| Copying each string into a chat window | You lose file structure, comments, and formatting context |
| Translating whole .po files with a generic prompt | Placeholders and plural rules become easy to break |
| Reviewing only in a web portal | Your deploy artifact still lives in Git, so review gets split |
| Re-translating every locale from scratch each release | You waste time on unchanged content |
A translation workflow should behave like formatting or tests. Run it locally, run it in CI, inspect the result in Git.
That is the reason a command-line tool fits Django better than most localization products aimed at content teams. The code already tells you where strings live. Your repo already tells you what changed. Your translator should work with both instead of pulling you out of them.
Achieving Consistency with a TRANSLATING.md Glossary
One command gets you from empty strings to translated strings. It does not guarantee consistent language across releases.
Consistency is where most AI-assisted setups fall apart. The model sees one string in isolation, chooses a reasonable translation, then chooses a different reasonable translation next week for the same concept. Users notice. Reviewers notice faster.
Professional translation workflows treat pre-translation analysis as mandatory. Translators first determine context, tone, and domain terminology. That analysis step is described as foundational in this overview of the translation process and ISO 17100-style preparation. In a Django repo, the practical version of that step is a version-controlled TRANSLATING.md file.

Why a glossary beats prompts in your head
A developer usually knows which terms are special. The problem is that this knowledge stays informal.
You know "workspace" is a product object, not office space. You know "commit" refers to Git. You know "staff" means admin users in Django, not employees in a hotel. If you don't write that down, every translation run starts cold.
A TRANSLATING.md file fixes that by keeping decisions next to the code.
# Translation rules
## Product terms
- TranslateBot stays untranslated.
- Workspace means a project area inside the app.
- Staff refers to Django admin-enabled users.
## Technical strings
- "Commit" refers to a Git commit, not a legal commitment.
- "Push" refers to sending commits to a remote Git repository.
## Style
- Use informal second person in French.
- Keep button labels short.
- Do not translate placeholder names.
## Protected text
- Never translate HTML tags.
- Preserve all `%s`, `%(name)s`, `{0}`, `{count}` placeholders exactly.
That file is more useful than a long one-off prompt because it survives the next release, the next teammate, and the next language.
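How might a CLI actually consume that file? One plausible approach, sketched below as an assumption rather than TranslateBot's actual implementation, is to prepend the glossary to the model instructions on every run:

```python
import tempfile
from pathlib import Path

def build_system_prompt(glossary_path: str, target_language: str) -> str:
    """Prepend repo glossary rules to the translation instructions.

    Hypothetical sketch of how a tool could use TRANSLATING.md.
    The function name and prompt wording are assumptions.
    """
    rules = Path(glossary_path).read_text(encoding="utf-8")
    return (
        f"Translate UI strings to {target_language}. "
        "Preserve placeholders and HTML exactly.\n\n"
        f"Project-specific rules:\n{rules}"
    )

# Demo with a throwaway glossary file.
with tempfile.TemporaryDirectory() as tmp:
    glossary = Path(tmp) / "TRANSLATING.md"
    glossary.write_text("- TranslateBot stays untranslated.\n", encoding="utf-8")
    prompt = build_system_prompt(str(glossary), "fr")
    print("TranslateBot stays untranslated." in prompt)  # True
```

Because the file is read fresh on each run, a glossary fix in a pull request takes effect on the very next translation pass.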
What to put in the file
Often, teams overthink this. Start with the terms that break trust when they drift.
Use entries like these:
- Brand names: product names, module names, plan names, internal feature labels.
- Ambiguous words: terms like "commit", "draft", "issue", "queue", "token", "plan".
- UI style rules: formal vs informal voice, sentence case, punctuation rules.
- Protected syntax: placeholders, HTML, Markdown, template syntax.
- Audience notes: admin-facing text often needs different wording than customer-facing text.
A good glossary is small and opinionated. It should solve recurring mistakes, not document every noun in the app.
For more examples of what belongs in a glossary, this glossary guide is useful reading.
A concrete example
Take the string:
msgid "Commit changes"
msgstr ""
Without context, a translator can reasonably choose a legal or moral meaning in some languages. In a developer tool, that is wrong. Add one note to TRANSLATING.md:
- "Commit" refers to a Git action. Translate with the standard Git term used by developers in the target language.
Now the model has a decision rule, not just text.
Another example:
msgid "Plan"
msgstr ""
Is that a billing plan, a roadmap, or a user intention? The source string is weak. A glossary helps, but sometimes the ideal fix is to improve the source itself:
msgid "Subscription plan"
msgstr ""
That is a good reminder. Better source strings produce better translations. AI is not magic. It still depends on the quality of your input.
If a string is ambiguous in English, it will stay ambiguous in every other language. Fix the source before you blame the translator.
Keep it in Git and review it like code
A key advantage of TRANSLATING.md is not just quality. It is repeatability.
A note added today affects future translations without another meeting or another portal setting hidden in a UI. You can review glossary changes in pull requests. You can branch them. You can revert them. That is exactly how configuration should work in an engineering team.
Translation memory and terminology management matter here in practical terms. Proper TM management can reduce translation time by up to 40% on repeated content, according to BayanTech's explanation of TM reuse in translation workflows. In a Django app with recurring UI strings, a stable glossary and reuse of existing translations save real review time.
Safely Translating Django Placeholders and HTML
The biggest translation bug in a Django app is rarely a slightly awkward sentence.
It is a broken placeholder.
A button label that sounds clumsy is annoying. A translated string that changes %(name)s, {0}, %s, or an HTML tag can break output, throw formatting errors, or produce malformed pages. That is why developer-facing translation tooling has to treat syntax as untouchable.
Where generic AI tools fail
Paste this into a generic chat app:
msgid "<strong>%(name)s</strong> invited you to %(workspace)s."
msgstr ""
You might get lucky. You might also get a translation that changes spacing, moves variables into a bad order, or rewrites HTML in a way your app didn't expect.
That risk is not theoretical. Recent benchmarks reported error rates as high as 40% in AI translations that break app functionality because format strings and HTML were mishandled, according to Lokalise's discussion of AI translation quality.
Django developers already know this pain. It shows up in issue threads, bug reports, and code reviews where someone has to repair a translated format string by hand.
What safe output looks like
Safe translation keeps structure intact while changing only human-readable text.
For example:
msgid "%(count)s file deleted"
msgid_plural "%(count)s files deleted"
msgstr[0] "%(count)s fichier supprimé"
msgstr[1] "%(count)s fichiers supprimés"
Or with HTML:
msgid "Click <a href=\"%(url)s\">here</a> to reset your password."
msgstr "Cliquez <a href=\"%(url)s\">ici</a> pour réinitialiser votre mot de passe."
The placeholders remain exact. The tag remains exact. Only the visible text changes.
That sounds simple. It is not simple if your process relies on ad hoc prompts or manual copy-paste.
Production-safe rules
A translation step for .po files should enforce a few essential requirements:
- Preserve placeholders exactly: `%(name)s`, `%s`, `{0}`, `{count}` must survive unchanged.
- Preserve markup: HTML should stay structurally intact.
- Respect plural blocks: singular and plural entries need matching grammatical treatment.
- Write back to the original file format: comments, flags, and ordering should stay reviewable.
If your current workflow cannot guarantee those four things, it is not safe enough for unattended use.
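One cheap way to enforce the first rule before deploy is to render every translated python-format string with dummy values, similar in spirit to what `msgfmt --check-format` catches at compile time. A hypothetical helper, assuming `%(name)s`-style named placeholders:

```python
import re

def smoke_test_format(msgid: str, msgstr: str) -> bool:
    """Try rendering a translated python-format string with dummy values.

    Sketch only: assumes %(name)s-style named placeholders. A mangled
    translation raises at render time, which is exactly the bug class
    we want to catch before it reaches production.
    """
    names = re.findall(r'%\((\w+)\)s', msgid)
    dummy = {name: "x" for name in names}
    try:
        msgstr % dummy
        return True
    except (KeyError, ValueError, TypeError):
        return False

print(smoke_test_format("Welcome back, %(name)s", "Bon retour, %(name)s"))    # True
print(smoke_test_format("Welcome back, %(name)s", "Bon retour, % (name) s"))  # False
```

Note the limits of the sketch: a translation that silently drops a placeholder still renders, so pair this with a placeholder-set comparison for full coverage.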
The reason I prefer a CLI over generic chat prompts is simple. A proper CLI can be built around .po file constraints instead of pretending translated UI strings are just free text. The .po file usage docs show the kind of file-level workflow developers need.
Treat placeholders like code, not prose. They are part of your program.
One more rule that saves pain
Do not ask the translator to guess what is translatable inside mixed strings. Where the UI allows it, pull markup out of the translatable text.
Bad:
msgid "Your %(plan)s plan expires on <strong>%(date)s</strong>."
Better, with the emphasis applied in the template instead of inside the string:
msgid "Your %(plan)s plan expires on %(date)s."
msgstr ""
You cannot always keep markup out of strings, but dense mixed strings are harder to review and easier to break. Cleaner source text makes safer translation possible.
Automating Your Workflow with CI/CD Integration
Running translation manually is fine for a small release. Running it in CI is better because the process becomes repeatable.
That matters more than convenience. If translations happen in the same pipeline as tests and builds, they stop being a side task somebody remembers later. They become part of shipping.

A practical GitHub Actions setup
A simple workflow can do four things:
- Install dependencies
- Extract new strings
- Translate updated `.po` files
- Commit the result back to the branch
Example:
name: Update translations

on:
  push:
    branches:
      - "feature/**"
      - "main"

jobs:
  translate:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install translatebot

      - name: Extract Django messages
        run: |
          python manage.py makemessages -l fr
          python manage.py makemessages -l de

      - name: Translate PO files
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          translate-po-files locale/fr/LC_MESSAGES/django.po --target-language fr
          translate-po-files locale/de/LC_MESSAGES/django.po --target-language de

      - name: Compile messages
        run: python manage.py compilemessages

      - name: Commit translation updates
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add locale/
          git diff --cached --quiet || git commit -m "Update translations"
          git push
This is not fancy. That is why it works.
Two useful patterns
Some teams want translation on every feature branch. Others want it only on merge to main. Both are valid.
Use branch-level automation if you want contributors to see translated diffs early. Use merge-time automation if you want to keep API use centralized and reduce noise in feature branches.
A good split looks like this:
| Team setup | Better trigger |
|---|---|
| Solo project | On push to main |
| Small team with active review | On pull request or feature branch push |
| Open source project | Manual workflow dispatch plus maintainer review |
Keep review in the loop
CI should not mean auto-merge and forget.
A healthy translation pipeline still gives humans a chance to inspect changes before deploy. The advantage of file-based translation is that review stays in normal code review. You do not need a second approval process in a vendor portal.
That also keeps the feedback cycle clean. If a reviewer sees an odd translation, they can fix the source string, update TRANSLATING.md, or patch the .po entry in the same pull request.
Why this setup ages well
Most i18n workflows get painful over time because they add hidden state. A portal holds one set of rules. A spreadsheet holds another. The repo holds the deployable files. Nobody is sure which one is current.
CI with repo-based translation avoids that problem. Your branch contains the source, the glossary, and the generated output. That is easy to audit and easy to reproduce.
If you want the deployment process to feel boring in the best way, this is how to build a translation workflow that fits engineering instead of fighting it.
Reviewing Translations and Managing Costs
Automation helps with typing. It does not replace judgment.
You still need review. The good news is that review becomes easier when translations arrive as plain text changes in Git rather than hidden updates in a portal.
Review in a pull request, not in a separate system
A .po diff is readable enough for normal code review if you keep the process tight.
For example:
-msgid "Archive project"
-msgstr ""
+msgid "Archive project"
+msgstr "Archiver le projet"
That gives reviewers context they already understand. They can comment inline. They can request a wording change. They can compare the source string with nearby template or Python changes in the same pull request.
A simple review checklist works well:
- Check meaning first: does the translation say the same thing as the source?
- Check product terminology: does it match the glossary?
- Check syntax: placeholders, plural entries, HTML, and punctuation.
- Check UI fit: does the string look too long for the component?
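The last checklist item can be partially automated with a crude length heuristic. A sketch, with a ratio I chose arbitrarily for illustration:

```python
def flag_long_translations(pairs, max_ratio=1.8):
    """Flag translations much longer than their source string.

    Heuristic sketch: French and German expansions can overflow
    buttons and badges. The 1.8 ratio is an arbitrary starting point;
    tune it per component type.
    """
    flagged = []
    for msgid, msgstr in pairs:
        if msgstr and len(msgstr) > max_ratio * len(msgid):
            flagged.append((msgid, msgstr))
    return flagged

pairs = [
    ("Save", "Enregistrer"),                            # 11 chars vs 4: flagged
    ("Upgrade plan", "Mettre à niveau l'abonnement"),   # 28 vs 12: flagged
    ("Sign in", "Se connecter"),                        # 12 vs 7: within budget
]
print(flag_long_translations(pairs))
```

A report like this does not block a merge on its own; it just points reviewers at the strings most likely to need a UI check.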
Use quality criteria that matter
Professional translation review often uses an 8-point checklist covering source comprehension, natural flow in the target language, and precision for numbers and units. It treats accuracy and cultural appropriateness as the two most important factors, with examples of how mistakes in technical or financial content create serious downstream problems, as described in Blend's translation evaluation checklist.
For Django apps, the same idea applies in simpler form.
If your app shows measurements, money, dates, or compliance language, review those strings with extra care. "Close enough" is not enough there. For generic UI labels, lighter review is usually fine.
Review should focus on risk. Billing, legal, and technical strings need more scrutiny than "Save" or "Cancel."
Keep costs under control by translating only what changed
The fastest way to waste money is to reprocess stable strings every release.
A good workflow detects untranslated or changed entries and leaves the rest alone. That keeps API usage tied to product changes instead of full-file churn. It also means reviewers spend time on fresh text, not on old strings that were already accepted.
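In code, "translate only what changed" usually reduces to a filter over parsed entries: an empty msgstr means a new string, a fuzzy flag means the source changed. A sketch over a plain dict representation (mirroring fields a parser such as polib exposes; the field names here are my own):

```python
def needs_translation(entry: dict) -> bool:
    """Decide whether an entry should be sent to the API.

    Sketch: translate only empty or fuzzy entries, and skip
    everything already accepted in review.
    """
    return entry["msgstr"] == "" or entry.get("fuzzy", False)

entries = [
    {"msgid": "Save", "msgstr": "Enregistrer"},                                # stable: skip
    {"msgid": "Archive project", "msgstr": ""},                                # new: translate
    {"msgid": "Invite sent", "msgstr": "Invitation envoyée", "fuzzy": True},   # changed: retranslate
]
todo = [e["msgid"] for e in entries if needs_translation(e)]
print(todo)  # ['Archive project', 'Invite sent']
```

The same filter caps cost: API spend scales with the release diff, not with the size of the whole catalog.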
Translation memory pays off here too: per the same BayanTech figure, reuse of previously translated content can cut translation time by up to 40%. In practical Django terms, apps repeat themselves a lot. Buttons, validation errors, labels, and account flows recur across pages.
A small operating model that works
For small teams, I recommend this split:
- Developers own extraction and generation: run makemessages, generate translations, open the PR.
- Native speakers review high-impact locales: focus review effort where users notice wording.
- Glossary updates happen in the same repo: fix the rule once instead of repeating comment-thread debates.
That model is cheap, transparent, and easy to maintain. It also scales better than manual copy-paste because the review burden stays attached to actual changes, not to the whole translation corpus.
Stop Copying and Pasting, Start Committing Translations
The useful mindset shift is simple. Treat translations like code artifacts.
They belong in the repo. They should be generated by repeatable commands. They should be reviewed in pull requests. They should move through CI the same way migrations, assets, and tests move through CI.
Manual translation by copy-paste is not just annoying. It creates hidden work, inconsistent terminology, and fragile syntax handling. A repo-based workflow fixes those problems by keeping the whole process visible. It also gets cheaper over time. Proper terminology and translation memory management can reduce translation time by up to 40% on repeated content, based on BayanTech's explanation of TM reuse in translation workflows.
Pick one Django project with a locale/ directory. Run extraction, generate one target language, inspect the Git diff, and compile messages. That ten-minute test will tell you more than another month of manual copy-paste ever will.
If you want a CLI that translates Django .po files, preserves placeholders, writes back to your locale files, and fits a Git-based workflow, try TranslateBot. Start with one locale, add a small TRANSLATING.md, and review the diff like any other code change.