
Automate Django i18n: Technical Translation Services 2026

2026-04-20 15 min read


You run python manage.py makemessages, open locale/fr/LC_MESSAGES/django.po, and now your release is blocked by a wall of empty msgstr "" entries.

That’s the point where “technical translation services” stops being a generic business term and becomes your problem. Not in the abstract. In your repo, in your deploy pipeline, with real strings that can break placeholders, plural rules, and UI layout if they’re handled badly.

Django already does its part well. gettext_lazy, pgettext, ngettext, locale directories, and LocaleMiddleware are solid. The friction starts after extraction. This gap is frequently patched with copy-paste, spreadsheets, contractor handoffs, or a TMS portal that nobody on the engineering side wants to open.

That Moment After You Run Makemessages

The failure mode is familiar. You add a few features, wrap strings correctly, regenerate messages, and discover that “a few features” touched dozens of strings across validation errors, buttons, emails, admin labels, and onboarding flows.

python manage.py makemessages -l fr -l de

Now you own a translation backlog.

For engineering teams, “technical translation services” really means handling structured product text without breaking the code around it. That includes UI strings, developer-facing docs, transactional emails, and model content that changes with the product. It’s closer to build tooling than to brochure translation.

There’s a reason this category matters. Tech/Engineering accounts for 34% of expertise areas among translators using CAT tools, and the broader translation services market reached USD 72.95 billion globally in 2025, with a projection to USD 96.21 billion by 2032, according to Redokun’s translation statistics roundup. That doesn’t make your .po files less annoying. It does confirm the problem is large, recurring, and expensive when handled badly.

What usually breaks first

The common failure isn’t language quality. It’s workflow.

Practical rule: If your translation process can’t survive a normal sprint with multiple merges to main, it isn’t production-ready.

A good workflow treats localization as part of delivery. Strings get extracted, translated, reviewed, compiled, and shipped with the same repeatability as migrations or static asset builds.

What Technical Translation Actually Means for Code

A marketing page can tolerate stylistic freedom. A Django app usually can’t.

When you translate UI strings, you’re not just converting words. You’re preserving placeholders, plural logic, context, and sometimes markup. That’s the part many non-technical translation workflows get wrong.


A Django string is data with rules

Take a normal greeting:

#: apps/accounts/views.py:42
msgid "Welcome back, %(name)s"
msgstr ""

A valid translation must keep %(name)s intact. If that token changes, your runtime formatting breaks.
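A quick way to guard that contract is to compare the placeholder tokens on both sides of a translation. A minimal sketch, using a regex that covers the common Django token shapes (extend the pattern for your own formats):

```python
import re

# Matches %(name)s-style named tokens, bare %s/%d, and {0}-style braces.
PLACEHOLDER_RE = re.compile(r"%\([^)]+\)[sd]|%[sd]|\{[^}]*\}")

def placeholders(text: str) -> set[str]:
    return set(PLACEHOLDER_RE.findall(text))

def placeholders_match(msgid: str, msgstr: str) -> bool:
    # The translation must carry exactly the same tokens as the source.
    return placeholders(msgid) == placeholders(msgstr)

placeholders_match("Welcome back, %(name)s", "Bon retour, %(name)s")  # True
placeholders_match("Welcome back, %(name)s", "Bon retour, %(nom)s")   # False
```

A check like this is cheap enough to run on every changed entry before you compile.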

Plural forms add another layer:

#: apps/billing/views.py:88
#, python-format
msgid "%(count)s invoice"
msgid_plural "%(count)s invoices"
msgstr[0] ""
msgstr[1] ""

That isn’t just a sentence pair. It’s locale-specific grammar wired into Django’s gettext machinery. If you treat it like plain copy, you’ll ship bad output or invalid message files.
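Python’s stdlib gettext module, which Django wraps, shows the mechanics. With no catalog installed it falls back to English two-form plurals; a compiled .mo supplies each locale’s own rules via its Plural-Forms header:

```python
import gettext

def invoice_label(count: int) -> str:
    # ngettext selects the plural form for the active catalog, then the
    # %-formatting fills in the count. Django's ngettext behaves the same way.
    return gettext.ngettext("%(count)s invoice", "%(count)s invoices", count) % {
        "count": count
    }

invoice_label(1)  # "1 invoice"
invoice_label(3)  # "3 invoices"
```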

Context matters too. “Open” in a file menu and “Open” as an adjective aren’t the same string in many languages. That’s where pgettext earns its keep.

from django.utils.translation import pgettext, gettext_lazy as _

status_label = pgettext("support ticket status", "Open")
cta_label = _("Open")

What works and what doesn't

General translation tools often do fine on long prose. They struggle more on short UI labels where there’s no context. They also tend to mishandle tiny but expensive details like %s, {0}, HTML tags, or sentence fragments that only make sense inside a template.

If your team still blurs the line between transcription and translation, this short guide on transcribe vs translate is useful because it frames the basic distinction clearly before you get into software-specific localization.

For code-facing work, the better pattern pairs machine-assisted translation with terminology control and structural validation.

According to Translators USA’s overview of technical document translation services, 88% of professional translators employ at least one CAT tool, and those tools boost productivity by at least 30%. The same source notes that combining subject-matter experts with glossaries can deliver 99%+ terminology accuracy. For Django teams, the practical takeaway is obvious. Context and terminology control matter more than having a fancy portal.

The strings that need extra care

Some categories need review even if the rest of your pipeline is automated: legal and compliance text, strings that carry brand nuance, and high-visibility copy where tone matters more than throughput.

If you want a broader non-marketing view of where technical content translation fits, this piece on technical document translation is worth skimming.

The translation is only half the job. The rest is preserving the contract between your source string and the code that renders it.

The Spectrum of Translation Options

Teams typically end up in one of four camps. None is universally wrong. They just fit different constraints.


Translation Workflow Comparison

| Approach | Cost Model | Speed | Developer Workflow | Consistency Control |
| --- | --- | --- | --- | --- |
| Manual copy-paste | Team time | Slow | Leaves terminal, often edits .po by hand | Weak unless someone polices terms |
| Human agency or freelancer | Project or word-based pricing | Slower for iterative releases | File handoff, review cycles, external coordination | Good when the same linguists stay on the account |
| Full TMS platform | Subscription plus usage | Good once configured | Usually portal-centric, with Git as a secondary concern | Strong if glossaries and TM are maintained |
| AI CLI workflow | Usage-based API spend | Fast for recurring updates | Runs in terminal and CI, writes back to repo | Good when glossary and validation are enforced |

Manual copy-paste is fine until it isn't

Small side projects often start here. You export strings, paste them into a chat window or doc, then paste results back into .po files.

That works for one locale and a small surface area. It falls apart when your app changes weekly.

The hidden cost isn’t money. It’s interruption. Every translation update becomes a mini project. Nobody remembers what changed last time. Nobody wants to review language diffs mixed with hand edits from a browser tab.

Human translators still matter

For regulated docs, legal text, support center content, or high-visibility landing pages, human review is still the right call. If the string carries compliance risk or brand nuance, buy the review.

The problem is fit. Most agencies are built around documents and batches, not around a dev team pushing product changes every day. You can absolutely make that model work, but you’ll need clean exports, stable glossaries, and someone on your side managing the loop.

TMS platforms are good at management, less good at living in your repo

A proper TMS gives you translation memory, review workflows, glossary management, permissions, and vendor coordination. That’s useful if you have many stakeholders, many locales, and a non-engineering localization team.

For a Django team, the trade-off is usually workflow friction. Your source of truth stops being the repo and starts being a web app. Engineers now have to sync strings across systems, deal with web editors, and trust another layer to preserve placeholders correctly.

That’s not a deal-breaker. It just changes the ownership model.

Decision filter: If engineers own localization end to end, pick a workflow that starts in the terminal and ends in Git.

AI CLI tooling fits the way Django teams already work

This is the under-served middle. You want automation, but you don’t want a subscription platform and you don’t want to hand every sprint’s strings to an external vendor.

A CLI-based workflow is a better fit when engineers own localization end to end, strings change every sprint, and you want diffs, checks, and automation instead of portal state.

That model also maps well to self-hosted environments and restricted networks. If your app sits behind a firewall, a browser-first localization process often becomes annoying fast.

Some teams go further and experiment with custom model integrations instead of fixed vendor stacks. If you’re evaluating that route, this primer on the Hugging Face API is a useful reference for understanding how hosted model access differs from traditional translation tooling.

The trade-off nobody mentions enough

Portal workflows optimize for coordination. CLI workflows optimize for change.

If you release quarterly and have a localization manager, a TMS can make sense. If you deploy often and your app strings are part of active development, you want something that behaves like the rest of your toolchain. That usually means commands, diffs, checks, and automation instead of tickets and portal state.

Core Components of a Modern Workflow

The enterprise terms sound heavier than they are. Under the hood, the useful pieces are familiar engineering ideas.


Translation memory is a cache

Translation Memory, or TM, is just a store of approved source and target pairs. If “Billing” was already translated last month, you shouldn’t pay to translate it again this week.

That matters because repetitive technical content benefits heavily from reuse. According to Wxrks’ translation industry trends summary, TM can reduce translation time and costs by up to 50%, and high-repetition sectors can reach 80% savings. The same source says translators using translation software report productivity increases of at least 30%.

For Django projects, repeated strings are everywhere. Navigation labels, validation messages, admin actions, email footers, and account status text all recur across releases.

A glossary belongs in version control

“Terminology management” sounds like a vendor feature. In practice, it should be a file your team owns.

Something like TRANSLATING.md or a locale glossary in YAML is enough if it answers the core questions: which product terms stay untranslated, what the canonical translation for each recurring term is, and what tone the UI should use.

A repo-based glossary wins because engineers can review changes with code, product can propose edits, and nobody is locked into a portal.

Keep your glossary close to the source strings. If it lives in another system, it goes stale.

QA is mostly about not breaking things

The first QA layer for software strings is structural, not stylistic.

Check for:

- placeholders like %(name)s, %s, and {0} arriving intact
- every msgstr[n] slot filled on plural entries
- HTML tags preserved exactly
- fuzzy entries that still need a human look
- files that still compile cleanly with msgfmt

Then use Git for the human part. A pull request review is a better place to catch “that wording feels off in onboarding” than a buried comment in a TMS portal.
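The first structural gate can be GNU msgfmt itself, which compilemessages also relies on. A minimal sketch that shells out to msgfmt --check across a locale tree (assumes gettext is installed and a POSIX /dev/null):

```python
import subprocess
from pathlib import Path

def check_po_files(locale_dir: str = "locale") -> list[str]:
    """Run msgfmt --check on every .po file; return failure messages."""
    failures = []
    for po in Path(locale_dir).rglob("*.po"):
        result = subprocess.run(
            # --check validates syntax, plural headers, and format strings
            # for entries flagged python-format; -o /dev/null discards output.
            ["msgfmt", "--check", "-o", "/dev/null", str(po)],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            failures.append(f"{po}: {result.stderr.strip()}")
    return failures
```

Wire this into CI so a broken plural block or mangled placeholder fails the build instead of shipping.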

Human review still has a place

Even the best automated flow should leave room for selective review. You don’t need a linguist on every string. You do need one on strings that are sensitive, ambiguous, or customer-facing enough to carry real risk.

That’s the healthy split. Let automation handle extraction, fill, reuse, and consistency. Let humans handle judgment.

Automating i18n in Your Django Project

A production-ready localization loop should look boring. That’s a compliment.

You want one repeatable path from changed source strings to committed locale files. No side channels. No hidden web edits. No “someone updated French in the portal but forgot to sync the repo.”

A hand-drawn diagram illustrating the Django project internationalization process from settings configuration to localized application output.

Start with standard Django i18n

None of this works if your extraction layer is messy. Stick to Django’s normal patterns from the official internationalization docs.

Use gettext_lazy, pgettext, and ngettext correctly.

from django.db import models
from django.utils.translation import gettext_lazy as _

class Invoice(models.Model):
    class Status(models.TextChoices):
        DRAFT = "draft", _("Draft")
        PAID = "paid", _("Paid")

In templates, use {% translate %} when appropriate.

{% load i18n %}
<button>{% translate "Save changes" %}</button>

Your locale layout should stay conventional:

locale/
  fr/
    LC_MESSAGES/
      django.po
  de/
    LC_MESSAGES/
      django.po

Keep extraction and compilation explicit

The baseline loop is still Django’s own commands:

python manage.py makemessages -l fr -l de
python manage.py compilemessages

The missing piece is filling untranslated entries in a way that respects placeholders, context, and diffs.

A developer-first approach to technical translation services is still rare in mainstream content. Most material focuses on static documents and human-led handoffs. That gap matters because software strings need reproducible automation. As noted by GTS in its technical translation services discussion, developer tooling for code-embedded strings is under-served, while AI-driven CLI workflows can reduce costs to pennies per string and keep localization reviewable in Git.

Build the loop around changed strings only

You don’t want to retranslate an entire locale on every commit. You want to touch only entries that are new or changed.

That gives you three benefits:

  1. smaller diffs
  2. lower API usage
  3. less review noise

A practical sequence looks like this:

python manage.py makemessages -l fr -l de
python manage.py translate --locale fr --locale de
python manage.py compilemessages

The exact translation command depends on your tool, but the pattern is stable. Extract. Translate only deltas. Compile. Commit.
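The delta idea fits in a few lines. This sketch works on an in-memory catalog; in practice you would load and save entries with a .po parser such as polib, and translate_batch stands in for whatever backend you call:

```python
def fill_untranslated(entries: list[dict], translate_batch) -> int:
    """Send only entries with an empty msgstr to the backend; return the count."""
    pending = [e for e in entries if not e["msgstr"]]
    translations = translate_batch([e["msgid"] for e in pending])
    for entry, msgstr in zip(pending, translations):
        entry["msgstr"] = msgstr
    return len(pending)

catalog = [
    {"msgid": "Save changes", "msgstr": "Enregistrer"},  # already translated
    {"msgid": "Billing", "msgstr": ""},                  # new this sprint
]
# Only "Billing" is sent; the existing translation is left untouched.
fill_untranslated(catalog, lambda ids: ["FR:" + s for s in ids])  # 1
```

Touching only the delta is what keeps diffs small and review noise low.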

Put glossary rules in the repo

A lightweight glossary file beats tribal knowledge. For example:

# TRANSLATING.md

## Product terms
- TranslateBot stays untranslated
- Workspace = Espace de travail in French
- Billing = Facturation in French, not Paiement

## Tone
- Use informal second person in French UI copy
- Keep button labels short

## Placeholder rules
- Never alter %(name)s, %s, or {0}
- Preserve HTML tags exactly

That file is useful even if you switch providers later. The asset is your terminology, not the vendor.
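A repo glossary can also back an automated check. A minimal sketch with the terms from the example above hard-coded (a real check would parse the glossary file instead):

```python
GLOSSARY_FR = {
    "Billing": "Facturation",
    "Workspace": "Espace de travail",
}
DO_NOT_TRANSLATE = {"TranslateBot"}

def glossary_violations(msgid: str, msgstr: str) -> list[str]:
    """Flag translations that ignore required term pairs or protected names."""
    problems = []
    for source_term, target_term in GLOSSARY_FR.items():
        if source_term in msgid and target_term not in msgstr:
            problems.append(f"'{source_term}' should be '{target_term}'")
    for term in DO_NOT_TRANSLATE:
        if term in msgid and term not in msgstr:
            problems.append(f"'{term}' must stay untranslated")
    return problems

glossary_violations("Billing settings", "Paramètres de paiement")
# ["'Billing' should be 'Facturation'"]
```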

A GitHub Actions example

If your team ships through GitHub, the automation can live in CI. The key is to avoid loops and commit only when locale files changed.

name: i18n

on:
  push:
    branches: [main]

jobs:
  translate:
    runs-on: ubuntu-latest
    permissions:
      contents: write

    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install gettext
        run: sudo apt-get update && sudo apt-get install -y gettext

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Extract messages
        run: python manage.py makemessages -l fr -l de

      - name: Translate changed strings
        run: python manage.py translate --locale fr --locale de
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

      - name: Compile messages
        run: python manage.py compilemessages

      - name: Commit updated locale files
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add locale
          git diff --cached --quiet || git commit -m "Update locale files"
          git push

A few hard-won notes:

- The commit step pushes with the default GITHUB_TOKEN, and pushes made with that token don’t trigger new workflow runs, which is what keeps this from looping.
- The git diff --cached --quiet guard means no empty commit lands when nothing in locale changed.
- makemessages shells out to xgettext, so the gettext package must be installed on the runner.
- Keep the API key in repository secrets, never in the workflow file.

Your localization workflow should produce normal Git diffs. If changes only exist in a portal, you’ve made review harder than it needs to be.

Model fields need a separate decision

.po files cover interface strings. User-generated or CMS-like model content is a different problem.

You can automate model field translation too, but treat it separately from Django’s gettext pipeline. Product copy, help text, and seeded content often need different review rules than buttons and errors. Don’t mix the two just because they both involve languages.
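One lightweight pattern for that separate track, sketched framework-free (not a built-in Django feature): store per-locale values in a JSON structure, for example on a JSONField, and resolve with a fallback at read time.

```python
def resolve_translation(values: dict[str, str], lang: str, default: str = "en") -> str:
    """Return the value for lang, falling back to the default locale."""
    return values.get(lang) or values.get(default, "")

resolve_translation({"en": "Pricing", "fr": "Tarifs"}, "fr")  # "Tarifs"
resolve_translation({"en": "Pricing"}, "de")                  # "Pricing"
```

The fallback rule is the part worth deciding deliberately: silently showing English is usually better than showing an empty string, but it should be a choice, not an accident.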

An Honest Look at Costs and Throughput

Cost discussions get distorted because teams compare subscription software, agency work, and API-driven automation as if they were the same thing. They aren’t.

A TMS usually charges for platform access and workflow features. Agency work charges for human labor and review time. AI-first CLI workflows charge for usage. The cheapest option depends less on theory and more on how often your strings change.

The real bill includes engineering time

When people compare localization costs, they often ignore the part engineers feel:

That time is real. It just doesn’t show up as a line item on an invoice.

For teams evaluating the wider budget trade-off, this breakdown of the cost of translation services is a useful companion read.

Throughput favors automation for active codebases

If your product team changes strings every sprint, the winning workflow is usually the one that handles small deltas well.

Human review still makes sense for selected content. But using human-only throughput for every changed msgid is usually wasteful when many strings are repetitive, low-risk, or structurally constrained.

The same pattern shows up in industry data. Machine-assisted workflows with post-editing are gaining ground because teams accept faster first-pass output when the review surface is controlled. The quality trade-off is real, but it’s manageable when you scope review to the strings that deserve it.

Where AI translation still needs supervision

You shouldn’t pretend all locales and string types behave the same.

Watch these closely:

- short labels with no surrounding context, where “Open” the verb and “Open” the adjective collide
- plural rules in locales with more than two forms
- strings that carry legal, compliance, or billing weight
- anything containing markup or placeholders

That’s why the best setup isn’t “translate everything and trust it blindly.” It’s “automate the boring path, review the risky path.”

How to Wire This Up Before Your Next Deploy

Don’t turn this into a quarter-long tooling project. Run a narrow experiment in your current app.

The minimum viable setup

One target locale, a glossary file in the repo, and the standard loop run locally: makemessages, translate the deltas, compilemessages, commit. No CI until that loop is trustworthy.

What I'd do this week

Start with a branch, not main.

Run the full loop on one language and inspect the output carefully. Pay attention to placeholders, plurals, fuzzy entries, and the places where short labels lost context. Those are your process defects, not just translation defects.

Then wire CI only after the local loop is stable. If you want a reference for that automation step, the CI usage guide shows the shape of a repo-driven pipeline clearly.

Small win first. If your team can trust one locale update landing cleanly through CI, adoption gets much easier.

The point isn’t to remove humans. It’s to remove avoidable drag. Technical translation services for Django apps should feel like part of software delivery, because that’s what they are.


If you want to test this workflow without adopting a portal, TranslateBot is a good place to start. It plugs into Django as a manage.py translate command, works with .po files in your repo, preserves placeholders and HTML, and fits the extract, review, compile loop you already use. Run it on one locale, inspect the diff, and decide from evidence instead of marketing.

Stop editing .po files manually

TranslateBot automates Django translations with AI. One command, all your languages, pennies per translation.