Meta description: Django templates are translated, but your JavaScript UI stays in English. Fix the Django-to-JS handoff with a CI-friendly i18n workflow.
You shipped gettext_lazy, ran makemessages, translated django.po, and switched your browser to German. The navbar changes. The forms change. Then a toast pops up from your JavaScript and says, “Profile updated successfully.”
That’s the moment most Django teams realize their i18n setup only covers half the app.
If you need to translate in javascript inside a Django project, the hard part isn’t extracting one or two strings. The hard part is the handoff between Django’s gettext workflow and the browser code that now owns modals, chart labels, validation messages, and async UI state.
When Your Translated App Is Still Stuck in English
The failure mode is always the same. Server-rendered templates look fine, but client-side text is frozen in the source language.
A few examples that usually slip through:
- Toast messages from Alpine.js, Stimulus, or vanilla JS
- Chart labels built in the browser after an API call
- Confirmation dialogs created from dynamic state
- Inline validation added after the initial page render
Django didn’t “miss” those strings by accident. Your default extraction flow is centered on Python, templates, and gettext domains. Once UI copy moves into JavaScript, you need a second path for extraction, translation, and delivery.
The break usually appears after your frontend gets interactive
A small CRUD app can hide this for a while. Then you add a dropdown driven by fetch, an HTMX partial with follow-up client logic, or a JS widget for billing or analytics. The app is translated, except for the parts users click most.
That mismatch is worse than leaving the whole app in English. It feels broken.
Practical rule: If a user-facing string can appear after page load, treat it as part of your localization system from day one.
You can see the pattern in plenty of real UI examples at https://translatebot.dev/en/blog/examples-of-translations/. The common thread isn’t the framework. It’s that translation debt piles up wherever strings leave Django templates and enter browser code.
Why teams miss it in review
Backend review catches gettext_lazy. Template review catches {% translate %}. JavaScript copy often lands in a different PR, owned by a different person, with no extraction step wired in.
That’s why “translate in javascript” ends up feeling separate from Django i18n, even though it isn’t. It’s the same problem, just with a different file type and a worse failure surface.
Django’s Built-in Solution: JavaScriptCatalog
A common handoff failure looks like this: Django templates are translated, QA signs off on the page, and then a toast, modal, or inline validation message fires from JavaScript and shows up in English. Django ships an official answer for that case: JavaScriptCatalog.
For small server-rendered apps, it is a reasonable baseline. You expose one view, load the generated script, and call gettext() in browser code.

Wire the catalog into Django
In urls.py:
from django.urls import path
from django.views.i18n import JavaScriptCatalog

urlpatterns = [
    path("jsi18n/", JavaScriptCatalog.as_view(), name="javascript-catalog"),
]
In your base template:
<script src="{% url 'javascript-catalog' %}"></script>
That script registers gettext helpers globally in the browser. Your JavaScript can then do this:
const message = gettext("Profile updated successfully.");
console.log(message);
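The catalog script also registers Django's interpolate() helper for placeholder strings. A sketch of using it with named placeholders — the typeof fallbacks exist only so the snippet runs outside a page where the jsi18n script is absent:

```javascript
// In the browser, gettext() and interpolate() come from the jsi18n script.
// The fallbacks below are only for running this sketch outside a page.
const _gettext =
  typeof gettext === "function" ? gettext : (msgid) => msgid;
const _interpolate =
  typeof interpolate === "function"
    ? interpolate
    : (fmt, obj) => fmt.replace(/%\((\w+)\)s/g, (_, key) => obj[key]);

// The third argument (true) tells Django's interpolate() to use
// %(name)s-style named keys instead of positional %s substitution.
const fmt = _gettext("Welcome back, %(name)s.");
const text = _interpolate(fmt, { name: "Ada" }, true);
console.log(text); // "Welcome back, Ada." when no translation is loaded
```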
Extract JavaScript strings into the right domain
This part trips teams up because Django does not put JavaScript strings in the default django domain. If you only run the usual extraction command for templates and Python, your frontend copy will be missing from the files translators touch.
Run:
python manage.py makemessages -d djangojs -l de
That creates the JavaScript catalog here:
locale/de/LC_MESSAGES/djangojs.po
A normal entry looks like this:
msgid "Profile updated successfully."
msgstr "Profil erfolgreich aktualisiert."
For placeholders:
#, python-format
msgid "Welcome back, %(name)s."
msgstr "Willkommen zurück, %(name)s."
What JavaScriptCatalog solves, and what it does not
JavaScriptCatalog handles the Django to browser handoff in the simplest possible way. That matters because it keeps backend and frontend strings in the same gettext workflow, with the same locale directories, translators, and review process.
The trade-off is that it stays simple. It does not enforce placeholder safety, it does not clean up inconsistent string usage across JS files, and it does not fit modern bundler workflows particularly well. If a team mixes %(name)s, template literals, and ad hoc interpolation in frontend code, the catalog will happily expose all of it. You still have to keep those strings disciplined.
Plural logic needs the same care. Django supports plural forms, but the frontend code still has to call the right APIs and pass counts correctly. If developers treat translated JS messages as plain string replacement, they create bugs that no catalog view can fix. MDN’s documentation for Intl.PluralRules is a good reference for language-specific plural categories when frontend logic gets more complex: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/PluralRules
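To see why plain string replacement falls short, Intl.PluralRules makes the language differences concrete — English integers use two CLDR categories, while Russian uses three:

```javascript
// Plural categories depend on the locale, not just on count === 1.
const en = new Intl.PluralRules("en");
const ru = new Intl.PluralRules("ru");

console.log(en.select(1), en.select(5)); // "one" "other"
console.log(ru.select(1), ru.select(3), ru.select(5)); // "one" "few" "many"
console.log(ru.select(21)); // "one" — 21 is grammatically singular in Russian
```

Any plural-aware frontend code, whether it goes through Django's ngettext or a custom lookup, has to respect these categories rather than branching on count === 1.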
That is the primary value of JavaScriptCatalog. It gives you an official starting point for browser-side gettext in Django. It is not a full workflow for extraction, validation, translation handoff, and frontend delivery.
Where the Standard JavaScript Workflow Breaks Down
The breakage usually shows up after the first serious frontend feature ships.
A Django team extracts strings, translators fill djangojs.po, JavaScriptCatalog is wired into the page, and everything looks fine in development. Then the frontend grows up. You add Vite, split bundles, lazy-loaded screens, client-side state, and cached assets. The translation handoff from Django to JavaScript starts becoming the part nobody trusts.

It loads in a way modern frontend code has to work around
JavaScriptCatalog was built for server-rendered pages that can accept a generated script and a few globals. Module-based frontend code wants the opposite. It wants explicit imports, predictable initialization, and translation data that can be loaded per route or per locale.
That mismatch creates real maintenance cost. A script tag that defines gettext on window is easy to bolt on, but awkward to test, awkward to type, and awkward to integrate with code splitting. Teams often end up writing a wrapper layer just to make the old delivery model look like a modern module API.
You also give up control over loading strategy. With file-based locale assets, the frontend can preload, cache, version, and lazy-load translations like any other static dependency. With JavaScriptCatalog, translation delivery is tied to a Django view, which is harder to fit into a bundler-first pipeline.
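For instance, a per-locale JSON catalog can be preloaded like any other static asset (the path here is illustrative; adjust it to your static file layout):

```html
<link rel="preload" href="/static/i18n/de.json" as="fetch" crossorigin>
```

The crossorigin attribute makes the preload's credentials mode match a default fetch() call, so the browser can actually reuse the preloaded response. Nothing equivalent exists for a catalog served from a Django view without extra caching work.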
It makes asynchronous rendering harder to reason about
This is the failure point generic JavaScript i18n guides usually skip. The hard part is not extracting strings. Django already solves that. The hard part is getting the right translated payload into browser code at the right time, in a way that survives real deployment conditions.
If a page renders before the catalog is available, frontend code falls back to English or shows raw message IDs. If a lazy-loaded component expects translations that were only initialized on the first page load, you get inconsistent language state across screens. If caching is off, every page hit can rebuild or re-fetch the same catalog.
Those bugs are frustrating because they sit between teams. Backend developers see valid .po files. Frontend developers see untranslated UI. Translators assume their work shipped correctly.
It doesn’t fit bundlers cleanly
Frontend developers generally want this:
import { t } from "./i18n";
toast.success(t("profile.updated"));
That pattern is easier to test and easier to reason about in component code.
With JavaScriptCatalog, the usual result is extra glue:
export function t(message) {
  return window.gettext(message);
}
That wrapper is not catastrophic. I have shipped it before. But it is still a compatibility layer around globals, not a delivery model designed for ES modules, chunking, and static asset versioning.
It adds friction to review, caching, and CI
A generated catalog behind a view is also harder to inspect in pull requests. Developers can review .po changes, but they cannot review the browser-ready artifact that the frontend will load unless they generate and commit another file anyway.
That matters in CI. Good i18n pipelines catch missing keys, stale generated assets, placeholder mismatches, and accidental English regressions before deploy. JavaScriptCatalog does not block that work, but it does not help much either. The browser output remains a runtime concern instead of a build artifact you can validate, diff, fingerprint, and ship through the same pipeline as the rest of the frontend.
Here is the trade-off in practice:
| Workflow | What developers see in Git | Frontend fit |
|---|---|---|
| JavaScriptCatalog | .po changes, runtime catalog output | Weak for module-first apps |
| JSON locale files | .po changes plus generated JSON | Better for bundlers and async loading |
JavaScriptCatalog is still useful for simple pages. Once the frontend becomes an application instead of a template with a little JavaScript, the Django-to-JavaScript handoff needs a build step, not a global.
A Modern Workflow for JS Translations in Django
The pattern that holds up is boring in the best way. Keep Django as the source of truth for extraction and translation. Serve JavaScript translations as files your frontend can load on demand.
Keep Django extraction, change delivery
You still use the gettext workflow:
python manage.py makemessages -l de -l es
python manage.py makemessages -d djangojs -l de -l es
python manage.py compilemessages
That gives you two domains:
- django.po for Python and templates
- djangojs.po for browser-side strings
Your locale tree stays familiar:
locale/
├── de/LC_MESSAGES/
│   ├── django.po
│   └── djangojs.po
└── es/LC_MESSAGES/
    ├── django.po
    └── djangojs.po
Pass the active language from Django to the page
Don’t guess the locale in JavaScript if Django already knows it.
In your template:
<html lang="{{ LANGUAGE_CODE }}">
  <body data-language-code="{{ LANGUAGE_CODE }}">
    <script type="module" src="{% static 'js/app.js' %}"></script>
  </body>
</html>
Then in app.js:
const languageCode = document.body.dataset.languageCode || "en";
No duplicate locale detection logic. No split brain between backend and frontend.
Load JSON, not a global catalog
Generate a JSON file for each locale from djangojs.po, then fetch it from static files or your CDN.
A minimal loader:
const cache = new Map();

export async function loadTranslations(locale) {
  if (cache.has(locale)) {
    return cache.get(locale);
  }
  const response = await fetch(`/static/i18n/${locale}.json`, {
    headers: { "Accept": "application/json" },
  });
  if (!response.ok) {
    throw new Error(`Failed to load translations for ${locale}`);
  }
  const messages = await response.json();
  cache.set(locale, messages);
  return messages;
}

export function interpolate(message, params = {}) {
  return message.replace(/%\(([^)]+)\)s/g, (_, key) => {
    return params[key] ?? `%(${key})s`;
  });
}

export async function createI18n(locale) {
  const messages = await loadTranslations(locale);
  return {
    gettext(key, params = {}) {
      const message = messages[key] || key;
      return interpolate(message, params);
    },
  };
}
Usage:
import { createI18n } from "./i18n.js";
const locale = document.body.dataset.languageCode || "en";
const i18n = await createI18n(locale);
const text = i18n.gettext("Welcome back, %(name)s.", { name: "Ada" });
console.log(text);
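The createI18n() sketch above handles singular strings only. A minimal plural extension is possible if your build step exports plural entries as arrays and records each locale's gettext plural rule. The pluralIndex table below mirrors common Plural-Forms headers and is illustrative, not exhaustive — the JSON storage format is an assumption, not part of Django's output:

```javascript
// Assumption: the generated JSON stores plural entries as arrays keyed by
// the singular msgid, e.g. { "item": ["Artikel", "Artikel"] }.
// These index functions mirror typical gettext Plural-Forms expressions.
const pluralIndex = {
  en: (n) => (n !== 1 ? 1 : 0),
  de: (n) => (n !== 1 ? 1 : 0),
  ru: (n) =>
    n % 10 === 1 && n % 100 !== 11
      ? 0
      : n % 10 >= 2 && n % 10 <= 4 && (n % 100 < 10 || n % 100 >= 20)
      ? 1
      : 2,
};

function ngettext(messages, locale, singular, plural, n) {
  const forms = messages[singular];
  const index = (pluralIndex[locale] || pluralIndex.en)(n);
  if (Array.isArray(forms) && forms[index] !== undefined) {
    return forms[index];
  }
  return n === 1 ? singular : plural; // fall back to the source language
}
```

Pair this with the interpolate() helper above to substitute %(count)s after the correct form has been selected.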
Use DOM replacement carefully
If you prefer data attributes, that can work well for static fragments:
<button data-i18n-key="Save changes"></button>

export function translatePage(messages) {
  document.querySelectorAll("[data-i18n-key]").forEach((element) => {
    const key = element.dataset.i18nKey;
    element.textContent = messages[key] || key;
  });
}
A GitHub guide on JavaScript translation techniques reports that data-attribute and Proxy-based methods exceed 95% success for static content but drop to around 75% for interpolated strings, mostly because of placeholder mismatches. It also calls out Proxy failures in legacy browsers and closure breakage from runtime replacements: https://github.com/enndylove/translation-techniques-js
That matches what teams see in practice. Attribute-based replacement is good for static labels. It’s not the best fit for every dynamic string in app logic.
Generate JSON from your locale files
The missing piece is a build step that converts djangojs.po into frontend JSON. Teams usually handle that one of two ways:
- A custom management command that reads locale/<lang>/LC_MESSAGES/djangojs.po and writes static/i18n/<lang>.json
- A build script in CI that generates JSON before collecting static files
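As a sketch of the first option, here is a stripped-down converter. The parser handles only single-line msgid/msgstr pairs — no plurals, no multi-line strings, no escape handling — so a real management command should lean on a proper .po parser such as polib; the function names here are hypothetical:

```python
# Sketch: convert a djangojs.po file into a flat JSON catalog for the frontend.
# Handles only simple single-line msgid/msgstr pairs; use a real .po parser
# (e.g. polib) in production code.
import json
import re
from pathlib import Path

PAIR_RE = re.compile(r'^msgid "(.+)"\nmsgstr "(.*)"$', re.MULTILINE)

def po_to_dict(po_text):
    """Map msgid -> msgstr, skipping the header entry and empty translations."""
    return {
        msgid: msgstr
        for msgid, msgstr in PAIR_RE.findall(po_text)
        if msgstr
    }

def export_catalog(po_path, json_path):
    """Read one .po file and write the JSON catalog the loader fetches."""
    text = Path(po_path).read_text(encoding="utf-8")
    Path(json_path).write_text(
        json.dumps(po_to_dict(text), ensure_ascii=False, indent=2),
        encoding="utf-8",
    )
```

Wrapped in a BaseCommand, this becomes the kind of export step CI can run right after compilemessages.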
The important part isn’t the exact implementation. It’s the boundary:
- Django owns extraction
- .po files remain reviewable
- the frontend consumes JSON
- CI regenerates artifacts every time strings change
That boundary is what stops your JavaScript translation layer from turning into a one-off side system.
The Final Bottleneck: Translating Hundreds of JS Strings
Once you have djangojs.po, the next problem is obvious. The file exists. The strings are extracted. Most msgstr entries are still empty.
That’s where teams get stuck.
Your real options
You can translate them yourself. You can send them to human translators. Or you can use an automated tool and review the output in Git.
Each choice has a different cost in time, context switching, and release friction.
| Method | Cost | Speed | Developer Workflow |
|---|---|---|---|
| Manual developer translation | Low cash cost, high engineering time | Slow | Stays in Git, but burns focus |
| Human translators or agency | Higher cash cost | Medium | Good quality with context, slower handoff |
| TMS platform | Recurring subscription cost | Medium to fast | Strong review features, extra portal and process |
| CLI-based automated translation | Low per-run cost | Fast | Best fit for code-first teams if output is reviewable |
What usually works in practice
For product copy, legal copy, and brand-sensitive landing pages, human review still matters. For repetitive UI strings, validation text, admin surfaces, and fast-moving SaaS features, teams usually want speed and version control first.
A translation workflow is only usable if developers will run it during normal feature work.
That’s why older statistical machine translation (SMT) ideas still matter. An ACM paper argues that SMT principles remain foundational in automated tooling, especially for glossary consistency and for preserving placeholders like %(name)s and HTML tags through format-string statistics: https://dl.acm.org/doi/fullHtml/10.1145/3661167.3661233
That’s the piece generic copy-paste translation often breaks. The words may be readable. The app crashes because a placeholder changed shape.
The hidden cost isn’t the translation itself
The hidden cost is rework.
If your team has to:
- export strings manually,
- paste them into a portal,
- re-import them,
- fix placeholders by hand,
- then explain the diff in code review,
you don’t have a translation workflow. You have a recurring release tax.
For JavaScript strings, that tax gets worse because frontend copy changes constantly.
Automating JavaScript Translations with a Single Command
Once your djangojs.po files exist, the cleanest next step is a CLI that works next to makemessages and compilemessages.
That matters more than vendor features. If translation lives outside the terminal, most engineering teams stop treating it like build infrastructure.

The command should look boring
A good command looks like something your team would add to docs or CI:
python manage.py translate --target-lang de es fr
That’s it. No browser tab. No upload step. No “export from Django, import into another system” dance.
If you want a practical walkthrough of the terminal-driven approach, https://translatebot.dev/en/blog/how-to-do-a-translation/ shows the shape of that workflow.
Reviewable diffs beat opaque sync
The output should land back in your .po files, not disappear into a platform database.
That gives you normal code review:
#, python-format
msgid "Welcome back, %(name)s."
-msgstr ""
+msgstr "Willkommen zurück, %(name)s."
You can review terminology, catch awkward strings, and keep the history in Git where the rest of the code lives.
Format preservation is the non-negotiable part
JavaScript translation breaks when a tool treats placeholders, HTML, and plural markers like disposable text. A developer workflow has to preserve them by default.
What you want the tool to handle safely:
- Python-style placeholders like %(name)s
- Positional placeholders like %s and {0}
- Inline HTML in strings that render in trusted UI fragments
- Repeated updates where only changed strings are retranslated
That last point matters because frontend copy churns. Toast text changes. CTA labels change. Validation copy gets rewritten. The tool has to fit incremental work, not one big localization event.
Practical advice: If translation output can’t survive git diff, compilemessages, and a smoke test without manual cleanup, don’t automate around it. Replace it.
What this fixes for JavaScript-heavy Django apps
A CLI-based flow removes the worst bottleneck in the Django-to-JS handoff:
- extraction still starts with makemessages -d djangojs
- translations stay in locale/<lang>/LC_MESSAGES/djangojs.po
- frontend JSON can still be generated from that source
- every change remains visible in version control
That’s the part most generic “translate in javascript” guides miss. They show browser libraries. They don’t solve the backend source-of-truth problem.
Building a CI Pipeline for Full-Stack Django i18n
If your app depends on JavaScript for core UI, translation can’t stay as a manual release chore. It has to become part of CI.
The architecture trend is clear: more of the app runs in the browser, so more user-visible text is produced there too. The shift is old enough that people were already debating serious statistical computation in JavaScript back in 2013: https://statisticsblog.com/2013/02/28/statistical-computation-in-javascript-am-i-nuts/. If core UI copy comes from JavaScript, CI has to validate translation coverage and format preservation before deploys.
A good pipeline follows normal CI/CD best practices: keep steps deterministic, make artifacts reviewable, and fail early when generated files drift from source.
Here’s the shape that works:
The pipeline sequence
You want one job that does the same thing every time:
- Extract backend strings
- Extract JavaScript strings
- Translate changed entries
- Compile message files
- Generate frontend JSON artifacts
- Fail if placeholders or generated files are broken
A GitHub Actions example:
name: i18n

on:
  pull_request:
  workflow_dispatch:

jobs:
  translations:
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install gettext
        run: |
          sudo apt-get update
          sudo apt-get install -y gettext

      - name: Install dependencies
        run: |
          pip install -r requirements.txt

      - name: Extract Django messages
        run: |
          python manage.py makemessages -a

      - name: Extract JavaScript messages
        run: |
          python manage.py makemessages -d djangojs -a

      - name: Translate locale files
        run: |
          python manage.py translate --target-lang de es fr

      - name: Compile message files
        run: |
          python manage.py compilemessages

      - name: Generate JS locale JSON
        run: |
          python manage.py export_js_catalogs

      - name: Verify no uncommitted changes
        run: |
          git diff --exit-code
What to validate before merge
Don’t stop at successful translation output.
Check the things that break production:
- Placeholder integrity: %(name)s, %s, and {0} must survive unchanged
- Plural sanity: review entries with msgid_plural and locale-specific plural forms
- Generated artifact drift: JSON catalogs should match the current .po files
- Compile step: compilemessages must pass in CI, not on a developer laptop only
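A placeholder-integrity check is small enough to write inline. A sketch — the regex covers the three placeholder shapes listed above; extend it for your own conventions:

```python
# Sketch: fail CI when a translation changes the placeholders of its msgid.
import re

PLACEHOLDER_RE = re.compile(r"%\([^)]+\)[sd]|%[sd]|\{\d+\}")

def placeholders(text):
    """Collect placeholders order-insensitively, so reordering is allowed."""
    return sorted(PLACEHOLDER_RE.findall(text))

def check_entry(msgid, msgstr):
    """An untranslated entry passes; a translated one must keep placeholders."""
    return not msgstr or placeholders(msgid) == placeholders(msgstr)
```

Run it over each entry in djangojs.po during the CI job, right before compilemessages, and fail the build on the first mismatch.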
Broken translations are build failures. Treat them that way.
If you want a reference point for wiring translation into automation, the CI usage docs at https://translatebot.dev/docs/usage/ci/ are worth reading even if you adapt the exact steps to your own stack.
The payoff is boring releases. You merge code, CI updates locales, reviewers inspect the diff, and frontend strings stop lagging behind backend ones.
If your team is tired of copying .po strings into portals and fixing broken placeholders by hand, TranslateBot is worth a look. It keeps the workflow inside Django, runs from manage.py, writes changes back to your locale files, and fits naturally beside makemessages, compilemessages, and CI.