In the first article of this series, I argued that no single system can handle all localization needs. Nowhere is this tension more visible than in enterprise localization programs, where the promise of a unified translation management system collides with the messy reality of organizational complexity.
Let me be clear: TMSes are genuinely valuable. They’ve transformed how enterprises handle multilingual content. But understanding their limitations – and building around them – is what separates mature localization programs from those constantly fighting their own infrastructure.
The Evolution of Enterprise Localization
A decade ago, enterprise localization often meant someone from marketing or product who ‘also handled translations.’ They would export content, email it to an agency, wait, re-import it, and hope nothing broke. The agency owned the technology, the translation memories, the terminology – everything.
This created a dependency that many enterprises came to regret. Changing vendors meant starting over. Quality issues were hard to diagnose because you couldn’t see inside the process. And every negotiation happened with the vendor holding all the cards – your linguistic assets were hostage to the relationship.
The response was predictable: enterprises started bringing technology in-house. From 2015 onwards, we saw a wave of TMS deployments. Today, most localization programs with annual budgets exceeding 200,000 EUR operate their own translation management system. This shift fundamentally changed the power dynamic – enterprises now own their translation memories, control their workflows, and can switch vendors without losing institutional knowledge.
But it also created new problems.
What TMSes Do Well
Before examining the gaps, let’s acknowledge what modern TMSes genuinely excel at.
File processing is the core competency. Whether you’re dealing with XML, JSON, XLIFF, InDesign, or Word documents, a good TMS extracts translatable content, presents it to linguists in a workable format, and reassembles the translated files without breaking structure or formatting. This sounds simple but represents enormous engineering effort.
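To make the extract-and-reassemble cycle concrete, here is a minimal sketch for a single format, JSON. Everything in it – the path scheme, the function names – is my own illustration; a production TMS also copes with inline markup, plural forms, and dozens of other formats.

```python
import json

def extract_segments(obj, path=""):
    """Walk a JSON structure, collecting (path, text) pairs for every string value."""
    if isinstance(obj, dict):
        return [s for k, v in obj.items() for s in extract_segments(v, f"{path}/{k}")]
    if isinstance(obj, list):
        return [s for i, v in enumerate(obj) for s in extract_segments(v, f"{path}/{i}")]
    return [(path, obj)] if isinstance(obj, str) else []

def reassemble(obj, translations, path=""):
    """Rebuild the same structure, substituting translations by path."""
    if isinstance(obj, dict):
        return {k: reassemble(v, translations, f"{path}/{k}") for k, v in obj.items()}
    if isinstance(obj, list):
        return [reassemble(v, translations, f"{path}/{i}") for i, v in enumerate(obj)]
    if isinstance(obj, str):
        return translations.get(path, obj)  # fall back to source if untranslated
    return obj

source = json.loads('{"menu": {"title": "Settings", "items": ["Save", "Cancel"]}}')
translations = {"/menu/title": "Einstellungen", "/menu/items/0": "Speichern", "/menu/items/1": "Abbrechen"}
print(json.dumps(reassemble(source, translations), ensure_ascii=False))
```

Note how `reassemble` mirrors the original shape exactly – preserving that structure at scale is where the engineering effort goes.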
Translation memory leverages past work. When the same or similar sentence appears again, the TMS surfaces previous translations, reducing cost and improving consistency. For enterprises with repetitive content – software strings, product descriptions, legal boilerplate – this creates substantial savings.
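The lookup behind that leverage can be sketched with plain string similarity standing in for the weighted, token-aware matching real systems use; the 75% threshold and all names here are assumptions for illustration.

```python
from difflib import SequenceMatcher

# A translation memory is, at heart, a store of source/target pairs.
tm = {
    "Click Save to apply your changes.": "Haz clic en Guardar para aplicar los cambios.",
    "Click Cancel to discard your changes.": "Haz clic en Cancelar para descartar los cambios.",
}

def tm_lookup(source, memory, threshold=0.75):
    """Return the closest stored source, its translation, and a percent score."""
    best = max(memory, key=lambda s: SequenceMatcher(None, source, s).ratio())
    score = SequenceMatcher(None, source, best).ratio()
    return (best, memory[best], round(score * 100)) if score >= threshold else None

# A near-repeat of an earlier sentence surfaces as a high fuzzy match:
print(tm_lookup("Click Save to apply the changes.", tm))
```

Matches above the threshold are typically billed at a discount and post-edited rather than retranslated, which is where the savings come from.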
Terminology management ensures brand voice survives translation. Term bases define how key concepts should be rendered in each language, and the TMS flags deviations. For companies where ‘cloud’ must always be ‘nube’ in Spanish (never ‘la nube’ or ‘computación en la nube’), this is essential.
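The core of such a check fits in a few lines – though this naive substring version would not catch the ‘la nube’ case above, which needs explicit forbidden-variant lists; all names are illustrative.

```python
# Term base: source term -> approved target rendering.
term_base = {"cloud": "nube"}

def check_terms(source_seg, target_seg, terms):
    """Flag segments where a source term appears but its approved rendering does not."""
    return [
        f"'{src}' should be rendered as '{tgt}'"
        for src, tgt in terms.items()
        if src.lower() in source_seg.lower() and tgt.lower() not in target_seg.lower()
    ]

print(check_terms("Store files in the cloud.", "Almacena archivos en la web.", term_base))
```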
Workflow automation routes content through defined stages – machine translation, human post-editing, review, final approval – without manual handoffs. For high-volume programs, this keeps content flowing.
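The routing itself is conceptually simple, which is exactly why it automates well – the value is in running it reliably at volume. A minimal sketch, with invented stage names:

```python
# Stages content moves through; a real TMS lets you configure these per project.
STAGES = ["machine_translation", "post_editing", "review", "final_approval", "delivered"]

def advance(job):
    """Move a job to its next stage; a real system would also notify and assign people."""
    i = STAGES.index(job["stage"])
    if i < len(STAGES) - 1:
        job["stage"] = STAGES[i + 1]
    return job

job = {"id": "doc-42", "stage": "machine_translation"}
while job["stage"] != "delivered":
    print(job["id"], "->", advance(job)["stage"])
```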
These capabilities are real and valuable. The question isn’t whether TMSes work – they do. The question is what happens at the boundaries.
The Integration Trap
Every enterprise has multiple systems where content originates: a CMS for the website, a PIM for product information, a documentation platform, a marketing automation tool, a support knowledge base. Each of these needs to connect to your TMS.
TMS vendors understand this, which is why they all offer ‘connectors’ – pre-built integrations with popular platforms. On paper, this sounds perfect. In practice, it creates a hidden dependency.
Consider what happens when you need to change TMS. Perhaps your current vendor raised prices, or a competitor offers better AI capabilities, or your company acquired another business using a different system. Suddenly, you face a migration where the hardest part isn’t moving translation memories – it’s rebuilding every integration.
I’ve seen migrations take eighteen months, not because of data complexity, but because of integration dependencies. The CMS connector needs reconfiguration. The PIM integration has to be rebuilt from scratch because the new TMS uses a different approach. The custom scripts that pushed content from the documentation platform? Those need complete rewrites.
The cruel irony is that enterprises adopted their own TMS to escape vendor lock-in. Instead, they traded one form of lock-in for another. The integrations that make your TMS useful are the same integrations that make it painful to leave.
The Vendor Management Gap
TMSes treat translation vendors as users with logins. They can assign work to vendors, track project status, and receive completed files. What they cannot do is manage vendors as business relationships.
Think about what vendor management actually requires: tracking negotiated rates that vary by language pair, content type, and volume tier. Managing minimum fees and rush surcharges. Evaluating quality across hundreds of projects to identify which vendors excel at which content types. Balancing workload to maintain relationships with backup vendors while not over-relying on any single provider. Monitoring on-time delivery rates. Handling capacity planning for peak seasons.
None of this fits neatly into a TMS. The typical TMS approach to vendor assignment is first-come-first-served or round-robin – models that assume all vendors are interchangeable at identical prices. Reality is messier.
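To see why, sketch the data a single rate card carries – every field below corresponds to a dimension listed above, though the structure itself is my illustration, not any vendor’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class RateCard:
    """One vendor's negotiated terms; rates vary along several dimensions at once."""
    vendor: str
    language_pair: str      # e.g. "en-US > de-DE"
    content_type: str       # e.g. "marketing", "legal", "UI strings"
    per_word_rate: float    # base rate in EUR
    minimum_fee: float      # charged when a job is too small
    rush_surcharge: float   # multiplier for expedited turnaround
    volume_tiers: dict = field(default_factory=dict)  # annual words -> discounted rate

def job_cost(card, words, rush=False):
    """Price a job against a card, honoring the minimum fee and rush surcharge."""
    cost = words * card.per_word_rate * (card.rush_surcharge if rush else 1.0)
    return max(cost, card.minimum_fee)

card = RateCard("Acme Translations", "en-US > de-DE", "marketing", 0.14, 45.0, 1.5)
print(job_cost(card, 120, rush=True))  # 120 * 0.14 * 1.5 = 25.2 -> minimum fee applies: 45.0
```

Multiply this by dozens of vendors and language pairs, and none of it has a natural home in a workflow tool.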
The result? Enterprises maintain parallel systems. I’ve seen localization programs running sophisticated vendor performance dashboards in Tableau, fed by data exports from the TMS combined with financial data from procurement systems. The TMS knows who did the work; the spreadsheet knows whether it was good and cost-effective.
The Quality Measurement Problem
TMSes offer quality assurance features: terminology checks, consistency verification, formatting validation. These catch mechanical errors. But translation quality – the kind that affects whether your message resonates in market – requires human evaluation.
When enterprises perform in-house quality reviews, they need to track results systematically. Which segments had errors? What type of errors? How does this vendor’s error rate compare to others? How has quality trended over time? Does quality vary by content type or language?
TMSes weren’t designed for this. They’re optimized for moving content through a workflow, not for building a quality database that drives vendor decisions. So quality data ends up in spreadsheets, disconnected from the TMS, requiring manual effort to correlate.
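What that database needs is not exotic. Something like the record below – the fields are my own illustration, loosely in the spirit of error-typology frameworks such as MQM, not any TMS’s actual model – would make the questions above trivially queryable.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class QualityFinding:
    """One reviewed error, aggregable by vendor, error type, language, or period."""
    project_id: str
    vendor: str
    language: str
    content_type: str
    error_type: str   # e.g. "terminology", "accuracy", "style"
    severity: str     # e.g. "minor", "major", "critical"
    review_date: date

findings = [
    QualityFinding("P-101", "Acme", "de-DE", "marketing", "terminology", "major", date(2024, 3, 4)),
    QualityFinding("P-102", "Acme", "de-DE", "legal", "accuracy", "critical", date(2024, 4, 9)),
]

# "How many serious errors has this vendor produced?" becomes a one-liner:
print(sum(1 for f in findings if f.vendor == "Acme" and f.severity != "minor"))
```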
Some TMSes have added quality evaluation modules, but these often feel bolted on rather than integrated. The evaluation workflow interrupts the translation workflow instead of enhancing it. Results are hard to query or export for analysis. The connection to vendor management is weak or absent.
The Internal Resource Challenge
Many enterprises employ internal linguists – sometimes for quality control, sometimes for specialized content, sometimes for languages where external vendors are scarce or expensive. Managing these internal resources through a TMS designed for vendor outsourcing creates friction.
Internal linguists have capacity constraints that don’t map to the TMS model. They have other responsibilities. They take vacations. They have expertise in certain domains but not others. Optimizing their workload – ensuring they’re used for high-value work while routine content goes to vendors – requires visibility the TMS doesn’t provide.
I’ve watched localization managers toggle between TMS dashboards and Outlook calendars, trying to figure out who’s available this week and what they should prioritize. The TMS sees work as a queue; the manager sees people with varying skills, preferences, and constraints.
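The information the manager is juggling would fit in a small structure; the problem is that no system holds it. A sketch, with every field and name invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Linguist:
    name: str
    languages: list
    domains: list              # e.g. ["legal", "medical"]
    hours_free_this_week: float

def assign(language, domain, hours, team):
    """Prefer an available internal expert; otherwise route the job to a vendor."""
    for person in team:
        if (language in person.languages and domain in person.domains
                and person.hours_free_this_week >= hours):
            person.hours_free_this_week -= hours
            return person.name
    return "route to external vendor"

team = [Linguist("Maria", ["de-DE"], ["legal"], 6.0)]
print(assign("de-DE", "legal", 4.0, team))      # -> Maria
print(assign("de-DE", "marketing", 2.0, team))  # -> route to external vendor
```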
The Reporting Reality
Every localization program needs reporting: spend by language, by quarter, by content type. Turnaround times. Quality trends. Vendor utilization. Budget forecasting.
TMSes offer built-in reports, but they rarely match what leadership actually wants to see. The TMS reports what happened inside the TMS – project volumes, word counts, leverage rates. Leadership wants business context – how does localization spend compare to revenue by region? How does turnaround time affect time-to-market? What’s the correlation between quality scores and customer satisfaction?
Answering these questions requires combining TMS data with financial systems, sales data, customer feedback tools. This integration happens in business intelligence platforms like Power BI or Tableau, fed by exports and APIs. The TMS is a data source, not the reporting solution.
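In practice the join is often as unglamorous as the sketch below: two exports, one merge. The file names, columns, and the blended per-word rate are all hypothetical.

```python
import pandas as pd

# Hypothetical exports: one from the TMS, one from the finance system.
tms = pd.read_csv("tms_projects.csv")          # project_id, language, word_count, turnaround_days
finance = pd.read_csv("regional_revenue.csv")  # language, region, quarterly_revenue

report = tms.merge(finance, on="language")
report["spend_estimate"] = report["word_count"] * 0.12  # assumed blended rate per word

summary = report.groupby("region")[["spend_estimate", "quarterly_revenue"]].sum()
summary["spend_pct_of_revenue"] = 100 * summary["spend_estimate"] / summary["quarterly_revenue"]
print(summary)
```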
Pipelines like these create an ongoing maintenance burden. Every TMS upgrade risks breaking them. Every new report requires understanding both the BI tool and the TMS data model. The expertise to build and maintain these integrations often doesn’t exist in localization teams, creating dependencies on IT resources that are always in short supply.
When One TMS Actually Is Enough
After all this criticism, let me offer balance. There are scenarios where a single TMS genuinely serves an enterprise well.
If you work exclusively with full-service language service providers (LSPs) and trust them to manage quality and resource allocation, the TMS functions as a handoff point. Content goes in, translations come out. You don’t need vendor management tools because your vendor manages that complexity. You don’t need quality tracking because that’s their responsibility.
If your source systems are limited and stable – say, one CMS and one documentation platform – the integration burden is manageable. Two connectors don’t create the same lock-in as twenty.
If your reporting needs are basic – word counts and turnaround times rather than complex business analytics – the TMS’s built-in reports may suffice.
If you’re confident you’ll never change TMS – because you’re deeply integrated with a vendor ecosystem or because the TMS is part of a larger enterprise platform commitment – the switching cost matters less.
These conditions describe some programs accurately. But they’re the exception, not the rule. Most enterprises eventually outgrow the single-TMS model, usually discovering this at inconvenient moments.
Thinking Differently About Architecture
What if instead of asking ‘which TMS should we use?’ we asked ‘how should we architect our localization infrastructure?’
The key insight is separation of concerns. Integrations to source systems serve a different purpose than translation workflow management. Vendor relationship management serves a different purpose than project execution. Quality measurement serves a different purpose than quality assurance.
When these functions are conflated in one system, you get the gaps I’ve described. When they’re separated into purpose-built components that communicate through well-defined interfaces, you get flexibility and resilience.
Consider what changes if your source system integrations exist independently of your TMS. Changing TMS no longer means rebuilding integrations. You could even use different TMSes for different content types – one optimized for software strings, another for marketing content – with a unified integration layer handling the routing.
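Architecturally, that routing layer can be as thin as a shared interface plus a dispatch table. A sketch with invented class names:

```python
from abc import ABC, abstractmethod

class TMSConnector(ABC):
    """The integration layer talks to every TMS through one interface."""
    @abstractmethod
    def submit(self, content: dict) -> str: ...

class StringsTMS(TMSConnector):
    def submit(self, content):
        return f"strings TMS accepted {content['id']}"

class MarketingTMS(TMSConnector):
    def submit(self, content):
        return f"marketing TMS accepted {content['id']}"

# The routing rule lives in the integration layer, not inside any TMS.
ROUTES = {"software_strings": StringsTMS(), "marketing": MarketingTMS()}

def route(content):
    return ROUTES[content["type"]].submit(content)

print(route({"id": "ui-1024", "type": "software_strings"}))
print(route({"id": "campaign-7", "type": "marketing"}))
```

Swapping a TMS then means writing one new connector class, not rebuilding twenty integrations.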
Consider what changes if vendor management lives in a dedicated system connected to but not part of your TMS. Your vendor data survives TMS changes. Your quality history stays intact. Your financial tracking integrates with procurement systems designed for that purpose.
This modular approach requires more upfront thinking but pays dividends in flexibility. It’s the difference between building on bedrock versus building on a platform that might shift.
Moving Forward
I’m not arguing that enterprises should abandon their TMS investments. Those systems provide genuine value and represent significant investment in configuration, training, and process development.
What I am arguing is that the TMS should be seen as one component in a larger architecture, not the architecture itself. The spreadsheets and workarounds that enterprises build around their TMS aren’t failures of discipline – they’re signals that real needs aren’t being met. Those needs deserve proper solutions, not guilt about not using the TMS ‘correctly.’
In the next article, I’ll examine how multilingual vendors approach this same challenge from a different angle – and why their solutions often create downstream problems for the rest of the supply chain.
Coming in this series
Part 3: The MLV Challenge – Project Ingestion and Vendor Optimization
Part 4: SLVs and Freelancers – The Downstream Impact
Istvan Lengyel
Founder & CEO, BeLazy