
The Enterprise Dilemma: When Your TMS Isn’t Enough

In the first article of this series, I argued that no single system can handle all localization needs. Nowhere is this tension more visible than in enterprise localization programs, where the promise of a unified translation management system collides with the messy reality of organizational complexity.

Let me be clear: TMSes are genuinely valuable. They’ve transformed how enterprises handle multilingual content. But understanding their limitations – and building around them – is what separates mature localization programs from those constantly fighting their own infrastructure.

The Evolution of Enterprise Localization

A decade ago, enterprise localization often meant someone from marketing or product who also handled translations. They would export content, email it to an agency, wait, re-import it, and hope nothing broke. The agency owned the technology, the translation memories, the terminology – everything.

This created a dependency that many enterprises came to regret. Changing vendors meant starting over. Quality issues were hard to diagnose because you couldn’t see inside the process. And every negotiation happened with the vendor holding all the cards – your linguistic assets were hostage to the relationship.

The response was predictable: enterprises started bringing technology in-house. From 2015 onwards, we saw a wave of TMS deployments. Today, most localization programs with annual budgets exceeding 200,000 EUR operate their own translation management system. This shift fundamentally changed the power dynamic – enterprises now own their translation memories, control their workflows, and can switch vendors without losing institutional knowledge.

But it also created new problems.

What TMSes Do Well

Before examining the gaps, let’s acknowledge what modern TMSes genuinely excel at.

File processing is the core competency. Whether you’re dealing with XML, JSON, XLIFF, InDesign, or Word documents, a good TMS extracts translatable content, presents it to linguists in a workable format, and reassembles the translated files without breaking structure or formatting. This sounds simple but represents enormous engineering effort.
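To make that round trip concrete, here is a minimal sketch of extraction and reassembly for a toy JSON file. The payload and helper names are invented; production TMSes handle dozens of formats with vastly more edge cases, which is where the real engineering effort goes.

```python
import json

# A toy source file; real formats (XLIFF, InDesign) are far more complex.
source_file = '{"title": "Welcome back", "cta": {"label": "Sign in"}}'

def extract(node, path=""):
    """Flatten translatable strings into (path, text) pairs for linguists."""
    if isinstance(node, str):
        return [(path, node)]
    if isinstance(node, dict):
        pairs = []
        for key, value in node.items():
            pairs.extend(extract(value, f"{path}/{key}"))
        return pairs
    return []

def reassemble(node, translations, path=""):
    """Write translated strings back without disturbing the structure."""
    if isinstance(node, str):
        return translations.get(path, node)
    if isinstance(node, dict):
        return {k: reassemble(v, translations, f"{path}/{k}") for k, v in node.items()}
    return node

data = json.loads(source_file)
segments = extract(data)                          # what the linguist sees
translated = {p: t.upper() for p, t in segments}  # stand-in for translation
print(json.dumps(reassemble(data, translated), ensure_ascii=False))
```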

Translation memory leverages past work. When the same or a similar sentence appears again, the TMS surfaces previous translations, reducing cost and improving consistency. For enterprises with repetitive content – software strings, product descriptions, legal boilerplate – this creates substantial savings. It may no longer be the most modern approach, but it still serves the purpose of translation recycling.
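Conceptually, the leverage mechanism is a fuzzy lookup against past translations. A minimal sketch, assuming a toy in-memory TM and an illustrative 75% threshold (real TMSes use indexed fuzzy search and more sophisticated similarity scoring):

```python
from difflib import SequenceMatcher

# Toy translation memory: source segment -> previous translation.
# Real TMSes use indexed fuzzy search, not a linear scan.
tm = {
    "Click Save to apply your changes.": "Klicken Sie auf Speichern, um Ihre Änderungen zu übernehmen.",
}

def tm_lookup(segment: str, threshold: float = 0.75):
    """Return the best TM match at or above the fuzzy-match threshold."""
    best_score, best_pair = 0.0, None
    for source, target in tm.items():
        score = SequenceMatcher(None, segment, source).ratio()
        if score > best_score:
            best_score, best_pair = score, (source, target)
    if best_pair and best_score >= threshold:
        return best_score, best_pair
    return None

# A near-repetition gets leveraged instead of retranslated from scratch.
print(tm_lookup("Click Save to apply the changes."))
```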

Terminology management ensures brand voice survives translation. Term bases define how key concepts should be rendered in each language, and the TMS flags deviations. As with translation memory, knowledge graphs are the current trend here, but classic terminology checks are still useful.
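The core check is simple to sketch: does the target segment contain the mandated rendering of each source term? The term base contents and function name below are invented for illustration:

```python
# Toy term base: source term -> mandated target-language term.
term_base = {"dashboard": "Übersichtsseite"}

def check_terminology(source: str, target: str) -> list[str]:
    """Flag target segments that miss a mandated term from the term base."""
    issues = []
    for src_term, tgt_term in term_base.items():
        if src_term in source.lower() and tgt_term.lower() not in target.lower():
            issues.append(f"'{src_term}' should be rendered as '{tgt_term}'")
    return issues

print(check_terminology("Open the dashboard", "Öffnen Sie das Dashboard"))
```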

Workflow automation routes content through defined stages – machine translation, human post-editing, review, final approval – without manual handoffs. For high-volume programs, this keeps content flowing. Some TMSes even support working directly with files (for pre-DTP, post-DTP, and similar purposes) at different workflow steps, while others immediately convert the formatted files into bilingual data.

These capabilities are valuable. The question isn’t whether TMSes work – they do. The question is what happens at the boundaries.

The Integration Trap

Every company has multiple systems where content originates: a CMS for the website, a PIM for product information, a documentation platform, a marketing automation tool, a support knowledge base. Each of these needs to connect to your TMS.

TMS vendors understand this, which is why they all offer ‘connectors’ – pre-built integrations with popular platforms. On paper, this sounds perfect. In practice, it creates a hidden dependency.

Consider what happens when you need to change TMS. Perhaps your current vendor raised prices, or a competitor offers better AI capabilities, or your company acquired another business using a different system. Suddenly, you face a migration where the hardest part isn’t moving translation memories – it’s rebuilding every integration.

I’ve seen migrations take eighteen months, not because of data complexity, but because of integration dependencies. The CMS connector needs reconfiguration. The PIM integration has to be rebuilt from scratch because the new TMS uses a different approach. The custom scripts that pushed content from the documentation platform need complete rewrites.

The cruel irony is that enterprises adopted their own TMS to escape vendor lock-in. Instead, they traded one form of lock-in for another. The integrations that make your TMS useful are the same integrations that make it painful to leave.

The Linguistic Automation Trap

Linguistic automation (the application of automated translation, editing, or quality measurement) is partly a technical challenge and partly a distinct discipline. Which neural machine translation engine performs best for your content type? Which LLM, with which prompts? How do you implement fully automated workflows that maintain high quality? How do you automate quality assurance or style guide compliance?

This requires a very different mindset from process automation, but it only works in conjunction with it. Many TMS providers try to integrate MT and LLMs seamlessly into their offering. Companies like Intento or Custom MT were established to separate the technical infrastructure from the linguistic tuning, providing access to multiple providers and tools to compare them. Unless you're using such middleware, and a TMS integrated with it, you risk losing your linguistic automation capabilities if you ever change TMS. The alternative is relying on something as generic as ChatGPT or Google Translate, which everything integrates with. But such generic engines may not be the best solution for specialized content in your language pairs.
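The middleware idea itself is easy to sketch: a provider-agnostic interface plus a routing table fed by quality measurements. Everything below (class names, the routing table) is hypothetical; real products like Intento expose far richer routing and evaluation features.

```python
from typing import Protocol

class MTProvider(Protocol):
    """Provider-agnostic interface: swap engines without touching workflows."""
    def translate(self, text: str, source: str, target: str) -> str: ...

class GenericNMT:
    # Stand-in for a real engine SDK call (DeepL, Google, a tuned LLM...).
    def translate(self, text: str, source: str, target: str) -> str:
        return f"[{target}] {text}"

def route(providers: dict[str, MTProvider], content_type: str, text: str,
          source: str, target: str) -> str:
    """Pick the engine that measured best for this content type and pair."""
    # In practice this table comes from continuous quality evaluation.
    best_for = {"legal": "engine_a", "marketing": "engine_b"}
    engine = providers[best_for.get(content_type, "engine_a")]
    return engine.translate(text, source, target)

providers = {"engine_a": GenericNMT(), "engine_b": GenericNMT()}
print(route(providers, "legal", "The parties agree...", "en", "de"))
```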

The result is another form of lock-in. TMSes constrain customers to the linguistic automation capabilities they offer, so customers choose what’s available rather than what’s optimal. Your MT configuration, your quality estimation models, your prompt engineering all become TMS-specific assets that don’t migrate cleanly.

The Vendor Management Gap

TMSes treat translation vendors as users with logins. They can assign work to vendors, track project status, and receive completed files. What they cannot do is manage vendors as business relationships.

Think about what vendor management actually requires: tracking negotiated rates that vary by language pair, content type, and volume tier. Managing minimum fees and rush surcharges. Evaluating quality across hundreds of projects to identify which vendors excel at which content types. Balancing workload to maintain relationships with backup vendors while not over-relying on any single provider. Monitoring on-time delivery rates. Handling capacity planning for peak seasons.

None of this fits neatly into a TMS. The typical TMS approach to vendor assignment is first-come, first-served or round-robin – models that assume all vendors are interchangeable at identical prices. Reality is messier.
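A minimal sketch of what a rate card alone requires makes the mismatch obvious: a vendor is a commercial profile, not a login. All field names here are mine, not any TMS's.

```python
from dataclasses import dataclass, field

@dataclass
class RateCard:
    """Commercial terms for one vendor and language pair; none of this
    maps onto a TMS user profile."""
    language_pair: str            # e.g. "en-de"
    content_type: str             # e.g. "marketing", "UI", "legal"
    rate_per_word: float
    volume_tiers: dict[int, float] = field(default_factory=dict)  # words -> discounted rate
    minimum_fee: float = 0.0
    rush_surcharge_pct: float = 0.0

    def price(self, words: int, rush: bool = False) -> float:
        rate = self.rate_per_word
        for threshold in sorted(self.volume_tiers):
            if words >= threshold:
                rate = self.volume_tiers[threshold]
        total = max(words * rate, self.minimum_fee)
        return total * (1 + self.rush_surcharge_pct / 100) if rush else total

card = RateCard("en-de", "marketing", 0.14, {10_000: 0.12},
                minimum_fee=45.0, rush_surcharge_pct=25.0)
print(card.price(12_000, rush=True))  # tiered rate, rush surcharge applied
```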

The result? Enterprises maintain parallel systems. I’ve seen localization programs running sophisticated vendor performance dashboards in Tableau, fed by data exports from the TMS combined with financial data from procurement systems. The TMS knows who did the work; the spreadsheet knows whether it was good and cost-effective.

The Quality Measurement Problem

TMSes offer quality assurance features: terminology checks, consistency verification, formatting validation. These catch mechanical errors. But translation quality – the kind that affects whether your message resonates in the market – requires continuous evaluation.

When enterprises perform in-house quality reviews, they need to track results systematically. Which segments had errors? What type of errors? How does this vendor’s error rate compare to others? How has quality trended over time? Does quality vary by content type or language?

TMSes weren’t designed for this. They’re optimized for moving content through a workflow, not for building a quality database that drives vendor decisions. So quality data ends up in spreadsheets, disconnected from the TMS, requiring manual effort to correlate.
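What's missing is essentially a small analytics schema. Here is a sketch of the kind of record that needs to accumulate per reviewed segment, with invented field names, plus one of the aggregations vendor decisions depend on:

```python
from dataclasses import dataclass
from datetime import date
from collections import defaultdict

@dataclass
class QualityRecord:
    """One reviewed segment: the unit a quality database aggregates over."""
    vendor: str
    language: str
    content_type: str
    error_category: str | None   # e.g. "terminology", "accuracy"; None if clean
    severity: int                # 0 = clean, 1 = minor, 2 = major
    review_date: date

def error_rate_by_vendor(records: list[QualityRecord]) -> dict[str, float]:
    """Share of reviewed segments with errors, per vendor."""
    seen, errors = defaultdict(int), defaultdict(int)
    for r in records:
        seen[r.vendor] += 1
        errors[r.vendor] += 1 if r.error_category else 0
    return {v: errors[v] / seen[v] for v in seen}

records = [
    QualityRecord("vendor_a", "de", "legal", None, 0, date(2024, 5, 2)),
    QualityRecord("vendor_a", "de", "legal", "terminology", 1, date(2024, 5, 2)),
]
print(error_rate_by_vendor(records))  # {'vendor_a': 0.5}
```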

Some TMSes have added automated quality evaluation modules, but developing such modules requires a very different mindset from feature-based engineering. Building the right linguistic tools is a matter of iterative improvement and relentless measurement. To put it differently: a TMS provider can surely release, within a week or two, some sort of AI integration that submits every segment with a prompt to some LLM. But if you want to measure quality across different AI providers, route content to the right provider and prompt, run automated quality checks afterwards, and prove the results against a consistent set of human evaluations… that is years of service-optimization work, with at least a million checked words behind it. Automated content evaluation connected back to vendor management is either weak or absent in TMSes.

The Internal Resource Challenge

Many companies employ internal linguists – sometimes for quality control, sometimes for specialized content, sometimes for languages where external vendors are scarce or expensive. Managing these internal resources through a TMS designed for vendor outsourcing creates friction.

Internal linguists have capacity constraints that don’t map to the TMS model. They have other responsibilities. They take vacations. They have expertise in certain domains but not others. Optimizing their workload – ensuring they’re used for high-value work while routine content goes to vendors – requires visibility the TMS doesn’t provide.

It's not rare to see localization managers toggling between TMS dashboards and Outlook calendars, trying to figure out who's available this week and what they should prioritize. The TMS sees work as a queue; the manager sees people with varying skills, availability, preferences, and constraints.

The Reporting Reality

Every localization program needs reporting: spend by language, by quarter, by content type. Turnaround times. Quality trends. Vendor utilization. Budget forecasting. Localization leaders talk a great deal about creating value by linking localization to business outcomes.

TMSes offer built-in reports, but they rarely match what leadership actually wants to see. Outcomes are not measured in TMSes. The TMS reports what happened inside the TMS – project volumes, word counts, leverage rates. Leadership wants business context – how does localization spend compare to revenue by region? How does turnaround time affect time-to-market? What’s the correlation between quality scores and customer satisfaction?

Answering these questions requires combining TMS data with financial systems, sales data, customer feedback tools. This integration happens in business intelligence platforms like PowerBI or Tableau, fed by exports and APIs. The TMS is a data source, not the reporting solution.
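In practice, the BI work boils down to joins the TMS cannot do itself. A sketch with pandas, using invented column names and an illustrative blended rate (real exports vary by TMS and finance system):

```python
import pandas as pd

# Export from the TMS: what happened inside the workflow.
tms = pd.DataFrame({
    "project_id": [101, 102],
    "language": ["de", "fr"],
    "words": [12_000, 8_500],
    "turnaround_days": [4, 6],
})

# Export from finance/sales systems: the business context.
revenue = pd.DataFrame({
    "language": ["de", "fr"],
    "regional_revenue_eur": [2_400_000, 1_100_000],
})

report = tms.merge(revenue, on="language")
report["spend_eur"] = report["words"] * 0.12  # illustrative blended rate
report["spend_vs_revenue_pct"] = 100 * report["spend_eur"] / report["regional_revenue_eur"]
print(report[["language", "spend_eur", "spend_vs_revenue_pct"]])
```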

This creates an ongoing maintenance burden. Every TMS upgrade risks breaking your data pipelines. Every new report requires understanding both the BI tool and the TMS data model. The expertise to build and maintain these integrations often doesn’t exist in localization teams, creating dependencies on IT resources that are always in short supply.

When One TMS Actually Is Enough

After all this criticism, let me offer balance. There are scenarios where a single TMS genuinely serves an enterprise well.

If you work exclusively with LSP vendors, partition content in a simple way, and trust them to manage quality and resource allocation, the TMS functions as a handoff point. Content goes in, translations come out. You don't need vendor management tools because your vendor manages that complexity. You don't need quality tracking because that's their responsibility.

If your source systems are limited and stable – say, one CMS and one documentation platform – the integration burden is manageable. Two connectors don’t create the same lock-in as twenty.

If your reporting needs are basic – word counts and turnaround times rather than complex business analytics – TMS built-in reports may suffice.

If you’re confident you’ll never change TMS – because you’re deeply integrated with a vendor ecosystem or because the TMS is part of a larger enterprise platform commitment – the switching cost matters less.

These conditions describe some programs accurately. But they’re the exception, not the rule. Most enterprises eventually outgrow the single-TMS model, usually discovering this at inconvenient moments.

Thinking Differently About Architecture

What if instead of asking ‘which TMS should we use?’ we asked ‘how should we architect our localization infrastructure?’

The key insight is separation of concerns. Integrations to source systems serve a different purpose than translation workflow management. Vendor relationship management serves a different purpose than project execution. Quality measurement serves a different purpose than the quality assurance that catches errors before release. And the portals offered to internal customers should be optimized for their users and use cases rather than for translation management.

When these functions are conflated in one system, you get the gaps I’ve described. When they’re separated into purpose-built components that communicate through well-defined interfaces, you get flexibility and resilience.

Consider what changes if your source system integrations exist independently of your TMS. Changing TMS no longer means rebuilding integrations. You could even use different TMSes for different content types – one optimized for software strings, another for marketing content – with a unified integration layer handling the routing. And you could simplify integration in the case of mergers and acquisitions.
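As a sketch, that unified integration layer is little more than a routing table that source systems talk to instead of any single TMS. The system names and the send-side contract below are placeholders:

```python
# Unified integration layer: source systems call this, never a TMS directly.
ROUTES = {
    "software_strings": "tms_a",   # optimized for continuous localization
    "marketing": "tms_b",          # optimized for transcreation workflows
    "documentation": "tms_a",
}

def submit(content_type: str, payload: dict) -> str:
    """Route a localization job to whichever TMS handles this content type."""
    tms = ROUTES.get(content_type)
    if tms is None:
        raise ValueError(f"No route configured for {content_type!r}")
    # In a real layer, each TMS's API sits behind one internal contract;
    # swapping a TMS means editing this table, not every source system.
    return f"queued {payload.get('id')} on {tms}"

print(submit("marketing", {"id": "web-banner-42"}))
```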

Consider what changes if vendor management lives in a dedicated system connected to but not part of your TMS. Your vendor data survives TMS changes. Your quality history stays intact. Your financial tracking integrates with procurement systems designed for that purpose.

This modular approach requires more upfront thinking but pays dividends in flexibility. It’s the difference between building on bedrock versus building on a platform that might shift.

Moving Forward

I’m not arguing that enterprises should abandon their TMS investments. Those systems provide genuine value and represent significant investment in configuration, training, and process development.

What I am arguing is that the TMS should be seen as one component in a larger architecture, not the architecture itself. The spreadsheets, middleware, and workarounds that enterprises build around their TMS aren't failures of discipline – they're signals that real needs aren't being met. Those needs deserve proper solutions, not guilt about not using the TMS 'correctly.' The expectation that the TMS should not only manage your translations but also do your laundry is not justified. Separating the different business concerns and building in a modular fashion not only makes you more resilient, it also decreases your licensing costs. And if working with multiple systems feels daunting, think of microservice-based software architecture: a plethora of tiny black boxes adding up to scalable, resilient, modern solutions.

In the next article, I’ll examine how multilingual vendors approach this same challenge from a different angle – and why their solutions often create downstream problems for the rest of the supply chain.

 Istvan Lengyel

Founder & CEO, BeLazy
