Generative Artificial Intelligence (GenAI) provides new opportunities to translate content faster, with greater consistency and quality, while cutting costs. However, it also brings the potential for errors, inconsistencies, and runaway expenses. Part of the challenge is correctly harnessing this emerging technology as it continues to evolve rapidly. Technical communications and localization professionals often begin using GenAI tools without clearly defining the business outcome they want to achieve, which makes it harder to align with proven translation processes and to measure success effectively.
The benefits – and risks – of using GenAI will only grow as more companies adopt the technology. So, it is important to start building healthy habits in applying GenAI to translation and localization now. Let’s look at ten habits that organizations can embrace today to maximize the utility of GenAI while maintaining control over quality and costs.
1. Translate structured source content
As technical communication and translation professionals, we tend to take certain best practices for granted. We understand the value of structuring source content as reusable chunks and recognize that translating these pieces fosters consistency and reuse in our localizations. As new technologies like GenAI emerge, it's essential to continue working directly with structured source content, rather than compiled output formats, to maintain single sourcing capabilities and ensure AI tools can interpret and respond to content with greater precision and contextual awareness.
Translating compiled outputs using GenAI may seem convenient, but it often comes at a cost. Without a clear source of truth, translations can vary across similar content types and outputs, undermining consistency and reuse. Just as single sourcing benefits source language content, it also strengthens translated versions by preserving structure and intent. While it is valuable to explore new use cases for GenAI, a good rule of thumb is to work with well-structured source texts to ensure quality and maintain control.
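To make the idea concrete, here is a minimal sketch in Python that pulls translatable segments out of a simplified, DITA-like structured source file rather than a compiled output. The file name, the assumption that translatable elements carry id attributes, and the extract_segments helper are illustrative only, not part of any specific tool.

```python
# Minimal sketch: extract translatable segments from a structured (DITA-like) source
# file instead of a compiled output, keeping element IDs so each translation can be
# written back into the single source. File name and id convention are assumptions.
import xml.etree.ElementTree as ET

def extract_segments(xml_path: str) -> dict[str, str]:
    """Return {element_id: source_text} for elements that carry an id and text."""
    tree = ET.parse(xml_path)
    segments = {}
    for elem in tree.iter():
        elem_id = elem.get("id")
        if elem_id and elem.text and elem.text.strip():
            segments[elem_id] = elem.text.strip()
    return segments

if __name__ == "__main__":
    for seg_id, text in extract_segments("install_topic.dita").items():
        print(seg_id, "->", text)
```

Because each segment keeps its element ID, the translated text can flow back into the single source rather than living only in one compiled deliverable.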
2. Set up centralized terminology management
High-quality and consistent translation depends on a well-managed terminology system that AI tools can reliably reference and interpret. If companies haven’t done so already, they need to consider introducing centralized terminology management when bringing GenAI into their localization workflows.
Terminology management is the process of systematically identifying, defining, and organizing terms and their corresponding translations. When properly implemented, it results in a glossary that GenAI LLMs and translation engines can reference to ensure the accuracy and consistency of content, particularly in multilingual contexts.
For example, once an organization has a glossary of standard terminology in place, an LLM can be used to compare a machine translation against the glossary to run terminology checks and harmonize key terms across the localized content.
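As a simple illustration, the sketch below shows a rule-based glossary check that could run before or alongside an LLM-based harmonization pass. The glossary entries, language direction, and function name are invented for the example; a production terminology system would be considerably richer.

```python
# Illustrative glossary: approved source term -> approved target term (German -> English).
GLOSSARY = {
    "Wartungsintervall": "maintenance interval",
    "Schutzschalter": "circuit breaker",
}

def check_terminology(source: str, translation: str) -> list[str]:
    """Flag approved target terms that should appear in the translation but do not."""
    issues = []
    for source_term, target_term in GLOSSARY.items():
        if source_term.lower() in source.lower() and target_term.lower() not in translation.lower():
            issues.append(f"Expected '{target_term}' for '{source_term}'")
    return issues

print(check_terminology(
    "Der Schutzschalter muss vor dem Wartungsintervall geprüft werden.",
    "The breaker must be checked before the maintenance interval.",
))
# -> ["Expected 'circuit breaker' for 'Schutzschalter'"]
```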
3. Apply the right AI tool for the task
Applying the right AI tool for the task is essential to achieving high-quality translations. Traditional Neural Machine Translation (NMT) engines are often better suited for technical or structured content, where accuracy and consistency are critical. In contrast, LLMs excel at handling more creative or nuanced texts, where style, tone, and context play a larger role. By matching the tool to the content type, teams can maximize efficiency and ensure translations meet both linguistic and stylistic expectations.
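A content-type router can be as simple as the sketch below. The content-type labels and the NMT/LLM split are assumptions about how a team might classify its own content, not a prescribed taxonomy.

```python
# Minimal sketch of routing content to the better-suited engine by content type.
# Labels and engine names are illustrative assumptions.
TECHNICAL_TYPES = {"procedure", "reference", "safety_notice", "ui_string"}
CREATIVE_TYPES = {"marketing_copy", "blog_post", "release_announcement"}

def choose_engine(content_type: str) -> str:
    """Return which class of engine to use for a given content type."""
    if content_type in TECHNICAL_TYPES:
        return "nmt"   # accuracy and consistency matter most
    if content_type in CREATIVE_TYPES:
        return "llm"   # style, tone, and context matter most
    return "nmt"       # default to the more predictable option
```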
4. Use GenAI to augment traditional translation technologies
The content produced by technical communication teams is often used to ensure safety, corporate or regulatory compliance, and proper usage or procedures. Therefore, localized content needs to be accurate and consistent. Given the natural variability of GenAI, it is usually best used to augment rather than replace traditional NMT engines.
One approach is to first use a machine translation engine to translate content and then use LLMs for AI-based quality estimation and automated post-editing. The AI tool checks and scores the machine translation results, for example, reporting that 800,000 of 1 million translated words reach a certain accuracy threshold. This leaves only 200,000 translated words that need human review, which reduces turnaround times and costs.
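The routing step behind those numbers can be sketched as follows. The scores are assumed to come from an upstream LLM-based quality estimation step, and the 0.9 threshold is purely illustrative.

```python
# Minimal sketch of routing after AI-based quality estimation: segments at or above a
# threshold are accepted, the rest go to human review. Scores are assumed to be produced
# upstream by an LLM-based QE step; the threshold value is an example.
QE_THRESHOLD = 0.9

def route_segments(scored_segments: list[tuple[str, float]]) -> tuple[list[str], list[str]]:
    """Split (translation, score) pairs into accepted segments and segments needing review."""
    accepted, needs_review = [], []
    for translation, score in scored_segments:
        (accepted if score >= QE_THRESHOLD else needs_review).append(translation)
    return accepted, needs_review
```

With the figures from the example above, roughly 80 percent of the volume would land in the accepted list, leaving only the remainder for human review.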
Further efficiencies can be gained with translation memory (TM) systems, which reduce the amount of content that needs to be translated. Once translations are validated, by human or machine, they can be stored in the TM for reuse. This not only ensures higher quality and consistency but also allows AI tools to learn from existing translations, leading to better results in future projects.
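A minimal sketch of that reuse step might look like the following, with an in-memory dictionary standing in for a real TM system and an arbitrary fuzzy-match threshold.

```python
# Minimal sketch of a translation-memory lookup: reuse validated translations for exact
# or close matches so only genuinely new content is sent to the MT engine or LLM.
from difflib import SequenceMatcher

# Stand-in for a real TM system: validated source -> target pairs.
TM = {
    "Press the power button.": "Drücken Sie die Ein-/Aus-Taste.",
}

def tm_lookup(source: str, threshold: float = 0.85) -> str | None:
    """Return a stored translation if a sufficiently similar source segment exists."""
    best_target, best_score = None, 0.0
    for stored_source, stored_target in TM.items():
        score = SequenceMatcher(None, source, stored_source).ratio()
        if score > best_score:
            best_target, best_score = stored_target, score
    return best_target if best_score >= threshold else None
```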
Additionally, GenAI can be used to automate adjacent processes. For example, anonymizing data is a particularly valuable application of GenAI for highly regulated industries like banking and healthcare. Consider a healthcare organization that needs to anonymize data so that a translator cannot see the real names or illnesses of patients. GenAI can be used to detect all the personal information in a document, replace it with a meaningful placeholder – perhaps “John Doe” or “Jane Doe” for the patient’s name in the version delivered to the translator – and save the original data so it can be inserted back into the translated content. It is a complicated workflow to set up, but once it is in place, this type of automation can ensure compliance with privacy regulations while saving considerable time.
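A simplified sketch of that round trip is shown below. In a real workflow, the detection step would rely on GenAI or a dedicated entity-recognition service rather than the placeholder regular expression used here, and the placeholder naming scheme is an assumption.

```python
# Minimal sketch of the pseudonymization round trip: personal data is replaced with
# placeholders before translation and restored afterwards. The regex is a stand-in for
# AI-based detection of personal information.
import re

def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected names with placeholders and keep the originals for later."""
    mapping = {}

    def _swap(match: re.Match) -> str:
        placeholder = f"PATIENT_{len(mapping) + 1}"
        mapping[placeholder] = match.group(0)
        return placeholder

    # Stand-in pattern for "Firstname Lastname"; real detection would be AI-based.
    masked = re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", _swap, text)
    return masked, mapping

def restore(translated: str, mapping: dict[str, str]) -> str:
    """Put the original personal data back into the translated text."""
    for placeholder, original in mapping.items():
        translated = translated.replace(placeholder, original)
    return translated
```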
5. Evaluate whether to fine-tune an LLM or use a RAG
In producing quality translations cost-effectively, there are two key avenues that localization teams can take with GenAI. One is fine-tuning an LLM to improve translations. The other approach uses Retrieval-Augmented Generation (RAG) to obtain the best possible suggestion for a translation.
Fine-tuning involves taking a pre-trained Large Language Model and adjusting its internal parameters with additional training data so that it better suits a specific task, such as translating specialized content. This approach improves performance on that task, but it is less flexible: the model struggles with new data unless it is retrained. A fine-tuned LLM is therefore best suited to slowly changing domains, such as fraud detection, translation for niche subject areas, or general sentiment analysis.
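For illustration, validated source and target pairs can be packaged as chat-style training examples. The JSONL layout below mirrors common LLM fine-tuning formats but is not tied to any specific vendor; the file name and sample pair are invented.

```python
# Minimal sketch of preparing fine-tuning data from validated translations.
# Field names and the output file are assumptions, not a vendor requirement.
import json

PAIRS = [
    ("Tighten the screws to 12 Nm.", "Ziehen Sie die Schrauben mit 12 Nm an."),
]

with open("finetune_translations.jsonl", "w", encoding="utf-8") as f:
    for source, target in PAIRS:
        record = {
            "messages": [
                {"role": "system", "content": "Translate English technical content into German."},
                {"role": "user", "content": source},
                {"role": "assistant", "content": target},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```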
By contrast, instead of relying solely on pre-trained knowledge, RAG models retrieve relevant information from external sources, such as a terminology database, documents, or translation memories, to enable LLMs to access and utilize current information. For example, in a RAG workflow, there could be a prompt to check if the translation memory has similar translations for a particular sentence. If the LLM finds enough similar sentences, it retranslates the sentence to provide a more accurate translation. The real-time improvements enabled by a RAG approach make it a powerful tool for localizing content with frequently changing data, such as customer service bots, search engines, and dynamic question and answer systems.
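The prompt-assembly step of such a RAG workflow might look like the sketch below. The retrieval of translation-memory matches and glossary hits is assumed to happen elsewhere (for instance, with lookups like those sketched earlier); only the way the retrieved context is placed in front of the LLM is shown.

```python
# Minimal sketch of assembling a RAG prompt from retrieved TM matches and glossary hits.
# Retrieval itself is assumed to be handled upstream; names are illustrative.
def build_prompt(source: str, tm_matches: list[tuple[str, str]], glossary_hits: dict[str, str]) -> str:
    context_lines = [f"- \"{src}\" was previously translated as \"{tgt}\"" for src, tgt in tm_matches]
    term_lines = [f"- Always translate \"{src}\" as \"{tgt}\"" for src, tgt in glossary_hits.items()]
    return (
        "Translate the sentence below into German.\n"
        "Reference translations from the translation memory:\n" + "\n".join(context_lines) + "\n"
        "Terminology requirements:\n" + "\n".join(term_lines) + "\n"
        f"Sentence: {source}\n"
    )
```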
6. Deploy the paid “pro” or “enterprise” version of a GenAI tool
Using the free version of a GenAI tool for translation is problematic on two fronts: First, the free version draws on a broad range of publicly available information, which can limit the quality and accuracy of localizations. Second, running an organization’s content through a free GenAI version risks making proprietary content publicly available.
Therefore, companies should adopt the pro or enterprise version of a GenAI tool to support their translation efforts. This enables organizations to work with their own data and content sources in a private environment, customize the tool for their needs, and apply advanced security and privacy protections. Widely adopted enterprise-class tools include OpenAI’s ChatGPT Enterprise, Microsoft 365 Copilot Enterprise, and Gemini for Google Workspace.
7. Use a “mixture of experts” model to focus LLMs on specific tasks
AI tools can be very resource-intensive, so having them routinely process everything across the entire content network can quickly lead to runaway costs. For this reason, the best practice is to adopt a “mixture of experts” model where LLMs focus on specific tasks.
For instance, a translator might first use an NMT to pre-translate content, then send the pre-translated content to an LLM for quality estimation, and deliver certain results back to a different LLM for automated post-editing. Because the LLMs are very focused on specific tasks, they can be incorporated into the localization workflow without adding significant costs.
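The control flow of that chain can be sketched in a few lines. The nmt, qe_llm, and postedit_llm objects are placeholders for whatever engines an organization actually runs, and the threshold value is illustrative.

```python
# Minimal sketch of a "mixture of experts" translation chain: NMT pre-translation,
# a focused quality-estimation model, and a separate post-editing model. The engine
# objects are placeholders; only the control flow is the point.
def translate_with_experts(segment: str, nmt, qe_llm, postedit_llm, threshold: float = 0.9) -> str:
    draft = nmt.translate(segment)                      # 1. fast, consistent pre-translation
    score = qe_llm.estimate_quality(segment, draft)     # 2. focused quality estimation
    if score >= threshold:
        return draft                                    # good enough: no further AI cost
    return postedit_llm.post_edit(segment, draft)       # 3. separate model fixes weak drafts
```

Because each model only sees the segments it is responsible for, compute spend stays proportional to the work that actually needs doing.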
8. Consider employing MCP for your GenAI model
The “mixture of experts” approach, where different LLMs are incorporated into a translation workflow, makes a good case for adopting the Model Context Protocol (MCP). This open-source framework standardizes how LLMs interact with other LLMs, AI agents, and external tools and data sources. While MCP is an emerging technology, it is quickly being adopted by a broad range of enterprises.
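As a rough illustration, an organization could expose its terminology database as an MCP tool that any MCP-capable LLM or agent in the workflow can call. The sketch below assumes the official MCP Python SDK (the mcp package) and its FastMCP helper; the glossary content and tool name are invented for the example.

```python
# Minimal sketch of exposing a terminology lookup as an MCP tool, assuming the
# MCP Python SDK's FastMCP helper. Glossary data and tool name are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("terminology-server")

GLOSSARY = {("maintenance interval", "de"): "Wartungsintervall"}

@mcp.tool()
def lookup_term(term: str, target_language: str) -> str:
    """Return the approved translation of a term, or an empty string if none exists."""
    return GLOSSARY.get((term.lower(), target_language.lower()), "")

if __name__ == "__main__":
    mcp.run()
```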
9. Cultivate prompt engineering expertise
The responses that GenAI tools deliver are only as good as the prompts used to query them and the data available to them. This challenge has led to the discipline of prompt engineering, the process of creating effective instructions or queries to guide GenAI models in delivering the most relevant and accurate results. With localizations, for instance, this can make the difference between getting a literal translation of a guide and obtaining a translation that incorporates an organization’s voice and preferred terminology.
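A reusable prompt template is one simple way to bake that voice and preferred terminology into every request. The style-guide wording and placeholders below are illustrative only.

```python
# Minimal sketch of a translation prompt template that encodes voice and terminology
# preferences instead of asking for a bare, literal translation. Wording is illustrative.
PROMPT_TEMPLATE = """You are translating technical documentation into {target_language}.
Follow the corporate style guide: address the reader formally, keep sentences short,
and never translate product names.
Use these approved terms: {terminology}.
Translate the following passage, preserving all formatting and placeholders:

{source_text}"""

def build_translation_prompt(source_text: str, target_language: str, terminology: dict[str, str]) -> str:
    terms = "; ".join(f"{src} -> {tgt}" for src, tgt in terminology.items())
    return PROMPT_TEMPLATE.format(
        target_language=target_language, terminology=terms, source_text=source_text
    )
```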
Some large organizations are hiring prompt engineers. However, as the use of GenAI grows, prompt engineering will evolve from a job description to another communication skill that all technical communications and localization professionals will need to acquire. To optimize the use of GenAI for translations today and prepare for the future, enterprises need to invest in building their teams’ skills now.
10. Incorporate human-led review
GenAI is rapidly proving to be an effective tool in translation workflows, but it is not perfect. So, most content will still require a human-led review to close any gaps or fix any errors introduced by automated translations. This is especially true for the localized versions of critical documentation produced by a company’s technical communications teams to ensure proper usage and procedures, safety, and compliance. Rather than replacing that review, GenAI used in combination with traditional solutions frees up localization experts to focus on the content and translations where they add the most value.