Large Language Models and the future of localization

New deep-learning models capable of producing high-quality text have sprung up and advanced at mind-boggling speed. What’s in store for the localization industry?

Text by Szymon Kaczmarek, Rafał Jaworski, and Andrzej Zydroń

Neural Machine Translation (NMT) burst onto the language technology scene in 2017, bringing a substantial improvement in translation quality over the earlier statistical machine translation (SMT) approach. NMT showed that, given enough data, it is possible to build an effective machine translation (MT) system. Nevertheless, NMT systems still had their drawbacks and weaknesses: training them could take weeks or even months, depending on the amount of data to process. In addition, NMT systems were not very good at handling long sentences, nor at coping with words that had not been encountered during training, so-called out-of-vocabulary (OOV) words. This would result either in an entirely inappropriate word being inserted (depending on the “beam search” parameter settings of the NMT engine) or in the OOV source word being repeated multiple times in the target text.
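To make the out-of-vocabulary problem and the role of the beam search parameter more concrete, the following minimal Python sketch may help. The toy vocabulary, the hand-written scoring function, and the beam width are all illustrative assumptions rather than any real NMT engine; the point is only to show how a fixed vocabulary forces unseen words onto an <unk> token, and how beam search keeps a small set of candidate translations alive at each decoding step.

```python
import math

# A deliberately tiny, illustrative sketch, not a real NMT engine.
# The vocabulary, the hand-written scoring function, and the beam
# width below are all assumptions made for demonstration purposes.

VOCAB = ["</s>", "<unk>", "the", "cat", "sat"]
EOS = 0  # index of the end-of-sentence token
UNK = 1  # index of the unknown-word token
TOKEN_TO_ID = {tok: i for i, tok in enumerate(VOCAB)}

def encode(word: str) -> int:
    # A fixed vocabulary has no entry for unseen words, so anything
    # not encountered during training collapses to <unk>.
    return TOKEN_TO_ID.get(word, UNK)

def step_log_probs(prefix: list) -> list:
    # Stand-in for the neural decoder: returns next-token
    # log-probabilities given the tokens generated so far.
    scores = [0.0] * len(VOCAB)
    if len(prefix) < 3:
        scores[EOS] = -5.0             # discourage stopping too early
        scores[2 + len(prefix)] = 2.0  # toy preference: "the cat sat"
    else:
        scores[EOS] = 5.0              # push toward ending the sentence
    log_total = math.log(sum(math.exp(s) for s in scores))
    return [s - log_total for s in scores]

def beam_search(beam_width: int = 2, max_len: int = 6) -> list:
    # Each hypothesis is a pair: (cumulative log-probability, token ids).
    beams = [(0.0, [])]
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, prefix in beams:
            for tok_id, lp in enumerate(step_log_probs(prefix)):
                candidates.append((score + lp, prefix + [tok_id]))
        # The beam width decides how many partial translations survive
        # each step: the parameter setting referred to in the text above.
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = []
        for score, prefix in candidates[:beam_width]:
            if prefix[-1] == EOS:
                finished.append((score, prefix))
            else:
                beams.append((score, prefix))
        if not beams:
            break
    finished.extend(beams)
    best = max(finished, key=lambda c: c[0])
    return [VOCAB[tok_id] for tok_id in best[1]]

if __name__ == "__main__":
    # An unseen word ("zylophone") is forced onto <unk>:
    print([VOCAB[encode(w)] for w in "the zylophone sat".split()])
    print(beam_search())  # -> ['the', 'cat', 'sat', '</s>']
```

A narrow beam keeps decoding fast but can lock in a poor early choice, which is one way an inappropriate word ends up in the output; a wider beam explores more alternatives at a higher computational cost.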

LSTM: enhancing accuracy, but at ...