The present and future of MT are much rosier than its past. What are we still working on today?
The first part of our series looked back at the tough beginnings and unfulfilled promises of Machine Translation. We've left the past behind us. Welcome to the present of Machine Translation!
Recent research in MT is focused largely on Neural Machine Translation (NMT). This method allows artificial intelligence to translate longer sentences and to learn more about the grammatical dependencies between words. The network-like structures on which Neural Machine Translation is based are inspired by the structure and functioning of the human brain, and they are what make Deep Learning possible in the first place. Unlike many of the original methods (direct Machine Translation, the Interlingua method, Example-based Translation, etc.), AI systems can now respond flexibly to the texts they have to translate: they keep learning new things about grammatical structures instead of working only with pre-programmed knowledge.

Board games provide a good way to illustrate this. In chess, the breakthrough in the battle between "man and machine" came in 1997, when the chess computer Deep Blue beat then World Chess Champion Garry Kasparov, only a year after Kasparov had managed to defeat the computer in a previous match. Deep Blue was still a relatively simple, if by no means compact, chess computer working with explicitly programmed algorithms. The Artificial Intelligence system AlphaGo, by contrast, celebrated a major success in 2016, beating the South Korean Go master Lee Sedol several times, and it relies above all on neural networks and Machine Learning. Go is a far more complex game than chess, which is why this was considered the more remarkable achievement: software based merely on simple algorithms wouldn't stand a chance against human intelligence in a round of Go.
The immediate predecessor of NMT was Statistical Machine Translation (SMT), which was a substantial departure from even earlier MT systems in that it didn't require explicitly programmed semantic and grammatical information about the source and target languages. For SMT, statistical machine learning techniques are used to analyze large amounts of parallel text in the two languages and then build two statistical models. The first of these models indicates which words and phrases are most likely to appear in the target language when a given word or phrase appears in a source sentence, and the second simply shows which words and phrases tend to appear near each other in target-language texts. The quality of the translations produced by SMT is dependent on the quality of the translated texts used to train the system and on how similar the new input sentences are to that training material. After a text has been translated into the target language, regardless of whether by a Neural, Statistical, or any other type of Machine Translation, it is always recommended to have it edited by a human translator. Translations will have to be proofread by humans for a long time to come, not least because language and expression are always also a matter of taste.
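To make the two-model idea a bit more concrete, here is a minimal sketch in Python. It is not any real SMT system: the phrase tables and probabilities below are invented toy values (real systems learn millions of phrase pairs and use far more sophisticated scoring). It only shows how a translation model and a language model can be combined to pick the most plausible candidate.

```python
import math

# Toy translation model: P(target phrase | source phrase).
# All values are hypothetical and chosen only for illustration.
translation_model = {
    ("das haus", "the house"): 0.6,
    ("das haus", "the home"): 0.3,
    ("das haus", "house the"): 0.1,
}

# Toy language model: P(target phrase), estimated from target-language text.
language_model = {
    "the house": 0.05,
    "the home": 0.02,
    "house the": 0.0001,
}

def score(source: str, candidate: str) -> float:
    """Combined log-probability: how likely the candidate is as a translation,
    plus how natural it sounds in the target language."""
    tm = translation_model.get((source, candidate), 1e-9)
    lm = language_model.get(candidate, 1e-9)
    return math.log(tm) + math.log(lm)

candidates = ["the house", "the home", "house the"]
best = max(candidates, key=lambda c: score("das haus", c))
print(best)  # -> "the house"
```

The word-order candidate "house the" loses not because the translation model rules it out, but because the language model has rarely seen it in target-language text, which is exactly the division of labor described above.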
Recurrent Neural Networks (RNNs) are the architecture behind many modern NMT systems and a clear step beyond SMT. Google is one of the companies to have invested a lot of effort into the technology in recent years. Their basic principle is similar to that of other neural networks, but, as the name suggests, they are recurrent: the neurons on each layer are additionally connected to neurons of their own or previous layers, instead of only being connected to neurons of the next layer. This structure loosely resembles the recurrent connectivity found in the brains of mammals.
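As a rough illustration of what "recurrent" means in practice, here is a minimal sketch of a single RNN step using NumPy. The layer sizes, weights, and input vectors are arbitrary toy values, not any production system; the point is only that each new hidden state depends both on the current word and on the previous hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size = 4, 3
W_xh = rng.normal(size=(hidden_size, input_size))   # input -> hidden weights
W_hh = rng.normal(size=(hidden_size, hidden_size))  # hidden -> hidden: the recurrent connection
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    """One time step: the new hidden state mixes the current input with the
    previous hidden state, so earlier words keep influencing later ones."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Process a toy "sentence" of three word vectors, one step at a time.
sentence = [rng.normal(size=input_size) for _ in range(3)]
h = np.zeros(hidden_size)
for word_vector in sentence:
    h = rnn_step(word_vector, h)

print(h)  # final hidden state summarizing the whole sequence
```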
The most recent development in the field of Machine Translation is the implementation of Transformer models. The technology still works with neural networks, but the focus is now placed on individual parts of the sentences to be translated, via a so-called attention mechanism. Roughly speaking, this idea is based on the way in which humans direct their visual attention to their environment: our focus is never placed on everything in sight, but rather on the point that seems most important. In line with this idea, neural networks can now independently decide which part of a sentence to focus on first. With languages that have similar structures, for instance, the process continues to be largely sequential. If the languages have different structures, however, the network might first focus on the middle or end of a sentence.
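The "focus" described above is usually implemented as an attention mechanism. The following is a minimal sketch of scaled dot-product attention, the core building block of Transformer models, applied to toy word vectors; a real Transformer stacks many such layers and learns separate projections for queries, keys, and values.

```python
import numpy as np

def attention(queries, keys, values):
    """Each output is a weighted mix of all values; the weights say how much
    the model 'focuses' on each position of the sentence."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # similarity of every query to every key
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over sentence positions
    return weights @ values, weights

rng = np.random.default_rng(0)
sentence = rng.normal(size=(5, 4))  # 5 toy word vectors of dimension 4

outputs, weights = attention(sentence, sentence, sentence)
print(weights.round(2))  # row i shows which words the model attends to for word i
```

Because every position can attend to every other position at once, the model is free to look at the middle or end of a sentence first, which is exactly the behavior described for structurally different languages.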
The fact that Machine Translation is all around us is often overlooked (because people are so used to it that they don't think about it) or viewed a little too critically (out of fear that it will ruin the job market for translators). The global translation market generates roughly 40 billion dollars in revenue each year and is set to grow to over 56 billion dollars by 2021. With advancing globalization, governments, companies, and private individuals keep growing closer together across national borders, and the steady rise in the need for translation can no longer be met by human translators alone. Let's look at an example from everyday life: Machine Translation functionality is now widely available in various social networks and e-mail programs. Subtitles for online videos can already be generated automatically in the source language, and developers are working on having them displayed directly in other languages via MT as well, without the need for human intervention, not because companies don't want to hire translators, but because there are not enough translators available for these tasks. Just consider that 400 hours of video material are uploaded to YouTube every minute. And the demand for translation is not only growing on social networks, but also in more "tangible" fields, such as technical documentation: the more countries products are exported to, the more languages all the accompanying documents need to be available in. Unlike marketing texts, which generally require a creative human touch to convey subtle messages to different cultures, many of these documents do not need to be particularly eloquent, and they don't require much cultural adaptation. Machine translations can take over some of these less nuanced, time-consuming tasks and thereby allow human translators to dedicate their attention to the texts that do require their time, attention, and artistry.
Machine Translation has certainly made great strides: from the first demonstrations of MT in the 1950s, in which rudimentary word-replacement approaches were used to feign translations of Russian texts, to the self-learning deep neural networks of today, with their ability to make translations sound remarkably natural. Next week – in the third and last installment of our series on the milestones of Machine Translation – we'll take a brief look at the future. Where will research take us next?