Milestones of Machine Translation: Part 1 — In the beginning, there was the dictionary

The story of Machine Translation has much in common with the epics of ancient Greece, including decisive trials by fire.

By: Lengoo Marketing Team
Date: Sep 15, 2020

The path of research

1930s: In 1933, the Frenchman Georges Artsrouni and the Russian Peter Trojanskij began developing machine-supported translation systems, independently of each other. Their ideas were based on the machine comparison of different dictionaries. Such a system could only handle the roots of words, not the various declensions, tenses, or person forms. Their work remained largely unknown for some time. Trojanskij died young, before he could finish his project, and his work only became known to the outside world in the 1950s, when his patent for what was certainly one of the first MT systems in the world was rediscovered.
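
To see why a root-only dictionary lookup struggles, here is a minimal sketch in Python. The word list, the stemming rule, and the example sentence are all invented for illustration; the historical systems were mechanical devices, not software, and worked with different language pairs.

```python
# Hypothetical illustration of root-based dictionary translation in the spirit
# of the 1930s designs: each word is reduced to a crude "root" and looked up
# in a bilingual word list. Inflection (tense, case, agreement) is lost.

TOY_DICTIONARY = {
    "translat": "traduire",  # invented English-root -> French entries
    "machin": "machine",
    "word": "mot",
}

def crude_root(word: str) -> str:
    """Strip a few common English endings to approximate a word root."""
    for suffix in ("ions", "ion", "ing", "es", "s", "e"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def translate(sentence: str) -> str:
    """Word-for-word lookup: unknown words are flagged, grammar is ignored."""
    out = []
    for token in sentence.lower().split():
        out.append(TOY_DICTIONARY.get(crude_root(token), f"[{token}?]"))
    return " ".join(out)

print(translate("The machine translates words"))
# -> "[the?] machine traduire mot" : the roots survive, the grammar does not
```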

1950s: In the 1950s, universities in the US started developing their first translation systems. IBM presented an early result in New York in 1954, an event that drew quite a bit of media coverage at the time. Nevertheless, IBM's system was little more than an illustrative toy. It comprised only 250 words (for basic day-to-day communication, people need at least 750) and translated 49 preselected Russian sentences into English. IBM's early system did, however, reinforce the belief of the time that the dawn of comprehensive machine translation was upon mankind, a belief that turned out to be wrong.

1960s: Following the rather bumpy first attempts at Russian-to-English translation, the US Department of Defense reported in the 1960s that machine translation wasn't feasible and that work would be halted because of the massive costs. Most research in the US was subsequently suspended for many years. The United Kingdom and the Soviet Union followed suit, abandoning large swathes of work on machine translation after this bleak prognosis. Other countries, particularly France, Canada, and Germany, continued to invest considerable time and money in developing machine translation methods.

1980s: In the 1980s, it was Japan that made significant technological progress. As part of the government-backed Fifth Generation Computer Systems project, which aimed to build what was then regarded as a supercomputer, significant resources were directed towards machine translation, but without achieving the desired results. By the end of the 1980s, the global research landscape was highly fragmented: different teams in different countries were pursuing widely disparate methods, and progress was slow.

1990 up to today: With computers finally becoming more compact, affordable, and powerful in the form of PCs, developers could work more efficiently. In recent years, there have even been reports of high-quality translations approaching human level. Early research systems handled only a single language pair and a limited vocabulary (usually covering fields such as technology and chemistry), whereas modern systems offer a variety of language combinations and work across multiple domains. In the developing field of computational linguistics, it is now also possible to integrate linguistic knowledge into the programming process. Results from this hybrid approach are of much higher quality than those of older methods that relied on pure statistics. The apex of research may not yet have been reached, but it is finally in sight.

Next week, in part 2 of our series, we’ll take a look at chess games and neural networks. Rest assured: MT’s present and future are much rosier than its past.