The world of artificial intelligence is full of buzzwords. Machine learning, deep learning, neural networks, and machine translation: all of these terms are related. Nevertheless, there are important differences.

Artificial intelligence (AI)
A catch-all term charged with major expectations, a great deal of fascination, and also fear. Broadly, it refers to smart computer systems. As a subdomain of computer science, artificial intelligence deals with the automation of intelligent behavior and with machine learning. It essentially means simulating human intelligence and learning, e.g. by collecting information and finding rules for using that information. By combining data science, processing power, and algorithms, computers are programmed to solve problems and complete tasks relatively independently. Artificial intelligence is difficult to define precisely: scientists still disagree on exactly how human intelligence itself should be described.

Machine learning (ML)
In science, machine learning is a subdomain of the research field of artificial intelligence and one of its most common applications. Machine learning comprises a class of algorithms that allow computer systems to learn and improve automatically from "experience". Traditional machine learning applications learn from large amounts of sample data, establish correlations, and derive general rules. The data is mostly "structured": ML algorithms are set up with predefined features, which allow them, for example, to classify dog breeds based on parameters such as height and length that have been extracted from images, rather than working on the raw images themselves. Once the training phase is over, the adaptive machines can apply their findings to new cases and make predictions, for example. If a machine learning algorithm produces inaccurate predictions, data engineers intervene and make adjustments or corrections.
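Learning from structured features can be sketched with a toy example. The breed labels, the measurements, and the simple nearest-neighbor rule below are all illustrative assumptions, not a real-world model:

```python
import math

# Toy training data: structured features (height_cm, length_cm) -> breed label.
# The values are illustrative, not real breed measurements.
training_data = [
    ((20.0, 30.0), "dachshund"),
    ((22.0, 33.0), "dachshund"),
    ((60.0, 70.0), "great_dane"),
    ((75.0, 90.0), "great_dane"),
]

def classify(features, data):
    """1-nearest-neighbor rule: predict the label of the closest training example."""
    _, label = min(data, key=lambda item: math.dist(item[0], features))
    return label

print(classify((21.0, 31.0), training_data))  # small, long dog -> dachshund
print(classify((70.0, 85.0), training_data))  # tall, large dog -> great_dane
```

The key point is that the algorithm never sees an image: it only receives the predefined numeric features, which is what "structured" data means here.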

Deep learning (DL)
Deep learning is a subdomain of machine learning. Deep learning methods arrange algorithms into layers to create an artificial neural network; the network learns on its own and can make intelligent decisions. A neural network with multiple layers is referred to as a "deep" model. Deep learning is, in effect, the next evolutionary stage of machine learning, and it achieves particularly good results when large amounts of data (so-called "big data") are available to train the network. Unlike traditional machine learning applications, deep learning algorithms work remarkably well with huge amounts of unstructured data. The algorithms no longer need predefined features; instead, they classify the data on their own according to logical structures, which resembles the way humans reason toward conclusions. For example, a DL algorithm finds distinguishing features by examining unstructured images of dogs and identifies which image shows which breed. Besides images, deep learning can also be used to classify texts or sounds. The models are used in search engines to identify text or images, or to ensure that autonomous vehicles can recognize road signs.
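To make the notion of "layers" concrete, here is a minimal sketch of a forward pass through a two-layer network. The weights are hand-picked toy values; in a real deep learning system they would be learned from data, and there would be many more layers and neurons:

```python
import math

def relu(x):
    """Common activation function: pass positive values, clamp negatives to zero."""
    return max(0.0, x)

def dense(inputs, weights, biases, activation):
    """One fully connected layer: weighted sum plus bias per node, then activation."""
    return [
        activation(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

# Layer 1: two inputs feed two hidden nodes.
hidden = dense([1.0, 2.0],
               weights=[[1.0, 0.5], [-0.5, 1.0]],
               biases=[0.0, 0.1],
               activation=relu)

# Layer 2: the hidden nodes feed one output node, squashed to the range (0, 1).
output = dense(hidden,
               weights=[[1.0, -1.0]],
               biases=[0.0],
               activation=lambda x: 1 / (1 + math.exp(-x)))  # sigmoid
print(output)
```

Stacking more such layers, each feeding the next, is exactly what makes a network "deep".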

Artificial neural networks
Artificial neural networks are a set of algorithms designed to recognize patterns. They are capable of reading and interpreting sensory data with a kind of machine perception, and can label and group raw data. The patterns they recognize are numeric and contained in vectors, into which real-world data such as text, images, or sound is translated. Like the model they are based on (biological neural networks in the brain), artificial neural networks consist of interconnected nodes. The nodes allow the network to evaluate the data and assign a weight to each value. There are different classification methods, also known as "learning rules", which data engineers use to train a neural network before it is deployed.
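As a minimal illustration of nodes, weights, and a learning rule, the following sketch trains a single-node perceptron (one of the oldest learning rules) on the logical AND function. The learning rate and epoch count are arbitrary choices for this toy example:

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Perceptron learning rule: nudge each weight in proportion to the error."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Node output: weighted sum of inputs plus bias, thresholded at zero.
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction
            # Learning rule: shift weights and bias toward the correct answer.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Logical AND as training data: output 1 only when both inputs are 1.
and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(and_samples)

for inputs, target in and_samples:
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    print(inputs, 1 if activation > 0 else 0)
```

The repeated weight adjustment driven by the error is the "training" referred to above; real networks apply the same idea across millions of weights.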

Natural language processing (NLP) & neural machine translation (NMT)
Natural language processing is a broad research field in which computer science, linguistics, and artificial intelligence overlap. The goal is to teach computers to process and "understand" natural language so that they can answer questions or translate, for example. Many different applications are emerging from this very active field of research. These include text chatbots and voice user interfaces, as well as machine translation.

Machine translation has been around for a very long time: the first pioneering attempts date back to the 1930s. Substantial improvements came from the use of deep learning and artificial neural networks, which allow computer systems to establish contextualized connections between words and phrases. Neural machine translation, which belongs to the broad field of artificial intelligence, yields results that sound considerably more natural from a linguistic standpoint.