Training AI is like teaching children

Similar to young students, AI needs good teachers. Models require good training and guidance, and the learning process never ends.

By: Lengoo Marketing Team
Date: Apr 28, 2021

The fascinating thing about Artificial Intelligence (AI) is that the technology is capable of learning. For a long time, we thought this was a skill only humans and animals could master. Now that machines can learn (yes, it’s called “machine learning” for a reason), they need good human teachers. When smart computers learn a human language for machine translation, the process is quite similar to teaching a child.

AI, the innocent child

Like a child born into this world, AI is very much innocent in its nature. In the beginning, a language model and its neural networks are a blank slate. The model hasn’t seen anything of the world yet, has no vocabulary, and certainly doesn’t speak several languages. Before data scientists start training it, it is nothing but a bunch of random numbers.
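That claim can be made literal. Here is a minimal sketch, assuming PyTorch as the framework, of what an untrained model amounts to: its freshly initialized weights really are just random numbers, and the outputs they produce are meaningless noise.

```python
import torch
import torch.nn as nn

# A tiny, untrained "model": a single linear layer.
# Before any training, its weights are randomly initialized numbers.
untrained = nn.Linear(in_features=8, out_features=4)

print(untrained.weight)               # random numbers, no knowledge yet
print(untrained(torch.randn(1, 8)))   # the output is meaningless noise
```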

Imagine how a child learns a language. A child grows up speaking the language that people speak around it. AI likewise soaks up the information it’s presented with. Like children, machine translation (MT) models process the information they are exposed to. Engineers show them examples of language in the form of data, and the neural networks start making connections and learning, just as a child learns words first and then gradually comes to understand context and meaning.

Data is education

The quality of the data is crucial in machine learning. You wouldn’t expect a student who received a poor education to be the genius of the class. Language models, too, are only as smart as their teachers make them. To produce a well-trained model, engineers improve the data quality by filtering and augmenting it before the language model gets to see it. Good input equals good output.
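As an illustration, here is a minimal sketch of what filtering a parallel corpus might look like. The heuristics (dropping empty segments and implausible length ratios) are common examples chosen for this sketch, not a description of any particular production pipeline.

```python
# A minimal sketch of data filtering for a parallel (source, target) corpus.
# The heuristics below are illustrative assumptions, not an actual pipeline.

def keep_pair(source: str, target: str) -> bool:
    """Return True if a sentence pair looks clean enough to train on."""
    if not source.strip() or not target.strip():
        return False  # drop empty segments
    ratio = len(source) / max(len(target), 1)
    if ratio < 0.5 or ratio > 2.0:
        return False  # drop pairs with implausible length ratios
    return True

corpus = [
    ("Guten Morgen!", "Good morning!"),
    ("Hallo", ""),                                            # empty target
    ("Ja", "Yes, absolutely, of course, certainly, indeed"),  # bad ratio
]

clean = [(s, t) for s, t in corpus if keep_pair(s, t)]
print(clean)  # [('Guten Morgen!', 'Good morning!')]
```

Real pipelines layer many more checks on top, but the principle is the same: the model only ever sees what survives the filter.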

However, there is a significant difference from the way humans learn. AI only has a very narrow peek into our world and doesn’t get to explore it like we do. It won’t be exposed to anything but the data and the language examples its engineer teachers provide, and there’s only so much a language model can pick up that way. Keep in mind that language learning is very complex and deeply tied to human life. That’s why a model has to be trained with a vast amount of data, so it receives as much of our world’s digital manifestation as possible.

Learning bias

Studies suggest that preschoolers already show signs of bias; they pick up on social cues easily and mirror adults’ behavior. AI, too, learns bias from humans, or more precisely, from data generated by humans. If a chatbot is trained on public data, for example, it will learn how humans communicate, which, unfortunately, includes discrimination. Microsoft’s chatbot Tay, for instance, learned language examples from public data on Twitter. The bot started picking up offensive language and very quickly became racist and misogynistic.

What makes eliminating bias in AI and language so complex is that humans would first have to define what is “right” or “fair”. The solution to this problem won’t be an easy one. So, just like teachers who can use their influential role to help students unlearn prejudices and bias, data scientists need to be aware of this problem and proactively work against it when selecting training data and hyperparameters.

Young students make mistakes and need guidance

Young students make mistakes. Yet humans tend to expect machines to be flawless and rarely tolerate hiccups, especially when it comes to artificial intelligence. This might be rooted in the fact that the term itself suggests the machine must live up to our human capabilities, minus the emotional parts. Many expect AI to work perfectly without supervision, without exception. But much like their human creators, machines are not perfect.

When we teach children something, we expect them to make mistakes before they get it right. Teachers correct their students and support them on their way to mastering a new concept. AI also needs to learn, and making mistakes is part of that process.

When a language model doesn’t perform well, MT engineers have to adjust the learning parameters. That could mean telling the system to favor one translation over another by re-training the model with improved data sets and different hyperparameters.
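To make “different hyperparameters” a bit more tangible, here is a hedged sketch of the kinds of settings an engineer might tweak between training runs. The names and values are illustrative assumptions, not any specific system’s configuration.

```python
# Illustrative hyperparameters for a baseline MT training run.
# All names and values here are assumptions chosen for this sketch.
baseline_config = {
    "learning_rate": 5e-4,
    "batch_size": 4096,      # tokens per batch
    "dropout": 0.1,
    "label_smoothing": 0.1,
    "epochs": 20,
}

# After spotting weak performance, re-train with adjusted settings
# (alongside an improved, re-filtered data set).
retrain_config = {
    **baseline_config,
    "learning_rate": 3e-4,   # slower, more stable updates
    "dropout": 0.3,          # stronger regularization against overfitting
}

print(retrain_config)
```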

The learning process never ends

For humans, the learning process doesn’t suddenly stop. They have to adapt to their changing environments, broaden their horizons and acquire new skills to lead a successful life in this world. The same applies to AI. A trained model needs to be retrained if its purpose changes or if it’s used in a different environment. You cannot take a model that was trained to fulfill one task and expect it to work well in a different context. A language model that was set up to translate product descriptions from French to Swedish won’t perform equally well translating marketing content from Chinese to French. You wouldn’t expect a student who has been taught only one language to be fluent in another from one day to the next, either. AI isn’t miraculously good at everything. Models need special training for each new kind of task or domain.
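In practice, that “special training” often means continuing to train an existing model on data from the new domain rather than starting from scratch. Here is a minimal, self-contained sketch in PyTorch; the single linear layer and random tensors are toy stand-ins for a real translation model and a real in-domain corpus.

```python
import torch
import torch.nn as nn

# Toy stand-in for an already-trained translation model.
# In practice this would be a full encoder-decoder loaded from disk.
model = nn.Linear(16, 16)

# Fine-tune on "new domain" data with a small learning rate, so the
# model adapts without wiping out everything it already learned.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(100):
    inputs = torch.randn(32, 16)    # stand-in for in-domain source sentences
    targets = torch.randn(32, 16)   # stand-in for in-domain translations
    loss = loss_fn(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The small learning rate is the key design choice here: it lets the model adjust to the new domain gradually instead of overwriting what it already knows.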