Computers are becoming more and more proficient at understanding human language. The field of Natural Language Processing (NLP) – a branch of artificial intelligence – provides us with methods for machines to independently translate texts, talk to people like Amazon’s Alexa does, or answer customers' questions as a virtual service chatbot. For this purpose, computer scientists train artificial neural network models with language data.
If you’re interested in learning how NLP works, read this: “Natural Language Processing advances digitalization and makes companies smarter”
Large language models in particular, such as OpenAI’s GPT-3 text generator, show that humans and machines can already hold relatively good conversations. AI research has even gone one step further: if computers can learn human language, can they also be funny?
Humor training is “AI-Complete” – so... “AI impossible”?
This isn’t new for scientists, who have been trying to train AI models to come up with jokes for some time now. It’s not an easy task. Language alone is taxing for machines because of its ambiguity, so you can imagine that humor is somewhat of a supreme discipline, since ambiguity is at its very core. Scientists assign the “humor training” of AI models to the “AI-complete” category: the most complex computational problems, whose solutions are just as difficult as answering the question of what intelligence is. Humorous AI requires a language model that works at a level comparable to the human language center. For now, this means that AI applications can’t develop the sense of humor that we humans know and enjoy.
Harvard Business School is aware of this, but isn’t ready to throw in the towel just yet. Project head Michael H. Yeomans and his researchers at the university have tried their hand at the topic of humorous AI. In one study, they tested whether humans or machines could better predict whether a person would consider a joke funny. This let the researchers measure how close AI can get to human judgment, the same judgment we use to assess how funny a joke is before we laugh.
AI predicts how funny a joke is
75 pairs of people competed against the computer in the study. What’s interesting is that 71% of the subjects had known each other for more than five years – so, they actually had an advantage over the machine. People who know each other well probably have a better idea of what makes the other person laugh. In the experiment, Person A rated 33 jokes on a scale from “incredibly funny” to “not funny at all.” Person B then looked at the ratings for four jokes and, based on this information, tried to predict how funny the other person would find eight more jokes.
Then the algorithm tried to predict the humor ratings following the same pattern. Instead of analyzing the linguistic structure of the jokes or being trained to recognize certain joke features, the computer’s predictions were based on “collaborative filtering.” In other words, the computer evaluated the subjects’ ratings to identify patterns and thus infer each person’s individual humor preferences. The kicker: the AI’s predictions were more on point than the subjects’. The algorithm was right on the money 61% of the time, while humans predicted correctly 57% of the time. So, at the very least, artificial intelligence understands what people tend to find funny.
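The collaborative-filtering idea can be sketched in a few lines. The snippet below is a minimal, illustrative version, not the study’s actual algorithm: the toy ratings, the user and joke names, and the choice of cosine similarity as the weighting are all assumptions made for the example. It predicts a missing joke rating as a similarity-weighted average of other users’ ratings for that joke.

```python
import math

# Toy data: ratings[user][joke] on a 1-5 scale; a missing key means "not yet rated"
ratings = {
    "ann":  {"j1": 5, "j2": 1, "j3": 4, "j4": 2},
    "ben":  {"j1": 4, "j2": 2, "j3": 5},           # ben hasn't rated j4 yet
    "carl": {"j1": 1, "j2": 5, "j3": 2, "j4": 5},
}

def cosine_sim(a, b):
    """Cosine similarity computed over the jokes both users have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[j] * b[j] for j in shared)
    norm_a = math.sqrt(sum(a[j] ** 2 for j in shared))
    norm_b = math.sqrt(sum(b[j] ** 2 for j in shared))
    return dot / (norm_a * norm_b)

def predict(user, joke):
    """Predict a rating as the similarity-weighted average of other users' ratings."""
    num = den = 0.0
    for other, other_ratings in ratings.items():
        if other == user or joke not in other_ratings:
            continue
        sim = cosine_sim(ratings[user], other_ratings)
        num += sim * other_ratings[joke]
        den += abs(sim)
    return num / den if den else None

# ben's tastes track ann's more than carl's, so the prediction for j4
# lands between ann's rating (2) and carl's (5), pulled toward ann's.
print(round(predict("ben", "j4"), 2))
```

Note that nothing here looks at the jokes’ text at all: the prediction comes purely from patterns in who laughed at what, which is exactly why this approach sidesteps the need to “understand” the joke.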
The AI study at Harvard Business School worked on a person-specific basis. A joke-generating AI that many people find amusing would require one prerequisite: broad agreement on what is considered funny and what is not. However, there is no universally valid humor formula that an AI could learn. Even if AI focused on only one specific cultural area, it is a person’s individual perspective that ultimately determines which jokes make them laugh.
The issue of ethics
AI and humor also raise questions of ethics. “What one person finds funny can hurt another,” says our Machine Translation Engineer, Andrada Pumnea. Humor is a very sensitive topic because some people find racist and sexist remarks funny. “And since artificial intelligence uses data to learn from examples, the output naturally reflects that. Developers have a duty to think about possible discrimination and the reinforcement of prejudices.”
As is so often the case, AI needs humans in the loop to correct errors, adjust datasets, and further train models. Artificial intelligence can generate funny statements, but only to a limited extent and with a great deal of effort. Since humor is too subjective and complex to be captured in general rules a computer could follow, the reality is that for the time being we’re going to have to keep “trying our hand.” It is also questionable whether AI with a sense of humor is needed at all. “Currently, technology is best suited to support humans in clearly defined areas,” says Andrada.