Does it matter if computers don't really understand human language?
By: Jay Marciano
Date: Mar 10, 2023

Lengoo is made up of great people whose shared goal is to join our customers in creating virtuous cycles: combining technology with the ever-important human touch to produce high-quality translations that can, in turn, be used to train ever more sophisticated artificially intelligent solutions.

As we – and here I mean not only Lengoo but all of us human beings! – move along this path of increased use of AI, we are faced with substantial questions and criticisms about these technologies, some of which are extremely helpful, even necessary, to address, while others are less so.

A criticism leveled regularly at Neural Machine Translation and, more recently, at Large Language Models (which are so regularly in the news these days that I wonder how much of the hyped-up content is generated, initially at least, by LLMs) is that these systems do not really understand language, that they don’t truly grasp the meaning of either the input material or the output they produce. In other words, they don’t know what they’re talking about.

As human beings we have an innate and subjective sense of what we mean when we say, “I understand.” [And there are many colorful idioms – across languages – for that pleasant sensation of finally understanding something, the “aha moment”: for example, in English, “the light bulb finally went off in my head,” “everything finally added up,” or “things clicked for me”; in German, “es hat Klick gemacht” (it clicked) or “der Groschen ist gefallen” (a Groschen being the old pre-euro 10-Pfennig coin, the idiom referring to the coin finally being accepted by a vending machine, causing the mechanism to begin working), similar to the Swedish “där trillade poletten ner” (the token fell); or in Italian, “ho avuto un’illuminazione” (I’ve had an epiphany).]

But as clear as the idea of understanding may seem to us, we obviously have great difficulty judging how well we ourselves understand something (“I know everything there is to know about the Dunning-Kruger Effect!”), or determining whether another person understands something (“When I asked you to empty the dishwasher, I assumed you understood that you should wait until it FINISHED RUNNING!”). The difficulty of understanding understanding is compounded by the self-evident fact that there are shades of comprehension. We all know what it’s like to understand someone’s words but not what they mean. And to all but the most deluded of us, it should be clear that we very often have only a partial understanding of something.

In other words, understanding is not a Boolean value but a fuzzy one.

So, if we have trouble discerning how well we understand something, how can we possibly assess whether another agent, whether organic or artificial, understands something? When you get right down to it, the way we measure whether someone else understands something is to test how effectively they are able to apply the concept in question: if, for example, you want to determine whether someone understands a mathematical principle, give that person a problem whose solution requires the application of the concept.

While the question of whether a computer can understand human language is incredibly interesting to many people, including us at Lengoo, it is nevertheless not particularly helpful in trying to understand the strengths and weaknesses of AI.

Alan Turing famously suggested a more pragmatic approach to a similar question, whether computers can think, in a paper he wrote in 1950, back when room-sized computers had a tiny fraction of the computational power of your smartphone. His remarkably simple proposal, the Turing Test, asked a person to judge whether a series of responses was generated by a human being or a computer. If the person could not reliably tell the difference between the human responses and the computer-generated responses, then, he argued, the computer could be said to think, because it was responding in a way indistinguishable from a presumably intelligent human being.

But let's go one step further down the path of pragmatism (and away from the hype around AI): we need to remember that software and computers are tools. We don’t generally ask whether a tool understands the task at hand: we instead measure how effectively we can apply that tool. We measure how much more efficiently we can work with the tool than without it.

What we require, then, are use-case-specific tests that determine how well AI systems perform on tasks that we want and need them to do. These tests must be repeatable and produce objective success metrics that can be tracked over time, providing truly valuable results that contribute to the refinement of tools that actually help us.
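
To make that concrete, here is a minimal sketch of what such a repeatable test might look like for machine translation. The data, names, and metric are purely illustrative assumptions, not Lengoo’s actual evaluation pipeline: it scores a system’s output against fixed human reference translations using a simple character-level similarity ratio from Python’s standard library, yielding a single number that can be logged and compared across releases.

```python
from difflib import SequenceMatcher
from statistics import mean


def similarity(hypothesis: str, reference: str) -> float:
    """Character-level similarity between a candidate translation and its reference (0.0 to 1.0)."""
    return SequenceMatcher(None, hypothesis, reference).ratio()


def evaluate(system_outputs: list[str], references: list[str]) -> float:
    """Average similarity over a fixed test set; re-score the same segments after every model update."""
    return mean(similarity(out, ref) for out, ref in zip(system_outputs, references))


# Hypothetical test set and system output, for illustration only.
references = [
    "The invoice must be paid within thirty days.",
    "Please empty the dishwasher only after it has finished running.",
]
outputs = [
    "The invoice has to be payed within 30 days.",
    "Please empty the dishwasher when it is finished.",
]

print(f"Test-set score: {evaluate(outputs, references):.3f}")
```

In practice one would use an established MT quality metric and a much larger, domain-specific test set, but the principle is the same: the test is repeatable, the score is objective, and the trend over time shows whether the tool is actually helping.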

This prosaic approach to determining the efficacy of these tools does not at all diminish the massive questions that we must grapple with in order to live peaceably in a world marked by increasing use and further development of artificially intelligent systems. There are open and critical issues about environmental impact, data security, copyright protection, liability, fair compensation, the future of work, bias in training material, transparency of decision-making, fairness, and many others, including, of course, questions we cannot yet foresee.

We at Lengoo want to encourage all of us to focus on those issues – developing tools that measurably, fairly, and sustainably improve our lives – instead of becoming mired in the question of understanding, a question we have trouble enough answering even about our own capacity to comprehend.