How can people make better decisions using data and AI? This is the question non-fiction author and business journalist Thomas Ramge explores in this guest contribution.
In 2008, at the beginning of the big data boom, the Silicon Valley visionary and editor-in-chief of Wired Magazine, Chris Anderson, wrote his widely acclaimed piece "The End of Theory". His prognosis at the time was as follows:
"This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology! Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves."
A good decade later, we can declare with either disappointment or relief, depending on perspective and application, that the world remains unpredictable. The data don't speak for themselves. The many hundreds of billions of dollars in public research funding and private-sector investment in mass data analysis and machine learning are still outmatched by the complexity of life; they have tilted the balance between predictability and chaos only a little in favor of the statisticians.
We make decisions on uncertain footing
And Laplace's demon, which could fully calculate and predict the future, remains for now an epistemological thought experiment, a fantasy. In 1814, the French mathematician, philosopher, and astronomer Pierre-Simon Laplace asked himself whether all the laws of nature could be captured in a single closed mathematical world formula. Such mathematically describable laws would then have to apply not only to all celestial bodies, but to every single atom in the universe as well. Ultimately, this would mean that our lives are predetermined, and that an omniscient demon armed with the world formula could predict everything. Heisenberg's uncertainty principle, put simply, says that our observation has always already altered the observed object: we can never get at the thing itself.
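Put formally, the uncertainty principle in its standard textbook form sets a hard lower bound on how precisely a particle's position and momentum can be known at the same time:

$$
\Delta x \, \Delta p \geq \frac{\hbar}{2}
$$

where $\hbar$ is the reduced Planck constant: the more exactly one quantity is pinned down, the blurrier the other becomes.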
The same applies at the quantum level. One might fear that the fuzziness typical of quantum theory haunts us constantly: we have to make decisions even though, ultimately, we cannot know anything for certain. In terms of the laws of quantum mechanics, however, there is one thing of which we can be absolutely certain: we have to make decisions under uncertainty, and we are in fact allowed to do so. We remain responsible for our decisions and, in turn, for our happiness. That should give us additional incentive to improve our decision-making. In this regard, data abundance and machine learning can, in some circumstances, help us on a completely different plane than the one currently being discussed.
Instincts versus rationality
Since Herbert Simon's work in the 1950s, we humans, as beings of bounded rationality, have been under general suspicion of systematically making bad decisions. Since at least 2002, when Daniel Kahneman won his Nobel Prize, we have known the reason for our weakness when it comes to making decisions: System 1, with its distorting instincts biologically encoded in our brains, constantly torpedoes the rational decision-making of System 2, which is far more objective with respect to our goals.
This narrative, popularized by behavioral economics for the purposes of general education, is interesting in two respects with regard to the core question of this book: how can we make better-informed decisions with the help of data and AI? AI systems are now under general suspicion of reinforcing and amplifying human bias. They are considered "Weapons of Math Destruction". The examples given to illustrate the disruptive power of algorithms on society are always the same.
An Amazon recruiting system discriminates against female programmers because it finds only men as programmers in its training data and draws the wrong conclusions from them. COMPAS, the American judicial software used to predict the recidivism rates of prisoners who might be released early from custody, compounds the racial bias in judges' decisions. Scoring algorithms discriminate against solvent bank customers who happen to live at the wrong address. A French system steers high school graduates from run-down banlieues toward worse universities near their socially deprived suburbs rather than toward the middle-class quarters in the city center. In individual cases, such bias-compounding effects may well exist, even if, for example, the Amazon recruiting bot never actually recruited anyone: it was only a test, and not a single programmer fell victim to it in a real application process.
Machine learning reveals bias
The salient point here is this: machine learning is superb at exposing and correcting the human bias embedded in decisions. Amazon's HR managers can test the system for bias at any time by feeding in CVs with identical qualifications and then determining whether it unduly favors one gender, a certain skin color, or a sexual orientation. The prejudices of a human decision-maker cannot be uncovered and remedied so easily or systematically.
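As a minimal sketch of what such a test could look like, assume a hypothetical scoring function standing in for the trained model (here a deliberately biased toy, not Amazon's actual system): hold the qualifications fixed, vary only the protected attribute, and compare the scores.

```python
# Counterfactual bias probe (sketch). score_cv is a toy stand-in for a
# trained recruiting model, with a built-in bias of the kind such a
# model might learn from skewed training data.

def score_cv(cv):
    score = 2.0 * cv["years_experience"] + 1.0 * cv["degree_level"]
    if cv["gender"] == "male":
        score += 0.5  # learned bias, not a legitimate qualification
    return score

def bias_gap(base_cv, attribute, value_a, value_b):
    """Score gap between two CVs identical except for one attribute."""
    cv_a = {**base_cv, attribute: value_a}
    cv_b = {**base_cv, attribute: value_b}
    return score_cv(cv_a) - score_cv(cv_b)

base = {"years_experience": 6, "degree_level": 2, "gender": None}
gap = bias_gap(base, "gender", "female", "male")
print(f"score gap (female - male): {gap:+.2f}")  # nonzero => bias detected
```

A nonzero gap flags a bias that, unlike a human decision-maker's prejudice, can be measured, documented, and corrected systematically.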
On the other hand, AI systems could in future give researchers and innovators across all industries and fields a vigorous push toward questioning their firmly established assumptions and supposedly proven models. When the Google machine AlphaGo defeated Lee Sedol, one of the world's best Go players, in March 2016, in a machine-versus-human series every bit as gripping as the earlier matchup between Deep Blue and Kasparov, the human was repeatedly surprised by the AI's moves, and his amazement was written on his face time and again. The learning system called into question what human players had believed to be the immutable rules of promising play. Whenever we mull over complicated or complex decisions rationally, in System 2, our System 1 automatically comes into play. Our instincts then tell us that something is wrong: I have overlooked something, or an option that seems reasonable in a purely logical sense doesn't actually fit my goals at all.
Questioning our decision-making in good time could develop into one of the most important tasks of AI decision assistants. They can help us anchor our decisions more firmly in System 2, supporting our intuition and prompting us to question our decision-making against our goals. This would bring us closer to the actual aim behind any decision we make: to be happy with it. "Artificial Intelligence" then becomes "Artificial Intelligence Enhancement". And the abbreviation "AI" no longer stands for "Artificial Intelligence" but for "Augmented Intelligence".
This text is an adapted excerpt from Thomas Ramge's book "Augmented Intelligence – Wie wir mit Daten und KI besser entscheiden" (Reclam Verlag, 2020).