July 28, 2023
Thanks to Dave Seng, HMU alumnus, for today’s post.
To read the previous post in this series, visit hmu.edu
In our last post we looked at the importance of questions and why self-reflection, as individuals and as a society, matters. It seems part of the human situation to ask questions in order to better understand who we are and how to navigate the world. In our discussion series, we explored the nature of reason, or human rationality, and whether or not artificial intelligence will achieve human rationality. Two questions were discussed: What is human rationality? And can AI have human rationality? It is fitting to explore the Greek plays because the Greeks have a lot to tell us about who we are as humans. If we are going to attribute human rationality to machine learning systems, it might help to clarify what human rationality is. In full disclosure, I’m an old-school humanist who has somehow landed a teaching position centered on technology, information systems, and our digital world (I teach classes in digital economics and the history of computation). I’m skeptical that machines will ever take on genuinely rational human characteristics, but perhaps I’m wrong. We’ll see. What follows are the reasons for my skepticism.
As we learned in the last post, there are many questions that simply cannot be reduced to strict analytical or mathematical problems. Our time and place seem to be influenced by the idea that because computer technology can do a lot of neat things, all problems can ultimately be reduced to scientific or mathematical analysis. Do you have a difficult personal question? All you need to do is work through a cost-benefit analysis. What could be more rational? Simply think it through. But, as we have seen, simple mathematical equations based on data, algorithms, or trade-offs are of little use when facing personal or enduring questions. This narrow type of reasoning, called rationalism, has come to us from Descartes, Spinoza, Leibniz, and Hegel, who held that the senses are untrustworthy and that it is our ideas alone which provide us with the content and structure to interpret reality correctly. In its strictest form, rationalism holds that the mind is ultimate in determining reality and often draws on the abstractions of mathematics and logic as the best conceptual scheme for understanding the world. On the other hand, classical empiricists, such as those in the Aristotelian tradition, suggest that the senses are our only connection to reality and ought not to be dismissed. The mind and the senses work together symbiotically to interpret the world. According to this approach, it would be foolish to deny the senses, for there would be no end to skepticism. The Greeks would remind us that there is more to human rationality than an ability to think only in scientific and logical terms.
The Greek playwrights would remind us that humanity is a curious mixture of rational, nonrational, and irrational properties. A brief examination of each of these elements will help us understand what it means to be human and whether or not AI might achieve something close to human rationality. The Greeks affirm human rationality as the ability to ask questions, give reasons, engage in debate, and make evaluations based on evidence, all things that many would consider important elements of our shared humanity. The Greeks, however, would also remind us that there is more to rationality than abstraction and reflection. Aeschylus and Euripides remind us that there is a nonrational component to our humanity. Humans are broadly rational, to be sure, but we also possess intuition, an immediate awareness of ourselves and the world, intentionality, emotions, and a grasp of self-evident truths such as the law of noncontradiction. This aspect of humanity is called the nonrational. The nonrational element does not go against reason but is in some ways foundational to it. I would suggest, and I think the Greek playwrights would as well, that genuine human rationality must be seen to include intuition, the intellect, emotions, and the senses in our quest for knowledge and understanding. In other words, human reason must be seen in light of both the rational and the nonrational, and the two must work together.[1] In contrast to rationality (that which affirms evidence, examination, and evaluation) and the nonrational (intuition, and that which is not based upon calculation or utility) is the irrational, which is contrary to what we know from general experience or basic reasoning. Sometimes people do act in ways that are against reason, logic, and what is intuitively correct. The Greeks, and many others in history, have suggested that the irrational element of the human condition is not conducive to human flourishing, nor something to aspire to as individuals or as a civilization. The irrational is generally seen as something to be contained or controlled in some way. Great artists, poets, and authors have a way of reminding us of the irrational side of humanity in order to illumine the limits of our shared situation.
Another way of looking at the human condition is through the Greek ideas that people are political animals, social animals, and rational animals. Human beings are all of these things. However one chooses to look at the human condition, individuals and society are enormously complex and cannot be reduced to only one element. Of course, there are those who would eliminate the nonrational and irrational components of human life and say that these things do not matter much when it comes to the nature of human rationality. If AI gets close enough to imitate human reason, is that not good enough? Do we really have to deal with the complexity of human beings? I think it would be a mistake to discard the importance of the whole person in our technological age. After all, even the most complex computers and machine learning systems work correctly because they were programmed and designed by humans. Furthermore, without a proper understanding of our human condition, it will be very difficult to understand our technological society.
Just as it is very challenging to separate the rational from the irrational in the human family, it is similarly difficult to discern the beneficial and detrimental effects of technology. Medical technology, for example, has reduced disease and plagues and has allowed people to live longer and more comfortably than ever before. Yet among the results of these benefits, aging populations, Alzheimer’s disease, and additional economic burdens have become greater concerns. I am fairly certain AI will bring similar blessings and curses to society. There are always trade-offs with every technology humans make. This is why William Barrett, in his work on the intersection of technology and philosophy, The Illusion of Technique, writes, “We seem to carry over into technology that deepest and most vexing trait of the human condition itself: that our efforts are always ineradicably a mixture of good and evil.”[2] Every technology that humans create can be used for good or bad purposes, and technology always reflects our human condition. The artifact resembles its creator.
The irreducibility of human rationality is one reason why some computer scientists have become suspicious of the Turing test. The test itself is simple. Turing called it the “Imitation Game”: there is an interrogator, a machine, and a person. The interrogator sits in a room separated from the person and the machine. The object of the game is for the interrogator to determine which is the machine and which is the person (the interrogator knows them only by the labels “X” and “Y” and does not know which is which). Through a series of questions, and based on the responses each gives, the interrogator must guess whether the machine is “X” and the person is “Y” or the other way around. The object of the machine is to make the interrogator think that it is the human. The object of the person is to help the interrogator identify the machine. If, based on the responses of each, the interrogator takes the machine for the person, then the machine is said to have human rationality. The idea of the Imitation Game is that if a machine can imitate a human closely enough, there might be reason to think that the machine is conscious or intelligent enough to pass for human rationality.
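For readers who think in code, here is a minimal sketch of the game’s structure as described above. Everything in it is hypothetical: the respondent functions and names are stand-ins of my own, not real AI systems or any standard library, and the point is only to show that the interrogator sees labeled answers without ever seeing their source.

```python
import random

# Hypothetical stand-in respondents; in a real test these would be a person and an AI system.
def human_respondent(question: str) -> str:
    # The person tries to help the interrogator identify the machine.
    return f"(human) my honest answer to: {question}"

def machine_respondent(question: str) -> str:
    # The machine tries to pass as the person.
    return f"(pretending to be human) my answer to: {question}"

def imitation_game(questions, interrogator_guess) -> bool:
    # Hide the identities behind the labels "X" and "Y", assigned at random.
    labels = ["X", "Y"]
    random.shuffle(labels)
    respondents = {labels[0]: human_respondent, labels[1]: machine_respondent}

    # The interrogator only ever sees (label, answer) pairs, never the source.
    transcript = [(label, respondents[label](q))
                  for q in questions
                  for label in ("X", "Y")]

    guess = interrogator_guess(transcript)   # the label the interrogator believes is the machine
    truth = labels[1]                        # the label actually hiding the machine
    return guess == truth                    # True if the machine was identified

# An interrogator who guesses at random is right only half the time; the machine
# "passes" to the degree that real interrogators can do no better than chance.
identified = imitation_game(
    questions=["What does a summer afternoon feel like?"],
    interrogator_guess=lambda transcript: random.choice(["X", "Y"]),
)
print("Machine identified:", identified)
```

The sketch makes Lanier’s point below easy to see: the whole burden of the test falls on the interrogator’s questions and standards, not on anything intrinsic to the machine.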
Many objections to the Turing test have circulated, such as Searle’s Chinese Room thought experiment and arguments from mathematics and logic. Recently, Jaron Lanier, one of the founders of virtual reality and himself a computer scientist, suggested that the Turing test presents a truncated view of human rationality. In his book, You Are Not a Gadget, Lanier writes:
“But the Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you? … We have repeatedly demonstrated our species’ bottomless ability to lower our standards to make information technology look good. Every instance of intelligence in a machine is ambiguous.” [3]
The Turing test seems effective only if one dumbs down human rationality; it offers, I would suggest, a very narrow and reductionistic view of what it means to reason as a human. I’m not sure the Greeks would recognize this version of rationality. And I’m not sure AI will ever develop nonrationality.
I think a robust and holistic view of humanity has practical applications when it comes to machine “intelligence”, and I hope it further illuminates the reasons why I am skeptical about AI achieving full (what some computer scientists call “hard”) human rationality. There are worries about the degree to which AI will end some types of jobs. As with many technologies, artificial intelligence may indeed put an end to some types of work. If your employment is to develop greater automation or deliver an algorithm, your work might become a casualty of AI. However, if you are in an industry that involves educating, thinking critically, motivating, advising, helping, or serving individuals or society, your job will be more secure. I think the social aspect of humans will be as difficult to replicate as the nonrational elements of humanity. In addition, digital and media literacy will become more important than ever. I don’t think the human person will ever be totally replaced. Humans are just too complex. I realize I could be wrong about AI, but I am still a bit skeptical. To me, artificial intelligence programs are little more than digital assistants.
[1] Intuition, of course, can go astray; it must still be checked and corrected by the senses and reason. However, if there are such things as self-evident truths, immediate awareness, and things which are basic, fundamental, and indemonstrable and so must be assumed (such as the law of noncontradiction), then these play a foundational role in reasoning, and the nonrational thereby contributes to human rationality.
[2] William Barrett, The Illusion of Technique: A Search for Meaning in a Technological Civilization (Garden City, NY: Anchor Books, 1979), 25.
[3] Jaron Lanier, You Are Not a Gadget: A Manifesto (New York, NY: Alfred A. Knopf, 2010), 32.
Works Cited
Barrett, William. The Illusion of Technique: A Search for Meaning in a Technological Civilization. Garden City, NY: Anchor Books, 1979.
Lanier, Jaron. You Are Not a Gadget: A Manifesto. New York, NY: Alfred A. Knopf, 2010.