December 10, 2021
Thanks to Alissa Simon, HMU Tutor, for today’s post.
Recently I attended a webinar hosted by Middlebury College on data literacy. The webinar was part of a program called the Middlebury Initiative for Data and Digital Methods (or Midd:data for short) and served as an introduction to why data literacy matters. The program is founded upon the principle that “Data and digital methodologies are as central to a 21st-century liberal arts education as reading and writing.”
The webinar consisted of a discussion between Eric Schmidt, former CEO of Google, and Caitlin Myers, Professor of Economics and co-founder of Midd:data. Schmidt gave a brief history of AI, claiming that about five years ago AI was good at recognizing patterns but little else. Today, however, AI offers all sorts of potential applications. For example, he spent time explaining the ideas and benefits of Generative Adversarial Networks (or GANs). One way to describe these networks is as follows: “Generative modeling is an unsupervised learning task in machine learning that involves automatically discovering and learning the regularities or patterns in input data in such a way that the model can be used to generate or output new examples that plausibly could have been drawn from the original dataset.” Schmidt explored how this type of program might help someone collect and synthesize all of the information within a specific field, which is particularly valuable in fields that rely heavily on jargon or technical knowledge. He also discussed the fruitful potential of pairing biology with machine learning. He cautioned, however, that if computers move past merely compiling information and begin generating their own agenda, that might be cause for concern. Even so, Schmidt clearly feels that GANs and other AI tools will be very helpful as we progress in specific fields of study. Whether AI will remain a tool, however, is another question to ask.
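The quoted definition of generative modeling can be made concrete with a toy sketch. What follows is a minimal, hypothetical illustration (not code from the webinar) of the adversarial idea behind GANs: a tiny two-parameter “generator” learns to imitate a simple one-dimensional Gaussian, while a logistic-regression “discriminator” tries to tell real samples from generated ones. Real GANs use deep neural networks on both sides, but the back-and-forth training loop has the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: a 1-D Gaussian distribution the generator tries to mimic.
REAL_MEAN, REAL_STD = 4.0, 1.25

def sample_real(n):
    return rng.normal(REAL_MEAN, REAL_STD, n)

# Generator: maps noise z ~ N(0, 1) to candidate samples mu + exp(log_sigma) * z.
gen = {"mu": 0.0, "log_sigma": 0.0}

def generate(n):
    z = rng.normal(0.0, 1.0, n)
    return gen["mu"] + np.exp(gen["log_sigma"]) * z, z

# Discriminator: logistic regression estimating P(sample is real).
disc = {"w": 0.1, "b": 0.0}

def discriminate(x):
    logit = np.clip(disc["w"] * x + disc["b"], -60, 60)  # avoid overflow
    return 1.0 / (1.0 + np.exp(-logit))

lr, batch = 0.05, 64
for _ in range(2000):
    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    xr = sample_real(batch)
    xf, _ = generate(batch)
    pr, pf = discriminate(xr), discriminate(xf)
    disc["w"] += lr * np.mean((1 - pr) * xr - pf * xf)
    disc["b"] += lr * np.mean((1 - pr) - pf)

    # Generator step: gradient ascent on log D(fake) (non-saturating loss),
    # i.e. nudge parameters so fakes look more "real" to the discriminator.
    xf, z = generate(batch)
    grad_x = (1 - discriminate(xf)) * disc["w"]   # d/dx of log D(x)
    gen["mu"] += lr * np.mean(grad_x)
    gen["log_sigma"] += lr * np.mean(grad_x * z * np.exp(gen["log_sigma"]))

fakes = generate(1000)[0]
print(f"generated mean={fakes.mean():.2f}, std={fakes.std():.2f} "
      f"(target {REAL_MEAN}, {REAL_STD})")
```

At equilibrium the discriminator can no longer tell the two sources apart, which is exactly the sense in which the generator has produced “new examples that plausibly could have been drawn from the original dataset.”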
As AI evolves, humans become more dependent upon it. In a recent article in The Atlantic, Kissinger, Schmidt, and Huttenlocher predict: “[I]t is possible that in many parts of the world, from early childhood onward the primary sources of interaction and knowledge will be not parents, family members, friends, or teachers, but rather digital companions, whose constantly available interaction will yield both a learning bonanza and a privacy challenge. AI algorithms will help open new frontiers of knowledge, while at the same time narrowing information choices and enhancing the capacity to suppress new or challenging ideas.” In other words, AI has already been normalized in most households, and normalized customs often result in a lack of curiosity, understanding, or insight. Yet it is up to humans to understand and interpret what AI is and how it affects the human world.
I am unclear whether the data produced by AI might carry a hidden bias introduced by the way it was compiled. The webinar speakers discussed this for a moment, but I do not understand the process well enough to judge whether this is a potential issue. Also unclear to me is how programs like GANs might help in a field like literature. While such a program might be able to synthesize scholarly essays centered on a subject, the literature itself resists being summarized into any sort of unit. In fact, great literature often defies categorization. Of course, this is also how AI may be helpful: it makes connections no human would make, which might expand fields of study in unforeseen ways. Myers mentioned that part of her attraction to the liberal arts is the element of discovery. Reading and wrestling with material is part of the charm of a liberal arts program because it activates a person’s creativity, which intersects with personal experience. It can be highly rewarding to make connections and comprehend difficult theories. It seems important to bear in mind that AI probably won’t be able to respond to literature the way any human would, which reinforces the value of human responses. Precisely because the liberal arts begin with deep learning and a respect for questions, Schmidt sees the potential for AI to help with summarization. He asks that we investigate how data drives subject matter and how we think about data in general. It may be worth our time to understand and define data in terms of the changing potential of AI.
The conversation also addressed the fact that, over time, learning changes. AI might be able to help a student find their best learning style, or find an educational style that suits their needs. Moreover, Schmidt sees the potential for AI to help translate courses into understandable language so that fields become more accessible. As we move further into technology, it may become necessary for all students to have a basic understanding of HTML, statistics, data literacy, and simple systems. He stressed this last point by adding that we may not need to know why the tools work, but we will definitely need to know how to use and analyze them. He used the metaphor of learning to read spreadsheets in the early years of software. Business schools first incorporated classes on how to read Excel spreadsheets because businesses were adopting them at a fast rate. But universities did not necessarily teach how to write or understand the hidden macros in those spreadsheets. In other words, schools focused on how to use the tool, but not on how the tool was made. Again, this leaves me with the question: how will we know when the system is not working? What will that look like? Will there be a glaring omission that we know should be included in the data set? Clearly, we must quickly improve our skills at understanding data, if nothing else, simply to know the benefits and faults of the information that we read and are fed. Is the information biased? If so, how? Is it complete, and what would completion even look like? What are the signs of incompleteness? How will we know if a system does not have a full picture of the information that we seek?
My questions are mostly ignorant starting points. It is clear that I rely upon these systems every day, and therefore I must come to terms with data, which brings me to one of the final points of the discussion. Most importantly, Schmidt calls for a change in the relationship between humans and computers. Current media and social media platforms function on addiction-like patterns, which are highly damaging. He says: “If we don’t change what we’re doing, computers will become outrage machines because they function on addiction mentalities.” For this reason (and many others, I’m sure), Schmidt would like to incorporate ethics into future discussions. He urged schools to develop ethics programs to better understand how AI enters into the world. What are our norms, principles, and objectives? How does AI feed or reinforce these? How do our norms become visible (and invisible) in data sets?
According to the Preface of Schmidt’s new book, The Age of AI and Our Human Future (coauthored with Henry Kissinger and Daniel Huttenlocher), AI is not a single thing. They write, “AI is not an industry, let alone a single product. In strategic parlance, it is not a ‘domain.’ It is an enabler of many industries and facets of human life: scientific research, education, manufacturing, logistics, transportation, defense, law enforcement, politics, advertising, art, culture, and more. The characteristics of AI – including its capacity to learn, evolve, and surprise – will disrupt and transform them all. The outcome will be the alteration of human identity and the human experience of reality at levels not experienced since the dawn of the modern age.”
To leave a comment, click on the title of this post and scroll down.