Elizabeth Callaway: How do you envision the future advancement of AI?


Elizabeth Callaway, Assistant Professor of English

Callaway received an award from the National Humanities Center to develop a class on responsible AI. She is part of a cohort representing 15 universities around the country and is among the optimists who maintain their faith in humanity when it comes to the AI art conundrum.

What inspired you to focus on responsible AI in your research and teaching?

In 2014, as a new mother, I began doing personal research on screens and learning to inform my parenting decisions. As I read more and more books and articles, it slowly became clear that humanity’s first contact with AI—the AIs that sort social media feeds and make video recommendations—was going horribly wrong. I pursued this as a side interest for years while I finished my first book project. When that was published, I finally gave myself permission to combine my ongoing AI obsession with my expertise as a literary critic and environmental humanities scholar to teach and write about AI—especially about AI and the environment.

In your opinion, what are some of the biggest ethical challenges posed by the rise of AI?

One of the biggest ethical challenges of AI is the business model. If AI is developed through grants and prizes to help solve problems like protein folding, then I think AI can do a lot of good. If AI is hooked up to the social media business model of surveillance advertising, then we’re building AI that is specifically designed to manipulate human behavior (whether that’s to keep users on the platform or to influence their purchasing patterns). That’s exactly the most dangerous kind of AI, and I think many of the problems we see with AI in terms of bias, polarization, radicalization, and mental health stem from the fact that it’s developed for the attention economy.

There are so many other ethical challenges I could talk about. For example, the black box nature of AI is a problem. It is difficult for even the people working on AI to figure out exactly what it’s doing. This allows biases to go undetected for too long. There is also the fact that AI tends to act in wildly unpredictable ways when accomplishing a task set out for it. A famous example is the YouTube recommendation AI. When engineers at YouTube set the goal to increase watch hours on the platform, they had no idea that the AI would accomplish that by recommending conspiracy theories, or by recommending videos with incrementally more extreme content, but that’s what it did. And it worked: if the AI can get someone hooked on a conspiracy theory, they’ll watch for hours. But an AI also doesn’t have any of the ethical principles that might prevent a human technologist from coming up with that strategy. And then, because of the black box nature of AI, it takes the human technologists a while to even figure out that the AI is doing this. This unpredictability is also exemplified when AI operates in ways unthinkable to human players, like when chess-playing AIs relegate a queen to the corner of the board. It’s hard to predict the dangers or reverse-engineer what the AI is doing once it’s out there in the world. And it’s hard to predict what the ethical challenges will be, since AI accomplishes tasks in radically unforeseen ways.

How do you think we can ensure that AI is developed and used responsibly, rather than for harmful or unethical purposes?

If I had a magic wand, I’d get rid of the surveillance advertising business model that has taken over AI deployment in social media. I’d instead have more of a PBS-style, public space social media option that isn’t built on keeping eyeballs on the platform and chasing virality, but actually helps connect people. Barring that, I’d like to see more oversight of AI, so that anyone developing an AI and putting it out to be used with/on people has to 1) be able to explain how the AI is accomplishing the task it is set, and 2) provide the information needed to audit the AI for the kinds of ethical violations we see regularly in terms of bias, polarization, disinformation, and harm in general.

What role do you think humanities scholars can play in shaping the development and use of AI?

First of all, I think everyone should play a role in shaping the development and use of AI. We need educated and engaged publics in order to have a more democratic system for deciding what kinds of AI we want in our society. Right now these kinds of decisions are being left up to a few giant corporations, and we need vastly more democratic processes to make decisions that affect all of us. I view my primary role as educating global citizens, so they have the background knowledge to shape our AI future. In this vein, I’m excited by the opportunity Dean Hollis Robbins has given me to design an AI-themed, online writing course that might eventually be taken by thousands of students each year to fulfill their general education writing requirement. Most of these students will never work on AI, but they will be part of an engaged and informed public that can help steer our society’s AI decisions.

As an English professor, I also view my role as writing about the narratives that we tell about AI, especially when common narratives hinder efforts to use AI in ways that lead to thriving futures. For example, I’m working on an article about AI consciousness as a distraction from actual AI challenges. So many stories about AI are about either killer AI or AI that is essentially a human being but has no rights. They make it seem like the ethical questions about AI are either about how to prevent the singularity or how to give a new kind of being the protections it deserves. These stories about the future possibility of conscious AI only distract us and make us think any ethical questions about AI can wait until we achieve conscious AI. They obscure the kinds of harm that we see right now.

How can we encourage more interdisciplinary collaboration between computer science and humanities scholars when it comes to AI?

I was selected for the National Humanities Center Program in Responsible AI and spent a week in Durham with about fifteen faculty from humanities disciplines and computer science departments. It quickly became my most cherished academic experience; I have never learned so much so quickly. I learned not only about the inner workings of AI from computer scientists, but also about ethics from philosophers and about teaching from gifted instructors across all fields. I would love to see more mixed conferences and retreats like this one.

I think co-teaching is another promising model. I’m co-teaching a course in responsible AI with Rogelio Cardona-Rivera (a professor in the School of Computing and Entertainment Arts and Engineering) for the Honors College in the fall. This kind of teaching is only possible if department chairs and the upper administration value it by counting it as part of our course load. Individuals are eager to work together, but sometimes there are structural barriers to this sort of cooperation. I’d love to see interdisciplinary teaching counted as on-load teaching in departments across campus.

Read other thoughts on the future of AI from our panel of experts. 
