
How do you envision the future advancement of AI?

Taylor Sparks BS’07, Associate Professor of Materials Science and Engineering

The role of AI in STEM fields will evolve significantly in the coming years with an explosion of new tools to assist with data analysis, simulation, optimization, and modeling. There is growing interest in generative modeling to create novel data instances and simulate complex systems, enabling researchers to explore new possibilities and generate valuable insights. AI can also revolutionize data visualization, helping researchers gain a deeper understanding of complex datasets, while unsupervised learning algorithms can uncover hidden patterns and structures in data. Finally, AI technologies have the potential to automate the extraction of relevant information from academic papers, enabling researchers to stay up to date with the latest advancements more efficiently.

Read the full Q&A with Sparks.

Hollis Robbins, Dean of the College of Humanities

Scholars and writers in humanities disciplines have been grappling with concepts of artificial consciousness and mechanical humans for centuries. Sophisticated new technologies such as ChatGPT have suddenly made engaging with artificial voices a daily occurrence. Every literature professor has seen students who approach fictional characters as if they were real. Disturbing events in fictional texts often trigger real-life emotions. We bring this experience to AI conversations. Currently, AI writing is technically proficient but culturally evacuated. Until culturally inflected AI is developed, models such as ChatGPT will stand apart from culture. Knowledge production within culture will not fully be absorbed by AI. Specific and local cultural knowledge will become more valuable. 

Read the full Q&A with Robbins.

Vivek Srikumar, Associate Professor of Computer Science

I’m excited about AI’s wide-ranging applications, but we need to balance progress with prudence. AI models today struggle with factuality. For example, some can generate fake images or authoritative-looking text that is factually incorrect. We may be OK with a bit of creative truth-twisting in entertainment applications, but factual errors would be unacceptable if an AI system were to handle our tax preparations. There are also privacy issues to consider. For example, would we be OK with an AI-based system ingesting our private health data? Ownership and liability present thorny issues as well: when AI produces content or operates a vehicle, determining who owns the output or who is liable becomes crucial. These are some of the complex issues we must navigate as we advance AI technology.

Read the full Q&A with Srikumar.

Elizabeth Callaway, Assistant Professor of English

I think everyone should play a role in shaping the development and use of AI. We need educated and engaged publics, and a more democratic system, for deciding what kinds of AI we want in our society. Right now, these decisions are being left up to a few giant corporations, and we need vastly more democratic processes to make choices that affect all of us. I’d like to see more oversight of AI, both to explain how an AI system accomplishes the task it is given and to provide the information needed to audit it for ethical violations such as bias, polarization, disinformation, and general harm.

Read the full Q&A with Callaway. 

