Vivek Srikumar, Associate Professor of Computer Science
How would you describe the current state of AI technology and research?
In a certain sense, AI has become easily accessible in the last year or so. During this time, tools such as ChatGPT, GPT-4, Bard, Midjourney and their siblings have had something of a collective iPhone moment in the way they have captured the public consciousness. These developments are the outcome of an ongoing research and engineering program that has been steadily accelerating over the past decade.
Many of the AI advancements that frequently make headlines today rely on machine learning techniques, particularly deep learning, which employ artificial neural networks. These networks are trained on vast datasets encompassing various types of information, such as documents, images, and even artwork. Of course, it is important to note that this description oversimplifies the underlying technical details and challenges involved in the process.
Take large language models as a specific example: they are enormous neural networks comprising billions, if not trillions, of adjustable parameters, trained on massive corpora consisting of billions of words. Some systems include safeguards in their training to keep them from generating toxic content, and some are further trained to mimic human abilities such as following instructions. Thanks to these combined efforts, these systems can generate text that closely resembles human language, and even computer code, surpassing the capabilities of systems from just a year ago. Indeed, they have reached a level of proficiency that enables the automation of routine tasks and, to some extent, non-routine tasks as well.
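To make the phrase "adjustable parameters trained on massive corpora" concrete, here is a deliberately tiny sketch of next-word prediction, the core training objective behind large language models. The corpus, model size, and hyperparameters below are toy assumptions chosen for illustration; they are not any production system's recipe.

```python
# A minimal sketch: a neural network with adjustable parameters (weights)
# is repeatedly nudged to predict the next word in a text corpus.
# The tiny corpus and two-layer model here are illustrative assumptions.
import torch
import torch.nn as nn

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}
ids = torch.tensor([stoi[w] for w in corpus])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)  # adjustable parameters
        self.out = nn.Linear(dim, vocab_size)       # more parameters

    def forward(self, x):
        return self.out(self.embed(x))  # scores for each possible next word

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(ids[:-1])          # predict the word that follows each word
    loss = loss_fn(logits, ids[1:])   # compare with the actual next word
    opt.zero_grad()
    loss.backward()                   # compute how to adjust the parameters
    opt.step()                        # adjust them slightly
```

Production systems replace this two-layer model with transformer architectures and run essentially the same loop over vastly larger corpora, which is where the billions of parameters and words come in.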
Taking a broader perspective, AI should not be viewed as a singular entity or program. Rather, it encompasses a collection of diverse and pervasive tools that each perform specific tasks: e.g., detecting spam emails, identifying and labeling individuals, locations, and objects in photographs, conducting cancer screenings, sorting mail for postal services, providing movie and product recommendations, among others. In one way or another, we have all engaged with AI tools. The recent notable successes of AI systems have shed light on the wide-ranging areas where AI-based tools can potentially make a significant impact, including entertainment, finance, transportation, agriculture, education, and even health care.
In your opinion, what are some of the biggest challenges facing the development and use of AI today?
At a high level, there are two key questions:
- AI systems are increasingly adept at performing a wide range of tasks. But the neural networks they are built on are opaque black boxes. Should increased competence warrant increased trust in an opaque decision-making system?
- We are increasingly willing to deploy and use AI tools because their potential benefits are seen as important. Should we do this in high-stakes situations, where they may pose immediate risks to individuals, to society at large, or to the environment?
Let me give some concrete examples of challenges facing the use of AI today.
Many of the largest AI models today struggle with factuality. For example, large language models can generate authoritative-looking, but factually incorrect, text. Image generation systems can create fake images that are hard to spot. We can think of these as “information pollutants.” How much information pollution is acceptable depends on the application. We may be okay with a bit of creative truth-twisting in entertainment applications, but factual errors would be unacceptable if an AI system were to handle our tax preparation.
The easy availability of tools like ChatGPT and Stable Diffusion can give rise to misinformation superspreaders, and that may have political consequences. Perhaps the solution lies in adopting higher journalistic standards and implementing more rigorous source verification processes.
Another issue, stemming from an eagerness to use AI systems in domains like criminal justice, hiring, and access to education, is that algorithmic decision making can amplify societal stereotypes. The data that models are trained on, and even the data collection processes, likely contain social biases. As a result, some groups may be marginalized or left behind because they are not represented, or are poorly represented, in the data.
There are privacy issues to consider. For example, would we be okay if an AI-based system ingested our private data (e.g., medical or proprietary data)? And would we be okay if our interactions with a tool like ChatGPT were recorded to improve its next version?
Ownership and liability present thorny issues for the use of AI. When an AI system produces a song or an image, determining the rightful owner of the content may become a contentious matter. Similarly, in cases where an AI system operates a vehicle, identifying the party responsible for any errors or accidents becomes crucial. Moreover, the question arises as to whether organizations developing and deploying AI systems should bear responsibility for any negative societal impacts they inflict.
Finally, I’d like to note that current implementations of AI demand substantial computational resources, both during the training and deployment phases. This translates to significant energy consumption, and creates disparities in access to AI development, deployment, and use among different individuals and organizations.
How do you think we can ensure that AI technologies are transparent and explainable to the public?
We probably need new governance ideas. There are several efforts underway across the world. For example, the EU’s Artificial Intelligence Act was passed by the European Parliament with an overwhelming majority, the White House’s Office of Science and Technology Policy has a blueprint for an AI Bill of Rights, and the National Institute of Standards and Technology has an AI risk management framework.
The general themes in these and other regulatory efforts include the need to ensure that we have effective but safe systems that do not discriminate against individuals or groups of people. Moreover, they should be transparent, accountable, provide sources for their claims, and explain their reasoning in a way that allows for their audit. They should also offer privacy guarantees to their users and allow for a human alternative if a user does not want to engage with an AI system.
These guiding principles are not solely technical matters and should not be solely entrusted to lawmakers or AI system providers. Rather, a diverse array of stakeholders, including industry representatives, academics, and government entities, must be engaged collaboratively to effectively navigate this emerging terrain. Research advances determine whether certain tasks can be automated. But whether they should be automated, and if so, to what extent, should be a decision that we all make collectively.
What are some of the most promising applications of AI that you see emerging in the near future?
Artificial intelligence can be a disruptor across various domains. But what tasks and functions may be automated by data-driven computing tools may be different for each industry and organization. Given how fast things are changing, it is difficult to make predictions with any certainty.
As someone who works in artificial intelligence, I cannot help but feel a sense of excitement regarding its vast range of potential applications. However, it is crucial that we navigate this transformative landscape cautiously, taking into account the potential societal impacts that may arise.