A panel of UC Berkeley scholars discusses the transformative potential of AI in academia, highlighting the challenges of bias and data curation. They explore AI's impact in fields such as psychology, astronomy, and environmentalism. The episode also tackles the consequences of divesting from education and the need for ethical AI and societal restructuring. The panel conveys academia's enthusiasm for probing the failure points and boundary conditions of AI models.
Ethical considerations and potential biases must be addressed in the use of AI in academia.
Historicizing the development of AI is essential to understand its cultural and ideological implications.
The environmental impact and materiality of digital technology need to be considered in the use of AI.
Deep dives
AI and Machine Learning Changing Academia and Science
Machine learning and artificial intelligence (AI) have transformed how science and academia function. In astronomy, machine learning has facilitated the identification and classification of celestial objects and enabled the discovery of new phenomena like supernovae. In psychology, AI has been used to study cognitive-affective processes and to develop predictive models for substance use disorders. However, the ethical implications and potential biases of AI systems must be considered to ensure fair and equitable outcomes. Universities and academic institutions also need to balance the influence of private corporations and prioritize human welfare when deploying AI technology.
Historicizing Technology and AI's Neutrality
It is important to historicize the development of technology, including AI, to understand its ideological and cultural implications. AI builds on earlier technological advances and can perpetuate existing biases and inequalities. Researchers stress the necessity of carefully curating data sets and mitigating bias to ensure responsible, ethical AI applications. The involvement of large internet companies in data collection also raises concerns about privacy, corporate agendas, and potential misuse of data. Academia has a crucial role to play in developing AI that considers human welfare, fosters relationships with data subjects, and provides transparency in how data is used.
The Tension Between Data Representation and Human Complexity
Representing human life and reality through quantifiable data raises questions about the reduction of individuals to numbers and the extractive nature of data collection. The historical practice of turning life into data is entangled with colonial pasts and exploitation. The panel discusses the need to question and critically analyze the translation of real-life complexity into numerical representations. While scientific advances have produced extraordinary results, such as the first image of a black hole, there is a need to reflect on what turning human experiences into data means for societal relationships, power dynamics, and individual agency.
The Materiality of Digital Objects and Environmental Impact
The podcast discusses the materiality of digital objects and their environmental impact. It highlights the tension between the seemingly virtual, abstract nature of digital technology, such as large language models, and the physical infrastructure and energy costs required to support it. The speaker emphasizes the need for a better understanding of digital materiality and the environmental implications of using these technologies, urging listeners to stay aware of the material footprint of digital objects and the energy consumed by everyday activities like streaming and data usage.
The Challenges of Data Curation and Ethical Use of AI
The podcast explores the challenges of data curation and the ethical use of AI, specifically in machine learning models. It discusses the tension between the need for larger, better data sets to train algorithms and the biases and limitations embedded within that data. The speakers highlight the importance of data quality, such as avoiding the racial biases that flawed data can embed in algorithms, and the need for innovative approaches to address these problems. They also discuss the skepticism surrounding black-box algorithms and the difficulty of reconciling machine learning outputs with traditional statistical formalism. Finally, they emphasize academia's role in shaping ethical AI practices and in training the next generation to engage with AI technologies responsibly and critically.
In Berkeley Talks episode 186, a panel of UC Berkeley scholars from the College of Letters and Science discusses the transformative potential of artificial intelligence in academia — and the questions and challenges it requires universities and other social institutions to confront.
"When it comes to human-specific problems, we often want fair, equitable, unbiased answers," said Keanan Joyner, an assistant professor of psychology. "But the data that we feed into the training set often is not that. And so, we are asking AI to produce something that it was never trained on, and that can be very problematic. We have to think very carefully about how we're training our AI models and whether they'll be useful or not. I think there's so many awesome uses of AI, and I'm going to use it in my own work, and it's going to definitely infuse psychological science and social sciences more broadly."
Panelists of the October 2023 Berkeley event included:
Alex Saum-Pascual, associate professor of contemporary Spanish literature
Moderated by Marion Fourcade, professor of sociology and director of the Social Science Matrix
This discussion is part of the L&S Salon Series, which showcases the diversity and range of academic disciplines embedded across the five divisions in the College of Letters and Science.