Latanya Sweeney, a pioneer in online privacy and the study of algorithms, explores the impact of technology on society, the unforeseen consequences of generative AI, the role of public interest technologists, and the evolution of artificial intelligence.
The rise of generative AI raises concerns about disinformation, erosion of trust, and manipulation of truth in society.
Society needs to collaborate in shaping technology to align with human purpose and values, rather than accepting a trade-off that sacrifices either privacy or utility.
The transformative power of technology requires individuals and institutions to become more discerning, rethink how they establish truth, and find new ways to verify authenticity.
Deep dives
The Challenges of Generative AI and Disinformation
Generative AI, particularly technologies like ChatGPT, presents significant challenges around disinformation and the manipulation of truth. The ability to generate realistic-looking text, images, and video raises concerns about the erosion of trust and the growing difficulty of distinguishing real from fake content. The implications for democracy are particularly worrisome, as these tools can be used to create convincing narratives that sway public opinion and undermine societal values. Society needs to grapple with questions of truth, navigate the rise of chatbots and the spread of disinformation, and find ways to rebuild trust in an era of generative AI.
The Search for Sweet Spots in Shaping Technology
Finding the 'sweet spots' in shaping technology to align with human purpose and values is crucial. Rather than being left with take-it-or-leave-it choices once a technology is already in the marketplace, society needs to identify early the design issues and arbitrary decisions that lead to clashes and harms. By intervening early in the development process, through principles, risk assessments, and impact assessments, we can avoid the zero-sum game of sacrificing privacy for utility or vice versa. The goal is to maximize the benefits of technology without compromising individual rights or societal well-being. This requires a collaborative effort among stakeholders, including venture capitalists, company leaders, policymakers, and technologists.
From Blind Trust to Skepticism: Navigating the Transformative Power of Technology
The transformative power of technology, like generative AI, necessitates a shift from blind trust to skepticism in society. With the rise of disinformation and the manipulation of truth, individuals and institutions must become more discerning about the information they encounter online. The challenge lies in redefining our understanding of truth, reevaluating our reliance on the internet as a source of accurate information, and finding new mechanisms for verifying authenticity. In this age of rapid technological advancement, it is crucial to recognize the evolving nature of the challenges we face and work towards building a society that can navigate the complexities while upholding democratic values and promoting human welfare.
The Challenges and Tensions of Technology Innovation
As technology continues to advance, challenges and tensions arise in its development. One major issue is the tendency to focus solely on solving a specific problem while ignoring the negative consequences the technology may create, which can lead to a lack of responsibility toward society. Additionally, the venture capital world's pressure for quick returns on investment can further hinder thorough problem-solving and risk mitigation. A more holistic approach, one that thinks through potential clashes and looks for simple solutions, is needed to ensure the responsible development and deployment of technology.
The Impact of Generative AI and the Need for Responsible Design
Generative AI, such as ChatGPT, offers exciting potential but also raises significant concerns. The lack of content moderation at scale, particularly on social media, is a pressing issue. The increasing use of generative AI to produce content can lead to echo chambers and a lack of diverse perspectives. Trust becomes a challenge as it grows harder to determine what information is reliable or true. Biases inherent in the training data can also be perpetuated by generative AI, leading to racist or sexist outputs. The responsibility lies not only with developers and companies but also with individuals, who must critically engage with and question the information generative AI produces.
You may not know Latanya Sweeney's name, but as much as any other single person — and with good humor and grace as well as brilliance — she has led on the frontier of our gradual understanding of how far from anonymous you and I are in almost any database we inhabit, and how far from neutral are all the algorithms by which we increasingly navigate our lives.
In this conversation with Krista, she brings a helpful big-picture view to our lives with technology, seeing how far we've come — and not — since the advent of the internet, and setting that in the context of history both industrial and digital. She insists that we don't have to accept the harms of digital technology in order to reap its benefits — and she sees very clearly the work that will take. From where she sits, the new generative AI is in equal measure an exciting and alarming evolution. And she shares with us the questions she is asking, and how she and her students and the emerging field of Public Interest Technology might help us all make sense.
This is the second in what will be an ongoing, occasional series of On Being episodes delving into and accompanying our lives with this new technological revolution — training clear eyes on downsides and dangers while cultivating an attention to how we might elevate the new frontier of AI — and how, in fact, it might invite us more deeply into our humanity.
Latanya Sweeney is the Daniel Paul Professor of the Practice of Government and Technology at the Harvard Kennedy School, among her many other credentials. She’s founder and director of Harvard’s Public Interest Tech Lab and its Data Privacy Lab, and she’s the former Chief Technology Officer at the U.S. Federal Trade Commission.
This interview is edited and produced with music and other features in the On Being episode "Latanya Sweeney — On Shaping Technology to Human Purpose." Find the transcript for that show at onbeing.org.
______
Sign up for The Pause — a Saturday morning companion to the podcast season, and a way to stay on top of all On Being happenings across the year.