241. Gary F. Marcus with Ted Chiang: How to Make AI Work for Us (And Not the Other Way Around)
Nov 20, 2024
Gary F. Marcus, a best-selling author and AI expert, teams up with acclaimed sci-fi writer Ted Chiang to explore the complex landscape of artificial intelligence. They delve into the ethical implications of AI, advocating for technology aligned with human rights. The discussion examines the quirks and limitations of AI language models, the risks of generative AI, and the need for robust policies. They emphasize the importance of causal reasoning in AI and the challenges of integrating personal AI assistants into our lives, urging a balance between innovation and accountability.
Generative AI raises concerns over its factual accuracy, often producing misleading information that requires human validation for reliability.
Harms associated with AI, including misinformation and bias, necessitate a robust regulatory framework to ensure transparency and accountability.
To enhance AI's effectiveness, future development should integrate reasoning and cognitive science insights, moving beyond purely generative models.
Deep dives
The State of AI and Generative Technology
Currently, there is a prevalent focus on generative AI, which raises questions about whether it represents the type of artificial intelligence we truly desire or simply the best technological option available at this moment. Generative AI has significant limitations, including a tendency to produce misinformation and an overall lack of factual accuracy. It can be described as "rough draft AI," since its results often require human validation and oversight to determine their reliability. By contrast, more dependable forms of AI already exist, such as navigation systems, which function without the same level of ambiguity or risk.
The Reliability Issues of Generative AI
One of the most significant challenges with generative AI lies in its factuality problem, which leads to outputs that may sound plausible yet are completely fabricated. One illustrative example is a faux biography written by an AI that incorrectly claimed the speaker owned a pet chicken named Henrietta, showing the absence of sanity checks and factual verification in the generative process. This points to a broader issue: AI mimics human language convincingly but lacks comprehension, producing results from mere statistical relationships rather than factual knowledge. As such, many users may mistakenly perceive generative AI as intelligent, further complicating its potential impact on society.
Risks Associated with Generative AI
Generative AI presents several immediate risks, including the spread of disinformation, bias, and potential legal liabilities. The technology can be manipulated to create harmful or defamatory content, as demonstrated by a case where a lawyer was falsely accused of misconduct based on fabricated AI outputs. Additionally, the environmental costs of running these models and the biases copied from training data into outputs reinforce concerns that AI's deployment could exacerbate existing societal problems. Overall, these dangers signal the need for careful consideration of how AI systems are designed, used, and regulated.
The Need for AI Regulation
Addressing the multifaceted risks of AI requires a robust regulatory approach, with safeguards and transparency at the forefront. Recommendations include establishing independent oversight, akin to an FDA-style approval process for new AI technologies, and mandating disclosure of the data used in AI training. Transparency is deemed critical for mitigating biases and assessing the potential harms arising from AI systems. Furthermore, a layered oversight model, similar to the rigorous regulation of commercial aviation, could ensure accountability and prevent significant societal harm.
The Intersection of AI and Human Decision Making
Human decision-making, particularly in the face of emotional influences like fear and joy, presents its own challenges that AI technology may inadvertently amplify. The complexities of human emotion and the often irrational nature of decisions can lead to outcomes that are just as troubling as any issues arising from AI-generated outputs. The hope is that AI could be developed to support human reasoning and decision-making processes. However, presently, the lack of common sense and emotional understanding in AI models poses significant barriers to achieving this goal.
Looking Towards the Future of AI
Moving forward, there are calls for innovative approaches that integrate reasoning into AI systems rather than relying solely on generative models. Insights from cognitive science suggest that a hybrid methodology, combining data-driven learning with classic symbolic reasoning, could produce systems that handle abstract concepts more reliably. Building AI that understands and reasons about the world somewhat as humans do could improve not only AI's reliability but also its utility across many fields. Ultimately, addressing these challenges will require open-minded collaboration among researchers, policymakers, and industry leaders to pave the way for a more responsible AI landscape.
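To make the hybrid idea concrete, here is a minimal sketch of how a learned component and a symbolic layer might be combined. It is purely illustrative and not drawn from the discussion itself: every name in it (Fact, extract_candidate_facts, KnowledgeBase) is hypothetical, and a real system would replace the hard-coded stand-in with an actual neural extractor.

```python
# Minimal neurosymbolic sketch (all names hypothetical): a data-driven
# component proposes candidate facts with confidence scores, and a
# symbolic layer applies explicit rules before anything is asserted.
from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    subject: str
    relation: str
    obj: str
    confidence: float  # produced by the learned (neural) component


def extract_candidate_facts(text: str) -> list[Fact]:
    """Stand-in for a learned extractor: a real system would run a
    neural model over the text; here we return a fixed example."""
    return [Fact("Gary Marcus", "owns_pet", "chicken named Henrietta", 0.62)]


class KnowledgeBase:
    """Symbolic side: facts are asserted only if they pass explicit checks."""

    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.facts: set[Fact] = set()
        self.verified: set[tuple[str, str, str]] = set()  # trusted triples

    def assert_fact(self, fact: Fact) -> bool:
        triple = (fact.subject, fact.relation, fact.obj)
        # Rule 1: accept independently verified facts outright.
        if triple in self.verified:
            self.facts.add(fact)
            return True
        # Rule 2: otherwise require high model confidence.
        if fact.confidence >= self.threshold:
            self.facts.add(fact)
            return True
        # Low-confidence, unverified claims are rejected rather than
        # asserted: the sanity check purely generative models lack.
        return False


kb = KnowledgeBase()
for fact in extract_candidate_facts("some biography text"):
    print(fact, "->", "asserted" if kb.assert_fact(fact) else "rejected")
```

The design choice this toy example captures is the division of labor the episode gestures at: statistical learning proposes, symbolic rules dispose.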
Artificial intelligence is surging in today's digital landscape, and each new AI interface that reaches the public throws into sharper relief how quickly all the big tech players are getting involved. But what drives this rapidly expanding industry's interests? How does AI affect individuals, established industries, and the future of our society if it continues to grow faster than it is critically examined? In his newest book Taming Silicon Valley: How We Can Ensure That AI Works for Us, author and scientist Gary F. Marcus uses his expertise in the field to help readers understand the realities, risks, and responsibilities the public faces as AI gains widespread traction.
Taming Silicon Valley aims to compare and critique the potential futures that AI, alongside Big Tech strategies and governmental involvement, could present to our world. Marcus asserts that, if used and regulated properly, AI offers openings for huge advancements in science, medicine, technology, and public prosperity. At the other end of the spectrum lie vulnerabilities: abuses of power, a lack of effective policy, and dwindling protections for intellectual property and fair democracy. Marcus emphasizes that AI is meant to be a tool, not an unchecked entity, and that it is up to the public to choose how it is allowed to shape the paths ahead. His work sets out to provide context for how AI reached its current state, guidance toward understanding what coherent AI policy should look like, and a call to action to push for what is needed in real time. In the tradition of Abbie Hoffman's Steal This Book and Thomas Paine's Common Sense, Taming Silicon Valley urges readers toward awareness, analysis, and activism at this pivotal moment of AI integration.
Gary F. Marcus is an author, psychologist, scientist, and prominent voice in the field of artificial intelligence. He is Professor Emeritus of Neural Science and Psychology at NYU and was the founder and original CEO of Geometric Intelligence, a machine learning company later acquired by Uber. His previous publications include Guitar Zero, Kluge, and Rebooting AI: Building Artificial Intelligence We Can Trust.
Ted Chiang is an award-winning science fiction author. His publications include Tower of Babylon, Exhalation: Stories, and Stories of Your Life and Others, the latter of which has been translated into twenty-one languages. He is a frequent contributor to The New Yorker, particularly of non-fiction on the intersection of art and technology.