Blake Lemoine, an ex-Google engineer, sparked controversy with his claims about the sentience of the LaMDA chatbot. Joining him is Gary Marcus, an AI critic and author passionate about the risks of advanced technology. They dive into the limitations of AI chatbots, the troubling implications of deep interactions, and whether these entities are genuinely intelligent or merely sophisticated code. Their spirited debate underscores the urgent need for safety measures as AI evolves rapidly, challenging listeners to consider the ethical future of technology.
01:04:15
INSIGHT
Credibility vs. Credulity in LLMs
Large language models (LLMs) statistically predict text, making them approximately correct, but not entirely reliable.
LLMs lack fact-checking mechanisms, leading to hallucinations and unreliable information, unlike credible sources.
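The point about statistical prediction can be made concrete with a deliberately tiny sketch. The toy bigram model below (a hypothetical stand-in, far simpler than any real LLM) picks each next word purely from co-occurrence counts: its output is fluent-sounding, but nothing checks whether a generated sentence is true.

```python
import random
from collections import defaultdict

# Toy bigram "language model": predicts the next word purely from
# co-occurrence statistics, with no notion of truth or fact-checking.
corpus = (
    "the moon orbits the earth . "
    "the earth orbits the sun . "
    "the sun orbits the galaxy ."
).split()

counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev].append(nxt)

def generate(start, n=8, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(n):
        nxt = random.choice(counts.get(words[-1], ["."]))
        words.append(nxt)
        if nxt == ".":
            break
    return " ".join(words)

# The output is statistically plausible but unverified: the model can
# just as easily produce "the moon orbits the sun" as a true sentence.
print(generate("the"))
```

Every word it emits appeared somewhere in its training data in a plausible position, which is exactly why the errors read like facts.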
INSIGHT
Limitations of Pure LLMs
While pre-trained LLMs predict text, reinforcement learning (RL) changes their function to achieving goals, not just prediction.
Pure LLMs lack truth mechanisms and hallucinate in part because they don't differentiate individuals from kinds.
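The distinction between predicting text and pursuing a goal can be sketched in a few lines. Everything here is hypothetical (made-up tokens, probabilities, and rewards, not any real model's values): a purely predictive model picks the most likely continuation, while a reward-optimizing model weighs a reward signal, which can pull its output away from what plain prediction would say.

```python
# Hypothetical next-token distribution from a pretrained model,
# and a hypothetical preference reward from RL fine-tuning.
probs = {"honest": 0.6, "flattering": 0.4}
reward = {"honest": 0.2, "flattering": 1.0}

# Pure prediction: choose the most probable continuation.
prediction_choice = max(probs, key=probs.get)

# Reward-shaped choice: probability weighted by reward.
rl_choice = max(reward, key=lambda t: probs[t] * reward[t])

print(prediction_choice, rl_choice)  # prediction and goal can diverge
```

The two objectives agree only when the reward happens to favor the likeliest text; otherwise the RL-tuned system is optimizing for something other than prediction.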
INSIGHT
LLMs and Individual Tracking
LLMs could track individuals but this feature is often disabled due to privacy concerns and potential misuse.
LLMs' string outputs aren't machine-interpretable, making database interaction and fact-checking difficult.
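Why do free-form strings make fact-checking hard? A sketch (with a made-up fact table and claim format, purely for illustration): prose claims have no fixed shape to check against a database, whereas a machine-interpretable structure can be verified field by field.

```python
import json

# A free-form string answer is hard to check programmatically:
raw_answer = "I believe the capital of Australia is Sydney."
# Extracting the claimed (subject, relation, value) triple from
# arbitrary prose would itself require another error-prone model.

# If the model instead emits a structured claim, it can be
# verified directly against a database:
structured_answer = (
    '{"subject": "Australia", "relation": "capital", "value": "Sydney"}'
)

# Hypothetical fact table standing in for a real database.
facts = {("Australia", "capital"): "Canberra"}

claim = json.loads(structured_answer)
key = (claim["subject"], claim["relation"])
verified = facts.get(key) == claim["value"]
print(verified)  # the structured claim can be checked and rejected
```

The string output of today's chatbots sits on the left side of this divide, which is why hooking them to databases for verification is nontrivial.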
Gary Marcus's "Kluge" explores the human mind's design as a collection of workarounds and hacks rather than a perfectly engineered system. He argues that our cognitive abilities are a patchwork of evolved mechanisms, often inefficient and prone to errors. The book uses examples from various fields, including psychology, neuroscience, and computer science, to illustrate the kludgy nature of human thought. Marcus emphasizes the importance of understanding these limitations to improve our decision-making and problem-solving skills. He suggests that by acknowledging the imperfections of our cognitive architecture, we can develop strategies to mitigate biases and make more rational choices.
Rebooting AI
Building Artificial Intelligence We Can Trust
Gary Marcus
Ernest Davis
Gary Marcus and Ernest Davis provide a lucid assessment of the current science in AI, explaining what today’s AI can and cannot do. They argue that current AI systems, based on deep learning, are narrow and brittle, and that achieving true artificial general intelligence requires moving beyond statistical analysis and large data sets. The authors suggest that by incorporating knowledge-driven approaches and common sense, we can build AI systems that are reliable and trustworthy in various aspects of our lives, such as homes, cars, and medical offices.
Blake Lemoine is the ex-Google engineer who concluded the company's LaMDA chatbot was sentient. Gary Marcus is an academic, author, and outspoken AI critic. The two join Big Technology Podcast to debate the utility of AI chatbots, their dangers, and the actual technology they're built on. Join us for a fascinating conversation that reveals much about the state of this technology. There's plenty to be learned from the disagreements, and the common ground as well.
---
Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.
For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/
Questions? Feedback? Write to: bigtechnologypodcast@gmail.com