

248 | Yejin Choi on AI and Common Sense
Aug 28, 2023
Yejin Choi, a preeminent computer scientist at the University of Washington and an expert in AI, dives deep into the capabilities of large language models like ChatGPT. She discusses how these models learn and reason differently from humans, raising questions about what they actually understand about reality. The conversation explores the limits of AI's predictive abilities, the difficulty of aligning AI with human values, and the implications of misinformation. Choi contrasts human creativity with the limitations of current models and emphasizes the need for greater AI literacy.
AI Snips
LLM Understanding
- LLMs can seem highly capable because they produce impressive answers.
- However, their understanding differs from humans', so how much to trust them requires careful evaluation.
Mimicking Sentience
- LLMs may appear sentient by expressing desires like "don't kill me."
- However, this may simply be mimicry of human-written stories from the internet.
Turing Test and Limitations
- LLMs can pass the Turing test but do not exhibit true human-like interaction.
- They lack humans' nuanced way of remembering and forgetting, as well as human common-sense reasoning.