

#94 - ALAN CHAN - AI Alignment and Governance #NEURIPS
Dec 26, 2022
In this talk, Alan Chan, a PhD student at Mila with a focus on AI alignment and governance, shares insights from his research on aligning AI with human values. He discusses the skepticism around AI alignment and the complexities of defining intelligence in AI systems. The conversation touches on the moral implications of AI decision-making, the risks of oversimplifying reward mechanisms in reinforcement learning, and the urgent need for collaborative safety measures in AI development. Chan's enthusiasm for the ethical aspects of AI governance is truly infectious.
Skepticism towards AI Alignment
- Tim expresses skepticism towards AI alignment, citing Francois Chollet's article.
- He argues that large organizations like Google face scaling bottlenecks because their processes are externalized and distributed across many people.
AI's Potential to Overcome Bottlenecks
- Alan acknowledges Tim's points about bottlenecks but suggests AIs might overcome them.
- He points out AI's potential for seamless communication and goal achievement, despite current limitations.
Redefining Intelligence
- Tim questions the definition of intelligence used in AI alignment discussions, referencing AIXI and AlphaZero.
- He emphasizes intelligence as adaptability and the efficient conversion of information into new skills, rather than task-specific skill.