Yi Zeng, a prominent professor at the Chinese Academy of Sciences and AI safety advocate, dives deep into the intersection of AI, morality, and culture. He unpacks the challenge of instilling moral reasoning in AI, drawing insights from Chinese philosophy. Zeng explores the evolving role of AI as a potential partner or adversary in society, and contrasts American and Chinese views on governance and virtue. The conversation questions whether we can achieve harmony with AI or merely coexist, highlighting the need for adaptive values in our technological future.
Episode duration: 01:14:19
INSIGHT: Moral Reasoning vs. Rule-Based AI
Yi Zeng distinguishes between rule-based AI ethics and moral reasoning.
He argues that true moral AI requires understanding, not just rule-following.
INSIGHT: Limitations of Current LLMs
Zeng views current large language models (LLMs) as potential threats precisely because of their limitations.
LLMs lack intrinsic motivation and understanding of why they should follow rules.
INSIGHT: Importance of a Sense of Self
Moral reasoning necessitates a sense of self, enabling agents to distinguish themselves from others.
Cognitive empathy, rooted in self-experience, is crucial for altruistic behavior and moral intuition.
Sebastian Seung's "I Am My Connectome" explores the intricate network of connections within the brain, arguing that the connectome—the complete map of neural connections—is the physical substrate of the self. Seung examines the implications of this idea for consciousness, identity, and brain-computer interfaces, challenging traditional views of the brain and mind with a new framework for how our experiences shape our neural connections and ultimately define who we are. The book is a significant contribution to the ongoing debate about the nature of consciousness and the self.
A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control
Wendell Wallach
This is an interview with Yi Zeng, Professor at the Chinese Academy of Sciences, a member of the United Nations High-Level Advisory Body on AI, and leader of the Beijing Institute for AI Safety and Governance (among many other roles).
Over a year ago when I asked Jaan Tallinn "who within the UN advisory group on AI has good ideas about AGI and governance?" he mentioned Yi immediately. Jaan was right.
See the full article from this episode: https://danfaggella.com/zeng1
This episode referred to the following other essays and resources:
-- AI Safety Connect - https://aisafetyconnect.com
-- Yi's profile on the Chinese Academy of Sciences - https://braincog.ai/~yizeng/
...
There are three main questions we cover here on The Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?
If this sounds like it's up your alley, then be sure to stick around and connect: