
David Foster

Critic who analyzes arguments about artificial intelligence and its potential existential threats

Top 3 podcasts with David Foster

Ranked by the Snipd community
101 snips
May 11, 2023 • 2h 32min

Future of Generative AI [David Foster]

Generative Deep Learning, 2nd Edition [David Foster]: https://www.oreilly.com/library/view/generative-deep-learning/9781098134174/
Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5
Twitter: https://twitter.com/MLStreetTalk

In this conversation, Tim Scarfe and David Foster, author of 'Generative Deep Learning,' dive deep into generative AI, discussing topics ranging from model families and autoregressive models to the democratization of AI technology and its potential impact on various industries. They explore the connection between language and true intelligence, the limitations of GPT and other large language models, the importance of task-independent world models, the concept of active inference, and the potential of combining these ideas with transformer and GPT-style models.

Ethics and regulation in AI development are also discussed, including the need for transparency in the data used to train AI models and the responsibility of developers to ensure their creations are not destructive. The conversation touches on the challenges AI-generated content poses for copyright law and the diminishing role of effort and skill in copyright due to generative models.

The impact of AI on education and creativity is another key area of discussion: the potential benefits and drawbacks of using AI in the classroom, the need for a balance between traditional learning methods and AI-assisted learning, and the importance of teaching students to use AI tools critically and responsibly. Generative AI in music is also explored, including the potential for AI-generated music to change the way we create and consume art, and the challenges of training AI models to generate music that captures human emotions and experiences.

Throughout the conversation, Tim and David touch on the potential risks and consequences of AI becoming too powerful, the importance of maintaining control over the technology, and the possibility of government intervention and regulation. The discussion concludes with a thought experiment about AI predicting human actions and creating transient capabilities that could lead to doom.

TOC:
Introducing Generative Deep Learning [00:00:00]
Model Families in Generative Modeling [00:02:25]
Autoregressive Models and Recurrence [00:06:26]
Language and True Intelligence [00:15:07]
Language, Reality, and World Models [00:19:10]
AI, Human Experience, and Understanding [00:23:09]
GPT's Limitations and World Modeling [00:27:52]
Task-Independent Modeling and Cybernetic Loop [00:33:55]
Collective Intelligence and Emergence [00:36:01]
Active Inference vs. Reinforcement Learning [00:38:02]
Combining Active Inference with Transformers [00:41:55]
Decentralized AI and Collective Intelligence [00:47:46]
Regulation and Ethics in AI Development [00:53:59]
AI-Generated Content and Copyright Laws [00:57:06]
Effort, Skill, and AI Models in Copyright [00:57:59]
AI Alignment and Scale of AI Models [00:59:51]
Democratization of AI: GPT-3 and GPT-4 [01:03:20]
Context Window Size and Vector Databases [01:10:31]
Attention Mechanisms and Hierarchies [01:15:04]
Benefits and Limitations of Language Models [01:16:04]
AI in Education: Risks and Benefits [01:19:41]
AI Tools and Critical Thinking in the Classroom [01:29:26]
Impact of Language Models on Assessment and Creativity [01:35:09]
Generative AI in Music and Creative Arts [01:47:55]
Challenges and Opportunities in Generative Music [01:52:11]
AI-Generated Music and Human Emotions [01:54:31]
Language Modeling vs. Music Modeling [02:01:58]
Democratization of AI and Industry Impact [02:07:38]
Recursive Self-Improving Superintelligence [02:12:48]
AI Technologies: Positive and Negative Impacts [02:14:44]
Runaway AGI and Control Over AI [02:20:35]
AI Dangers, Cybercrime, and Ethics [02:23:42]
19 snips
Jul 2, 2023 • 2h 8min

MUNK DEBATE ON AI (COMMENTARY) [DAVID FOSTER]

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

In this discussion, Tim Scarfe and David Foster provide an in-depth critique of the arguments made by panelists at the Munk AI Debate on whether artificial intelligence poses an existential threat to humanity. While the panelists made thought-provoking points, Scarfe and Foster found their arguments largely speculative, lacking the crucial details and evidence needed to support claims of an impending existential threat.

Scarfe and Foster strongly disagreed with Max Tegmark's position that AI has an unparalleled "blast radius" that could lead to human extinction. Tegmark failed to provide a credible mechanism for how this scenario would unfold in reality; his arguments relied more on speculation about advanced future technologies than on present capabilities and trends. As Foster argued, we cannot conclude AI poses a threat based on speculation alone. Evidence is needed to ground discussions of existential risks in science rather than science fiction fantasies or doomsday scenarios.

They found Yann LeCun's statements too broad and high-level, critiquing him for not providing sufficiently strong arguments or specifics to back his position. While LeCun aptly noted that AI remains narrow in scope and far from achieving human-level intelligence, his arguments lacked crucial details on current limitations and why we should not fear superintelligence emerging in the near future. As Scarfe argued, without these details the discussion descended into "philosophy" rather than focusing on evidence and data.

Scarfe and Foster also took issue with Yoshua Bengio's unsubstantiated speculation that machines would necessarily develop a desire for self-preservation that threatens humanity. There is no evidence that today's AI systems are developing human-like general intelligence or desires, let alone that these attributes would manifest in ways dangerous to humans. The question is not whether machines will eventually surpass human intelligence, but how and when this might realistically unfold based on present technological capabilities. Bengio's arguments relied more on speculation about advanced future technologies than on evidence from current systems and research.

In contrast, they strongly agreed with Melanie Mitchell's view that scenarios of malevolent or misguided superintelligence are speculation, not backed by evidence from AI as it exists today. Claims of an impending "existential threat" from AI are overblown, harmful to progress, and inspire undue fear of technology rather than consideration of its benefits. Mitchell sensibly argued that discussions of risks from emerging technologies must be grounded in science and data, not speculation, if we are to make balanced policy and development decisions.

Overall, while the debate raised thought-provoking questions about advanced technologies that could eventually transform our world, none of the speakers made a credible, evidence-based case that today's AI poses an existential threat. Scarfe and Foster argued the debate failed to discuss concrete details about the current capabilities and limitations of technologies like language models, which remain narrow in scope. General human-level AI is still missing many components, including physical embodiment, emotions, and the "common sense" reasoning that underlies human thinking. Claims of existential threats require extraordinary evidence to justify policy or research restrictions, not speculation. By discussing possibilities rather than probabilities grounded in evidence, the debate failed to substantively advance our thinking on risks from AI and its plausible development in the coming decades.

David's new podcast: https://podcasts.apple.com/us/podcast/the-ai-canvas/id1692538973
Generative AI book: https://www.oreilly.com/library/view/generative-deep-learning/9781098134174/
Aug 24, 2024 • 4min

What High-Achievers Do on Saturdays

David Foster, a legendary music producer, discusses the secret to his extraordinary success: passion. He and Darren delve into how loving what you do is essential for achieving greatness. The conversation highlights the relentless pursuit of mastery driven by intrinsic motivations, emphasizing the importance of dedication over financial gain. They explore the '10,000 hour rule', illustrating how consistent practice shapes high achievers. Tune in for inspiring insights on unlocking your true passion and pursuing excellence!