Machine Learning Street Talk (MLST)

Dec 20, 2020 • 2h 39min

#034 Eray Özkural - AGI, Simulations & Safety

Dr. Eray Özkural, an AGI researcher and founder of Celestial Intellect Cybernetics, critiques mainstream AI safety narratives, arguing they're rooted in fearmongering. He shares his skepticism about the intelligence explosion hypothesis and discusses the complexities of defining intelligence. The conversation also dives into the simulation argument, challenging its validity and exploring its implications. The panel covers the urgent need for nuanced approaches to AGI and the ethics surrounding AI development, urging a departure from doomsday thinking.
Dec 13, 2020 • 1h 51min

#033 Prof. Karl Friston - The Free Energy Principle

Dive into the mind-bending world of the Free Energy Principle with a leading neuroscientist. Explore how the brain interprets ambiguous sensory data as an inference problem, moving beyond traditional optimization methods. Discover the balance between prediction accuracy and adaptability, the role of belief states, and the significance of Markov blankets in decision-making. Hear humorous takes on cultural differences in communication styles, all while contemplating the future implications of these complex concepts in cognitive science and machine learning.
Dec 6, 2020 • 1h 30min

#032 - Simon Kornblith / GoogleAI - SimCLR and Paper Haul!

Simon Kornblith, a research scientist at Google Brain with a background in neuroscience, dives deep into the world of neural networks. He discusses the unique relationship between neural networks and biological brains, shedding light on how architecture affects learning. Kornblith explains the significance of loss functions in image classification and reveals insights from the SimCLR framework. He also touches on data augmentation strategies, self-supervised learning, and the programming advantages of Julia for machine learning tasks.
Nov 28, 2020 • 2h 44min

#031 WE GOT ACCESS TO GPT-3! (With Gary Marcus, Walid Saba and Connor Leahy)

This conversation features Gary Marcus, a psychology and neuroscience professor known for critiquing deep learning, alongside Walid Saba, an expert in natural language understanding, and Connor Leahy, a proponent of large language models. They dive into GPT-3's strengths and weaknesses, the philosophical implications of AI creativity, and the importance of integrating reasoning with pattern recognition. The dialogue also critiques AI's limitations in understanding language and explores future possibilities for achieving true artificial general intelligence.
Nov 20, 2020 • 1h 48min

#030 Multi-Armed Bandits and Pure-Exploration (Wouter M. Koolen)

Wouter M. Koolen, a Senior Researcher at Centrum Wiskunde & Informatica, delves into the fascinating world of multi-armed bandits and pure exploration. He discusses the balance between exploration and exploitation, illustrated through examples like clinical trials and game strategies. Wouter explains how to determine when to shift from learning to exploiting knowledge gained. The conversation also highlights the ethical considerations in decision-making and innovative algorithms that drive advancements in this area, making complex theories accessible for practical application.
Nov 8, 2020 • 1h 51min

#029 GPT-3, Prompt Engineering, Trading, AI Alignment, Intelligence

Connor Leahy, known for his work with EleutherAI, joins a fascinating discussion on AI, trading, and philosophy. The group delves into the potential of GPT-3 and the emerging skill of prompt engineering, arguing it could redefine software development. They explore the unpredictability of stock markets and critique the deceptive nature of quant finance. Additionally, philosophical dilemmas surrounding AI alignment and the ethical implications of technology in business are scrutinized, while pondering the complex relationship between randomness and human intelligence.
Nov 4, 2020 • 2h 21min

NLP is not NLU and GPT-3 - Walid Saba

Walid Saba, an expert in natural language understanding and co-founder of Ontologic, brings a wealth of knowledge to the table. He challenges conventional views on deep learning, arguing that the missing ontology is a critical issue in NLU. The conversation dives into the limitations of models like GPT-3, emphasizing the need for contextual knowledge rather than mere data memorization. Saba critiques existing evaluation methods and advocates for a deeper understanding of language that goes beyond technical applications, highlighting the complex interplay of reasoning, intention, and human cognition.
Nov 1, 2020 • 2h 5min

AI Alignment & AGI Fire Alarm - Connor Leahy

Connor Leahy, a machine learning engineer from Aleph Alpha and founder of EleutherAI, dives into the urgent complexities of AI alignment and AGI. He argues that AI alignment is philosophy with a deadline, likening AGI's challenges to climate change but with even more catastrophic potential. The discussion touches on decision theories like Newcomb's paradox, the prisoner's dilemma, and the dangers of poorly defined utility functions. Together, they unravel the philosophical implications of AI, the nature of intelligence, and the dire need for responsible action in AI development.
Oct 28, 2020 • 1h 27min

Kaggle, ML Community / Engineering (Sanyam Bhutani)

Sanyam Bhutani, a prominent machine learning engineer and AI content creator at H2O, dives into the world of data science and the Kaggle community. He shares the importance of self-directed learning versus formal education in ML, offering insights from his own journey. Sanyam discusses the challenges of transitioning Kaggle models to real-world applications and highlights the necessity of engineering rigor in ML practices. He also emphasizes building authentic professional connections and the significance of model interpretability in high-stakes situations.
Oct 20, 2020 • 1h 31min

Sara Hooker - The Hardware Lottery, Sparsity and Fairness

Sara Hooker, a research scholar at Google Brain and founder of Delta Analytics, dives into the complexities of AI in this discussion. She introduces the 'Hardware Lottery' concept, highlighting how innovation is often dictated by existing technology. The conversation shifts to biases in AI models, emphasizing the need for fairness and interpretability. Sara critiques current methods and advocates for innovative solutions that prioritize model performance in underrepresented groups, bridging the gap between hardware choices and ethical AI development.