
Generally Intelligent

Latest episodes

Sep 18, 2024 • 1h 3min

Episode 37: Rylan Schaeffer, Stanford: On investigating emergent abilities and challenging dominant research ideas

In this discussion, Rylan Schaeffer, a PhD student at Stanford specializing in the engineering and mathematics of intelligence, shares intriguing insights about evaluating AI capabilities. He explores the evolving interplay between neuroscience and machine learning, arguing that breakthroughs in AI often do not require insights from human brains. Rylan also reflects on his struggles during his academic journey, emphasizing resilience and adaptability in research. Finally, he highlights the challenges of model evaluation and the phenomenon of model collapse in generative models.
Jul 11, 2024 • 1h 34min

Episode 36: Ari Morcos, DatologyAI: On leveraging data to democratize model training

Ari Morcos, the CEO of DatologyAI and former researcher at DeepMind and FAIR, dives into the fascinating world of data and deep learning. He explores the nuances of data quality, emphasizing the distinction between hard and bad data points. The conversation touches on the evolution of image representation models and the critical role of data selection for model training. Ari also warns against the careless use of synthetic data and discusses how careful curation can boost model performance. Overall, it's a deep dive into optimizing data for smarter AI.
May 9, 2024 • 1h 2min

Episode 35: Percy Liang, Stanford: On the paradigm shift and societal effects of foundation models

Percy Liang, Stanford professor, discusses foundation models, reproducible research, and societal impacts of AI. Topics include paradigm shifts in AI, generative agents for social dynamics, academia's role in model development, aligning language models with human values, and dissent in science and society.
Mar 12, 2024 • 1h 56min

Episode 34: Seth Lazar, Australian National University: On legitimate power, moral nuance, and the political philosophy of AI

Seth Lazar delves into the nuances of political philosophy and AI ethics, exploring the challenges of regulating AI and the ethical implications of algorithmic governance. The discussion highlights power dynamics in AI governance, the importance of legitimacy, authority, and democratic duties in system development, and the impact of regulatory toolkits on engineering decisions. It also touches on ethical design, AI agents, feasibility horizons, and the risks associated with building AI companions.
Aug 9, 2023 • 1h 20min

Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference

Tri Dao is a PhD student at Stanford, co-advised by Stefano Ermon and Chris Ré. He'll be joining Princeton as an assistant professor next year. He works at the intersection of machine learning and systems, currently focused on efficient training and long-range context.

About Generally Intelligent

We started Generally Intelligent because we believe that software with human-level intelligence will have a transformative impact on the world. We're dedicated to ensuring that that impact is a positive one.

We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research.

Our research is focused on agents for digital environments (e.g., browser, desktop, documents), using RL, large language models, and self-supervised learning. We're excited about opportunities to use simulated data, network architecture search, and a good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research.

Learn more about us
Website: https://generallyintelligent.com/
LinkedIn: linkedin.com/company/generallyintelligent/
Twitter: @genintelligent
Jun 22, 2023 • 1h 2min

Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize

Jamie Simon is a 4th-year Ph.D. student at UC Berkeley advised by Mike DeWeese, and also a Research Fellow with us at Generally Intelligent. He uses tools from theoretical physics to build a fundamental understanding of deep neural networks so they can be designed from first principles. In this episode, we discuss reverse engineering kernels, the conservation of learnability during training, infinite-width neural networks, and much more.
Mar 29, 2023 • 1h 15min

Episode 31: Bill Thompson, UC Berkeley: On how cultural evolution shapes knowledge acquisition

Bill Thompson is a cognitive scientist and an assistant professor at UC Berkeley. He runs an experimental cognition laboratory where he and his students conduct research on human language and cognition using large-scale behavioral experiments, computational modeling, and machine learning. In this episode, we explore the impact of cultural evolution on human knowledge acquisition, how pure biological evolution can lead to slow adaptation and overfitting, and much more.
Mar 23, 2023 • 1h 46min

Episode 30: Ben Eysenbach, CMU, on designing simpler and more principled RL algorithms

Ben Eysenbach is a PhD student from CMU and a student researcher at Google Brain. He is co-advised by Sergey Levine and Ruslan Salakhutdinov, and his research focuses on developing RL algorithms that get state-of-the-art performance while being more simple, scalable, and robust. Recent problems he's tackled include long-horizon reasoning, exploration, and representation learning. In this episode, we discuss designing simpler and more principled RL algorithms, and much more.
Mar 9, 2023 • 1h 27min

Episode 29: Jim Fan, NVIDIA, on foundation models for embodied agents, scaling data, and why prompt engineering will become irrelevant

Jim Fan is a research scientist at NVIDIA who got his PhD at Stanford under Fei-Fei Li. Jim is interested in building generally capable autonomous agents, and he recently published MineDojo, a massively multiscale benchmarking suite built on Minecraft, which won an Outstanding Paper Award at NeurIPS. In this episode, we discuss foundation models for embodied agents, scaling data, and why prompt engineering will become irrelevant.
Mar 1, 2023 • 1h 35min

Episode 28: Sergey Levine, UC Berkeley, on the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems

Sergey Levine, an assistant professor of EECS at UC Berkeley, is one of the pioneers of modern deep reinforcement learning. His research focuses on developing general-purpose algorithms for autonomous agents to learn how to solve any task. In this episode, we talk about the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems.
