
Generally Intelligent
Technical discussions with deep learning researchers who study how to build intelligence. Made for researchers, by researchers.
Latest episodes

Mar 18, 2021 • 1h 5min
Episode 07: Yujia Huang, Caltech, on neuro-inspired generative models
Yujia Huang is a PhD student at Caltech, working at the intersection of deep learning and neuroscience. She worked on optics and biophotonics before venturing into machine learning. Now, she hopes to design “less artificial” artificial intelligence.
Highlights from our conversation:
🏗 How recurrent generative feedback, a neuro-inspired design, improves adversarial robustness and can be more label-efficient
🧠 Adapting theories from neuroscience and classical research for machine learning
📊 What a new Turing test for “less artificial” or generalized AI could look like
💡 Tips for new machine learning researchers!

Mar 5, 2021 • 49min
Episode 06: Julian Chibane, MPI-INF, on 3D reconstruction using implicit functions
Julian Chibane, a PhD student at the Max Planck Institute for Informatics, shares his insights on 3D reconstruction using implicit functions. He discusses how the IF-Net architecture surprisingly generates accurate representations without existing priors. The conversation explores Neural Unsigned Distance Fields and their utility in managing ambiguous 3D scenes. Chibane also addresses the critical balance between local and global data integration to enhance accuracy and hints at the future of the field, emphasizing innovation and foundational understanding.

Feb 24, 2021 • 51min
Episode 05: Katja Schwarz, MPI-IS, on GANs, implicit functions, and 3D scene understanding
Katja Schwarz, a researcher at the Max Planck Institute for Intelligent Systems, transitions from physics to 3D geometric scene understanding. She shares insights on the power of radiance fields in generative image synthesis and the role of 3D generation in conceptual understanding. The discussion includes practical tips on training GANs, challenges in generative modeling, and the significance of efficient models. Katja also emphasizes the influence of normalization techniques and the philosophical implications of using generative models for visual understanding.

Feb 17, 2021 • 1h 18min
Episode 04: Joel Lehman, OpenAI, on evolution, open-endedness, and reinforcement learning
Joel Lehman was previously a founding member at Uber AI Labs and assistant professor at the IT University of Copenhagen. He's now a research scientist at OpenAI, where he focuses on open-endedness, reinforcement learning, and AI safety.
Joel’s PhD dissertation introduced the novelty search algorithm. That work inspired him to write the popular science book, “Why Greatness Cannot Be Planned”, with his PhD advisor Ken Stanley, which discusses what evolutionary algorithms imply for how individuals and society should think about objectives.
We discuss this and much more:
- How discovering novelty search totally changed Joel’s philosophy of life
- Whether you can sometimes reach your objective more quickly by not trying to reach it
- How one might evolve intelligence
- Why reinforcement learning is a natural framework for open-endedness

Feb 1, 2021 • 60min
Episode 03: Cinjon Resnick, NYU, on activity and scene understanding
Cinjon Resnick, an AI researcher and PhD candidate at NYU, formerly with Google Brain, dives into the critical importance of scene understanding for generalization in machine learning. He shares his unique journey, from attempting to teach a baby through language and games to a pivotal moment with circus arts that reshaped his focus towards activity recognition. Cinjon highlights the underrated MetaSIM papers, discusses the intricacies of motion recognition, and proposes intriguing new research directions that could redefine our approach to AI.

Jan 7, 2021 • 36min
Episode 02: Sarah Jane Hong, Latent Space, on neural rendering & research process
Sarah Jane Hong is the co-founder of Latent Space, a startup building the first fully AI-rendered 3D engine in order to democratize creativity.
We touch on what it was like taking classes under Geoff Hinton in 2013, the trouble with using natural language prompts to render a scene, why a model’s ability to scale is more important than getting state-of-the-art results, and more.

Dec 15, 2020 • 48min
Episode 01: Kelvin Guu, Google AI, on language models & overlooked research problems
We interview Kelvin Guu, a researcher at Google AI and the creator of REALM.
The conversation is a wide-ranging tour of language models, how computers interact with world knowledge, and much more.