
The Inside View

Latest episodes

Jun 14, 2022 • 1h 16min

Blake Richards–AGI Does Not Exist

Blake Richards is an Assistant Professor at the Montreal Neurological Institute and the School of Computer Science at McGill University, and a Core Faculty Member at Mila. He thinks that AGI is not a coherent concept, which is why he ended up on a recent AGI political compass meme. When people asked on Twitter who the edgiest person at Mila was, his name actually got more likes than Ethan's, so hopefully this podcast will help re-establish the truth.

Transcript: https://theinsideview.ai/blake
Video: https://youtu.be/kWsHS7tXjSU

Outline:
(01:03) Highlights
(01:03) AGI good / AGI not now compass
(02:25) AGI is not a coherent concept
(05:30) you cannot build truly general AI
(14:30) no "intelligence" threshold for AI
(25:24) benchmarking intelligence
(28:34) recursive self-improvement
(34:47) scale is something you need
(37:20) the bitter lesson is only half-true
(41:32) human-like sensors for general agents
(44:06) the credit assignment problem
(49:50) testing for backpropagation in the brain
(54:42) burstprop (bursts of action potentials), reward prediction errors
(01:01:35) long-term credit-assignment in reinforcement learning
(01:10:48) what would change his mind on scaling and existential risk
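
For context on the credit assignment problem discussed at (44:06): it asks which weights, or synapses, deserve blame for an output error. Artificial networks answer it with backpropagation and the chain rule, and the episode asks whether the brain could implement something comparable. Here is a minimal NumPy sketch of what "assigning credit" means computationally; it is illustrative only, not a model of burstprop or any biological mechanism discussed in the episode:

```python
import numpy as np

# Minimal two-layer network trained with backpropagation.
# "Credit assignment" = working out how much each weight
# contributed to the output error, here via the chain rule.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))           # 4 samples, 3 input features
y = rng.normal(size=(4, 1))           # regression targets
W1 = 0.1 * rng.normal(size=(3, 5))    # input -> hidden weights
W2 = 0.1 * rng.normal(size=(5, 1))    # hidden -> output weights

for step in range(200):
    h = np.tanh(x @ W1)               # forward pass: hidden activations
    pred = h @ W2                     # forward pass: output
    err = pred - y                    # output error

    # Backward pass: propagate the error to assign credit per weight.
    dW2 = h.T @ err / len(x)
    dh = err @ W2.T                   # error signal reaching the hidden layer
    dW1 = x.T @ (dh * (1 - h**2)) / len(x)

    W1 -= 0.1 * dW1                   # update each weight by its assigned credit
    W2 -= 0.1 * dW2

print("final loss:", 0.5 * float(np.mean((np.tanh(x @ W1) @ W2 - y) ** 2)))
```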
May 5, 2022 • 52min

Ethan Caballero–Scale is All You Need

Ethan is known on Twitter as the edgiest person at Mila. We discuss all the gossip around scaling large language models in what will later be known as the Edward Snowden moment of deep learning. In his free time, Ethan is a Master's student at Mila in Montreal, and has published papers on out-of-distribution generalization and robustness generalization, accepted as oral and spotlight presentations at ICML and NeurIPS. Ethan has recently been thinking about scaling laws, both as an organizer of and a speaker at the 1st Neural Scaling Laws Workshop.

Transcript: https://theinsideview.github.io/ethan
Youtube: https://youtu.be/UPlv-lFWITI
Michaël: https://twitter.com/MichaelTrazzi
Ethan: https://twitter.com/ethancaballero

Outline:
(00:00) highlights
(00:50) who is Ethan, scaling laws T-shirts
(02:30) scaling, upstream, downstream, alignment and AGI
(05:58) AI timelines, AlphaCode, Math scaling, PaLM
(07:56) Chinchilla scaling laws
(11:22) limits of scaling, Copilot, generative coding, code data
(15:50) Youtube scaling laws, contrastive type thing
(20:55) AGI race, funding, supercomputers
(24:00) scaling at Google
(25:10) gossip, private research, GPT-4
(27:40) why Ethan did not update on PaLM, hardware bottleneck
(29:56) the fastest path, the best funding model for supercomputers
(31:14) EA, OpenAI, Anthropic, publishing research, GPT-4
(33:45) a zillion language model startups from ex-Googlers
(38:07) Ethan's journey in scaling, early days
(40:08) making progress on an academic budget, scaling laws research
(41:22) all alignment is inverse scaling problems
(45:16) predicting scaling laws, useful AI alignment research
(47:16) nitpicks about Ajeya Cotra's report, compute trends
(50:45) optimism, conclusion on alignment
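
As background for the Chinchilla discussion at (07:56): Hoffmann et al. (2022) fit pretraining loss as L(N, D) = E + A/N^alpha + B/D^beta in parameters N and training tokens D, and conclude that compute-optimal training uses roughly 20 tokens per parameter. Below is a back-of-the-envelope sketch using the paper's published constants; treat the numbers as approximate fits, and note this is not code from the episode:

```python
# Chinchilla parametric loss fit, L(N, D) = E + A/N^alpha + B/D^beta,
# with the constants published in Hoffmann et al. (2022).
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for N parameters trained on D tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

def compute_optimal_tokens(n_params: float) -> float:
    """Paper's rule of thumb: train on roughly 20 tokens per parameter."""
    return 20.0 * n_params

# Example: Chinchilla itself is ~70B parameters trained on ~1.4T tokens.
n = 70e9
d = compute_optimal_tokens(n)
print(f"tokens: {d:.2e}, predicted loss: {chinchilla_loss(n, d):.3f}")
```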
Apr 13, 2022 • 52min

10. Peter Wildeford on Forecasting

Peter is the co-CEO of Rethink Priorities, a fast-growing non-profit doing research on how to improve the long-term future. In his free time, Peter makes money in prediction markets and is quickly becoming one of the top forecasters on Metaculus. We talk about the probability of London getting nuked, Rethink Priorities, and why EA should fund projects that scale. Check out the video and transcript here: https://theinsideview.github.io/peter
Mar 23, 2022 • 57min

9. Emil Wallner on Building a €25,000 Machine Learning Rig

Emil Wallner, a resident at Google Arts & Culture Lab, discusses his impressive €25,000 machine learning rig. He dives into the challenges of acquiring high-performance GPUs and shares clever hacks for navigating the market. Emil reveals essential components, from motherboards to cooling solutions, and talks about the balance between shared resources and personal hardware for optimal project control. He also touches on the evolution of machine learning practices, including transitions from TensorFlow to JAX for enhanced performance.
Dec 22, 2021 • 1h 26min

8. Sonia Joseph on NFTs, Web 3 and AI Safety

Sonia Joseph, a graduate student at Mila who specializes in applying machine learning to neuroscience, dives into the vibrant worlds of NFTs and Web3. She discusses the hurdles of Ethereum's gas fees while highlighting platforms like Polygon. Sonia shares her insights on AI's influence on identity and memory, and critiques the orthogonality thesis, illustrating the risks it poses for AI safety. The conversation veers into philosophical territory as she reflects on the intersection of technology and human aspiration, pondering the very meaning of life.
Oct 24, 2021 • 2h 10min

7. Phil Trammell on Economic Growth under Transformative AI

Phil Trammell, an Oxford PhD student and a research associate at the Global Priorities Institute, dives into the fascinating intersection of transformative AI and economic growth. He discusses how AI could drastically alter GDP and the labor market, exploring both its potential benefits and risks. Topics include the historical evolution of economic growth, the complexities of measuring technological impact, and the future of labor in an AI-driven world. Trammell's insights provide a thought-provoking look at the challenges and opportunities that AI brings to our economy.
Oct 6, 2021 • 1h 40min

6. Slava Bobrov on Brain Computer Interfaces

Slava Bobrov, a self-taught Machine Learning Engineer, shares his expertise on brain-computer interfaces (BCIs) and their application in prosthetic technology. He discusses the intuitive control of robotic limbs using neural signals and the differences between invasive and non-invasive BCIs. Slava highlights the latest innovations from companies like Muse and OpenBCI, and examines the safety concerns surrounding neurotechnology. He also explores the intriguing relationship between BCIs, sleep tracking, and lucid dreaming, emphasizing the potential for enhancing human cognition.
Sep 16, 2021 • 2h 53min

5. Charlie Snell on DALL-E and CLIP

We talk about AI-generated art with Charlie Snell, a Berkeley student who wrote extensively about AI art for ML@Berkeley's blog (https://ml.berkeley.edu/blog/). We look at multiple slides with art throughout our conversation, so I highly recommend watching the video (https://www.youtube.com/watch?v=gcwidpxeAHI). In the first part we go through Charlie's explanations of DALL-E, a model trained end-to-end by OpenAI to generate images from prompts. We then talk about CLIP + VQGAN, where CLIP is another OpenAI model that matches prompts to images, and VQGAN is a state-of-the-art GAN used extensively in the AI art scene. We then look at different pieces of art made using CLIP, including tricks for using VQGAN with CLIP, videos, and the latest CLIP-guided diffusion architecture. At the end of our chat we talk about scaling laws and how progress in art relates to other advances in ML.
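
To make the "matching prompts to images" part concrete, here is a minimal sketch that scores a few captions against a single image with the public openai/clip-vit-base-patch32 checkpoint via Hugging Face transformers. It assumes torch, transformers and Pillow are installed, artwork.png is a placeholder filename, and it shows only CLIP's scoring role, not the full VQGAN+CLIP generation loop:

```python
# Minimal CLIP scoring sketch: rank text prompts against one image.
# In VQGAN+CLIP art, this similarity score is the signal that steers
# the generator; here we only show the scoring half.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("artwork.png")  # placeholder: any local image file
prompts = ["a surreal oil painting", "a photo of a cat", "abstract neon art"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher logits mean CLIP thinks the prompt matches the image better.
probs = outputs.logits_per_image.softmax(dim=-1)
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{p:.2f}  {prompt}")
```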
Sep 5, 2021 • 3h 7min

4. Sav Sidorov on Learning, Contrarianism and Robotics

Sav Sidorov, an accomplished robotics undergraduate at UBC, discusses groundbreaking advancements in computer vision and the challenges posed by energy constraints. He shares insights on top-down learning and the importance of embracing contrarian ideas. The conversation touches on navigating ideological debates in society and the role of traditions in promoting social cohesion. Additionally, Sav reflects on the delightful complexities of robotics competitions, teamwork, and personal growth, as well as the impact of digital connections and psychedelics on creativity and relationships.
Jun 8, 2021 • 1h 44min

3. Evan Hubinger on Takeoff speeds, Risks from learned optimization & Interpretability

We talk about Evan’s background @ MIRI & OpenAI, Coconut, homogeneity in AI takeoff, reproducing SoTA & openness in multipolar scenarios, quantilizers & operationalizing strategy stealing, Risks from learned optimization & evolution, learned optimization in Machine Learning, clarifying Inner AI Alignment terminology, transparency & interpretability, 11 proposals for safe advanced AI, underappreciated problems in AI Alignment & surprising advances in AI.

Get the Snipd
podcast app

Unlock the knowledge in podcasts with the podcast player of the future.
App store bannerPlay store banner

AI-powered
podcast player

Listen to all your favourite podcasts with AI-powered features

Discover
highlights

Listen to the best highlights from the podcasts you love and dive into the full episode

Save any
moment

Hear something you like? Tap your headphones to save it with AI-generated key takeaways

Share
& Export

Send highlights to Twitter, WhatsApp or export them to Notion, Readwise & more

AI-powered
podcast player

Listen to all your favourite podcasts with AI-powered features

Discover
highlights

Listen to the best highlights from the podcasts you love and dive into the full episode