
Theo Jaffee Podcast

Latest episodes

Dec 3, 2023 • 1h 48min

#9: Dwarkesh Patel - Podcasting, AI, Talent, and Fixing Government

Dwarkesh Patel, host of the Dwarkesh Podcast, discusses podcasting, AI, talent, and fixing government. Topics include AI's impact on podcasting, the future of interviews, the addictive nature of social media, rationalist connections, factions in the rationalist community, reallocating smart talent, success and judgment, the qualities of Elon Musk, and differences in orderliness.
Nov 13, 2023 • 1h 29min

#8: Scott Aaronson - Quantum computing, AI watermarking, Superalignment, complexity, and rationalism

Scott Aaronson is the Schlumberger Chair of Computer Science and Director of the Quantum Information Center at the University of Texas at Austin. Previously, he got his bachelor’s in CS from Cornell, his PhD in complexity theory at UC Berkeley, held postdocs at Princeton and Waterloo, and taught at MIT. Currently, he’s on leave to work on OpenAI’s Superalignment team.

Scott’s blog, Shtetl-Optimized: https://www.scottaaronson.blog
Scott’s website: https://www.scottaaronson.com

PODCAST LINKS:
Video Transcript: https://www.theojaffee.com/p/8-scott-aaronson
Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb
Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677
RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss
Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj
My Twitter: https://x.com/theojaffee
My Substack: https://www.theojaffee.com

CHAPTERS:
Intro (0:00)
Background (0:59)
What Quantum Computers Can Do (16:07)
P=NP (21:57)
Complexity Theory (28:07)
David Deutsch (33:49)
AI Watermarking and CAPTCHAs (44:15)
Alignment By Default (56:41)
Cryptography in AI (1:02:12)
OpenAI Superalignment (1:10:29)
Twitter (1:20:27)
Rationalism (1:24:50)
Nov 8, 2023 • 2h 24min

#7: Nora Belrose - EleutherAI, Interpretability, Linguistics, and ELK

Nora Belrose is the Head of Interpretability at EleutherAI, and a noted AI optimist.

Nora’s Twitter: https://x.com/norabelrose
EleutherAI Website: https://www.eleuther.ai/
EleutherAI Discord: https://discord.gg/zBGx3azzUn

PODCAST LINKS:
Video Transcript: https://www.theojaffee.com/p/7-nora-belrose
Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb
Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677
RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss
Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj
My Twitter: https://x.com/theojaffee
My Substack: https://www.theojaffee.com

CHAPTERS:
Intro (0:00)
EleutherAI (0:32)
Optimism (8:02)
Linguistics (22:27)
What Should AIs Do? (32:01)
Regulation (43:44)
Future Vibes (53:56)
Anthropic Polysemanticity (1:05:05)
More Interpretability (1:19:52)
Eliciting Latent Knowledge (1:44:44)
Oct 15, 2023 • 1h 17min

#6: Razib Khan - Genetics, ancient history, rationalism, IQ

Razib Khan is a geneticist, the CXO and CSO of a biotech startup, and a writer and podcaster with interests in genetics, genomics, evolution, history, and politics.

Razib’s Twitter: https://x.com/razibkhan
Razib’s Website: https://www.razib.com
Razib’s Substack (Unsupervised Learning): https://www.razibkhan.com

PODCAST LINKS:
Video Transcript: https://www.theojaffee.com/p/5-quintin-pope
Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb
Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677
RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss
Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj
My Twitter: https://x.com/theojaffee
My Substack: https://www.theojaffee.com

CHAPTERS:
Intro (0:00)
Genrait (0:37)
Genetics and Memetics (4:31)
Domestication (13:48)
Ancient History (22:48)
TESCREALism (30:02)
Transhumanism (53:05)
IQ (1:02:26)
Oct 1, 2023 • 2h 36min

#5: Quintin Pope - AI alignment, machine learning, failure modes, and reasons for optimism

Quintin Pope is a machine learning researcher focusing on natural language modeling and AI alignment. Among alignment researchers, Quintin stands out for his optimism. He believes that AI alignment is far more tractable than it seems, and that we appear to be on a good path to making the future great. On LessWrong, he’s written one of the most popular posts of the last year, “My Objections To ‘We're All Gonna Die with Eliezer Yudkowsky’”, as well as many other highly upvoted posts on various alignment papers, and on his own theory of alignment, shard theory.

Quintin’s Twitter: https://twitter.com/QuintinPope5
Quintin’s LessWrong profile: https://www.lesswrong.com/users/quintin-pope
My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”: https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky
The Shard Theory Sequence: https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX
Quintin’s Alignment Papers Roundup: https://www.lesswrong.com/s/5omSW4wNKbEvYsyje
Evolution provides no evidence for the sharp left turn: https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn
Deep Differentiable Logic Gate Networks: https://arxiv.org/abs/2210.08277
The Hydra Effect: Emergent Self-repair in Language Model Computations: https://arxiv.org/abs/2307.15771
Deep learning generalizes because the parameter-function map is biased towards simple functions: https://arxiv.org/abs/1805.08522
Bridging RL Theory and Practice with the Effective Horizon: https://arxiv.org/abs/2304.09853

PODCAST LINKS:
Video Transcript: https://www.theojaffee.com/p/5-quintin-pope
Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb
Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677
RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss
Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj
My Twitter: https://x.com/theojaffee
My Substack: https://www.theojaffee.com

CHAPTERS:
Introduction (0:00)
What Is AGI? (1:03)
What Can AGI Do? (12:49)
Orthogonality (23:14)
Mind Space (42:50)
Quintin’s Background and Optimism (55:06)
Mesa-Optimization and Reward Hacking (1:02:48)
Deceptive Alignment (1:11:52)
Shard Theory (1:24:10)
What Is Alignment? (1:30:05)
Misalignment and Evolution (1:37:21)
Mesa-Optimization and Reward Hacking, Part 2 (1:46:56)
RL Agents (1:55:02)
Monitoring AIs (2:09:29)
Mechanistic Interpretability (2:14:00)
AI Disempowering Humanity (2:28:13)
Sep 8, 2023 • 1h 5min

#4: Rohit Krishnan - Developing Genius, Investing, AI Optimism, and the Future

Rohit Krishnan is a venture capitalist, economist, engineer, former hedge fund manager, and essayist. On Twitter @krishnanrohit, and on his Substack, Strange Loop Canon, at strangeloopcanon.com, he writes about AI, business, investing, complex systems, and more.

CHAPTERS:
Intro (0:00)
Comparing Countries (0:33)
Reading (6:50)
Developing Genius (12:36)
Investing (24:08)
Contra AI Doom (34:27)
The Future of AI (46:26)

PODCAST LINKS:
Video Transcript: https://www.theojaffee.com/p/4-rohit-krishnan
Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb
Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677
RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss
Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

SOCIALS:
My Twitter: https://twitter.com/theojaffee
My Substack: https://www.theojaffee.com
Rohit’s Twitter: https://twitter.com/krishnanrohit
Strange Loop Canon: https://www.strangeloopcanon.com
Aug 18, 2023 • 2h 21min

#3: Zvi Mowshowitz - Rationality, Writing, Public Policy, and AI

Zvi Mowshowitz is a former professional Magic: The Gathering player, a Pro Tour and Grand Prix winner, and a member of the MTG Hall of Fame. He’s also been a professional trader, a market maker, and a startup founder. He’s been involved in the rationalist movement for many years, and today he writes about AI, rationality, game design and theory, philosophy, and more on his blog, Don’t Worry About the Vase.

PODCAST LINKS:
Video Transcript: https://www.theojaffee.com/p/3-zvi-mowshowitz
Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb
Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677
RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss
Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

SOCIALS:
My Twitter: https://twitter.com/theojaffee
My Substack: https://www.theojaffee.com
Zvi’s Twitter: https://twitter.com/thezvi
Zvi’s Blog: https://thezvi.wordpress.com/
Zvi’s Substack: https://thezvi.substack.com/
Balsa Research: https://balsaresearch.com/

CHAPTERS:
Intro (0:00)
Zvi’s Background (0:42)
Rationalism (6:28)
Critiques of Rationalism (20:08)
The Pill Poll (39:26)
Balsa Research (47:58)
p(doom | AGI) (1:05:47)
Alignment (1:17:18)
Decentralization and the Cold War (1:39:42)
More on AI (1:53:53)
Dealing with AI Risks (2:07:40)
Writing (2:18:57)
Aug 12, 2023 • 1h 39min

#2: Carlos de la Guardia - AGI, Deutsch, Popper, knowledge, and progress

Carlos de la Guardia is an independent AGI researcher inspired by the work of Karl Popper, David Deutsch, and Richard Dawkins. In his research, he seeks answers to some of humanity’s biggest questions: how humans create knowledge, how AIs can one day do the same, and how we can use knowledge to figure out how to speed up our minds and even end death. Carlos is currently working on a book about AGI, which will be published on his Substack, Making Minds and Making Progress.

PODCAST LINKS:
Video Transcript: https://www.theojaffee.com/p/2-carlos-de-la-guardia
Spotify: https://open.spotify.com/show/1IJRtB8FP4Cnq8lWuuCdvW?si=eba62a72e6234efb
Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677
RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss
Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj

SOCIALS:
My Twitter: https://twitter.com/theojaffee
My Substack: https://www.theojaffee.com
Carlos’s Twitter: https://twitter.com/dela3499
Carlos’s Substack: https://carlosd.substack.com/
Carlos’s Website: https://carlosd.org/

CHAPTERS:
Intro (0:00)
Carlos’ Research (0:55)
Economics and Computation (19:46)
Bayesianism (27:54)
AI Doom and Optimism (34:21)
Will More Compute Produce AGI? (46:04)
AI Alignment and Interpretability (54:11)
Mind Uploading (1:05:44)
Carlos’ 6 Questions on AGI (1:12:47)
1. What are the limits of biological evolution? (1:13:06)
2. What makes explanatory knowledge special? (1:18:07)
3. How can Popperian epistemology improve AI? (1:19:54)
4. What are the different kinds of idea conflicts? (1:23:58)
5. Why is the brain a network of neurons? (1:25:47)
6. How do neurons make consciousness? (1:27:26)
The Optimistic AI Future (1:34:50)
Jul 29, 2023 • 2h 27min

#1: Greg Fodor - AI, knowledge acceleration, aliens, & VR

Greg Fodor is a software engineer who’s been involved in augmented and virtual reality for over a decade. He co-founded Altspace VR, a virtual events company acquired by Microsoft; worked on Mozilla Hubs, a browser-based VR community platform; built Jel, a virtual environment video game for working; and is currently working on a new extended reality hardware stealth startup. On his Twitter account, @gfodor, he tweets about AI, VR and AR, unidentified aerial phenomena, aliens, and the philosophy of knowledge.

CHAPTERS:
(00:00) - Intro
(00:46) - Superconductors
(03:51) - The Turing test
(16:21) - AI risks, alignment, and knowledge acceleration
(33:22) - What is alignment?
(46:50) - AI doom
(58:01) - e/acc
(1:16:28) - UAPs and aliens
(1:32:05) - Risks from aliens
(1:43:11) - Alien zoo hypothesis
(1:50:50) - Virtual reality
(2:05:37) - The future of VR
(2:17:38) - Greg’s Intellectual Journey
(2:26:15) - Outro
