The Inside View

Michaël Trazzi
May 4, 2021 • 1h 29min

2. Connor Leahy on GPT3, EleutherAI and AI Alignment

In the first part of the podcast we chat about how to speed up GPT-3 training, how Connor updated on recent announcements of large language models, why GPT-3 is AGI for some specific definitions of AGI [1], the obstacles to plugging planning into GPT-N, and why the brain might approximate something like backprop. We end this first chat with Solomonoff priors [2], adversarial attacks such as Pascal's Mugging [3], and whether direct work on AI Alignment is currently tractable. In the second part, we chat about his current projects at EleutherAI [4][5], multipolar scenarios, and reasons to work on technical AI Alignment research.

[1] https://youtu.be/HrV19SjKUss?t=4785
[2] https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference
[3] https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-mugging-tiny-probabilities-of-vast-utilities
[4] https://www.eleuther.ai/
[5] https://discord.gg/j65dEVp5
Apr 25, 2021 • 26min

1. Does the world really need another podcast?

In this first episode I'm the one being interviewed. Questions:
- Does the world really need another podcast?
- Why call your podcast superintelligence?
- What is the Inside view? The Outside view?
- What could be the impact of podcast conversations?
- Why would a public discussion on superintelligence be different?
- What are the main reasons we listen to podcasts at all?
- Explaining GPT-3 and how we could scale to GPT-4
- Could GPT-N write a PhD thesis?
- What would a superintelligence need on top of text prediction?
- Can we just accelerate human-level common sense to get superintelligence?
