
Ilya Sutskever

One of the leading AI scientists behind ChatGPT, reflecting on his founding vision and values and making startling predictions for a technology already shaping our world.

Top 10 podcasts with Ilya Sutskever

Ranked by the Snipd community
376 snips
May 8, 2020 • 1h 38min

#94 – Ilya Sutskever: Deep Learning

Ilya Sutskever is the co-founder of OpenAI and one of the most cited computer scientists in history, with over 165,000 citations, and, to me, one of the most brilliant and insightful minds ever in the field of deep learning. There are very few people in this world who I would rather talk to and brainstorm with about deep learning, intelligence, and life than Ilya, on and off the mic.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Ilya’s Twitter: https://twitter.com/ilyasut
Ilya’s Website: https://www.cs.toronto.edu/~ilya/

This conversation is part of the Artificial Intelligence podcast. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:23 – AlexNet paper and the ImageNet moment
08:33 – Cost functions
13:39 – Recurrent neural networks
16:19 – Key ideas that led to success of deep learning
19:57 – What’s harder to solve: language or vision?
29:35 – We’re massively underestimating deep learning
36:04 – Deep double descent
41:20 – Backpropagation
42:42 – Can neural networks be made to reason?
50:35 – Long-term memory
56:37 – Language models
1:00:35 – GPT-2
1:07:14 – Active learning
1:08:52 – Staged release of AI systems
1:13:41 – How to build AGI?
1:25:00 – Question to AGI
1:32:07 – Meaning of life
160 snips
Mar 27, 2023 • 48min

Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment

I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and co-founder of OpenAI, Ilya Sutskever, about:
* time to AGI
* leaks and spies
* what's after generative models
* post-AGI futures
* working with Microsoft and competing with Google
* the difficulty of aligning superhuman AI

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps:
(00:00) - Time to AGI
(05:57) - What’s after generative models?
(10:57) - Data, models, and research
(15:27) - Alignment
(20:53) - Post AGI Future
(26:56) - New ideas are overrated
(36:22) - Is progress inevitable?
(41:27) - Future Breakthroughs

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
87 snips
Nov 2, 2023 • 42min

What is Digital Life? with OpenAI Co-Founder & Chief Scientist Ilya Sutskever

Ilya Sutskever, Co-Founder & Chief Scientist at OpenAI, discusses the origins of OpenAI, emergent behaviors of GPT models, token scarcity, the next frontiers of AI research, AI safety, and the premise of Superalignment. They also explore the definition of digital life and the challenges of creating pro-human AI.
57 snips
Oct 26, 2022 • 1h 16min

What, if anything, do AIs understand? (with ChatGPT Co-Creator Ilya Sutskever)

Read the full transcript here.

Can machines actually be intelligent? What sorts of tasks are narrower or broader than we usually believe? GPT-3 was trained to do a "single" task: predicting the next word in a body of text; so why does it seem to understand so many things? What's the connection between prediction and comprehension? What breakthroughs happened in the last few years that made GPT-3 possible? Will academia be able to stay on the cutting edge of AI research? And if not, then what will its new role be? How can an AI memorize actual training data but also generalize well? Are there any conceptual reasons why we couldn't make AIs increasingly powerful by just scaling up data and computing power indefinitely? What are the broad categories of dangers posed by AIs?

Ilya Sutskever is Co-founder and Chief Scientist of OpenAI, which aims to build artificial general intelligence that benefits all of humanity. He leads research at OpenAI and is one of the architects behind the GPT models. Prior to OpenAI, Ilya was a co-inventor of AlexNet and Sequence to Sequence Learning. He earned his Ph.D. in Computer Science from the University of Toronto. Follow him on Twitter at @ilyasut.

Staff:
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
Janaisa Baril — Transcriptionist
Miles Kestran — Marketing

Music:
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates:
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
44 snips
Mar 15, 2023 • 43min

Ilya Sutskever: The Mastermind Behind GPT-4 and the Future of AI

In this podcast episode, Ilya Sutskever, the co-founder and chief scientist at OpenAI, discusses his vision for the future of artificial intelligence (AI), including large language models like GPT-4.

Sutskever starts by explaining the importance of AI research and how OpenAI is working to advance the field. He shares his views on the ethical considerations of AI development and the potential impact of AI on society. The conversation then moves on to large language models and their capabilities. Sutskever talks about the challenges of developing GPT-4 and the limitations of current models. He discusses the potential for large language models to generate text that is indistinguishable from human writing and how this technology could be used in the future. Sutskever also shares his views on AI-aided democracy and how AI could help solve global problems such as climate change and poverty. He emphasises the importance of building AI systems that are transparent, ethical, and aligned with human values.

Throughout the conversation, Sutskever provides insights into the current state of AI research, the challenges facing the field, and his vision for the future of AI. This podcast episode is a must-listen for anyone interested in the intersection of AI, language, and society.

Timestamps:
(00:04) Introduction of Craig Smith and Ilya Sutskever.
(01:00) Sutskever's AI and consciousness interests.
(02:30) Sutskever's start in machine learning with Hinton.
(03:45) Realization about training large neural networks.
(06:33) Convolutional neural network breakthroughs and ImageNet.
(08:36) Predicting the next thing for unsupervised learning.
(10:24) Development of GPT-3 and scaling in deep learning.
(11:42) Specific scaling in deep learning and potential discovery.
(13:01) Small changes can have big impact.
(13:46) Limits of large language models and lack of understanding.
(14:32) Difficulty in discussing limits of language models.
(15:13) Statistical regularities lead to better understanding of world.
(16:33) Limitations of language models and hope for reinforcement learning.
(17:52) Teaching neural nets through interaction with humans.
(21:44) Multimodal understanding not necessary for language models.
(25:28) Autoregressive transformers and high-dimensional distributions.
(26:02) Autoregressive transformers work well on images.
(27:09) Pixels represented like a string of text.
(29:40) Large generative models learn compressed representations of real-world processes.
(31:31) Human teachers needed to guide reinforcement learning process.
(35:10) Opportunity to teach AI models more skills with less data.
(39:57) Desirable to have democratic process for providing information.
(41:15) Impossible to understand everything in complicated situations.

Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
36 snips
Nov 23, 2023 • 13min

The exciting, perilous journey toward AGI | Ilya Sutskever

Ilya Sutskever, OpenAI's co-founder and chief scientist, discusses the transformative potential of artificial general intelligence (AGI) surpassing human intelligence and revolutionizing healthcare and other fields. Collaboration and safety in developing AGI are emphasized, along with the idea that labs should join forces with whichever effort is ahead rather than compete with it.
30 snips
May 21, 2024 • 1h 14min

#98: Google I/O, GPT-4o, and Ilya Sutskever’s Surprise Departure from OpenAI

The hosts discuss the Google I/O highlights, the GPT-4o launch, and the departure of chief scientist Ilya Sutskever from OpenAI. They also touch on Apple's AI plans, HubSpot transparency, and AI impact assessments.
27 snips
May 27, 2024 • 3h 38min

ICLR 2024 — Best Papers & Talks (ImageGen, Vision, Transformers, State Space Models) ft. Durk Kingma, Christian Szegedy, Ilya Sutskever

Christian Szegedy, Ilya Sutskever, and Durk Kingma discuss the most notable topics from ICLR 2024, including expansion of deep learning models, latent variable models, generative models, unsupervised learning, adversarial machine learning, attention maps in vision transformers, efficient model training strategies, and optimization in large GPU clusters.
26 snips
Sep 10, 2024 • 1h 8min

#114: ProblemsGPT, The ROI of Generative AI, Andrej Karpathy on the Road to Automated Intelligence & Ilya Sutskever Raises $1B

Andrej Karpathy, a leading AI researcher, shares his journey toward automated intelligence, providing fascinating insights into the evolving landscape of AI. Ilya Sutskever discusses his involvement in a significant $1B funding round for Safe Superintelligence, highlighting its potential impact on the field. The conversation also covers the launch of ProblemsGPT, a new tool for business problem-solving, and explores the transformative benefits of generative AI across various industries, kicking off discussions about workforce evolution and ethical considerations in AI development.
23 snips
Jun 22, 2024 • 51min

(L'HEBDO) Foreign Interference in the European Digital Space

Ilya Sutskever, co-founder of OpenAI and AI pioneer, discusses the development of a "safe superintelligence". He addresses the issue of foreign interference, particularly during elections, and how AI affects democracy. Influencers could sway votes, and Sutskever highlights the dangers of AI-fueled disinformation. Cybersecurity challenges are also covered, shedding light on cybercriminals' use of AI and the importance of regulating these technologies.