
Clearer Thinking with Spencer Greenberg

Latest episodes

Oct 26, 2022 • 1h 16min

What, if anything, do AIs understand? (with ChatGPT Co-Creator Ilya Sutskever)

Read the full transcript here. Can machines actually be intelligent? What sorts of tasks are narrower or broader than we usually believe? GPT-3 was trained to do a "single" task: predicting the next word in a body of text; so why does it seem to understand so many things? What's the connection between prediction and comprehension? What breakthroughs happened in the last few years that made GPT-3 possible? Will academia be able to stay on the cutting edge of AI research? And if not, then what will its new role be? How can an AI memorize actual training data but also generalize well? Are there any conceptual reasons why we couldn't make AIs increasingly powerful by just scaling up data and computing power indefinitely? What are the broad categories of dangers posed by AIs?

Ilya Sutskever is Co-founder and Chief Scientist of OpenAI, which aims to build artificial general intelligence that benefits all of humanity. He leads research at OpenAI and is one of the architects behind the GPT models. Prior to OpenAI, Ilya was co-inventor of AlexNet and Sequence to Sequence Learning. He earned his Ph.D. in Computer Science from the University of Toronto. Follow him on Twitter at @ilyasut.

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
Janaisa Baril — Transcriptionist
Miles Kestran — Marketing

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
Oct 21, 2022 • 1h 32min

Forecasting the things that matter (with Peter Wildeford)

Read the full transcript here. How can we change the way we think about expertise (or the trustworthiness of any information source) using forecasting? How do prediction markets work? How can we use prediction markets in our everyday lives? Are prediction markets more trustworthy than large or respectable news outlets? How long does it take to sharpen one's prediction skills? In (e.g.) presidential elections, we know that the winner will be one person from a very small list of people; but how can we reasonably make predictions in cases where the outcomes aren't obviously multiple-choice (e.g., predicting when artificial general intelligence will be created)? How can we move from the world we have now to a world in which people think more quantitatively and make much better predictions? What scoring rules should we use to keep track of our predictions and update accordingly?

Peter Wildeford is the co-CEO of Rethink Priorities, where he aims to scalably employ a large number of well-qualified researchers to work on the world's most important problems. Prior to running Rethink Priorities, he was a data scientist in industry for five years at DataRobot, Avant, Clearcover, and other companies. He is also recognized as a Top 50 Forecaster on Metaculus (international forecasting competition) and has a Triple Master Rank on Kaggle (international data science competition) with top 1% performance in five different competitions. Follow him on Twitter at @peterwildeford.

Further reading:
ClearerThinking.org's "Calibrate Your Judgment" practice program
Metaculus (forecasting platform)
Manifold Markets
Polymarket
"Calibration Scoring Rules for Practical Prediction Training", a paper by Spencer Greenberg

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
Janaisa Baril — Transcriptionist

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
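For readers curious about the scoring rules mentioned above, here is a minimal sketch of two standard proper scoring rules (the Brier score and the logarithmic score). The code and example numbers are illustrative assumptions, not material from the episode or the paper:

```python
# Illustrative sketch: two common "proper" scoring rules for probability forecasts.
# A forecast is a probability p assigned to an event; outcome is 1 if it happened, else 0.
import math

def brier_score(p: float, outcome: int) -> float:
    """Squared error between forecast and outcome; lower is better (0 = perfect)."""
    return (p - outcome) ** 2

def log_score(p: float, outcome: int) -> float:
    """Negative log-likelihood of the outcome; lower is better, and confident misses are punished heavily."""
    return -math.log(p if outcome == 1 else 1 - p)

# Hypothetical example: you forecast 80% that an event happens, and it does.
print(brier_score(0.8, 1))  # 0.04
print(log_score(0.8, 1))    # ~0.223
```

Because both rules are "proper," your expected score is optimized by reporting your honest probability, which is what makes them useful for calibration practice.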
Oct 12, 2022 • 1h 23min

Is the universe a computer? (with Joscha Bach)

Read the full transcript here. What is intelligence? What exactly does an IQ test measure? What are the similarities and differences between the structure of GPT-3 and the structure of the human brain (so far as we understand it)? Is suffering — as the Buddhists might say — just a consequence of the stories we tell about ourselves and the world? What's left (if anything) of the human mind if we strip away the "animal" parts of it? We've used our understanding of the human brain to inform the construction of AI models, but have AI models yielded new insights about the human brain? Is the universe a computer? Where does AI go from here?

Joscha Bach was born in Eastern Germany, and he studied computer science and philosophy at Humboldt University in Berlin and computer science at Waikato University in New Zealand. He did his PhD at the Institute for Cognitive Science in Osnabrück by building a cognitive architecture called MicroPsi, which explored the interaction of motivation, emotion, and cognition. Joscha researched and lectured about the Future of AI at the MIT Media Lab and Harvard, and worked as VP for Research at a startup in San Francisco before joining Intel Labs as a principal researcher. Email him at joscha.bach@gmail.com, follow him on Twitter at @plinz, or subscribe to his YouTube channel.

Further reading:
The 7 Realms of Truth

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
Janaisa Baril — Transcriptionist

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
Oct 5, 2022 • 1h 27min

Inventions, stories, and ideas that don't matter (with Pablos Holman)

Read the full transcript here. How does 3D-printed food work? How do hackers and inventors think? What are some ideas that don't matter? Why are humans so driven by stories? What are the current sentiments around nuclear energy? What is an "information DMZ"? Is "cryptocurrency regulation" a contradiction in terms? What are "deep" and "shallow" technologies? How could we handle intellectual property rights more fairly?

Pablos is a hacker and inventor who runs Deep Future, a venture capital firm backing mad scientists, rogue inventors, crazy hackers, and maverick entrepreneurs who are implementing science fiction, solving big problems, and helping our species become better ancestors. Pablos is a top public speaker on technology whose TED Talks have over 30 million views. With his Deep Future Podcast, Pablos shares his conversations with people who understand the biggest problems in the world and the technologies that could help us solve them. Follow him on Twitter at @pablos, email him at pablos@deepfuture.tech, or find out more about him at deepfuture.tech.

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
Janaisa Baril — Transcriptionist

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
Sep 28, 2022 • 1h 5min

Humble-bragging, counter-signalling, and impression management (with Övül Sezer)

Read the full transcript here. What should we do (or not do) to make a good first impression on others? Is "humble-bragging" better or worse than straightforward bragging? Or is completely hiding our successes an even better strategy than humble-bragging or straightforward bragging? When do our attempts to signal something about ourselves actually end up signalling something else that we don't intend? What are some long-term strategies for gaining others' respect?

Övül Sezer is a behavioral scientist, stand-up comedian, and Visiting Assistant Professor at Columbia University, Columbia Business School. She received her A.B. in Applied Mathematics and her Ph.D. in Organizational Behavior from Harvard University. Follow her on Twitter at @ovulsezer or learn more about her at ovulsezer.com.

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
Janaisa Baril — Transcriptionist

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
Sep 21, 2022 • 1h 6min

Ambition and expected value at extremes (with Habiba Islam)

Read the full transcript here. Are ambition and altruism compatible? How ambitious should we be if we want to do as much good in the world as possible? How should we handle expected values when the probabilities become very small and/or the values of the outcomes become very large? What's a reasonable probability of success for most entrepreneurs to aim for? Are there non-consequentialist justifications for longtermism?

Habiba Islam is an advisor at 80,000 Hours, where she talks to people one-on-one, helping them to pursue high-impact careers. She previously served as the Senior Administrator for the Future of Humanity Institute and the Global Priorities Institute at Oxford. Before that, she qualified as a barrister and worked in management consulting at PwC, specialising in operations for public and third sector clients. Follow her on Twitter at @FreshMangoLassi or learn more about her work at 80000hours.org.

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
Janaisa Baril — Transcriptionist

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
Sep 14, 2022 • 1h 21min

Career science, open science, and inspired science (with Alexa Tullett)

Read the full transcript here. How much should we actually trust science? Are registered reports more trustworthy than meta-analyses? How does "inspired" science differ from "open" science? Open science practices may make research more defensible, but do they make it more likely to find truth? Do thresholds (like p < 0.05) represent a kind of black-and-white thinking, since they often come to represent a binary like "yes, this effect is significant" or "no, this effect is not significant"? What is "importance laundering"? Is generalizability more important than replicability? Should retribution be part of our justice system? Are we asking too much of the US Supreme Court? What would an ideal college admissions process look like?

Alexa Tullett is a social psychologist who works at the University of Alabama. Her lab examines scientific, religious, and political beliefs, and the factors that facilitate or impede belief change. Some of her work takes a meta-scientific approach, using psychological methods to study the beliefs and practices of psychological scientists. Learn more about her at alexatullett.com, or send her an email at atullett@ua.edu.

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
Janaisa Baril — Transcriptionist

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
Sep 7, 2022 • 1h 6min

Estimating the long-term impact of our actions today (with Will MacAskill)

Read the full transcript here. What is longtermism? Is the long-term future of humanity (or life more generally) the most important thing, or just one among many important things? How should we estimate the chance that some particular thing will happen given that our brains are so computationally limited? What is "the optimizer's curse"? How top-down should EA be? How should an individual reason about expected values in cases where success would be immensely valuable but the likelihood of that particular individual succeeding is incredibly low? (For example, if I have a one in a million chance of stopping World War III, then should I devote my life to pursuing that plan?) If we want to know, say, whether protests are effective or not, we merely need to gather and analyze existing data; but how can we estimate whether interventions implemented in the present will be successful in the very far future?

William MacAskill is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. A Forbes 30 Under 30 social entrepreneur, he also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and Y Combinator–backed 80,000 Hours, which together have moved over $200 million to effective charities. He's the author of Doing Good Better, Moral Uncertainty, and What We Owe The Future.

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
Janaisa Baril — Transcriptionist

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
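To make the one-in-a-million question above concrete, here is a minimal sketch of the naive expected-value arithmetic it rests on; every number below is a hypothetical placeholder, not a figure from the episode:

```python
# Illustrative only: naive expected-value calculation for a long-shot, high-stakes plan.
p_success = 1e-6          # hypothetical: one-in-a-million chance the plan works
value_if_success = 1e10   # hypothetical: value of the outcome, in arbitrary units
cost_of_trying = 1e3      # hypothetical: what you give up by pursuing the plan

expected_value = p_success * value_if_success - cost_of_trying
print(expected_value)  # 9000.0 -> positive in expectation, even though success is overwhelmingly unlikely
```

The tension the episode explores is that this arithmetic can favor long shots even when almost every individual attempt fails, and that naive estimates of such tiny probabilities are exactly where the optimizer's curse bites hardest.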
Aug 31, 2022 • 1h 17min

The differences between analytic and continental philosophy (with Alexander Prescott-Couch)

Read the full transcript here. What is the genetic fallacy? How do the analytic and continental philosophical traditions differ? What is the role and value of intuition in analytic philosophy? Is continental philosophy too poetic for its own good?

Alexander Prescott-Couch is an Associate Professor of Philosophy at the University of Oxford. He is currently writing a book on genealogy that is under contract with Oxford University Press. His academic work has appeared in journals such as Noûs and Journal of Political Philosophy, and he contributes to a regular interview column in the ZeitMagazin. Email him at alexander.prescott-couch@philosophy.ox.ac.uk or follow him on Twitter at @prescottcouch.

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
Janaisa Baril — Transcriptionist

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
Aug 24, 2022 • 1h 18min

Voting method reform in the US (with Aaron Hamlin)

Read the full transcript here. Does the US have one of the worst implementations of democracy in the world? Why do people sometimes seem to vote against their own interests? Is it rational for them to do so? How robust are various voting systems to strategic voting? What sorts of changes would we notice in the US if we suddenly switched to other voting systems? How hard would it be to migrate from our current voting systems to something more robust and fair, and what would be required to make that happen? Are centrist candidates always boring?

Aaron Hamlin is the executive director and co-founder of The Center for Election Science. He's been featured as an electoral systems expert on MSNBC.com, NPR, Free Speech TV, Inside Philanthropy, 80K Hours, and Popular Mechanics; and he has given talks across the country on voting methods. He's written for Deadspin, USA Today Magazine, Independent Voter Network, and others. Additionally, Aaron is a licensed attorney with two additional graduate degrees in the social sciences. You can learn more about The Center for Election Science at electionscience.org and can contact Aaron at aaron@electionscience.org.

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
Janaisa Baril — Transcriptionist

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift

