
Clearer Thinking with Spencer Greenberg

Latest episodes

Jul 26, 2023 • 1h 3min

AI creativity and love (with Joel Lehman)

Read the full transcript here. Where does innovation come from? How common is it for "lone wolf" scientists to make large leaps in innovation by themselves? How can we imbue AIs with creativity? Or, conversely, how can we apply advances in AI creativity to our own personal creative processes? How do creative strategies that work well for individuals differ from creative strategies that work well for groups? To what extent are models like DALL-E and ChatGPT "creative"? Can machines love? Or can they only ever pretend to love? We've worried a fair bit about AI misalignment; but what should we do about the fact that so many humans are misaligned with humanity's own interests? What might it mean to be "reverent" towards science?

Joel Lehman is a machine learning researcher interested in algorithmic creativity, AI safety, artificial life, and intersections of AI with psychology and philosophy. Most recently he was a research scientist at OpenAI co-leading the Open-Endedness team (studying algorithms that can innovate endlessly). Previously he was a founding member of Uber AI Labs, first employee of Geometric Intelligence (acquired by Uber), and a tenure-track professor at the IT University of Copenhagen. He co-wrote with Kenneth Stanley a popular science book called Why Greatness Cannot Be Planned on what AI search algorithms imply for individual and societal accomplishment. Follow him on Twitter at @joelbot3000 or email him at lehman.154@gmail.com.

Further reading: "Machine Love" by Joel Lehman

Staff
Spencer Greenberg — Host / Director
Josh Castle — Producer
Ryan Kessler — Audio Engineer
Uri Bram — Factotum
WeAmplify — Transcriptionists
Miles Kestran — Marketing

Music
Broke for Free
Josh Woodward
Lee Rosevere
Quiet Music for Tiny Robots
wowamusic
zapsplat.com

Affiliates
Clearer Thinking
GuidedTrack
Mind Ease
Positly
UpLift
Jul 19, 2023 • 1h 17min

Glimpses of enlightenment through nondual meditation (with Michael Taft and Jeremy Stevenson)

Read the full transcript here. How does nondual meditation differ from other forms of meditation? Is nonduality the sort of thing a person can just "get" immediately? What value is provided by the more effortful, less "sudden" forms of meditation? Is there such a thing as full or complete enlightenment? And what would such a state entail? To what extent do nondual meditation teachers agree about what nonduality is? Are glimpses of enlightenment available to everyone? How long does it usually take a person to stabilize their ability to return to a nondual way of seeing the world? What are some common ways people get "stuck" while learning nondual meditation? How important are meditation retreats? Though the paths themselves are obviously quite distinct from one another, do all forms of meditation ultimately share a common goal? How are all of these things related to spirituality or religion?

Michael Taft is a teacher of nondual meditation and host of the Deconstructing Yourself podcast and website. He is the author of The Mindful Geek, and co-founder of The Alembic, a Berkeley-based center for meditation, movement, citizen neuroscience, and visionary culture. Having lived all over the world and practiced deeply in several traditions, Michael currently makes his home in California. Email him at michaeltaft@gmail.com, or learn more about him at his website, deconstructingyourself.com.

Jeremy Stevenson hails from Adelaide, Australia, and has a PhD in clinical psychology with a dissertation focused on the effects of self-compassion on social anxiety. During his PhD he became intensely interested in meditation, sitting several shorter retreats which eventually culminated in sitting longer retreats, including a 3-month retreat in Nepal. He is now working as a clinical psychologist as well as doing research work for Spark Wave. His ongoing meditation interest is the perplexing skill of nondual mindfulness.
Email him at jeremy.david.stevenson@gmail.com, or listen to his previous episode on this podcast here.
Jul 12, 2023 • 1h 3min

Crumbling institutions, culture wars, and the dismissal economy (with Ashley Hodgson)

Read the full transcript here. What is the New Enlightenment? What might it mean to improve our epistemics with regard to institutions? How should we fix imbalanced salience in contexts where misinformation is a problem (like news media)? How have the economics of institutions deteriorated? How can we continually reinvigorate systems so that they remain ungameable and resistant to runaway feedback loops? In the context of government in particular, how can we move away from "one dollar, one vote" and back towards "one person, one vote"? At what levels or layers should institutional interventions be applied? What can we do to increase trust across social differences and reduce contempt among groups? Under what conditions is it rational to feel contempt for an out-group? How can we make conflict and "dunking" less appealing, and make openmindedness and careful consideration more appealing? What is the "dismissal" economy? How can we deal with information overload? How might the adversarial economic model be used to improve academia?

Ashley Hodgson is an Associate Professor of Economics and a YouTuber. She teaches behavioral economics, digital industries, health care economics, and blockchain economics. Her YouTube channel, The New Enlightenment, explores topics related to economics, governance, and epistemics — that is, the determination of truth and validity — in a world of social media and increasing power concentration. She also has another YouTube channel with her economics lectures.
Jul 5, 2023 • 1h 19min

Virtual reality, simulation theory, consciousness, and identity (with David Chalmers)

Read the full transcript here. What does philosophy have to say about virtual reality (VR)? Under what conditions is "normal" reality preferable to VR? To what extent are VR experiences "real"? How likely is it that we're living in a simulation? What implications would the discovery that we're living in a simulation have for our beliefs about reality? How common is Bayesian thinking among philosophers? How should we think about identity over time if selves can be split or duplicated? What might it look like for our conception of identity to undergo a "fall from Eden"? What do people mean when they say that consciousness is an illusion? Finding a grand unified theory of physics seems at least in principle the sort of thing that science can do, even if we haven't done it yet; but can science even in principle solve the hard problem of consciousness? Might consciousness just be a fundamental law of the universe, an axiom which we must accept but for which there might be no explanation? Is consciousness needed in order to attain certain levels of biological evolution? How conscious (or not) are our current AI models? Statistically speaking, what are the most prevalent views held by philosophers?

David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is the author of The Conscious Mind (1996) and Reality+ (2022). He is known for formulating the "hard problem" of consciousness, which inspired Tom Stoppard's play The Hard Problem, and for the idea of the "extended mind," which says that the tools we use can become parts of our minds. Learn more about him at consc.net.
Jun 28, 2023 • 1h 20min

Deep canvassing, street epistemology, and other tools of persuasion (with David McRaney)

Read the full transcript here. What is persuasion, and what is it not? How does persuasion differ from coercion? What is the Elaboration Likelihood Model (ELM) of persuasion? How are the concepts of assimilation and accommodation related to persuasion? Motivated reasoning is usually seen as a cognitive bias or error; but what if all reasoning is motivated? Are we motivated more by physical death or social death? How much evidence would Flat-Earthers need in order to be convinced that Earth is round? What are "deep" canvassing and "street" epistemology? In what contexts are they most effective? Under what conditions is persuasion morally acceptable?

David McRaney is a science journalist fascinated with brains, minds, and culture. He created the podcast You Are Not So Smart based on his 2009 internationally bestselling book of the same name and its followup, You Are Now Less Dumb. Before that, he cut his teeth as a newspaper reporter covering Hurricane Katrina on the Gulf Coast and in the Pine Belt region of the Deep South. Later, he covered things like who tests rockets for NASA, what it is like to run a halfway home for homeless people who are HIV-positive, and how a family sent their kids to college by making and selling knives. Since then, he has been an editor, photographer, voiceover artist, television host, journalism teacher, lecturer, and tornado survivor. Most recently, after finishing his latest book, How Minds Change, he wrote, produced, and recorded a six-hour audio documentary exploring the history of the idea and the word: genius. Learn more about him at davidmcraney.com, or follow him on Twitter at @davidmcraney.
Jun 21, 2023 • 1h 25min

Will AI destroy civilization in the near future? (with Connor Leahy)

Read the full transcript here. Does AI pose a near-term existential risk? Why might existential risks from AI manifest sooner rather than later? Can't we just turn off any AI that gets out of control? Exactly how much do we understand about what's going on inside neural networks? What is AutoGPT? How feasible is it to build an AI system that's exactly as intelligent as a human but no smarter? What is the "CoEm" AI safety proposal? What steps can the average person take to help mitigate risks from AI?

Connor Leahy is CEO and co-founder of Conjecture, an AI alignment company focused on making AI systems boundable and corrigible. Connor founded and led EleutherAI, the largest online community dedicated to LLMs, which acted as a gateway for people interested in ML to upskill and learn about alignment. With capabilities increasing at breakneck speed, and our ability to control AI systems lagging far behind, Connor moved on from the volunteer, open-source Eleuther model to a full-time, closed-source model working to solve alignment via Conjecture.
Jun 14, 2023 • 1h 1min

Is AI development moving too fast or not fast enough? (with Reid Hoffman)

Read the full transcript here. Many people who work on AI safety advocate for slowing the rate of development; but might there be any advantages in speeding up AI development? Which fields are likely to be impacted the most (or the least) by AI? As AIs begin to displace workers, how can workers make themselves more valuable? How likely is it that AI assistants will become better at defending against users who are actively trying to circumvent assistants' guardrails? What effects would the open-sourcing of AI code, models, or training data likely have? How do actual or potential AI intelligence levels affect AI safety calculus? Are there any good solutions to the problem that only ethically-minded people are likely to apply caution and restraint in AI development? What will a world with human-level AGI look like?

An accomplished entrepreneur, executive, and investor, Reid Hoffman has played an integral role in building many of today's leading consumer technology businesses, including as the co-founder of LinkedIn. He is the host of the podcasts Masters of Scale and Possible. He is the co-author of five best-selling books: The Startup of You, The Alliance, Blitzscaling, Masters of Scale, and Impromptu.

Note from Reid: Possible [the podcast] is back this summer with a three-part miniseries called "AI and The Personal," which launches on June 21. Featured guests use AI, hardware, software, and their own creativity to better people's daily lives. Subscribe here to get the series: https://link.chtbl.com/thepossiblepodcast
Jun 7, 2023 • 1h 25min

Where philosophy meets the real world (with Peter Singer)

Read the full transcript here. How have animal rights and the animal rights movement changed in the last few decades? How has the scale of animal product consumption grown relative to human population growth? On what principles ought animal ethics to be grounded? What features of human psychology enable humans to empathize with and dislike animal suffering and yet also eat animal products regularly? How does the agribusiness industry convince people to make choices that go against their own values? What are some simple changes people can make to their diets if they're not ready yet to go completely vegetarian or vegan but still want to be less responsible for animal suffering? What attitudes should vegetarians and vegans hold towards meat-eaters? When, if ever, is it possible to have done "enough", morally speaking? What are the things that matter intrinsically to humans and other sentient beings? What is the most complex organism that is apparently not conscious? Will we ever have the technology to scan someone's brain and measure how much pleasure or suffering they're experiencing? How uncertain should we be about moral uncertainty? What should we eat if it's eventually discovered that plants can suffer?

Peter Singer is a philosopher and the Ira W. DeCamp Professor of Bioethics in the University Center for Human Values at Princeton University. His work focuses on the ethics of human treatment of animals; he is often credited with starting the modern animal rights movement; and his writings have significantly influenced the development of the Effective Altruism movement.
In 1971, Peter co-founded the Australian Federation of Animal Societies, now called Animals Australia, the country's largest and most effective animal organization; and in 2013, he founded The Life You Can Save, an organization named after his 2009 book, which aims to spread his ideas about why we should be doing much more to improve the lives of people living in extreme poverty and how we can best do this. In 2021, he received the Berggruen Prize for Philosophy and Culture for his "widely influential and intellectually rigorous work in reinvigorating utilitarianism as part of academic philosophy and as a force for change in the world". He has written, co-authored, edited, or co-edited more than 50 books, including Animal Liberation, The Life You Can Save, Practical Ethics, The Expanding Circle, Rethinking Life and Death, One World, The Ethics of What We Eat (with Jim Mason), and The Point of View of the Universe (with Katarzyna de Lazari-Radek); and his writings have been translated into more than 25 languages. Find out more about him at his website, petersinger.info, or follow him on Facebook, Twitter, or Instagram.
May 31, 2023 • 1h 25min

Large language models, deep peace, and the meaning crisis (with Jim Rutt)

Read the full transcript here. What are large language models (LLMs) actually doing when they churn out text? Are they sentient? Is scale the only difference among the various GPT models? Google has seemingly been the clear frontrunner in the AI space for many years; so how did they fail to win the race to LLMs? And why are other competing companies having such a hard time catching their LLM tech up to OpenAI's? What are the implications of open-sourcing LLM code, models, and corpora? How concerned should we be about bad actors using open source LLM tools? What are some possible strategies for combating the coming onslaught of AI-generated spam and misinformation? What are the main categories of risks associated with AIs? What is "deep" peace? What is "the meaning crisis"?

Jim Rutt is the host of the Jim Rutt Show podcast, past president and co-founder of the MIT Free Speech Alliance, executive producer of the film "An Initiation to Game~B", and the creator of Network Wars, the popular mobile game. Previously he has been chairman of the Santa Fe Institute, CEO of Network Solutions, CTO of Thomson-Reuters, and chairman of the computer chip design software company Analog Design Automation, among various business and not-for-profit roles. He is working on a book about Game B and having a great time exploring the profits and perils of the Large Language Models.
May 24, 2023 • 1h 6min

Censorship, cancel culture, and truth-seeking (with Iona Italia)

When is a certain speech act an opinion versus a call to action? Does that distinction matter for censorship purposes? Why does it seem that human behavior tends towards censorship rather than towards freedom of expression? Is feeling emotionally or politically harmed a valid reason for censoring certain speech acts? Will it always be the case that, given enough time, truth will win out over ignorance, bullshit, misinformation, and lies? What are the necessary and sufficient conditions for creating a society in which truth wins at the end of the day? Why are citizens so often attracted to populist and/or fascist ideologies and political parties? What value does religion provide to a society?

Iona Italia is the editor-in-chief of Areo Magazine and the host of its Two for Tea podcast. Iona is the author of two books: Anxious Employment (a study of eighteenth-century essayists) and Our Tango World (sociological and philosophical musings on dance and life). She holds a PhD in English Literature from Cambridge University and publishes weekly creative non-fiction pieces on her Substack, The Second Swim. Her background includes a decade in academe and a 12-year career as a tango dancer and teacher. Iona lives in London with four old friends. She loves dancing, running, choral singing, chess, dogs, and sci-fi.
