The Valmy

Peter Hartree
Feb 25, 2021 • 47min

Butler on Machines

Podcast: Talking Politics: HISTORY OF IDEAS
Episode: Butler on Machines
Release date: 2021-02-23

Samuel Butler's Erewhon (1872) is a strange and unsettling book about a world turned upside down. Usually classified as utopian or dystopian fiction, it also contains an eerie prophecy about the coming of intelligent machines. David explores the origins of Butler's ideas and asks what they have to teach us about the oddity of how we choose to organise our societies, both then and now.

Free version of the text
Recommended version to buy

Going Deeper:
- Samuel Butler, The Way of All Flesh (1903)
- Virginia Woolf, 'Mr. Bennett and Mrs. Brown' (1924)
- George Dyson, Darwin Among the Machines (1997)
- (Video) James Paradis, 'Naturalism and Utopia: Samuel Butler's Erewhon'

See acast.com/privacy for privacy and opt-out information.
Oct 6, 2020 • 1h 35min

#16 – SJ Beard on Parfit, Climate Change, and Existential Risk

Podcast: Hear This Idea
Episode: #16 – SJ Beard on Parfit, Climate Change, and Existential Risk
Release date: 2020-09-30

Dr S. J. Beard is a research associate at the Centre for the Study of Existential Risk at the University of Cambridge, and an AHRC/BBC New Generation Thinker. With a background in philosophy, he works on ethical problems relating to the long-term future of humanity, as well as evaluating extreme technological risks.

In this episode we discuss:
[2:00] Existential risks defined in brief
[4:45] SJ's background
[12:30] What made philosopher Derek Parfit so influential
[17:30] What is the repugnant conclusion?
[22:12] What is the non-identity problem?
[28:40] Meeting Parfit
[34:20] Why SJ chose a career in existential risk research
[36:43] What existential risk research looks like
[45:58] How can we estimate the probability of catastrophes with no strict precedents?
[56:52] Under what circumstances could climate change cause a collapse of global civilization?
[1:07:52] Why SJ ran as an MP candidate for the Liberal Democrats
[1:17:25] Is academia broken? How can we fix it?
[1:23:23] Why SJ changed his mind about whether COVID is a potential global catastrophe

You can read much more on this episode's accompanying write-up: hearthisidea.com/episodes/Simon. If you have any feedback or suggestions for future guests, please get in touch through our website. Please also consider leaving a review on Apple Podcasts or wherever you're listening to this. If you want to support the show more directly and help us keep hosting these episodes online, consider leaving a tip at https://www.tips.pinecast.com/jar/hear-this-idea. Thanks for listening!
Oct 6, 2020 • 1h 46min

Kelly Wanser on Climate Change as a Possible Existential Threat

Podcast: Future of Life Institute Podcast
Episode: Kelly Wanser on Climate Change as a Possible Existential Threat
Release date: 2020-09-30

Kelly Wanser from SilverLining joins us to discuss techniques for climate intervention to mitigate the impacts of human-induced climate change.

Topics discussed in this episode include:
- The risks of climate change in the short term
- Tipping points and tipping cascades
- Climate intervention via marine cloud brightening and releasing particles in the stratosphere
- The benefits and risks of climate intervention techniques
- The international politics of climate change and weather modification

You can find the page for this podcast here: https://futureoflife.org/2020/09/30/kelly-wanser-on-marine-cloud-brightening-for-mitigating-climate-change/
Video recording of this podcast: https://youtu.be/CEUEFUkSMHU

Timestamps:
0:00 Intro
2:30 What is SilverLining's mission?
4:27 Why is climate change thought to be very risky in the next 10-30 years?
8:40 Tipping points and tipping cascades
13:25 Is climate change an existential risk?
17:39 Earth systems that help to stabilize the climate
21:23 Days where it will be unsafe to work outside
25:03 Marine cloud brightening, stratospheric sunlight reflection, and other climate interventions SilverLining is interested in
41:46 What experiments are happening to understand tropospheric and stratospheric climate interventions?
50:20 International politics of weather modification
53:52 How do efforts to reduce greenhouse gas emissions fit into the project of reflecting sunlight?
57:35 How would you respond to someone who views climate intervention by marine cloud brightening as too dangerous?
59:33 What are the main points of people skeptical of climate intervention approaches?
1:13:21 The international problem of coordinating on climate change
1:24:50 Is climate change a global catastrophic or existential risk, and how does it relate to other large risks?
1:33:20 Should effective altruists spend more time on the issue of climate change and climate intervention?
1:37:48 What can listeners do to help with this issue?
1:40:00 Climate change and Mars colonization
1:44:55 Where to find and follow Kelly

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Sep 9, 2020 • 1h 55min

Iason Gabriel on Foundational Philosophical Questions in AI Alignment

Podcast: Future of Life Institute Podcast
Episode: Iason Gabriel on Foundational Philosophical Questions in AI Alignment
Release date: 2020-09-03

In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely explicitly enter the picture. In the realm of AI alignment, however, the normative and the technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems to human values when there is disagreement over what constitutes a moral alignment procedure? Choosing any procedure or set of values with which to align AI brings its own normative and metaethical beliefs that will require close examination and reflection if we hope to succeed at alignment. Iason Gabriel, Senior Research Scientist at DeepMind, joins us on this episode of the AI Alignment Podcast to explore the interdependence of the normative and technical in AI alignment and to discuss his recent paper, Artificial Intelligence, Values and Alignment.

Topics discussed in this episode include:
- How moral philosophy and political theory are deeply related to AI alignment
- The problem of dealing with a plurality of preferences and philosophical views in AI alignment
- How the is-ought problem and metaethics fit into alignment
- What we should be aligning AI systems to
- The importance of democratic solutions to questions of AI alignment
- The long reflection

You can find the page for this podcast here: https://futureoflife.org/2020/09/03/iason-gabriel-on-foundational-philosophical-questions-in-ai-alignment/

Timestamps:
0:00 Intro
2:10 Why Iason wrote Artificial Intelligence, Values and Alignment
3:12 What AI alignment is
6:07 The technical and normative aspects of AI alignment
9:11 The normative being dependent on the technical
14:30 Coming up with an appropriate alignment procedure given the is-ought problem
31:15 What systems are subject to an alignment procedure?
39:55 What is it that we're trying to align AI systems to?
1:02:30 Single-agent and multi-agent alignment scenarios
1:27:00 What is the procedure for choosing which evaluative model(s) will be used to judge different alignment proposals?
1:30:28 The long reflection
1:53:55 Where to follow and contact Iason

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Aug 25, 2020 • 44min

Utilitarianism

Podcast: In Our Time
Episode: Utilitarianism
Release date: 2015-06-11

A moral theory that emphasises ends over means, Utilitarianism holds that a good act is one that increases pleasure in the world and decreases pain. The tradition flourished in the eighteenth and nineteenth centuries with Jeremy Bentham and John Stuart Mill, and has antecedents in ancient philosophy. According to Bentham, happiness is the means for assessing the utility of an act; he declared that "it is the greatest happiness of the greatest number that is the measure of right and wrong." Mill and others went on to refine and challenge Bentham's views and to defend them from critics such as Thomas Carlyle, who termed Utilitarianism a "doctrine worthy only of swine."

With:
Melissa Lane, the Class of 1943 Professor of Politics at Princeton University
Janet Radcliffe Richards, Professor of Practical Philosophy at the University of Oxford
Brad Hooker, Professor of Philosophy at the University of Reading

Producer: Simon Tillotson.
Aug 22, 2020 • 33min

GPT-3: What's Hype, What's Real on the Latest in AI

Podcast: a16z Podcast
Episode: GPT-3: What's Hype, What's Real on the Latest in AI
Release date: 2020-07-30

In this episode, cross-posted from our 16 Minutes show feed, we cover all the buzz around GPT-3, the pre-trained machine learning model from OpenAI that's optimized to do a variety of natural-language processing tasks. It's a commercial product, built on research; so what does this mean for both startups AND incumbents... and the future of "AI as a service"? And given that we're seeing all kinds of (cherrypicked!) examples of output from OpenAI's beta API being shared, how do we know how good it really is or isn't? How do we know the difference between "looks like" a toy and "is" a toy when it comes to new innovations? And where are we, really, in terms of natural language processing and progress towards artificial general intelligence? Is it intelligent, does that matter, and how do we know (if not with a Turing Test)? Finally, what are the broader questions, considerations, and implications for jobs and more? Frank Chen explains what "it" actually is and isn't in conversation with host Sonal Chokshi. The two help tease apart what's hype and what's real here... as is the theme of 16 Minutes.
Aug 20, 2020 • 48min

Helen's History of Ideas

Podcast: TALKING POLITICS
Episode: Helen's History of Ideas
Release date: 2020-07-09

David talks with Helen to get her take on the history of ideas, both what's there and what's missing. Why start with Hobbes? What can we learn from the Federalist Papers? Where's Nietzsche? Plus we talk about whether understanding where political ideas come from is liberating or limiting, and we ask how many of them were just rationalisations for power.

Talking Points:
- Should we start the story of modern politics with Hobbes? Hobbes poses a stark question: what is the worst thing that can happen in politics? Civil war or tyranny?
- Is Hobbes' answer utopian? What are the consequences of the breakdown of political authority, and how do they compare to the consequences of empowering the state to do terrible things?
- Who has the authority to decide is a fundamental question in politics. But there are lots of ways of thinking about politics that avoid this question.
- If you accept the notion that political authority is essential, what form should that authority take and how can it be made as bearable as possible for as many people as possible?
- Constant says that the worst thing that can happen isn't civil war; it's the tyranny of the state. To him, the French Revolution showed that when people who hold the coercive power of the state also hold certain beliefs, the damage can be much worse.
- Constant wants to say that the beliefs people have in the modern world are a constraint on political possibilities. What does the pluralism of beliefs mean for politics?
- Constant is also more direct about the importance of debt and money.
- From the French Revolution onwards, nationalism became the dominant idea by which the authority of states was justified to those over whom they exercised power. Sieyès equated the state with its people.
- The idea of federalism as enshrined in the US constitution is also important: Hobbes did not think sovereignty could be divided. How do you reconcile constitutional ideals with the horrors they justified?
- Nietzsche forces a reckoning with the religion question. This blows up the distinction between pre-modern and modern. He presents a genealogy not just of morality, but of civilization, ideas of justice, and religion.
- For Nietzsche, Christianity is the manifestation of the will to power of the powerless. Nietzsche tells us how we became the way we are; it didn't have to go that way. In exposing contingency, he forces us to engage with political questions we don't really want to think about.
- What do ideas explain about human motivation in politics, and to what extent are they rationalizations of other motives?
- Helen thinks that the history of ideas can make political action seem too straightforward. How should we think about the relationship between ideas and material constraints (or opportunities)?
- Studying history more generally leads to at least some degree of cynicism about the relationship between ideas and power.

Mentioned in this Episode:
- Talking Politics: the History of Ideas
- The Federalist Papers
- The Genealogy of Morality
- Our episode on Weber's 'Politics as a Vocation'
Aug 20, 2020 • 1h 42min

Peter Railton on Moral Learning and Metaethics in AI Systems

Podcast: Future of Life Institute Podcast
Episode: Peter Railton on Moral Learning and Metaethics in AI Systems
Release date: 2020-08-18

From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow us to successfully navigate our way through complex social situations with sensitivity to morally relevant information and variables. This capacity for moral learning allows us to solve open-ended problems with other persons who may hold complex beliefs and preferences. As AI systems become increasingly autonomous and active in social situations involving human and non-human agents, AI moral competency via the capacity for moral learning will become more and more critical. On this episode of the AI Alignment Podcast, Peter Railton joins us to discuss the potential role of moral learning and moral epistemology in AI systems, as well as his views on metaethics.

Topics discussed in this episode include:
- Moral epistemology
- The potential relevance of metaethics to AI alignment
- The importance of moral learning in AI systems
- Peter Railton's, Derek Parfit's, and Peter Singer's metaethical views

You can find the page for this podcast here: https://futureoflife.org/2020/08/18/peter-railton-on-moral-learning-and-metaethics-in-ai-systems/

Timestamps:
0:00 Intro
3:05 Does metaethics matter for AI alignment?
22:49 Long-reflection considerations
26:05 Moral learning in humans
35:07 The need for moral learning in artificial intelligence
53:57 Peter Railton's views on metaethics and his discussions with Derek Parfit
1:38:50 The need for engagement between philosophers and the AI alignment community
1:40:37 Where to find Peter's work

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Jul 29, 2020 • 1h 9min

Elijah Millgram, "John Stuart Mill and the Meaning of Life" (Oxford UP, 2019)

Podcast: New Books in Philosophy
Episode: Elijah Millgram, "John Stuart Mill and the Meaning of Life" (Oxford UP, 2019)
Release date: 2019-11-01

According to an intuitive view, lives are meaningful when they manifest a directedness or instantiate a project such that the disparate events and endeavors "add up to" a life. John Stuart Mill's life certainly was devoted to a project in that sense. Yet Mill's life was in many respects unsatisfying – riven with anxiety and trauma. What does Mill's life teach us about meaningful lives?

In John Stuart Mill and the Meaning of Life (Oxford University Press, 2019), Elijah Millgram weaves intellectual biography together with philosophical analysis in the service of a distinctive style of moral philosophizing.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/philosophy
Jul 8, 2020 • 16min

Aaron Ridley on Nietzsche on Art and Truth

Podcast: Philosophy Bites
Episode: Aaron Ridley on Nietzsche on Art and Truth
Release date: 2008-08-16

Friedrich Nietzsche's ideas about art and truth run through much of his philosophical writing, but are most apparent in his first book, The Birth of Tragedy. In this episode of Philosophy Bites, Nigel Warburton interviews Aaron Ridley about this topic.
