
Philosophical Disquisitions
Things hid and barr'd from common sense
Latest episodes

Jun 6, 2023
110 - Can we pause AI Development? Evidence from the history of technological restraint
In this episode, I chat to Matthijs Maas about pausing AI development. Matthijs is currently a Senior Research Fellow at the Legal Priorities Project and a Research Affiliate at the Centre for the Study of Existential Risk at the University of Cambridge. In our conversation, we focus on the possibility of slowing down or limiting the development of technology. Many people are sceptical of this possibility, but Matthijs has been doing extensive research on historical case studies of apparently successful technological slowdown. We discuss these case studies in some detail.
You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Relevant Links
Recording of Matthijs's Chalmers talk on this topic: https://www.youtube.com/watch?v=vn4ADfyrJ0Y&t=2s
Slides from this talk: https://drive.google.com/file/d/1J9RW49IgSAnaBHr3-lJG9ZOi8ZsOuEhi/view?usp=share_link
Previous essay / primer, laying out the basics of the argument: https://verfassungsblog.de/paths-untaken/
Incomplete longlist database of candidate case studies: https://airtable.com/shrVHVYqGnmAyEGsz
Subscribe to the newsletter

May 30, 2023
109 - How Can We Align Language Models like GPT with Human Values?
In this episode of the podcast I chat to Atoosa Kasirzadeh. Atoosa is an Assistant Professor/Chancellor's Fellow at the University of Edinburgh. She is also the Director of Research at the Centre for Technomoral Futures at Edinburgh. We chat about the alignment problem in AI development, roughly: how do we ensure that AI acts in a way that is consistent with human values? We focus, in particular, on the alignment problem for language models such as ChatGPT, Bard and Claude, and how some old ideas from the philosophy of language could help us to address this problem.
You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Relevant Links
Atoosa's webpage
Atoosa's paper (with Iason Gabriel), 'In Conversation with AI: Aligning Language Models with Human Values'
Subscribe to the newsletter

May 3, 2023
108 - Miles Brundage (Head of Policy Research at OpenAI) on the speed of AI development and the risks and opportunities of GPT
[UPDATED WITH CORRECT EPISODE LINK] In this episode I chat to Miles Brundage. Miles leads the policy research team at OpenAI. Unsurprisingly, we talk a lot about GPT and generative AI. Our conversation covers the risks that arise from their use, their speed of development, how they should be regulated, the harms they may cause and the opportunities they create. We also talk a bit about what it is like working at OpenAI and why Miles made the transition from academia to industry (sort of). Lots of useful insight in this episode from someone at the coalface of AI development.
You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Subscribe to the newsletter

Apr 19, 2023
107 - Will Large Language Models disrupt healthcare?
In this episode of the podcast I chat to Jess Morley. Jess is currently a DPhil candidate at the Oxford Internet Institute. Her research focuses on the use of data in healthcare, oftentimes on the impact of big data and AI, but, as she puts it herself, usually on 'less whizzy' things. Sadly, our conversation focuses on the whizzy things, in particular the recent hype about large language models and their potential to disrupt the way in which healthcare is managed and delivered. Jess is sceptical about the immediate potential for disruption but thinks it is worth exploring, carefully, the use of this technology in healthcare.
You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Relevant Links
Jess's Website
Jess on Twitter
John Snow's cholera map
Subscribe to the newsletter

Apr 11, 2023
106 - Why GPT and other LLMs (probably) aren't sentient
In this episode, I chat to Robert Long about AI sentience. Robert is a philosopher who works on issues related to the philosophy of mind, cognitive science and AI ethics. He is currently a philosophy fellow at the Centre for AI Safety in San Francisco. He completed his PhD at New York University. We do a deep dive on the concept of sentience, why it is important, and how we can tell whether an animal or AI is sentient. We also discuss whether it is worth taking the topic of AI sentience seriously.
You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Relevant Links
Robert's webpage
Robert's substack
Subscribe to the newsletter

Apr 2, 2023
105 - GPT: Higher Education's Jurassic Park Moment?
In this episode of the podcast, I talk to Thore Husfeldt about the impact of GPT on education. Thore is a Professor of Computer Science at the IT University of Copenhagen, where he specialises in pretty technical algorithm-related research. He is also affiliated with Lund University in Sweden. Beyond his technical work, Thore is interested in ideas at the intersection of computer science, philosophy and educational theory. In our conversation, Thore outlines four models of what a university education is for, and considers how GPT disrupts these models. We then talk, in particular, about the 'signalling' theory of higher education and how technologies like GPT undercut the value of certain signals, and thereby undercut some forms of assessment. Since I am an educator, I really enjoyed this conversation, but I firmly believe there is much food for thought in it for everyone.
You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Subscribe to the newsletter

Mar 28, 2023
104 - What will be the economic impact of GPT?
In this episode of the podcast, I chat to Anton Korinek about the economic impacts of GPT. Anton is a Professor of Economics at the University of Virginia and the Economics Lead at the Centre for AI Governance. He has researched widely on the topic of automation and labour markets. We talk about whether GPT will substitute for or complement human workers; the disruptive impact of GPT on economic organisation; the jobs/roles most immediately at risk; the impact of GPT on wage levels; the skills needed to survive in an AI-enhanced economy, and much more.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.
Relevant Links
Anton's homepage
Anton's paper outlining 25 uses of LLMs for academic economists
Anton's dialogue with GPT, Claude and the economist David Autor
Subscribe to the newsletter

Mar 23, 2023
103 - GPT: How worried should we be?
Olle Häggström, a professor of mathematical statistics, discusses GPT, its intelligent nature, the risks it poses, and the reckless development of this technology. We explore the lack of transparency in GPT models and touch on potential harms and safety concerns. The episode also delves into the appropriate pace of AI development, the parallel between nuclear weapons and AI, and concerns about the timeline and our readiness for this technology.

Dec 16, 2022
102 - Fictional Dualism and Social Robots
How should we conceive of social robots? Some sceptics think they are little more than tools and should be treated as such. Some are more bullish on their potential to attain full moral status. Is there some middle ground? In this episode, I talk to Paula Sweeney about this possibility. Paula defends a position she calls 'fictional dualism' about social robots. This allows us to relate to social robots in creative, human-like ways, without necessarily ascribing them moral status or rights. Paula is a philosopher based at the University of Aberdeen, Scotland. She has a background in the philosophy of language (which we talk about a bit) but has recently turned her attention to the applied ethics of technology. She is currently writing a book about social robots.
You can download the episode here, or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services.
Relevant Links
A Fictional Dualism Model of Social Robots by Paula
Trusting Social Robots by Paula
Why Indirect Harms do Not Support Social Robot Rights by Paula
Subscribe to the newsletter

Nov 28, 2022
101 - Pistols, Pills, Pork and Ploughs: How Technology Changes Morality
It's clear that human social morality has gone through significant changes in the past. But why? What caused these changes? In this episode, I chat to Jeroen Hopster from the University of Utrecht about this topic. We focus, in particular, on a recent paper that Jeroen co-authored with a number of colleagues about four historical episodes of moral change and what we can learn from them. That paper, from which I take the title of this podcast, was called 'Pistols, Pills, Pork and Ploughs' and, as you might imagine, looks at how specific technologies (pistols, pills, pork and ploughs) have played a key role in catalysing moral change.
You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).
Subscribe to the newsletter