Algocracy and Transhumanism Podcast

Latest episodes

Oct 27, 2020

85 – The Internet and the Tyranny of Perceived Opinion

Are we losing our liberty as a result of digital technologies and algorithmic power? In particular, might algorithmically curated filter bubbles be creating a world that encourages both increased polarisation and increased conformity at the same time? In today’s podcast, I discuss these issues with Henrik Skaug Sætra. Henrik is a political scientist working in the Faculty of Business, Languages and Social Science at Østfold University College in Norway. He has a particular interest in political theory and philosophy, and has worked extensively on Thomas Hobbes and social contract theory, environmental ethics and game theory. At the moment his work focuses mainly on the dynamics between human individuals, society and technology. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

Selective Exposure and Confirmation Bias
How algorithms curate our informational ecology
Filter Bubbles
Echo Chambers
How the internet creates groups that are more internally conformist but externally polarised
The nature of political freedom
Tocqueville and the tyranny of the majority
Mill and the importance of individuality
How algorithmic curation of speech is undermining our liberty
What can be done about this problem?

Relevant Links

Henrik’s faculty homepage
Henrik on Researchgate
Henrik on Twitter
‘The Tyranny of Perceived Opinion: Freedom and information in the era of big data’ by Henrik
‘Privacy as an aggregate public good’ by Henrik
‘Freedom under the gaze of Big Brother: Preparing the grounds for a liberal defence of privacy in the era of Big Data’ by Henrik
‘When nudge comes to shove: Liberty and nudging in the era of big data’ by Henrik

Oct 20, 2020

84 – Social Media, COVID-19 and Value Change

Do our values change over time? What role do emotions and technology play in altering our values? In this episode I talk to Steffen Steinert about these issues. Steffen is a postdoctoral researcher on the Value Change project at TU Delft. His research focuses on the philosophy of technology, ethics of technology, emotions, and aesthetics. He has published papers on roboethics, art and technology, and philosophy of science. In his previous research he also explored philosophical issues related to humor and amusement. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

What is a value?
Descriptive vs normative theories of value
Psychological theories of personal values
The nature of emotions
The connection between emotions and values
Emotional contagion
Emotional climates vs emotional atmospheres
The role of social media in causing emotional contagion
Is the coronavirus promoting a negative emotional climate?
Will this affect our political preferences and policies?
General lessons for technology and value change

Relevant Links

Steffen’s Homepage
The Designing for Changing Values Project @ TU Delft
Corona and Value Change by Steffen
‘Unleashing the Constructive Potential of Emotions’ by Steffen and Sabine Roeser
An Overview of the Schwartz Theory of Basic Personal Values

Oct 10, 2020

83 – Privacy is Power

Are you being watched, tracked and traced every minute of the day? Probably. The digital world thrives on surveillance. What should we do about this? My guest today is Carissa Véliz. Carissa is an Associate Professor at the Faculty of Philosophy and the Institute for Ethics in AI at Oxford University. She is also a Tutorial Fellow at Hertford College, Oxford. She works on privacy, technology, moral and political philosophy and public policy. She has also been a guest on this podcast on two previous occasions. Today, we’ll be talking about her recently published book Privacy is Power. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed in this show include:

The most surprising examples of digital surveillance
The nature of privacy
Is privacy dead?
Privacy as an intrinsic and instrumental value
The relationship between privacy and autonomy
Does surveillance help with security and health?
The problem with mass surveillance
The phenomenon of toxic data
How surveillance undermines democracy and freedom
Are we willing to trade privacy for convenient services?
And much more

Relevant Links

Carissa’s Webpage
Privacy is Power by Carissa
Summary of Privacy is Power in Aeon
Review of Privacy is Power in The Guardian
Carissa’s Twitter feed (a treasure trove of links about privacy and surveillance)
Views on Privacy: A Survey by Sian Brooke and Carissa Véliz
Data, Privacy and the Individual by Carissa Véliz

Sep 23, 2020

82 – What should we do about facial recognition?

Facial recognition technology has seen its fair share of both media and popular attention in the past 12 months. That attention runs the gamut from controversial uses by governments and police forces to coordinated campaigns to ban or limit its use. What should we do about it? In this episode, I talk to Brenda Leong about this issue. Brenda is Senior Counsel and Director of Artificial Intelligence and Ethics at the Future of Privacy Forum (FPF). She manages the FPF portfolio on biometrics, particularly facial recognition. She authored the FPF Privacy Expert’s Guide to AI, and co-authored the paper “Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models.” Prior to working at FPF, Brenda served in the U.S. Air Force. You can listen to the episode below or download here. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

What is facial recognition anyway? Are there multiple forms that are confused and conflated?
What’s the history of facial recognition? What has changed recently?
How is the technology used?
What are the benefits of facial recognition?
What’s bad about it? What are the privacy and other risks?
Is there something unique about the face that should make us more worried about facial biometrics when compared to other forms?
What can we do to address the risks? Should we regulate or ban?

Relevant Links

Brenda’s Homepage
Brenda on Twitter
‘The Privacy Expert’s Guide to AI and Machine Learning’ by Brenda (at FPF)
Brenda’s US Congress Testimony on Facial Recognition
‘Facial recognition and the future of privacy: I always feel like … somebody’s watching me’ by Brenda
‘The Case for Banning Law Enforcement From Using Facial Recognition Technology’ by Evan Selinger and Woodrow Hartzog

Sep 18, 2020

81 – Consumer Credit, Big Tech and AI Crime

In today’s episode, I talk to Nikita Aggarwal about the legal and regulatory aspects of AI and algorithmic governance. We focus, in particular, on three topics: (i) algorithmic credit scoring; (ii) the problem of ‘too big to fail’ tech platforms; and (iii) AI crime. Nikita is a DPhil (PhD) candidate at the Faculty of Law at Oxford, as well as a Research Associate at the Oxford Internet Institute’s Digital Ethics Lab. Her research examines the legal and ethical challenges posed by emerging, data-driven technologies, with a particular focus on machine learning in consumer lending. Prior to entering academia, she was an attorney in the legal department of the International Monetary Fund, where she advised on financial sector law reform in the Euro area. You can listen to the episode below or download here. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

The digitisation, datafication and disintermediation of consumer credit markets
Algorithmic credit scoring
The problems of risk and bias in credit scoring
How law and regulation can address these problems
Tech platforms that are too big to fail
What should we do if Facebook fails?
The forms of AI crime
How to address the problem of AI crime

Relevant Links

Nikita’s homepage
Nikita on Twitter
‘The Norms of Algorithmic Credit Scoring’ by Nikita
‘What if Facebook Goes Down? Ethical and Legal Considerations for the Demise of Big Tech Platforms’ by Carl Ohman and Nikita
‘Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions’ by Thomas King, Nikita, Mariarosaria Taddeo and Luciano Floridi

Aug 5, 2020

79 – Is There a Techno-Responsibility Gap?

What happens if an autonomous machine does something wrong? Who, if anyone, should be held responsible for the machine’s actions? That’s the topic I discuss in this episode with Daniel Tigard. Daniel is a Senior Research Associate in the Institute for History & Ethics of Medicine at the Technical University of Munich. His current work addresses issues of moral responsibility in emerging technology. He is the author of several papers on moral distress and responsibility in medical ethics as well as, more recently, papers on moral responsibility and autonomous systems. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

What is responsibility? Why is it so complex?
The three faces of responsibility: attribution, accountability and answerability
Why are people so worried about responsibility gaps for autonomous systems?
What are some of the alleged solutions to the “gap” problem?
Who are the techno-pessimists and who are the techno-optimists?
Why does Daniel think that there is no techno-responsibility gap?
Is our application of responsibility concepts to machines overly metaphorical?

Relevant Links

Daniel’s ResearchGate profile
Daniel’s papers on PhilPapers
“There is no Techno-Responsibility Gap” by Daniel
“Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability” by Mark Coeckelbergh
‘Technologically blurred accountability?’ by Kohler, Roughley and Sauer

Jul 20, 2020

77 – Should AI be Explainable?

If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat to Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at the Technical University of Delft. He has a B.Sc. in Computer Science from California State University, Chico and an M.Sc. in Ethics of Technology from the University of Twente. He is a founding member of the Foundation for Responsible Robotics and a member of the 4TU Centre for Ethics and Technology. Scott is skeptical of AI as a grand solution to societal problems and argues that AI should be boring. You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).

Show Notes

Topics covered include:

Why do people worry about the opacity of AI?
What’s the difference between explainability and transparency?
What’s the moral value or function of explainable AI?
Must we distinguish between the ethical value of an explanation and its epistemic value?
Why is it so technically difficult to make AI explainable?
Will we ever have a technical solution to the explanation problem?
Why does Scott think there is a Catch-22 involved in insisting on explainable AI?
When should we insist on explanations and when are they unnecessary?
Should we insist on using boring AI?

Relevant Links

Scott’s webpage
Scott’s paper “A Misdirected Principle with a Catch: Explicability for AI”
Scott’s paper “The Value of Transparency: Bulk Data and Authorisation”
“The Right to an Explanation Explained” by Margot Kaminski
Episode 36 – Wachter on Algorithms and Explanations

Apr 18, 2020

76 – Surveillance, Privacy and COVID 19

How do we get back to normal after the COVID-19 pandemic? One suggestion is that we use increased amounts of surveillance and tracking to identify and isolate infected and at-risk persons. While this might be a valid public health strategy, it does raise some tricky ethical questions. In this episode I talk to Carissa Véliz about these questions. Carissa is a Research Fellow at the Uehiro Centre for Practical Ethics at Oxford and the Wellcome Centre for Ethics and Humanities, also at Oxford. She is the editor of the Oxford Handbook of Digital Ethics as well as two forthcoming solo-authored books, Privacy is Power (Transworld) and The Ethics of Privacy (Oxford University Press). You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

The value of privacy
Do we balance privacy against other rights/values?
The significance of consent in debates about privacy
Digital contact tracing and digital quarantines
The ethics of digital contact tracing
Is the value of digital contact tracing being oversold?
The relationship between testing and contact tracing
COVID 19 as an important moment in the fight for privacy
The data economy in light of COVID 19
The ethics of immunity passports
The importance of focusing on the right things in responding to COVID 19

Relevant Links

Carissa’s Webpage
Carissa’s Twitter feed (a treasure trove of links about privacy and surveillance)
Views on Privacy: A Survey by Sian Brooke and Carissa Véliz
Data, Privacy and the Individual by Carissa Véliz
Science paper on the value of digital contact tracing
The Apple-Google proposal for digital contact tracing
‘“The new normal”: China’s excessive coronavirus public monitoring could be here to stay’
‘In Coronavirus Fight, China Gives Citizens a Color Code, With Red Flags’
‘To curb covid-19, China is using its high-tech surveillance tools’
‘Digital surveillance to fight COVID-19 can only be justified if it respects human rights’
‘Why “Mandatory Privacy-Preserving Digital Contact Tracing” is the Ethical Measure against COVID-19’ by Cansu Canca
‘The COVID-19 Tracking App Won’t Work’
‘What are “immunity passports” and could they help us end the coronavirus lockdown?’
‘The case for ending the Covid-19 pandemic with mass testing’

Apr 14, 2020

75 – The Vital Ethical Contexts of COVID 19

There is a lot of data and reporting out there about the COVID 19 pandemic. How should we make sense of that data? Do the media narratives misrepresent or mislead us as to the true risks associated with the disease? Have governments mishandled the response? These are the questions I discuss with my guest on today’s show: David Shaw. David is a Senior Researcher at the Institute for Biomedical Ethics at the University of Basel and an Assistant Professor at the Care and Public Health Research Institute, Maastricht University. We discuss some recent writing David has been doing on the Journal of Medical Ethics blog about the coronavirus crisis. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

Why is it important to keep death rates and other data in context?
Is media reporting of deaths misleading?
Why do the media discuss ‘soaring’ death rates and ‘grim’ statistics?
Are we ignoring the unintended health consequences of COVID 19?
Should we take the economic costs more seriously given the link between poverty/inequality and health outcomes?
Did the UK government mishandle the response to the crisis? Are they blameworthy for what they did?
Is it fair to criticise governments for their handling of the crisis?
Is it okay for governments to experiment on their populations in response to the crisis?

Relevant Links

David’s Profile Page at the University of Basel
‘The Vital Contexts of Coronavirus’ by David
‘The Slow Dragon and the Dim Sloth: What can the world learn from coronavirus responses in Italy and the UK?’ by Marcello Ienca and David Shaw
‘Don’t let the ethics of despair infect the ICU’ by David Shaw, Dan Harvey and Dale Gardiner
‘Deaths in New York City Are More Than Double the Usual Total’ in the NYT (getting the context right?!)
Preliminary results from German antibody tests in one town: 14% of the population infected
Do Death Rates Go Down in a Recession?
The Sun’s Good Friday headline

Apr 10, 2020

74 – How to Understand COVID 19

I’m still thinking a lot about the COVID-19 pandemic. In this episode I turn away from some of the ‘classical’ ethical questions about the disease and talk more about how to understand it and how to form reasonable beliefs about the public health information that has been issued in response to it. To help me do this I will be talking to Katherine Furman. Katherine is a lecturer in philosophy at the University of Liverpool. Her research interests are at the intersection of philosophy and health policy. She is interested in how laypeople understand issues of science, objectivity in the sciences and social sciences, and public trust in science. Her previous work has focused on the HIV/AIDS pandemic and the Ebola outbreak in West Africa in 2014-2015. We will be talking about the lessons we can draw from this work for how we think about the COVID-19 pandemic. You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).

Show Notes

Topics discussed include:

The history of explaining the causes of disease
Mono-causal theories of disease
Multi-causal theories of disease
Lessons learned from the HIV/AIDS pandemic
The practical importance of understanding the causes of disease in the current pandemic
Is there an ethics of belief?
Do we have epistemic duties in relation to COVID-19?
Is it reasonable to believe ‘rumours’ about the disease?
Lessons learned from the 2014-2015 Ebola outbreak
The importance of values in the public understanding of science

Relevant Links

Katherine’s Homepage
Katherine @ University of Liverpool
“Mono-Causal and Multi-Causal Theories of Disease: How to Think Virally and Socially about the Aetiology of AIDS” by Katherine
“Moral Responsibility, Culpable Ignorance, and Suppressed Disagreement” by Katherine
“The international response to the Ebola outbreak has excluded Africans and their interests” by Katherine
Imperial College paper on COVID-19 scenarios
Oxford paper on possible exposure levels to the novel coronavirus
