Pondering AI

Kimberly Nevala, Strategic Advisor - SAS
Mar 8, 2023 • 1h 7min

Plain Talk About Talking AI with J Mark Bishop

Professor J Mark Bishop reflects on the trickiness of language, how LLMs work, why ChatGPT can’t understand, the nature of AI and emerging theories of mind.

Mark explains what large language models (LLMs) do and provides a quasi-technical overview of how they work. He also exposes the complications inherent in comprehending language. Mark calls for more philosophical analysis of how systems such as GPT-3 and ChatGPT replicate human knowledge yet understand nothing. Noting the astonishing outputs resulting from more or less auto-completing large blocks of text, Mark cautions against being taken in by an LLM’s disarming façade. Mark then explains the basis of the Chinese Room thought experiment and the hotly debated conclusion that computation does not lead to semantic understanding. Kimberly and Mark discuss the nature of learning through the eyes of a child and whether computational systems can ever be conscious. Mark describes the phenomenal experience of understanding (aka what it feels like) and how non-computational theories of mind may influence AI development. Finally, Mark reflects on whether AI will be good for the few or the many.

Professor J Mark Bishop is Professor of Cognitive Computing (Emeritus) at Goldsmiths, University of London and Scientific Advisor to FACT360.

A transcript of this episode is here.
Feb 22, 2023 • 44min

In AI We Trust with Chris McClean

Chris McClean discusses ethics vs. risk, positive outcomes, trust, expanding definitions of privacy, and the role we play in creating the digital ecosystem. They explore the importance of digital ethics, highlight the need to consider various ethical impacts in AI, and discuss trust-related harms caused by AI systems. The interplay between digital makers and takers is explored, emphasizing responsible innovation. They also delve into surveillance vs. monitoring in the workplace, and the ethical aspects of the metaverse, including privacy concerns and positive uses.
Feb 8, 2023 • 40min

AI for Sustainable Development with Henrik Skaug Sætra

Henrik Skaug Sætra contends humans aren’t mere machines, assesses AI through a sustainable development lens and weighs the effect of political imbalances and ESG.

Henrik embraces human complexity. He advises against applying AI to naturally messy problems or to influence populations least able to resist. Henrik outlines how the UN Sustainable Development Goals (SDGs) can identify beneficial and marketable avenues for AI. He also describes the SDGs’ usefulness in ethical impact assessment. Championing affordable and equitable access to technology, Henrik shows how disparate impacts occur between individuals, groups and society. Along the way, Kimberly and Henrik discuss political imbalances, the technocratic nature of emerging regulations and why we shouldn’t expect corporations to be broadly ethical of their own accord. Outlining his AI ESG protocol, Henrik surmises that qualitative rigor can address the gaps left by quantitative analysis alone. Finally, Henrik encourages the proactive use of SDGs and ESG to drive innovation and opportunity.

Henrik is Head of the Digital Society and an Associate Professor at Østfold University College. He is a political theorist focusing on the political, ethical, and social implications of technology.

A transcript of this episode can be found here.
Aug 3, 2022 • 39min

The Philosophy of AI with Dr. Mark Coeckelbergh

Dr. Mark Coeckelbergh, Professor of Philosophy of Media and Technology, member of the High-Level Expert Group on Artificial Intelligence (EC) and the Austrian Council on Robotics and AI, discusses the political implications of AI and technology, the challenges of incorporating ethics and human aspects into AI discussions, and the need for collaboration and education in the field. He also emphasizes the difficulty of global governance and sounds a cautionary note about the potential for AI to undermine democratic institutions.
Jul 20, 2022 • 39min

Keeping Science in Data Science with Patrick Hall

Patrick Hall is the Principal Scientist at bnh.ai.

Patrick artfully illustrates how data science has become divorced from scientific rigor. At least, that is, in popular conceptions of the practice. Kimberly and Patrick discuss the pernicious influence of the McNamara Fallacy, applying the scientific method to algorithmic development and keeping an open mind without sacrificing concept validity. Patrick addresses the recent hubbub around AI sentience, cautions against using AI in social contexts and identifies the problems AI algorithms are best suited to solve. Noting AI is no different than any other mission-critical software, he outlines the investment and oversight required for AI programs to deliver value. Patrick promotes managing AI systems like products and makes the case for why performance in the lab should not be the first priority.

A transcript of this episode can be found here.
Jul 6, 2022 • 42min

Synthesizing the Future with Fernando Lucini

Fernando Lucini is the Global Data Science & ML Engineering Lead (aka Chief Data Scientist) at Accenture.

Fernando outlines common uses for AI-generated synthetic data. He emphasizes that synthetic data is a facsimile (close, but not quite real) and debunks the notion that it is inherently private. Kimberly and Fernando discuss the potential pitfalls in synthetic data sets, the emergent need for standard controls, and why ensuring quality, much less fairness, is not simple. Fernando assesses the current state of the synthetic data market and the work still to be done to enable broad-scale adoption. Tipping his hat to fabulous achievements such as GPT-3 and DALL-E, Fernando identifies multiple ways synthetic data can be used for good works and creative endeavors.

A transcript of this episode can be found here.
Jun 22, 2022 • 46min

The Future of Human Decision Making with Roger Spitz

Roger Spitz is the CEO of Techistential and Chairman of the Disruptive Futures Institute.

In this thought-provoking discussion, Roger explains why neither humans nor AI systems are great at decision making in complex environments, and why humans should be. Roger unveils the insidious influence of AI systems on human decisions and why uncertainty is a prerequisite for human choice, freedom, and agency. Kimberly and Roger discuss the implications of complexity, the rising cost of poor assumptions, and the dangerous allure of delegating too many decisions to AI-enabled machines. Outlining the AAA (antifragile, anticipatory, agile) model for decision-making in the face of deep uncertainty, Roger differentiates foresight from strategic planning and anticipatory agility from ‘move fast and break things.’ Last but not least, Roger argues that current educational incentives run counter to nurturing the mindset and skills needed to thrive in our increasingly complex, emergent world.

A transcript of this episode can be found here.
Jun 8, 2022 • 37min

Risk vs. Rights in AI with Dorothea Baur

Dr. Dorothea Baur is an ethicist and independent consultant on the topics of ethics, responsibility and sustainability in tech and finance.

Dorothea debunks common ethical misconceptions and explores the novel issues that arise when applying ethics to technology. Kimberly and Dorothea discuss the risks posed by risk management-based approaches to tech ethics, as well as the “unholy collision” between the pursuit of scale and universal generalization. Dorothea reluctantly gives a nod to Milton Friedman when linking ethics to material business outcomes. Along the way, Dorothea illustrates how stakeholder engagement is evolving and the power of the employee. Noting that algorithms do not have agency and will never be ethical, Dorothea persuasively articulates our moral obligation to retain responsibility for our AI creations.

A transcript of this episode can be found here.
May 25, 2022 • 39min

In AI We Trust with Marisa Tschopp

Marisa Tschopp is a Human-AI interaction researcher at scip AG and Co-Chair of the IEEE Agency and Trust in AI Systems Committee.

Marisa answers the question ‘What is trust?’ and compares trust between humans to trust in a machine. Differentiating trust from trustworthiness, Marisa emphasizes the importance of considering the context and motivation behind AI systems. Kimberly and Marisa discuss the pros and cons of endowing AI systems with human characteristics (aka anthropomorphizing) and why ‘do you trust AI?’ is the wrong question. Debunking the concept of ‘The AI’, Marisa outlines practices for calibrating trust in AI systems. A self-described skeptical optimist, Marisa also shares her research into how people perceive their relationships with AI-enabled machines and how these patterns may change over time.

A transcript of this episode can be found here.
May 11, 2022 • 41min

AI’s World View with Dr. Erica Thompson

Dr. Erica Thompson is a Senior Policy Fellow in Ethics of Modelling and Simulation at the LSE Data Science Institute.

Using the trusty-ish weather forecast as a starting point, Erica highlights the gaps to be minded when applying models in real life. Kimberly and Erica discuss the role of expert judgement and intuition, the orthodoxy of data-driven cultures, models as engines not cameras, and why exposing uncertainty improves decision-making. Erica illustrates why it is so easy to become overconfident in models. She shows how value judgements are embedded in every step of model development (and hidden in math), why chameleons and accountability don’t mix, and considerations for using model outputs to think or decide effectively. Looking forward, Erica foresees a future in which values rather than data drive decision-making.

A transcript of this episode can be found here.
