
Pondering AI
How is the use of artificial intelligence (AI) shaping our human experience?
Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.
All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
Latest episodes

Jun 8, 2022 • 37min
Risk vs. Rights in AI with Dorothea Baur
Dr. Dorothea Baur is an ethicist and independent consultant on the topics of ethics, responsibility and sustainability in tech and finance. Dorothea debunks common ethical misconceptions and explores the novel issues that arise when applying ethics to technology. Kimberly and Dorothea discuss the risks posed by risk management-based approaches to tech ethics, as well as the “unholy collision” between the pursuit of scale and universal generalization. Dorothea reluctantly gives a nod to Milton Friedman when linking ethics to material business outcomes. Along the way, Dorothea illustrates how stakeholder engagement is evolving and the power of the employee. Noting that algorithms do not have agency and will never be ethical, Dorothea persuasively articulates our moral responsibility to retain responsibility for our AI creations. A transcript of this episode can be found here.

May 25, 2022 • 39min
In AI We Trust with Marisa Tschopp
Marisa Tschopp is a Human-AI interaction researcher at scip AG and Co-Chair of the IEEE Agency and Trust in AI Systems Committee. Marisa answers the question ‘What is trust?’ and compares trust between humans to trust in a machine. Differentiating trust from trustworthiness, Marisa emphasizes the importance of considering the context and motivation behind AI systems. Kimberly and Marisa discuss the pros and cons of endowing AI systems with human characteristics (aka anthropomorphizing) and why ‘do you trust AI?’ is the wrong question. Debunking the concept of ‘The AI’, Marisa outlines practices for calibrating trust in AI systems. A self-described skeptical optimist, Marisa also shares her research into how people perceive their relationships with AI-enabled machines and how these patterns may change over time. A transcript of this episode can be found here.

May 11, 2022 • 41min
AI’s World View with Dr. Erica Thompson
Dr. Erica Thompson is a Senior Policy Fellow in Ethics of Modelling and Simulation at the LSE Data Science Institute. Using the trusty-ish weather forecast as a starting point, Erica highlights the gaps to be minded when applying models in real life. Kimberly and Erica discuss the role of expert judgement and intuition, the orthodoxy of data-driven cultures, models as engines not cameras, and why exposing uncertainty improves decision-making. Erica illustrates why it is so easy to become overconfident in models. She shows how value judgements are embedded in every step of model development (and hidden in math), why chameleons and accountability don’t mix, and considerations for using model outputs to think or decide effectively. Looking forward, Erica foresees a future in which values rather than data drive decision-making. A transcript of this episode can be found here.

Apr 27, 2022 • 40min
Designing for Human Experience with Sheryl Cababa
Sheryl Cababa is the Chief Design Officer at Substantial, where she conducts research, develops design strategies and advocates for human-centric outcomes. From the infinite scroll to Twitter edits, Sheryl illustrates how current design practices unwittingly undermine human agency, often while delivering exactly what a user wants. She refutes the need to categorically eliminate the term ‘users’ while showing how a singular user focus has led us astray. Sheryl then outlines how systems thinking can reorient existing design practices toward human-centric outcomes. Along the way, Kimberly and Sheryl discuss the limits of empathy, the evolving ethos of unintended consequences and embracing nuance. While acknowledging the challenges ahead, Sheryl remains optimistic about our ability to design for human well-being, not just expediency or profit. A transcript of this episode can be found here. Our next episode explores the limits of model land with Dr. Erica Thompson. Subscribe now so you don’t miss it.

Dec 15, 2021 • 45min
Humanity at Scale with Kate O’Neill
Kate O’Neill is an executive strategist, the Founder and CEO of KO Insights, and an author dedicated to improving the human experience at scale. In this paradigm-shifting discussion, Kate traces her roots from a childhood thinking heady thoughts about language and meaning to her current mission as ‘The Tech Humanist’. Following this thread, Kate illustrates why meaning is the core of what makes us human. She urges us to champion meaningful innovation and reject the notion that we are victims of a predetermined future. Challenging simplistic analysis, Kate advocates for applying multiple lenses to every situation: the individual and the collective, uses and abuses, insight and foresight, wild success and abject failure. Kimberly and Kate acknowledge but emphatically disavow current norms that reject nuanced discourse or conflate it with ‘both-sides-ism’. Emphasizing that everything is connected, Kate shows how to close the gap between human-centricity and business goals. She provides a concrete example of how innovation and impact depend on identifying what is going to matter, not just what matters now. Ending on a strategically optimistic note, Kate urges us to anchor on human values and relationships, habituate to change and actively architect our best human experience – now and in the future. A transcript of this episode can be found here. Thank you for joining us for Season 2 of Pondering AI. Join us next season as we ponder the ways in which AI continues to elevate and challenge our humanity. Subscribe to Pondering AI now so you don’t miss it.

Dec 1, 2021 • 43min
Automation, Agency and the Future of Work with Giselle Mota
Giselle Mota is a Principal Consultant for the Future of Work at ADP, where she advises organizations on human agency, diversity and learning in the age of AI. In this energetic discussion, Giselle shares how navigating dyslexia spawned a passion for technology and enabling learning at work. Giselle stresses that human agency and automation are only mutually exclusive when AI is employed with the wrong end in mind. Prioritizing human experience over ‘doing more with less’, Giselle explores the impact – good and bad – of AI systems on humans at work today. While ruminating on the future happening now, Giselle puts the onus on organizations to ensure no employee is left behind. From the warehouse floor to HR, she underscores the importance of diverse perspectives, rigorous due diligence and critical thinking when deploying AI systems. Along the way, Kimberly and Giselle dissect what AI algorithms can and cannot reasonably predict. Giselle then defines the leadership mindsets and talent needed to bring AI to work appropriately. With infectious optimism, she imposes a reality check on our innate desire to “just do cool things”. Finally, in a rousing call to action, Giselle makes a compelling argument for robust accountability and making ethics endemic to every human endeavor, including AI. A transcript of this episode can be found here. Our final episode of Season 2 features Kate O’Neill. A tech humanist and author of ‘A Future So Bright’, Kate will discuss how we can architect the future of AI with strategic optimism. Subscribe to Pondering AI now so you don’t miss it.

Nov 17, 2021 • 44min
Growing Up with AI with Baroness Beeban Kidron
Baroness Beeban Kidron is an award-winning filmmaker, a Crossbench Peer in the UK House of Lords and the Founder and Chair of the 5Rights Foundation. In this eye-opening discussion, Beeban vividly describes how the seed for 5Rights was planted while getting up close and personal with teenagers navigating the physical and digital realms ‘In Real Life’. Beeban sounds a resounding alarm about why treating all humans as equal on the internet is regressive, and how existing business models have created a perfect societal storm, especially for children. Intertwining the voices of these underserved and underrepresented stakeholders with some shocking facts, Beeban illustrates the true impact of the current digital experiment on young people. In that vein, Kimberly and Beeban examine behaviors we implicitly condone and, in fact, promote in the digital realm that would never pass muster in so-called real life. Speaking to the brilliantly terrifying Twisted Toys campaign, Beeban shows how storytelling can make these critical yet oft sensitive topics accessible. Finally, Beeban speaks about critical breakthroughs such as the Age-Appropriate Design Code, positive action being taken by digital platforms in response and the long road still ahead. A transcript of this episode can be found here. Our next episode features Giselle Mota. Giselle is a Principal Consultant for the Future of Work at ADP where she advises organizations on human agency, diversity and learning in the age of AI. Subscribe to Pondering AI now so you don’t miss it.

Nov 3, 2021 • 33min
Is AI-Driven Sustainability Sustainable? with Vincent de Montalivet
Vincent de Montalivet is the Global AI Sustainability Leader at Capgemini where he develops strategies to use AI to combat climate change and drive corporate net-zero initiatives. In this forthright discussion, Vincent charts his path from supply chain engineering to his current position at the crossroads of data, IT and sustainability. Vincent stresses this is the ‘decade of action’ and highlights cutting-edge AI applications enabling the turn from simulation to accountability in real time. Addressing fears about AI, Vincent shows how it enables rather than replaces human expertise. In that vein, Kimberly and Vincent have a frank discussion about whether AI for environmental good balances AI’s own appetite for energy. Vincent examines different aspects of the argument and shares recent research, facts and figures to shed light on the debate. He describes why AI is not a silver bullet, why AI is not always required and emerging research into making AI itself green. Vincent then provides a 3-step roadmap for corporate sustainability initiatives. Discussing emerging innovations, Vincent pragmatically points out that we are only addressing 3% of the green use cases that can be addressed with AI today. He rightfully suggests focusing there. A transcript of this episode can be found here. Our next episode features Baroness Beeban Kidron. She is the Founder and Chair of the 5Rights Foundation, which is leading the fight to protect children’s rights and well-being in the digital realm. Subscribe to Pondering AI now so you don’t miss it.

Oct 20, 2021 • 48min
The Case for Humanizing Technology with David Ryan Polgar
David Ryan Polgar is the Founder of All Tech is Human. He is a leading tech ethicist, an advocate for human-centric technology, and an advisor on improving social media and crafting a better digital future. In this timely discussion, David traces his not-so-unlikely path from practicing law to being a standard bearer for the responsible technology movement. He artfully illustrates the many ways technology is altering the human experience and makes the case for “no application without representation”. Arguing that many of AI’s misguided foibles stem from a lack of imagination, David shows how all paths to responsible AI start with diversity. Kimberly and David debunk the myth of the ethical superhero but agree there may be a need for ethical unicorns. David expounds on the need for expansive education, why non-traditional career paths will become traditional and the benefits of thinking differently. Acknowledging the complex, nuanced problems ahead, David advocates for space to air constructive, critical, and, yes, contrarian points of view. While disavowing 80s sitcoms, David celebrates youth intuition, bemoans the blame game, prioritizes progress over problem statements, and leans into our inevitable mistakes. Finally, David invokes a future in which responsible tech is so in vogue it becomes altogether unremarkable. A transcript of this episode can be found here. Our next episode features Vincent de Montalivet, leader of Capgemini’s global AI Sustainability program. Vincent will help us explore the yin and yang of AI’s relationship with the environment. Subscribe now to Pondering AI so you don’t miss it.

Oct 6, 2021 • 46min
Your (Personal) Digital Twin with Dr. Valérie Morignat PhD
Dr. Valérie Morignat is the CEO of Intelligent Story and a leading advisor on the creative economy. She is a true polymath working at the intersection of art, culture, and technology. In this perceptive discussion, Valérie illustrates how cultural legacies inform technology and innovation today. Tracing a path from storytelling in caves to modern Sci-Fi, she proves that everything new takes (a lot of) time. Far from theoretical, Valérie shows how this philosophical understanding helps business innovators navigate the current AI landscape. Discussing the evolution of VR/AR, Valérie highlights the existential quandary created by our increasingly fragmented digital identities. Kimberly and Valérie discuss the pillars of responsible innovation and the amplification challenges AI creates. Valérie shares the power of AI to teach us about ourselves and increase human learning, creativity, and autonomy. Assuming, of course, we don’t encode ancient, spurious classification schemes or aggravate negative behaviors. She also describes our quest for authenticity and flipping the script to search for the real in the virtual. Finally, Valérie sketches a roadmap for success including executive education and incremental adoption to create trust and change our embedded mental models. A transcript of this episode can be found here. Our next episode features David Ryan Polgar, founder of All Tech is Human. David is a leading tech ethicist and responsible technology advocate who is well-known for his work on improving social media. Subscribe now so you don’t miss it.