Luc Steels is Professor Emeritus of Artificial Intelligence with a focus on enactive approaches, and Takashi Ikegami is a Professor at the University of Tokyo specializing in complex systems. Together they explore how AI compares with human cognition, raising questions about participatory sense-making and the possibility of an enactive AI. The discussion critiques data-driven models, emphasizing the ethical dangers of AI and the human misuse of technology; it also highlights creativity's connection to humanity and reflects on experimental robotics designed to encourage interaction and understanding.
Episode duration: 01:45:47
INSIGHT
The Narrowing of AI
AI research has shifted from cybernetics and cognitive reasoning to a narrow focus on deep learning and data-driven models.
This shift limits the broader possibilities of AI, including its connection to artificial life and embodied intelligence.
INSIGHT
Artificial Life's Bottom-Up Approach
Artificial life uses a bottom-up approach, starting from basic elements to understand how lifelike behavior emerges.
This contrasts with traditional AI's symbol-first approach and allows for exploring the subjective experience of intelligence.
INSIGHT
Sense-making as 'Scissors'
Sense-making involves creating relevant concepts to understand the world, acting as 'scissors' to cut it up usefully.
Participatory sense-making adds the dimension of coordinating this 'cutting up' process with others through interaction.
Klara and the Sun
Kazuo Ishiguro
Set in a dystopian future, 'Klara and the Sun' follows the story of Klara, an Artificial Friend (AF) who is purchased by a mother for her ailing daughter, Josie. Klara, powered by solar energy, develops a deep bond with Josie and becomes obsessed with the Sun, which she believes has the power to save Josie's life. The novel delves into themes of loneliness, the nature of consciousness, and the societal implications of genetic engineering and artificial intelligence. Through Klara's observations and interactions, the book explores the human condition and the complexities of love, faith, and sacrifice in a world marked by social inequality and technological advancement.
Ethics
R. H. M. Elwes
Baruch Spinoza
Written between 1661 and 1675 and published posthumously in 1677, *Ethics* is a comprehensive philosophical work divided into five parts. It addresses the nature of God, concluding that God is intrinsic to the universe rather than outside it. The treatise dissects the human mind and body, explores the notion of free will and of good and evil, and analyzes the origin and strength of the emotions. Spinoza argues that reason is the sole means to achieve virtue and freedom from emotional bondage. The work is characterized by its use of Euclid's step-by-step logical method to prove its propositions.
Rarely have I felt the stakes of a conversation to be so high, so thorny and complex. For this conversation on “Computing Differently,” I sat down with Dr Luc Steels and Dr Takashi Ikegami, two of the world’s preeminent researchers in the fields of Artificial Intelligence, Artificial Life, and robotics, but researchers who approach the question of AI from a decidedly divergent perspective: that of the enactive approach and participatory sense-making. We set out not only to define the current stakes of AI research, but to probe the outer reaches of each guest’s thinking about AI and to name some of the intractable questions in the field today. How close does the apparent sense-making of a robot come to human sense-making? What defines participatory sense-making as a distinctly human activity? Can there be such a thing as an “enactive AI”? If so, what insights might it afford us about human cognition, and about AI itself? How are we to apply appropriate caution when discussing the current frontiers of AI research? Where should its priorities lie? How can we grapple with the very real dangers of AI already at hand, such as the hypernormativity of predictive systems that propagate harmful biases and drive information pollution? As Luc Steels points out, it is not so much the AI systems themselves that we ought to fear, but the human uses and misuses of them, and the exponential looping effect that takes hold between human and machine.
From our discussion of the basics of robotics and large language models emerged some of the most limpid definitions of participatory sense-making I’ve heard yet, and both speakers took great care to clarify the basic terms of the discussion, terms that have too often been obscured by the popular media. Whether or not you feel you have a stake in the ongoing AI debate, this episode sheds light on so many of the fundamental questions of what it means to be a creative, enactive, and participatory being in the world today — in short, what it means to be human.