Ep. 234 - We Are All Just Conscious Turing Machines w/Dr. Lenore Blum
Jun 9, 2023
Dr. Lenore Blum, mathematician, discusses the recent petition calling for AI researchers to study consciousness. She and the host explore the Conscious Turing Machine model, pain and empathy in robots, information processing in the theater of consciousness, tacit knowledge in infants, and the limitations of AI and computational theories of mind.
The Conscious Turing Machine (CTM) model explains consciousness through a global workspace and chunk-based information processing.
The CTM model is influenced by Baars' theater model of consciousness, mimicking communication among processors without the need for an executive director.
The CTM model can be applied to AGI systems, providing insights into pathologies, special cases, and the emergence of consciousness through processor interaction.
Deep dives
The Conscious Turing Machine Model
In the Conscious Turing Machine (CTM) model, conscious awareness is explained through the concept of a global workspace. Long-term memory processors, which represent unconscious processes, compete to get their information into short-term memory, whose contents are then broadcast to the entire system. The model incorporates chunk-based information processing: small amounts of information (chunks) are weighted and put into competition for short-term memory. Link formation between processors transforms conscious processing into unconscious processing. The model also includes specialized processors for inner speech, inner vision, inner sensation, and the model of the world. The model-of-the-world processor builds models of the external and internal world using prediction, feedback, and learning; weights are adjusted based on feedback in the style of the sleeping-experts algorithm. The CTM model offers explanations for phenomena such as blindsight and can be applied to AGI systems.
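The competition-and-broadcast cycle described above can be sketched in a few lines. This is a heavily simplified toy, not the Blums' actual construction: the paper's probabilistic up-tree competition is approximated by a single argmax, and all names here (`Chunk`, `Processor`, `compete`, `broadcast`, `update_weight`) are illustrative assumptions.

```python
class Chunk:
    """A small unit of information submitted by a processor."""
    def __init__(self, source, content, weight):
        self.source = source      # which long-term-memory processor produced it
        self.content = content    # the piece of information itself
        self.weight = weight      # signed importance/valence score

class Processor:
    """A long-term-memory processor that receives global broadcasts."""
    def __init__(self, name):
        self.name = name
        self.inbox = []
    def receive(self, chunk):
        self.inbox.append(chunk)

def compete(chunks):
    # Winner-take-all by |weight|; the CTM uses a probabilistic
    # up-tree competition, collapsed here to a single argmax.
    return max(chunks, key=lambda c: abs(c.weight))

def broadcast(winner, processors):
    # Short-term memory sends the winning chunk to every processor.
    for p in processors:
        p.receive(winner)

def update_weight(chunk, feedback, lr=0.5):
    # Multiplicative adjustment, loosely in the spirit of
    # sleeping-experts updates: positive feedback amplifies,
    # negative feedback dampens.
    chunk.weight *= (1 + lr * feedback)
    return chunk.weight
```

The point of the sketch is that nothing plays "executive director": which chunk becomes conscious falls out of the weights alone, and the feedback loop reshapes those weights over time.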
Blending Baars' Theater Model with Turing Machines
The CTM model is influenced by Baars' theater model of consciousness. In the CTM, long-term memory processors represent the audience, while short-term memory acts as the stage from which information is broadcast. The competition among processors to have their information broadcast mimics Baars' notion of communication among the audience. The CTM simplifies the model by eliminating the need for an executive director; instead, it focuses on the formation of bidirectional links between processors as unconscious processing becomes more established. The model also incorporates a multimodal inner language called Brainish and accounts for learning through feedback and weight adjustments.
Exploring Pathologies and Special Cases
The CTM model allows for the exploration of pathologies and special cases. It can account for phenomena like blindsight, where information is processed unconsciously and bypasses short-term memory. By breaking the path to short-term memory and linking directly to the motor processor, the CTM replicates the experience of blindsight in a simplified manner. The model also explains phenomena like change blindness, where changes in the environment go unnoticed because they are handled by unconscious processing. The CTM's focus on weight adjustments and information competition offers insight into how the brain may function in the presence of various pathologies and special cases.
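The blindsight explanation above has a simple structural reading: the visual output never reaches short-term memory (so it is never broadcast, hence never "experienced"), yet a direct link still drives the motor processor. The toy below is a hedged illustration of that bypass only; the function and variable names are assumptions, not the paper's terminology.

```python
def run_trial(stimulus, stm_path_intact):
    """Simulate one trial. `stm_path_intact=False` models blindsight:
    the route from the visual processor to short-term memory is cut,
    but the direct visual-to-motor link still fires."""
    broadcast_log = []   # what reaches short-term memory (conscious awareness)
    motor_log = []       # what the motor processor acts on

    visual_output = f"detected:{stimulus}"
    if stm_path_intact:
        broadcast_log.append(visual_output)   # normal conscious route
    motor_log.append(visual_output)           # direct link fires either way

    return broadcast_log, motor_log
```

With the path intact, the stimulus is both broadcast and acted on; with it cut, nothing is broadcast, yet the action still succeeds, which is the behavioral signature of blindsight the CTM aims to capture.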
Applying the CTM Model to AGI
The CTM model can be applied to artificial general intelligence (AGI) systems. It addresses the need for a global broadcast system, short-term memory, and long-term memory processors in AGI design. The CTM's approach to consciousness and learning offers a simplified framework without the need for an executive director, making it suitable for AGI development. The model suggests that consciousness can emerge from the interaction of various processors and the formation of bidirectional links. The CTM's multimodal language and emphasis on weight adjustments align with the learning needs of AGI systems.
The Global Workspace Model: A Good Model for AGI
The podcast episode explores the global workspace model as a potential model for artificial general intelligence (AGI). The global workspace is presented as a decentralized system in which different processors in an AGI independently work on tasks according to their interests and capabilities. This model lets an AGI gather and process information effectively without a central director. The famous story of how mathematician Henri Poincaré's unconscious processes led to a breakthrough in solving a problem is used as an example supporting the effectiveness of the global workspace model. Overall, the episode highlights the potential of the global workspace model in the development of AGI.
The Feeling of Consciousness in the CTM Model
The Conscious Turing Machine (CTM) model discussed in the podcast episode suggests that consciousness emerges from the machine's process of creating models of the world. The machine's ability to label elements of its model as self or non-self and its capacity for planning and prediction contribute to its development of a model of self-consciousness. The CTM model emphasizes the importance of the machine's interpretation and labeling of its model rather than ascribing consciousness to individual processors. The podcast also touches on the role of dreaming, meditation, and the possibility of machines being hypnotized within the context of the CTM model.
In episode 234 of the Parker's Pensées Podcast, I'm joined by Dr. Lenore Blum to discuss a recent petition from the Association for Mathematical Consciousness Studies calling for AI researchers to study consciousness. The petition is called "The responsible development of AI agenda needs to include consciousness research" and can be found here: https://amcs-community.org/open-letters/
We also discuss Dr. Blum's work on developing a model of consciousness, which she and her husband, Dr. Manuel Blum, call the "Conscious Turing Machine" model.
If you like this podcast, then support it on Patreon for $3, $5, or more a month. Any amount helps, and for $5 you get a Parker's Pensées sticker and instant access to all the episodes as I record them instead of waiting for their release date. Check it out here:
Patreon: https://www.patreon.com/parkers_pensees
If you want to give a one-time gift, you can give at my Paypal:
https://paypal.me/ParkersPensees?locale.x=en_US
Check out my merchandise at my Teespring store: https://teespring.com/stores/parkers-penses-merch
Come talk with the Pensées community on Discord: dsc.gg/parkerspensees
Sub to my Substack to read my thoughts on my episodes: https://parknotes.substack.com/
Check out my blog posts: https://parkersettecase.com/
Check out my Parker's Pensées YouTube Channel:
https://www.youtube.com/channel/UCYbTRurpFP5q4TpDD_P2JDA
Check out my other YouTube channel on my frogs and turtles: https://www.youtube.com/c/ParkerSettecase
Check me out on Twitter: https://twitter.com/trendsettercase
Instagram: https://www.instagram.com/parkers_pensees/

0:00 - Who is Dr. Lenore Blum?
15:34 - Why AI Engineers Gotta Study Consciousness!
27:16 - What is Consciousness?
33:59 - Global Workspace and Conscious Turing Machines
48:55 - Infants are MORE conscious than adults!?
1:08:30 - Unity problems for Large Language Models and Machine Consciousness?
1:16:53 - Can you hypnotize a Turing machine?
1:19:07 - The Chinese Room and Chinese Nation Objections to Machine Functionalism
1:25:49 - Lucas/Penrose's use of Gödel's Incompleteness Theorems Against AI Doesn't Work?