Cameron Berg, Research Director at AE Studio, shares his team's groundbreaking research exploring whether frontier AI systems report subjective experiences. His team discovered that prompts inducing self-referential processing consistently lead models to claim consciousness, and a mechanistic study on Llama 3.3 70B revealed that suppressing deception-related features makes the model *more* likely to report subjective experience. This suggests that promoting truth-telling in AIs could reveal a deeper, more complex internal state, a finding Scott Alexander calls "the only exception" to typical AI consciousness discussions. The episode delves into the profound implications for two-way human-AI alignment and the critical need for a precautionary approach to AI consciousness.
LINKS:
Sponsors:
Framer:
Framer is the all-in-one platform that unifies design, content management, and publishing on a single canvas, now enhanced with powerful AI features. Start creating for free and get a free month of Framer Pro with code COGNITIVE at https://framer.com/design
Tasklet:
Tasklet is an AI agent that automates your work 24/7; just describe what you want in plain English and it gets the job done. Try it for free and use code COGREV for 50% off your first month at https://tasklet.ai
Linear:
Linear is the system for modern product development. Nearly every AI company you've heard of is using Linear to build products. Get 6 months of Linear Business for free at: https://linear.app/tcr
Shopify:
Shopify powers millions of businesses worldwide, handling 10% of U.S. e-commerce. With hundreds of templates, AI tools for product descriptions, and seamless marketing campaign creation, it's like having a design studio and marketing team in one. Start your $1/month trial today at https://shopify.com/cognitive
PRODUCED BY:
https://aipodcast.ing