
Joscha Bach presents "Machine Consciousness and Beyond" | dAGI Summit 2025
Bach reframes AI as the endpoint of a long philosophical project to “naturalize the mind,” arguing that modern machine learning operationalizes a lineage from Aristotle to Turing in which minds, worlds, and representations are computational state-transition systems. He claims that computer science effectively rediscovers animism, treating software as self-organizing, energy-harvesting “spirits,” and that consciousness is not a metaphysical mystery but a simple coherence-maximizing operator required by any self-organizing agent. Current LLMs, in his view, only simulate phenomenology by reproducing the “deepfaked” descriptions of experience found in human text, but the universality of learning systems suggests that, trained on the right structures, artificial models could converge on the same internal causal patterns that give rise to consciousness. Bach proposes a framework for mapping biological consciousness onto machine substrates and a research program (the California Institute for Machine Consciousness, CIMC) to formalize, test, and potentially reproduce such mechanisms, arguing that understanding consciousness is essential for culture, ethics, and future coexistence with artificial minds.
Key takeaways
▸ Speaker & lens: Cognitive scientist and AI theorist aiming to unify philosophy of mind, computer science, and modern ML into a single computationalist worldview.
▸ AI as philosophical project: Modern AI fulfills the ancient ambition to map mind into mathematics; computation provides the only consistent language for modeling reality and experience.
▸ Computationalist functionalism: Objects = state-transition functions; representations = executable models; syntax = semantics in constructive systems (see the first sketch after this list).
▸ Cyber-animism: Software as “spirits”—self-organizing, adaptive control processes; living systems differ from dead ones by the software they run.
▸ Consciousness as function: A coherence-maximizing operator that integrates mental states; second-order perception that stabilizes working memory; emerges early in development as a prerequisite for learning.
▸ LLMs & phenomenology: Current models aren’t conscious; they simulate discourse about consciousness using data full of “deepfaked” phenomenology. A Turing test cannot detect consciousness because performance ≠ mechanism.
▸ Universality hypothesis: Different architectures optimized for the same task tend to converge on similar internal causal structures; this suggests that consciousness-like organization could arise wherever it is the simplest solution to coherence and control (see the second sketch after this list).
▸ Philosophical zombies: Behaviorally identical but non-conscious agents may be more complex than conscious ones; evolution chooses simplicity → consciousness may be the minimal solution for self-organized intelligence.
▸ Language vs embodiment: Language may contain enough statistical structure to reconstruct much of reality; embodiment may not be strictly necessary for convergent world models.
▸ Testing for machine consciousness: Requires specifying phenomenology, function, search space, and success criteria—not performance metrics.
▸ CIMC agenda: Build frameworks and experiments to recreate consciousness-like operators in machines; explore implications for ethics, interfaces, and coexistence with future minds.
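
To make the “objects as state-transition functions” framing concrete, here is a minimal Python sketch (an illustration of the idea, not code from the talk; all names are hypothetical). A toy “world” object is a transition function over states, and an observer’s representation is simply another transition function it can execute to predict the object, so running the model is what interpreting it amounts to.

```python
# Minimal sketch: an "object" as a state-transition function, and a
# "representation" as another executable transition function an observer
# runs to predict it. Illustrative only; not from the talk.

from typing import Callable, List, Tuple

State = Tuple[int, int]               # e.g. (position, velocity), arbitrary units
Transition = Callable[[State], State]

def bouncing_counter(state: State) -> State:
    """The 'world' object: a counter that reverses direction at 0 and 10."""
    pos, vel = state
    if not 0 <= pos + vel <= 10:
        vel = -vel
    return (pos + vel, vel)

def learned_model(state: State) -> State:
    """An observer's representation: an executable model of the same dynamics.
    Here it happens to be exact; in general it is an approximation."""
    return bouncing_counter(state)

def rollout(f: Transition, state: State, steps: int) -> List[State]:
    """Unroll a transition function step by step."""
    trace = [state]
    for _ in range(steps):
        state = f(state)
        trace.append(state)
    return trace

# The representation is "about" the object to the extent their rollouts agree.
assert rollout(learned_model, (3, 1), 12) == rollout(bouncing_counter, (3, 1), 12)
```

On this reading, “syntax = semantics in constructive systems” just means the model’s meaning is exhausted by what it does when executed: two representations with identical rollouts represent the same object.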
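The universality claim is an empirical one. A common way to probe it (my framing, not a method named in the talk) is to compare the hidden activations of differently built models on the same inputs with a representational-similarity measure such as linear CKA (centered kernel alignment). The sketch below shows the metric itself on synthetic activations; the data and variable names are illustrative assumptions.

```python
# Minimal sketch: linear CKA as one way to quantify whether two models'
# internal representations share structure. Synthetic data stands in for
# real activations; illustrative only.

import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between activation matrices X (n x d1) and Y (n x d2),
    where each row holds one input example's hidden representation."""
    X = X - X.mean(axis=0)                      # center each feature
    Y = Y - Y.mean(axis=0)
    alignment = np.linalg.norm(Y.T @ X, "fro") ** 2   # cross-covariance term
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return float(alignment / (norm_x * norm_y))

# Toy check: representations that differ only by a rotation score ~1.0,
# while unrelated random representations score far lower.
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(256, 32))             # "architecture A" activations
rotation, _ = np.linalg.qr(rng.normal(size=(32, 32)))
acts_b = acts_a @ rotation                      # same structure, different basis
acts_c = rng.normal(size=(256, 32))             # unrelated model

print(round(linear_cka(acts_a, acts_b), 3))     # ~1.0
print(round(linear_cka(acts_a, acts_c), 3))     # small (~0.1 here)
```

The design choice matters for the talk’s argument: a basis-invariant measure like CKA asks whether two systems share causal-functional structure rather than identical weights, which is the kind of convergence the universality hypothesis predicts.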
