
Connor Leahy on AGI and Cognitive Emulation
Future of Life Institute Podcast
The Importance of Causal Stories in AGI Design
Q4: Do you imagine CoEms as a sort of additional element on top of the most advanced models, something that interacts with these models and limits their output to what is humanly understandable, or human-like?

So far, this is all background. I think probably any realistic safe AGI design will have this structure, or look something like it: it will have some black boxes, some white boxes, and causal stories of safety. All of this is background information.

And why is it that all plausible designs will involve this? Is it because the black boxes are where the most advanced capabilities come from, and they will have to be involved somehow?

At this current moment,