
Connor Leahy on the State of AI and Alignment Research
Future of Life Institute Podcast
The Importance of Security Mindset in Super Intelligent Systems
The GPT models that are available right now are not being used to create havoc in the world. If they had an alignment technique that actually worked, it should be able to make your less smart systems never, in any scenario, say a bad thing. It should work in almost all cases, or basically all cases, for you. But by default, these are black boxes. Unless you give me a theory, a causal story, about why I should relax my assumptions, then I'm like, well, if it's breakable, it will break. And this is the security mindset. The difference between security mindset and ordinary paranoia is that ordinary paranoia assumes things are safe until proven otherwise. Security mindset assumes things are unsafe until proven