I think these are relevant, agentic, intelligent systems. They're not intelligent or agentic or coherent enough yet to be truly dangerous, but I think that's just a matter of time. Trying to get specific about how the world is going to end is obviously going to be extremely fraught, but I still think it can be useful to think through hypotheticals. So what would one scenario you imagine be where it goes from AutoGPT version five to the end of the world? Man. Let me ask you this: assume you have an AI system as smart as John von Neumann or whatever. It has no emotions. It doesn't sleep.
Read the full transcript here.
Does AI pose a near-term existential risk? Why might existential risks from AI manifest sooner rather than later? Can't we just turn off any AI that gets out of control? Exactly how much do we understand about what's going on inside neural networks? What is AutoGPT? How feasible is it to build an AI system that's exactly as intelligent as a human but no smarter? What is the "CoEm" AI safety proposal? What steps can the average person take to help mitigate risks from AI?
Connor Leahy is CEO and co-founder of Conjecture, an AI alignment company focused on making AI systems boundable and corrigible. Connor founded and led EleutherAI, the largest online community dedicated to LLMs, which acted as a gateway for people interested in ML to upskill and learn about alignment. With capabilities increasing at breakneck speed, and our ability to control AI systems lagging far behind, Connor moved on from EleutherAI's volunteer, open-source model to a full-time, closed-source model, working to solve alignment via Conjecture.