
Forming your own views on AI safety (without stress!) | Neel Nanda | EA Global: SF 22
EA Talks
Introduction
Neel Nanda is an AI interpretability researcher, currently on sabbatical and doing some independent research. Neel will outline why you might want to form your own views about AI safety, why this can actually be overrated, and common traps, pitfalls, and misconceptions. He'll also give concrete first steps for trying to do this.