

Nick Joseph
Head of Training at Anthropic, a leading AI company. He leads the training of large language models and is involved in developing Anthropic's Responsible Scaling Policy.
Top 3 podcasts with Nick Joseph
Ranked by the Snipd community

72 snips
Aug 22, 2024 • 2h 29min
#197 – Nick Joseph on whether Anthropic's AI safety policy is up to the task
Nick Joseph, co-founder and Head of Training at Anthropic, discusses AI safety policies in depth. He outlines the Responsible Scaling Policy, emphasizing the need for safeguards as AI capabilities grow. The conversation touches on the complexities of training models and the importance of external oversight. Nick addresses the financial implications of safety testing, the need for evolving safety measures, and the challenge of securing AI models against misuse. He concludes by highlighting the vital role of independent auditing and effective governance in AI development.

11 snips
Sep 25, 2024 • 2h 42min
Anthropic's Responsible Scaling Policy, with Nick Joseph, from the 80,000 Hours Podcast
Nick Joseph, Head of Training at Anthropic, discusses responsible scaling in AI development. He examines Anthropic's proactive safety measures and the importance of transparency about AI risks. Joseph emphasizes the need for public scrutiny and collaboration among tech companies to strengthen safety frameworks. He also shares insights into career opportunities in AI safety and the evolving landscape of AI technology, advocating for rigorous testing and ethical practices to navigate potential challenges.

Sep 5, 2024 • 22min
Highlights: #197 – Nick Joseph on whether Anthropic’s AI safety policy is up to the task
Nick Joseph, an expert at Anthropic, dives into the intricacies of AI safety policies. He discusses the Responsible Scaling Policy (RSP) and its pivotal role in managing AI risks. Nick expresses his enthusiasm for RSPs but shares concerns about their effectiveness when not fully embraced by teams. He weighs the case for wider safety buffers and alternative safety strategies. Additionally, he encourages industry professionals to consider capabilities roles as a way to help develop robust safety measures. A thought-provoking chat on securing the future of AI!