This chapter explores responsible scaling policies for AI systems, including forecasting capabilities, running tests during training, and handling unexpected capabilities. It also discusses the authority and role of the OpenAI board in governing the company and ensuring that its pursuit of profit benefits humanity, and weighs the relative safety of closed-source and open-source approaches, emphasizing the need for rigorous testing and for board or government involvement in decisions to release AI models.
Platformer's Casey Newton moderates a conversation on ethics in artificial intelligence at Code 2023, with Ajeya Cotra, Senior Program Officer at Open Philanthropy, and Helen Toner, Director of Strategy at Georgetown University's Center for Security and Emerging Technology. The panel discusses the risks and rewards of the technology, as well as best practices and safety measures.
Recorded on September 27th in Los Angeles.