The alignment problem is about building AI that acts in humanity's best interest, neither pursuing goals that conflict with ours nor treating us with indifference. It spans preventing both accidental and intentional misuse of AI, as well as addressing inner alignment problems. Self-improving systems could help solve alignment at a small scale and then be used to align larger models. While there are promising ideas, it remains uncertain how the problem will be solved in the long term. Once AI is sufficiently advanced, however, it could itself become a tool for alignment research. For example, an AI could be instructed not to be racist, and might even self-correct once it fully understands the complexities of racism.
Greylock general partner Reid Hoffman interviews OpenAI CEO Sam Altman. The AI research and deployment company's primary mission is to develop and promote AI technology that benefits humanity. Founded in 2015, the company is best known for its generative transformer model GPT-3, which uses deep learning to produce human-like text, and its image-generation platform DALL-E.
This interview took place during Greylock’s Intelligent Future event, a day-long summit featuring experts and entrepreneurs from some of today’s leading artificial intelligence organizations. You can watch the video of this interview on our YouTube channel here: https://youtu.be/WHoWGNQRXb0
You can read a transcript of this interview here: https://greylock.com/greymatter/sam-altman-ai-for-the-next-era/