It seems to me like there are two pieces here. One is around lots of diverse people or agents contributing to the system. The other is the incrementalism: it's not a flash of lightning where someone makes a huge advance, but rather they're building on lots of building blocks that came before. And I'm wondering how that incremental model impacts the idea of how you would fund innovation.
Read the full transcript here.
Where does innovation come from? How common is it for "lone wolf" scientists to make large leaps in innovation by themselves? How can we imbue AIs with creativity? Or, conversely, how can we apply advances in AI creativity to our own personal creative processes? How do creative strategies that work well for individuals differ from creative strategies that work well for groups? To what extent are models like DALL-E and ChatGPT "creative"? Can machines love? Or can they only ever pretend to love? We've worried a fair bit about AI misalignment, but what should we do about the fact that so many humans are misaligned with humanity's own interests? What might it mean to be "reverent" towards science?
Joel Lehman is a machine learning researcher interested in algorithmic creativity, AI safety, artificial life, and intersections of AI with psychology and philosophy. Most recently he was a research scientist at OpenAI, co-leading the Open-Endedness team (studying algorithms that can innovate endlessly). Previously he was a founding member of Uber AI Labs, the first employee of Geometric Intelligence (acquired by Uber), and a tenure-track professor at the IT University of Copenhagen. With Kenneth Stanley, he co-wrote the popular science book Why Greatness Cannot Be Planned, about what AI search algorithms imply for individual and societal accomplishment. Follow him on Twitter at @joelbot3000 or email him at lehman.154@gmail.com.