The Importance of Engaging Experts in Predictive Housing Valuation Models
It's hard to just say, here's GPT-4, tell me, you know, what considerations we should take into account. You really have to go into the nitty-gritty of how different people are using it. We should absolutely be engaging more with sectoral experts. That's a really key insight.
In episode 70 of The Gradient Podcast, Daniel Bashir speaks to Irene Solaiman.
Irene is an expert in AI safety and policy and the Policy Director at Hugging Face, where she conducts social impact research and develops public policy. In her former role at OpenAI, she initiated and led bias and social impact research in addition to leading public policy. She built AI policy at Zillow Group and advised policymakers on responsible autonomous decision-making and privacy as a fellow at Harvard’s Berkman Klein Center.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (02:00) Intro to Irene and her work
* (03:45) What tech people need to learn about policy, and vice versa
* (06:35) Societal impact—words and reality, Irene’s experience
* (08:30) OpenAI work on GPT-2 and release strategies (yes, this was recorded on Pi Day)
* (11:00) Open-source proponents and release
* (14:00) What does a multidisciplinary approach to working on AI look like?
* (16:30) Thinking about end users and enabling contributors with different sets of expertise
* (18:00) “Preparing for AGI” and current approaches to release
* (21:00) Who constitutes a researcher? What constitutes safety and who gets resourced? Limitations in red-teaming potentially dangerous systems.
* (22:35) PALMS and Values-Targeted Datasets
* (25:52) PALMS and RLHF
* (27:00) Homogenization in foundation models, cultural contexts
* (29:45) Anthropic’s moral self-correction paper and Irene’s concerns about marketing “de-biasing” and oversimplification
* (31:50) Data work, human systemic problems → AI bias
* (33:55) Why do language models get more toxic as they get larger? (if you have ideas, let us know!)
* (35:45) The gradient of generative AI release, Irene’s experience with the open-source world, tradeoffs along the release gradient
* (38:40) More on Irene’s orientation towards release
* (39:40) Pragmatics of keeping models closed, dealing with open-source by force
* (42:22) Norm setting for release and use, normalization of documentation on social impacts
* (46:30) Race dynamics :(
* (49:45) Resource allocation and advances in ethics/policy, conversations on integrity and disinformation
* (53:10) Organizational goals, balancing technical research with policy work
* (58:10) Thoughts on governments’ AI policies, impact of structural assumptions
* (1:04:00) Approaches to AI-generated sexual content, need for more voices represented in conversations about AI
* (1:08:25) Irene’s suggestions for AI practitioners / technologists
* (1:11:24) Outro
Links:
* Irene’s homepage and Twitter
* Papers
* Release Strategies and the Social Impacts of Language Models
* Hugh Zhang’s open letter in The Gradient from 2019
* Process for Adapting Large Models to Society (PALMS) with Values-Targeted Datasets
* The Gradient of Generative AI Release: Methods and Considerations