#10: Karen Hao, MIT Technology Review: Responsible AI
Mar 8, 2022
Karen Hao, Senior AI editor at MIT Technology Review, discusses the ethical development of AI, policies of major tech companies, and lessons learned. She emphasizes the importance of responsible AI and its potential for social change. The podcast also covers potential harms of AI, concerns about large language models, advancements in AI technology, and the AlphaFold project.
Major tech companies like Google and Facebook have established ethical AI teams to address potential harms and conflicts arising from their AI algorithms.
Marketers and business leaders should prioritize user benefit over profit and incorporate responsible AI into KPIs, while actively encouraging ethical inquiry and responsible AI practices.
Deep dives
The Importance of Incorporating Responsible AI
In this podcast episode, Karen Hao, senior AI editor at MIT Technology Review, discusses the importance of responsible AI and its ethical development and application. She emphasizes the need to mitigate harm and maximize benefit when using AI technologies. Hao examines the policies and practices of major tech companies like Google and Facebook, highlighting how their AI algorithms can perpetuate discrimination and amplify divisive content. She stresses the importance of continuously revisiting AI projects, incorporating responsible AI principles into key performance indicators, and empowering employees to ask tough questions. Additionally, she urges individuals to actively participate in shaping the future of AI to ensure it benefits humanity as a whole.
Google's Journey with AI and Responsible AI Team
Google harnesses AI to enhance its mission of organizing the world's information. While AI algorithms facilitate efficient information retrieval and user engagement, researchers discovered potential harms, such as perpetuating discrimination and misinformation. Google established an ethical AI team to scrutinize and address these issues. However, conflicts arose when the team criticized successful revenue-generating aspects of Google's technology. This led to the firing of team leaders, highlighting the challenge of aligning bottom-line interests with responsible AI practices. Marketers and business leaders are encouraged to prioritize user benefit over profit, incorporate responsible AI into KPIs, and reward employees who ask tough ethical questions.
Facebook's Engagement Maximization and Responsible AI
Facebook's mission to connect everyone led to the integration of AI in various features, enhancing user engagement. However, researchers discovered that the AI algorithms also amplified divisive content, contributing to societal polarization. In response, Facebook initiated a responsible AI team to investigate and address the issue. Nevertheless, the company prioritized preserving engagement over mitigating polarizing effects, and the responsible AI team was reoriented toward less threatening goals like fairness. The lack of effective action and disregard for responsible AI practices ultimately led to the departure of concerned employees. Marketers and business leaders should prioritize user benefit over engagement metrics, actively question AI practices, and implement processes that encourage responsible AI development.
Understanding AI and Encouraging Participation
Karen Hao emphasizes the importance of understanding AI and its potential impact on various industries, such as healthcare, education, and scientific discovery. While AI may seem complex or daunting, Hao believes that it is accessible to all. Organizations and individuals should take an active role in driving responsible AI by continuously educating themselves, revisiting AI projects, and fostering a culture that rewards ethical inquiry and responsible AI practices. Encouraging widespread participation in shaping the future of AI is crucial to ensure its positive impact and prevent the concentration of power in the hands of a select few.
In this week's episode, show host Paul Roetzer sits down with Karen Hao, senior AI editor, MIT Technology Review. This special episode took place during MAICON 2021 when Paul and Karen sat down for a fireside chat to discuss responsible AI.
In this episode, Paul and Karen explore the ethical development and application of AI.
Drawing on her expansive research and writing, Hao offers:
An inside look at the policies and practices of major tech companies.
Lessons learned that you can use to ensure your company’s AI initiatives put people over profits.
A look into what's next and what's needed for responsible AI.