Dr. Nirit Weiss-Blatt, author of "The Techlash and Tech Crisis Communication" and the AI Panic newsletter, discusses how AI research is misrepresented in the media, the questionable funding and messaging strategies of some AI safety organizations, and the need for more balanced AI coverage; plus, Google's AI Test Kitchen explores image effects.
Some AI safety organizations use questionable surveys, messaging, and media influence to spread fears about artificial intelligence risks.
Meta (formerly Facebook) has a diverse AI leadership team, with women making up around 60% of the leadership of its Fundamental AI Research lab, bringing inclusivity and diverse perspectives to the development of, and conversation around, AI.
Deep dives
Meta and OpenAI take steps to label AI-generated content
Meta and OpenAI are both making moves to identify and label AI-generated content. Meta has committed to detecting and labeling AI-generated content on its platforms, while OpenAI is adding metadata and visible labels to images created with DALL·E 3. These initiatives aim to give users transparency about the origin and nature of AI-generated content. While the measures are not foolproof, they are steps toward responsible AI use and a better-informed online environment.
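The episode doesn't detail how the metadata works, but OpenAI has said the DALL·E 3 identifiers follow the C2PA content-credentials standard, which embeds a signed provenance manifest in the image file. As a loose illustration only, not the method either company confirmed here, the Python sketch below does a crude byte scan for the JUMBF/C2PA markers such manifests typically contain; a real check would use a proper C2PA verifier and validate the signatures.

```python
# Crude heuristic check for embedded C2PA provenance metadata.
# C2PA manifests are stored in JUMBF boxes, so their byte markers
# (the "jumb" box type and "c2pa" labels) usually appear in the file.
# This is NOT a validator: it checks nothing cryptographically, and
# metadata is easily stripped, so a negative result proves nothing.

import sys

C2PA_MARKERS = (b"jumb", b"c2pa")

def looks_like_c2pa(path: str) -> bool:
    """Return True if the file's raw bytes contain C2PA/JUMBF markers."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in C2PA_MARKERS)

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "possible C2PA metadata" if looks_like_c2pa(path) else "no C2PA markers found"
        print(f"{path}: {verdict}")
```

Embedded metadata like this is fragile: screenshots and platform re-encoding strip it, which is one reason companies pair it with visible labels rather than relying on it alone.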
Meta stands out for its diverse AI leadership team
Meta, formerly known as Facebook, stands out for the diversity of its AI leadership: around 60% of the leadership team at its Fundamental AI Research lab are women. That diversity extends to viewpoint as well, with researchers and experts on the team who have been openly critical of AI. This is a positive step toward inclusivity and diverse perspectives in the development and conversation around AI.
Google launches image effects and OpenAI adds metadata to DALL·E 3
Google's AI Test Kitchen has introduced image effects, letting users generate images from prompts and explore creative possibilities. Meanwhile, OpenAI is adding metadata and visible identifiers to DALL·E 3 images to document their origin and nature. Both developments reflect the ongoing effort to expand the capabilities, control, and transparency of AI-generated content.
This week, Jason Howell and Jeff Jarvis welcome Dr. Nirit Weiss-Blatt, author of "The Techlash and Tech Crisis Communication" and the AI Panic newsletter, to discuss how some AI safety organizations use questionable surveys, messaging, and media influence to spread fears about artificial intelligence risks, backed by hundreds of millions in funding from groups with ties to Effective Altruism.
INTERVIEW
Introduction to guest Dr. Nirit Weiss-Blatt
Research on how media coverage of tech shifted from optimistic to pessimistic
AI doom predictions in media
AI researchers predicting human extinction
Criticism of annual AI Impacts survey
Role of organizations like MIRI, FLI, Open Philanthropy in funding AI safety research
Using fear to justify regulations
Need for balanced AI coverage
Potential for backlash against AI safety groups
With influence comes responsibility and scrutiny
The challenge of responsible AI exploration
Need to take concerns seriously and explore responsibly
NEWS BITES
Meta's AI teams feature strong female leadership
Meta embraces labeling of GenAI content
Google's AI Test Kitchen and image effects generator