
Mystery AI Hype Theater 3000

Episode 36: About That 'Dangerous Capabilities' Fanfiction (feat. Ali Alkhatib), June 24 2024

Jul 19, 2024
Computer scientist Ali Alkhatib discusses Google DeepMind's flawed study on the "dangerous capabilities" of large language models, emphasizing the social implications of AI. The episode also explores poorly developed AI models, misinterpretations in AI research, hacker tools that extract data, and the limits of tech-news solutions, while challenging hierarchical thinking in AI benchmarks and the deceptive rhetoric used to promote harmful ideas.
01:02:00


Podcast summary created with Snipd AI

Quick takeaways

  • Big tech companies use preprint servers to sidestep peer review and shape the public conversation about AI.
  • Anthropic's Claude 3.5 Sonnet benchmarks are criticized for their hierarchical implications.

Deep dives

Claude 3.5 Sonnet Benchmarks Segmented by Educational Stages Spark Controversy

In a LinkedIn post from Anthropic, the Claude 3.5 Sonnet benchmarks are shown segmented into categories like 'graduate-level reasoning,' 'undergraduate-level knowledge,' and 'grade school math,' drawing criticism for the hierarchical implications embedded in those labels.
