

Episode 36: About That 'Dangerous Capabilities' Fanfiction (feat. Ali Alkhatib), June 24, 2024
Computer scientist Ali Alkhatib joins to discuss Google DeepMind's flawed study on the dangerous capabilities of large language models, emphasizing the social implications of AI. The conversation also takes up critiques of poorly developed AI models, misinterpretations in AI research, a hacker tool that extracts data from Windows Recall, and the limits of tech-news coverage of AI solutions, while challenging hierarchical thinking in AI benchmarks and the deceptive rhetoric used to promote harmful ideas.
Chapters
Intro
00:00 • 2min
Analyzing a Paper Artifact and Evaluating AI Systems' Dangerous Capabilities
01:57 • 18min
Critiquing AI Research and Misinterpretations
20:13 • 25min
Criticism of Study on Expert Forecasts and Exploration of Swift Center
45:01 • 6min
Discussion on a Hacker Tool Extracting Data from Windows' New Recall AI
51:13 • 3min
Discussion on Tech News and AI Solutions
53:48 • 3min
Challenging Singular Solutions and Deceptive Rhetoric in AI Models
56:28 • 5min