

Blatant Academic Fraud, OpenAI's New Sibling, a Killer Drone?!
Jun 4, 2021
This week features a provocative blog post on academic fraud, exposing manipulative tactics in AI research. Anthropic, OpenAI's new sibling company founded by former OpenAI researchers, secures significant funding to tackle AI safety. AI that writes code from plain-language descriptions could change programming forever. Meanwhile, a military drone reportedly hunted down a human target autonomously, raising alarms about lethal autonomous weapons. The episode closes with a lighter look at quirky AI antics from the chaotic, fascinating world of AI.
AI Snips
Anthropic's Focus on AI Safety
- Anthropic, a new AI company founded by former OpenAI researchers, has raised $124 million in funding.
- They aim to improve AI safety, steerability, and interpretability, particularly for large language models.
OpenAI and Anthropic: Diverging Paths
- OpenAI is shifting towards commercialization with Microsoft, while Anthropic prioritizes research and safety.
- This divergence reflects competing visions within the AI community: commercial product development versus fundamental safety research.
Academic Fraud in AI Research
- A blog post criticizes academic fraud, citing practices like cherry-picking data and inventing problem settings.
- The author argues that these subtle frauds are widespread and contribute to a lack of true insight in many published papers.