LessWrong (Curated & Popular)

LessWrong
May 28, 2024 • 5min

“Maybe Anthropic’s Long-Term Benefit Trust is powerless” by Zach Stein-Perlman

Discussion of Anthropic's unconventional governance mechanism, concerns over the lack of transparency in the Long-Term Benefit Trust, and the potential for stockholder influence. Exploration of the Trust's impact on board elections and power dynamics, along with calls for more transparency and disclosure of the Trust agreement.
May 27, 2024 • 16min

“Notifications Received in 30 Minutes of Class” by tanagrabeast

Exploring the replication of a viral image showing student phone notifications in class, gender differences in notification frequency, the impact of school-related notifications, the challenges teachers face due to constant digital connectivity, and reflections on technology's effect on student engagement.
May 24, 2024 • 8min

“AI companies aren’t really using external evaluators” by Zach Stein-Perlman

The podcast discusses the importance of external evaluators assessing AI models pre-deployment to enhance risk assessment and public accountability. It explores the challenges AI companies like DeepMind and OpenAI face with model evaluation, and emphasizes the need for advance access for safety researchers before deployment and the role of external evaluations in addressing potential risks.
May 24, 2024 • 7min

“EIS XIII: Reflections on Anthropic’s SAE Research Circa May 2024” by scasper

The podcast delves into Anthropic's latest sparse autoencoder research, highlighting brilliant experiments and insights alongside concerns about safety-washing. The author revisits predictions made about what the paper would accomplish, pointing out where it underperformed. Discussion also covers the limitations of Anthropic's interpretability research, concerns about its promotional strategy, and the need to prioritize a safety agenda.
May 22, 2024 • 7min

“What’s Going on With OpenAI’s Messaging?” by ozziegooen

Delve into OpenAI's messaging strategy, balancing transformative AI goals with safety concerns. Explore how the organization navigates public relations, competitive pressures, and investor interests. Analyze communication tactics, the importance of actions over promises, and the challenges of maintaining a coherent message.
May 21, 2024 • 29min

“Language Models Model Us” by eggsyntax

Exploring the ability of language models to deduce personal information from text, and concerns about privacy and manipulation. Analyzing GPT-3.5's accuracy in determining gender, education, and ethnicity. Comparing model performance and discussing individuals' personal interests, routines, and philosophical musings.
May 21, 2024 • 51sec

“Jaan Tallinn’s 2023 Philanthropy Overview”

Jaan Tallinn discusses his philanthropic efforts in 2023, exceeding his commitment by funding $44M worth of grants. Detailed breakdown of fund allocations and the minimum price of ETH. Narrated by TYPE III AUDIO for LessWrong.
May 21, 2024 • 1h 25min

“OpenAI: Exodus” by Zvi

Ilya Sutskever and Jan Leike leave OpenAI amid safety concerns. Allegations of safety neglect, resource shortages, and a culture shift at OpenAI. The departures prompt discussions of AI ethics, the CEO's leadership, and employee rights. Controversy arises over legal tactics, non-disclosure agreements, and ethics within the organization.
May 20, 2024 • 7min

“DeepMind’s ‘Frontier Safety Framework’ is weak and unambitious”

Discussing the weaknesses and lack of ambition in DeepMind's 'Frontier Safety Framework' compared to other AI labs' safety plans, and addressing concerns about internal deployment specifics, evaluation frequency, and the absence of formal commitments.
May 18, 2024 • 11min

“Do you believe in hundred dollar bills lying on the ground? Consider humming”

Introduction. [Reminder: I am an internet weirdo with no medical credentials]

A few months ago, I published some crude estimates of the power of nitric oxide nasal spray to hasten recovery from illness, and speculated about what it could do prophylactically. While working on that piece a nice man on Twitter alerted me to the fact that humming produces lots of nasal nitric oxide. This post is my very crude model of what kind of anti-viral gains we could expect from humming.

I've encoded my model at Guesstimate. The results are pretty favorable (average estimated impact of 66% reduction in severity of illness), but extremely sensitive to my made-up numbers. Efficacy estimates go from ~0 to ~95%, depending on how you feel about publication bias, what percent of Enovid's impact can be credited to nitric oxide, and humming's relative effect. Given how made up speculative some [...]

First published: May 16th, 2024
Source: https://www.lesswrong.com/posts/NBZvpcBx4ewqkdCdT/do-you-believe-in-hundred-dollar-bills-lying-on-the-ground-1

Narrated by TYPE III AUDIO.
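The episode's headline figure is the product of several uncertain inputs (publication bias, nitric oxide's share of Enovid's effect, humming's relative delivery). As a rough illustration of that kind of sensitivity analysis, here is a minimal Monte Carlo sketch in Python; the variable names and ranges are hypothetical placeholders, not the author's actual Guesstimate inputs.

    import random

    def sample_severity_reduction():
        """One draw of the estimated reduction in illness severity from humming,
        modeled as a product of uncertain factors. All ranges below are made up."""
        enovid_effect = random.uniform(0.5, 0.95)             # headline benefit of the nasal spray (placeholder)
        publication_bias_discount = random.uniform(0.3, 1.0)  # how much of that benefit survives publication bias
        nitric_oxide_share = random.uniform(0.3, 1.0)         # fraction of the effect credited to nitric oxide
        humming_relative_effect = random.uniform(0.1, 1.0)    # humming's nitric-oxide delivery vs. the spray
        return enovid_effect * publication_bias_discount * nitric_oxide_share * humming_relative_effect

    draws = sorted(sample_severity_reduction() for _ in range(100_000))
    mean = sum(draws) / len(draws)
    print(f"mean estimated reduction: {mean:.0%}")
    print(f"5th-95th percentile: {draws[len(draws) // 20]:.0%} to {draws[len(draws) * 19 // 20]:.0%}")

Tightening or widening any one of these made-up ranges moves the mean and spread substantially, which is the post's point about sensitivity to its inputs.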
