

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the "LessWrong (30+ karma)" feed.
Episodes

Dec 2, 2023 • 23min
Thoughts on “AI is easy to control” by Pope & Belrose
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Quintin Pope & Nora Belrose have a new "AI Optimists" website, along with a new essay "AI is easy to control", arguing that the risk of human extinction due to future AI ("AI x-risk") is a mere 1% ("a tail risk worth considering, but not the dominant source of risk in the world"). (I'm much more pessimistic.) It makes lots of interesting arguments, and I'm happy that the authors are engaging in substantive and productive discourse, unlike the ad hominem vibes-based drivel which is growing increasingly common on both sides of the AI x-risk issue in recent months.

This is not a comprehensive rebuttal or anything, but rather picking up on a few threads that seem important for where we disagree, or where I have something I want to say.

Summary / table-of-contents: Note: I think Sections 1 [...]

---
First published: December 1st, 2023
Source: https://www.lesswrong.com/posts/YyosBAutg4bzScaLu/thoughts-on-ai-is-easy-to-control-by-pope-and-belrose
---
Narrated by TYPE III AUDIO.

Nov 30, 2023 • 9min
The 101 Space You Will Always Have With You
The podcast discusses the importance of consistently sharing crucial information in a community. It explores the difficulties in educating newcomers and suggests being more tolerant towards them. The episode also highlights the challenge of assuming familiarity and emphasizes the importance of maintaining common knowledge.

Nov 28, 2023 • 1h 6min
[HUMAN VOICE] "Social Dark Matter" by Duncan Sabien
Explore the concept of 'Social Dark Matter' and its impact on our perceptions, using the #MeToo movement. Delve into hidden behaviors like sexual assault, alcoholism, and homosexuality. Discuss the existence of controversial groups and actions, the reflexive tendency to judge others, and the misrepresentation of social dark matter. Learn strategies to prevent miscalibrated reactions and examine societal confusion and misguided beliefs.

Nov 28, 2023 • 1h 17min
Shallow review of live agendas in alignment & safety
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Summary: You can't optimise an allocation of resources if you don't know what the current one is. Existing maps of alignment research are mostly too old to guide you, and the field has nearly no ratchet, no common knowledge of what everyone is doing and why, what is abandoned and why, what is renamed, what relates to what, what is going on.

This post is mostly just a big index: a link-dump for as many currently active AI safety agendas as we could find. But even a linkdump is plenty subjective. It maps work to conceptual clusters 1-1, aiming to answer questions like "I wonder what happened to the exciting idea I heard about at that one conference" and "I just read a post on a surprising new insight and want to see who else has been [...]

The original text contained 2 footnotes which were omitted from this narration.

---
First published: November 27th, 2023
Source: https://www.lesswrong.com/posts/zaaGsFBeDTpCsYHef/shallow-review-of-live-agendas-in-alignment-and-safety
---
Narrated by TYPE III AUDIO.

Nov 25, 2023 • 8min
Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense
Status: Vague, sorry. The point seems almost tautological to me, and yet also seems like the correct answer to the people going around saying "LLMs turned out to be not very want-y, when are the people who expected 'agents' going to update?", so, here we are.

Okay, so you know how AI today isn't great at certain... let's say "long-horizon" tasks? Like novel large-scale engineering projects, or writing a long book series with lots of foreshadowing?

(Modulo the fact that it can play chess pretty well, which is longer-horizon than some things; this distinction is quantitative rather than qualitative and it's being eroded, etc.)

And you know how the AI doesn't seem to have all that much "want"- or "desire"-like behavior?

(Modulo, e.g., the fact that it can play chess pretty well, which indicates a [...]

---
First published: November 24th, 2023
Source: https://www.lesswrong.com/posts/AWoZBzxdm4DoGgiSj/ability-to-solve-long-horizon-tasks-correlates-with-wanting
---
Narrated by TYPE III AUDIO.

Nov 23, 2023 • 6min
[HUMAN VOICE] "The 6D effect: When companies take risks, one email can be very powerful." by scasper
Support ongoing human narrations of curated posts: www.patreon.com/LWCurated

Recently, I have been learning about industry norms, legal discovery proceedings, and incentive structures related to companies building risky systems. I wanted to share some findings in this post because they may be important for the frontier AI community to understand well.

TL;DR: Documented communications of risks (especially by employees) make companies much more likely to be held liable in court when bad things happen. The resulting Duty to Due Diligence from Discoverable Documentation of Dangers (the 6D effect) can make companies much more cautious if even a single email is sent to them communicating a risk.

Source: https://www.lesswrong.com/posts/J9eF4nA6wJW6hPueN/the-6d-effect-when-companies-take-risks-one-email-can-be

Narrated for LessWrong by Perrin Walker.

Share feedback on this narration.

[125+ Karma Post] ✓
[Curated Post] ✓

Nov 22, 2023 • 20min
OpenAI: The Battle of the Board
Previously: OpenAI: Facts from a Weekend.

On Friday afternoon, OpenAI's board fired CEO Sam Altman. Overnight, an agreement in principle was reached to reinstate Sam Altman as CEO of OpenAI, with an initial new board of Bret Taylor (ex-co-CEO of Salesforce, chair), Larry Summers and Adam D'Angelo.

What happened? Why did it happen? How will it ultimately end? The fight is far from over. We do not entirely know, but we know a lot more than we did a few days ago. This is my attempt to put the pieces together.

This is a Fight For Control; Altman Started It

This was and still is a fight about control of OpenAI, its board, and its direction. This has been a long-simmering battle and debate. The stakes are high. Until recently, Sam Altman worked to reshape the company in his [...]

---
First published: November 22nd, 2023
Source: https://www.lesswrong.com/posts/sGpBPAPq2QttY4M2H/openai-the-battle-of-the-board
---
Narrated by TYPE III AUDIO.

Nov 20, 2023 • 17min
OpenAI: Facts from a Weekend
Approximately four GPTs and seven years ago, OpenAI's founders brought forth on this corporate landscape a new entity, conceived in liberty, and dedicated to the proposition that all men might live equally when AGI is created.

Now we are engaged in a great corporate war, testing whether that entity, or any entity so conceived and so dedicated, can long endure. What matters is not theory but practice. What happens when the chips are down?

So what happened? What prompted it? What will happen now? To a large extent, even more than usual, we do not know. We should not pretend that we know more than we do. Rather than attempt to interpret here or barrage with an endless string of reactions and quotes, I will instead do my best to stick to a compilation of the key facts.

(Note: All times stated here [...]

---
First published: November 20th, 2023
Source: https://www.lesswrong.com/posts/KXHMCH7wCxrvKsJyn/openai-facts-from-a-weekend
---
Narrated by TYPE III AUDIO.

Nov 18, 2023 • 1min
Sam Altman fired from OpenAI
This is a linkpost for https://openai.com/blog/openai-announces-leadership-transition

Basically just the title; see the OAI blog post for more details.

Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

In a statement, the board of directors said: "OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam's many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company's research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have [...]

---
First published: November 17th, 2023
Source: https://www.lesswrong.com/posts/eHFo7nwLYDzpuamRM/sam-altman-fired-from-openai
Linkpost URL: https://openai.com/blog/openai-announces-leadership-transition
---
Narrated by TYPE III AUDIO.

Nov 17, 2023 • 53min
Social Dark Matter
You know it must be out there, but you mostly never see it.

Author's Note 1: I'm something like 75% confident that this will be the last essay that I publish on LessWrong. Future content will be available on my substack, where I'm hoping people will be willing to chip in a little commensurate with the value of the writing, and (after a delay) on my personal site. I decided to post this final essay here rather than silently switching over because many LessWrong readers would otherwise never find out that they could still get new Duncan content elsewhere.

Author's Note 2: This essay is not intended to be revelatory. Instead, it's attempting to get the consequences of a few very obvious things lodged into your brain, such that they actually occur to you from time to time as opposed to occurring to you approximately never.

Most people [...]

The original text contained 9 footnotes which were omitted from this narration.

---
First published: November 7th, 2023
Source: https://www.lesswrong.com/posts/KpMNqA5BiCRozCwM3/social-dark-matter
---
Narrated by TYPE III AUDIO.


