LessWrong (Curated & Popular)

LessWrong
Feb 15, 2024 • 10min

CFAR Takeaways: Andrew Critch

I'm trying to build my own art of rationality training, and I've started talking to various CFAR instructors about their experiences – things that might be important for me to know but which hadn't been written up nicely before.

This is a quick write-up of a conversation with Andrew Critch about his takeaways. (I took rough notes, and then roughly cleaned them up for this. I don't know [...])

"What surprised you most during your time at CFAR?"

Surprise 1: People are profoundly non-numerate. And, people who are not profoundly non-numerate still fail to connect numbers to life. I'm still trying to find a way to teach people to apply numbers to their life. For example: "This thing is annoying you. How many minutes is it annoying you today? How many days will it annoy you?" I compulsively do this. There aren't things lying around in [...]

---
First published: February 14th, 2024
Source: https://www.lesswrong.com/posts/Jash4Gbi2wpThzZ4k/cfar-takeaways-andrew-critch
---
Narrated by TYPE III AUDIO.
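The "connect numbers to life" habit Critch describes is just a back-of-envelope cost calculation. A minimal sketch of that kind of arithmetic (the annoyance and all the numbers below are hypothetical, not from the conversation):

```python
# Back-of-envelope cost of a recurring annoyance, in the spirit of
# Critch's "apply numbers to your life" habit. All inputs are made up.

minutes_per_day = 5          # time the annoyance costs each day
days_affected = 365 * 2      # expect to live with it for ~2 years
fix_cost_minutes = 90        # one-off time to fix it properly

total_cost = minutes_per_day * days_affected
print(f"Annoyance costs {total_cost} min (~{total_cost / 60:.0f} hours)")
print(f"Fix costs {fix_cost_minutes} min; "
      f"payoff ratio: {total_cost / fix_cost_minutes:.1f}x")
```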
Feb 14, 2024 • 25min

[HUMAN VOICE] "Believing In" by Anna Salamon

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

Source: https://www.lesswrong.com/posts/duvzdffTzL3dWJcxn/believing-in-1

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

[Curated Post] ✓ [125+ Karma Post] ✓
Feb 14, 2024 • 8min

[HUMAN VOICE] "Attitudes about Applied Rationality" by Camille Berger

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

Source: https://www.lesswrong.com/posts/5jdqtpT6StjKDKacw/attitudes-about-applied-rationality

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

[Curated Post] ✓
Feb 14, 2024 • 16min

Scale Was All We Needed, At First

A speculative-fiction vignette about the creation of AGI by January 2025: a meeting between Doctor Browning and Director Yarden, efficient fine-tuning and scaling of language models, disagreement and a cyber attack at OpenAI, and speculation about the Alice model's architecture, growth, and limitations.
Feb 11, 2024 • 7min

Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy

The podcast explores the differing views of Sam Altman and OpenAI on developing artificial general intelligence (AGI) and the risks of AI surpassing human control. It discusses the importance of computational resources for training AI models and the market dominance of Nvidia. Additionally, it looks at the relationship between computing power and AI advancement, the need for capital to improve AI chip production, and the impact of increased compute on AI safety concerns.
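As a rough illustration of the compute-to-capability link the episode discusses, here is a back-of-envelope training-compute estimate using the commonly cited C ≈ 6·N·D FLOPs approximation for dense transformers. The model size, token count, and hardware figures are illustrative assumptions, not numbers from the episode:

```python
# Back-of-envelope training compute via the common C ~= 6 * N * D rule
# (C: total FLOPs, N: parameters, D: training tokens). All inputs are
# illustrative assumptions, not figures from the episode.

params = 70e9             # hypothetical 70B-parameter model
tokens = 1.4e12           # hypothetical 1.4T training tokens
flops = 6 * params * tokens

gpu_flops_per_s = 300e12  # assumed ~300 TFLOP/s sustained per GPU
gpu_count = 4096          # assumed cluster size

seconds = flops / (gpu_flops_per_s * gpu_count)
print(f"Total training compute: {flops:.2e} FLOPs")
print(f"~{seconds / 86400:.0f} days on {gpu_count} GPUs")
```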
Feb 9, 2024 • 12min

[HUMAN VOICE] "A Shutdown Problem Proposal" by johnswentworth, David Lorell

In this episode, johnswentworth and David Lorell propose a solution to the shutdown problem in AI using a sub-agent architecture: negotiation between utility-maximizing subagents. They discuss the design of an agent composed of multiple subagents and its relevance to corrigibility, and explore alignment problems, ontological issues, the design of the subagents' utility functions, and challenges in bridging the theory-practice gap.
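A minimal sketch of the veto-style composition this kind of proposal builds on: each subagent scores actions under its own utility, and the composite agent acts only on a plan every subagent weakly prefers to doing nothing. The utility functions and action set below are hypothetical placeholders, not the post's actual construction:

```python
# Toy veto-based composite agent: an action is taken only if every
# subagent weakly prefers it to the do-nothing default. Utilities and
# actions are hypothetical placeholders for illustration.

from typing import Callable, Iterable

Action = str
Utility = Callable[[Action], float]

def composite_choice(actions: Iterable[Action],
                     subagents: list[Utility],
                     default: Action = "noop") -> Action:
    """Pick the first action all subagents weakly prefer to `default`."""
    for a in actions:
        if all(u(a) >= u(default) for u in subagents):
            return a
    return default

# Subagent 1: cares about worlds where the shutdown button stays unpressed.
u_unpressed = {"noop": 0.0, "make_paperclips": 2.0, "disable_button": 3.0}.get
# Subagent 2: cares about worlds where the button is pressed.
u_pressed = {"noop": 0.0, "make_paperclips": 1.0, "disable_button": -5.0}.get

choice = composite_choice(
    ["disable_button", "make_paperclips", "noop"],
    [u_unpressed, u_pressed],
)
print(choice)  # "make_paperclips": disabling the button is vetoed
```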
Feb 4, 2024 • 9min

Brute Force Manufactured Consensus is Hiding the Crime of the Century

People often parse information through an epistemic consensus filter. They do not ask "is this true", they ask "will others be OK with me thinking this is true". This makes them very malleable to brute force manufactured consensus; if every screen they look at says the same thing they will adopt that position because their brain interprets it as everyone in the tribe believing it.
- Anon, 4Chan, slightly edited

Ordinary people who haven't spent years of their lives thinking about rationality and epistemology don't form beliefs by impartially tallying up evidence like a Bayesian reasoner. Whilst there is a lot of variation, my impression is that the majority of humans we share this Earth with use a completely different algorithm for vetting potential beliefs: they just believe some average of what everyone and everything around them believes, especially what they see on screens, newspapers and "respectable", "mainstream" websites.

---
First published: February 3rd, 2024
Source: https://www.lesswrong.com/posts/bMxhrrkJdEormCcLt/brute-force-manufactured-consensus-is-hiding-the-crime-of
---
Narrated by TYPE III AUDIO.
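The "average of what everyone around them believes" algorithm the post describes can be made concrete with a toy DeGroot-style averaging simulation. The population size, mixing weights, and "screen" signal below are invented for illustration; the point is that once a fixed broadcast signal is mixed into everyone's update, the population converges to the broadcast position regardless of where beliefs started:

```python
# Toy DeGroot-style belief averaging: each agent's new belief is a
# weighted average of the population's beliefs and a fixed broadcast
# "screen" signal. All weights and sizes are illustrative assumptions.

import random

N = 100                      # population size
SCREEN = 0.9                 # position pushed by every screen (0..1)
PEER_W, SCREEN_W = 0.8, 0.2  # mixing weights per update step

beliefs = [random.random() for _ in range(N)]  # arbitrary starting beliefs

for _ in range(50):
    mean = sum(beliefs) / N
    beliefs = [PEER_W * mean + SCREEN_W * SCREEN for _ in beliefs]

print(f"mean belief after 50 steps: {sum(beliefs) / N:.3f}")  # ~0.900
```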
Feb 3, 2024 • 1h 41min

[HUMAN VOICE] "Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI" by Jeremy Gillen, peterbarnett

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

Source: https://www.lesswrong.com/posts/GfZfDHZHCuYwrHGCd/without-fundamental-advances-misalignment-and-catastrophe

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

[Curated Post] ✓ [125+ Karma Post] ✓
Feb 2, 2024 • 17min

Leading The Parade

Background Terminology: Counterfactual Impact vs "Leading The Parade"

Y'know how a parade or marching band has a person who walks in front waving a fancy-looking stick up and down? Like this guy: [image]

The classic '80s comedy Animal House features a great scene in which a prankster steals the stick, and then leads the marching band off the main road and down a dead-end alley. That is not the guy who's supposed to have that stick.

In the context of the movie, it's hilarious. It's also, presumably, not at all how parades actually work these days. If you happen to be "leading" a parade, and you go wandering off down a side alley, then (I claim) those following behind will be briefly confused, then ignore you and continue along the parade route. The parade leader may appear to be "leading", but they do not have any counterfactual impact on the route [...]

---
First published: January 31st, 2024
Source: https://www.lesswrong.com/posts/LKC3XfWxPzZXK7Esd/leading-the-parade
---
Narrated by TYPE III AUDIO.
Feb 2, 2024 • 1h 4min

[HUMAN VOICE] "The case for ensuring that powerful AIs are controlled" by ryan_greenblatt, Buck

Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

Source: https://www.lesswrong.com/posts/kcKrE9mzEHrdqtDpE/the-case-for-ensuring-that-powerful-ais-are-controlled

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

[Curated Post] ✓ [125+ Karma Post] ✓
