

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
Episodes

Apr 18, 2024 • 6min
Express interest in an “FHI of the West”
TLDR: I am investigating whether to found a spiritual successor to FHI, housed under Lightcone Infrastructure, providing a rich cultural environment and financial support to researchers and entrepreneurs in the intellectual tradition of the Future of Humanity Institute. Fill out this form or comment below to express interest in being involved either as a researcher, entrepreneurial founder-type, or funder.
The Future of Humanity Institute is dead. I knew that this was going to happen in some form or another for a year or two, having heard through the grapevine and private conversations of FHI's university-imposed hiring freeze and fundraising block, and so I have been thinking about how best to fill the hole in the world that FHI left behind. I think FHI was one of the best intellectual institutions in history. Many of the most important concepts[1] in my intellectual vocabulary were developed and popularized under its [...]
The original text contained 1 footnote which was omitted from this narration.
---
First published: April 18th, 2024
Source: https://www.lesswrong.com/posts/ydheLNeWzgbco2FTb/express-interest-in-an-fhi-of-the-west
---
Narrated by TYPE III AUDIO.

Apr 17, 2024 • 24min
Transformers Represent Belief State Geometry in their Residual Stream
Produced while being an affiliate at PIBBSS[1]. The work was done initially with funding from a Lightspeed Grant, and then continued while at PIBBSS. Work done in collaboration with @Paul Riechers, @Lucas Teixeira, @Alexander Gietelink Oldenziel, and Sarah Marzen. Paul was a MATS scholar during some portion of this work. Thanks to Paul, Lucas, Alexander, and @Guillaume Corlouer for suggestions on this writeup.
Introduction. What computational structure are we building into LLMs when we train them on next-token prediction? In this post we present evidence that this structure is given by the meta-dynamics of belief updating over hidden states of the data-generating process. We'll explain exactly what this means in the post. We are excited by these results because we have a formalism that relates training data to internal structures in LLMs. Conceptually, our results mean that LLMs synchronize to their internal world model as they move [...]
The original text contained 10 footnotes which were omitted from this narration.
---
First published: April 16th, 2024
Source: https://www.lesswrong.com/posts/gTZ2SxesbHckJ3CkF/transformers-represent-belief-state-geometry-in-their
---
Narrated by TYPE III AUDIO.
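For listeners who want a concrete picture of the "belief updating over hidden states of the data-generating process" mentioned in this summary, here is a minimal illustrative sketch (not taken from the post) of Bayesian filtering over a toy two-state hidden Markov process. The transition matrix, emission matrix, and token sequence below are made-up values chosen only for illustration; each update produces a point in the probability simplex over hidden states, and the set of points traced out this way is the kind of belief-state geometry the episode title refers to.

```python
import numpy as np

# Toy data-generating process (hypothetical values, for illustration only).
# T[i, j] = P(next hidden state j | current hidden state i)
# E[i, k] = P(emit token k | hidden state i)
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
E = np.array([[0.7, 0.3],
              [0.1, 0.9]])

def update_belief(belief, token):
    """One step of Bayesian belief updating over hidden states:
    propagate the belief through the transition matrix, then weight by
    the likelihood of the observed token and renormalize."""
    predicted = belief @ T               # prior over the next hidden state
    posterior = predicted * E[:, token]  # condition on the observed token
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])            # start from a uniform belief
for tok in [0, 1, 1, 0, 1]:              # an example observed token sequence
    belief = update_belief(belief, tok)
    print(belief)                        # successive points in the belief simplex
```

The post's claim (as summarized above) concerns how structure like this ends up linearly represented in a transformer's residual stream; the sketch only shows what the underlying belief-update dynamics look like for an observer of such a process.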

Apr 16, 2024 • 2min
Paul Christiano named as US AI Safety Institute Head of AI Safety
This is a linkpost for https://www.commerce.gov/news/press-releases/2024/04/us-commerce-secretary-gina-raimondo-announces-expansion-us-ai-safety
U.S. Secretary of Commerce Gina Raimondo announced today additional members of the executive leadership team of the U.S. AI Safety Institute (AISI), which is housed at the National Institute of Standards and Technology (NIST). Raimondo named Paul Christiano as Head of AI Safety, Adam Russell as Chief Vision Officer, Mara Campbell as Acting Chief Operating Officer and Chief of Staff, Rob Reich as Senior Advisor, and Mark Latonero as Head of International Engagement. They will join AISI Director Elizabeth Kelly and Chief Technology Officer Elham Tabassi, who were announced in February. The AISI was established within NIST at the direction of President Biden, including to support the responsibilities assigned to the Department of Commerce under the President's landmark Executive Order.
Paul Christiano, Head of AI Safety, will design and conduct tests of frontier AI models, focusing on model evaluations for capabilities of national security [...]
---
First published: April 16th, 2024
Source: https://www.lesswrong.com/posts/63X9s3ENXeaDrbe5t/paul-christiano-named-as-us-ai-safety-institute-head-of-ai
Linkpost URL: https://www.commerce.gov/news/press-releases/2024/04/us-commerce-secretary-gina-raimondo-announces-expansion-us-ai-safety
---
Narrated by TYPE III AUDIO.

Apr 12, 2024 • 1h 15min
[HUMAN VOICE] "On green" by Joe Carlsmith
Explore the symbolism of green in fantasy, reshaping perspectives on nature, respecting super-intelligences, embracing otherness and ethics, ethical preservation and extinction, seeking God's guidance in decision-making, ethical philosophy in a naturalistic context, and the notion of true self and nature.

Apr 12, 2024 • 22min
[HUMAN VOICE] "Toward a Broader Conception of Adverse Selection" by Ricki Heicklen
Ricki Heicklen discusses a broader conception of adverse selection beyond financial markets. Topics include applying the concept to everyday scenarios like crowded restaurants and parking, illustrating adverse selection with Laffy Taffys and Movie Pass, navigating risks in trading with market orders, and adapting models in auctions and stock markets.

Apr 12, 2024 • 13min
[HUMAN VOICE] "My PhD thesis: Algorithmic Bayesian Epistemology" by Eric Neyman
Eric Neyman, PhD candidate, discusses his Algorithmic Bayesian Epistemology thesis, exploring topics like forecasting, rationalist communities, incentivizing experts, and robust aggregation of signals. He delves into the challenges of reaching agreement in forecasting, deductive reasoning algorithms, algorithmic mechanism design, and decision-making constraints.

Apr 12, 2024 • 3min
[HUMAN VOICE] "How could I have thought that faster?" by mesaoptimizer
The podcast discusses cognitive strategies and enhancing mental efficiency by exploring ways to think faster and optimize mental pathways. Personal experiences and insights on improving cognitive abilities through self-reflection are shared.

Apr 6, 2024 • 21min
LLMs for Alignment Research: a safety priority?
Gabriel Mukobi, author of a recent short story on LLMs, discusses prioritizing safety in AI research. They explore the role of programming and philosophy in safety work with LLMs, compare collaborative vs autonomous AI development, dive into AI hallucinations, data hunger in deep learning, and enhancing LLMs for safety through expert feedback.

Apr 5, 2024 • 12min
[HUMAN VOICE] "Using axis lines for good or evil" by dynomight
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated
Source: https://www.lesswrong.com/posts/Yay8SbQiwErRyDKGb/using-axis-lines-for-good-or-evil
Narrated for LessWrong by Perrin Walker.
Share feedback on this narration.

Apr 5, 2024 • 50min
[HUMAN VOICE] "Social status part 1/2: negotiations over object-level preferences" by Steven Byrnes
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated
Source: https://www.lesswrong.com/posts/SPBm67otKq5ET5CWP/social-status-part-1-2-negotiations-over-object-level
Narrated for LessWrong by Perrin Walker.
Share feedback on this narration.


