LessWrong (Curated & Popular)

LessWrong
Nov 9, 2023 • 50min

"Does davidad's uploading moonshot work?" by jacobjabob et al.

An exploration of the proposal to upload human consciousness before 2040, including the challenges of barcoding transmembrane proteins and possible solutions. The episode discusses advances in using visible light to study molecules and structure, the potential of human brain organoids for testing parts of the plan, the limits of analyzing small brain regions versus whole-brain processes, how AI could accelerate the research and engineering, and a cost analysis of uploading human brains into computers.
Nov 9, 2023 • 19min

"Comp Sci in 2027 (Short story by Eliezer Yudkowsky)"

The podcast explores topics such as handling compiler misbehavior, AI safety, code discrimination, self-reflection letters to AI, regulatory capture in the AI industry, and AI's self-preservation instincts.
Nov 9, 2023 • 42min

"Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk" by 1a3orn

This podcast examines a policy paper arguing for a ban on powerful open-source LLMs and exposes the lack of strong evidence supporting that conclusion. It discusses the claimed role of open-source AI models in bioweapon creation and the risks of unrestricted LLMs in biology, then explores flaws in a key experiment, theoretical arguments about open-source LLMs, the misrepresentation of evidence, and funding patterns.
Nov 9, 2023 • 16min

"My thoughts on the social response to AI risk" by Matthew Barnett

The podcast discusses the social response to AI risk, including recent evidence that society is recognizing and addressing these risks. It analyzes the absence of a clear alarm for AI risk and explores the adoption of AI safety regulations, then delves into the unintended consequences of criminalizing circumvention and emphasizes the importance of thoughtful policymaking.
Nov 3, 2023 • 6min

"President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence" by Tristan Williams

President Biden's executive order on AI addresses existential risks: it requires developers to share safety test results, develops standards for AI systems, and establishes an advanced cybersecurity program. It also covers AI efforts in the military and intelligence community, the establishment of international frameworks, protecting Americans from fraud, preserving privacy, addressing algorithmic discrimination, and mitigating the impact on jobs.
Nov 3, 2023 • 21min

"Thoughts on the AI Safety Summit company policy requests and responses" by So8res

Amazon, Anthropic, DeepMind, Inflection, Meta, Microsoft, and OpenAI outline their AI safety policies in response to the UK government's requests, which are analyzed here along with missing priorities and the organizations that excel. Topics include preventing model misuse, responsible capability scaling, addressing emerging risks in AGI development, and a ranking of the companies' safety policies. The importance of monitoring risks, and of evaluating proposals for doing so, is also explored.
Oct 31, 2023 • 2h 40min

[Human Voice] "Book Review: Going Infinite" by Zvi

A review of Michael Lewis's book Going Infinite, about Sam Bankman-Fried. Topics include the psychology of the book's main character, the concept of fraud, art versus entertainment, Sam's persona transformation, strategic calculations in a PR campaign, questionable practices in effective altruism, manipulative practices in the cryptocurrency market, managing conflicts in a company, the unforeseen consequences of Serum, investment decisions and political adaptation, FTX's strategy of reputation washing, the collapse of FTX and tensions with Binance, the mystery of the missing money and market manipulation, dating and power dynamics in effective altruism, the aftermath of the book's publication and its criticism, and reflections on SBF, Alameda, and FTX.
Oct 30, 2023 • 11min

"Announcing Timaeus" by Jesse Hoogland et al.

Timaeus, a new AI safety research organization, aims to make fundamental breakthroughs in technical AI alignment. The team is currently working on singular learning theory and developmental interpretability to prevent the development of dangerous capabilities. The podcast covers their research agenda, academic outreach, recent hiring, collaborations, risks, and the significance of the name 'Timaeus'.
Oct 30, 2023 • 10min

"At 87, Pearl is still able to change his mind" by rotatingpaguro

Judea Pearl, the researcher famous for Bayesian networks and the statistical formalization of causality, discusses the need for a causal model and challenges machine learning's limitation to statistics-level reasoning. He explores surprising changes in his perspective on causal queries and GPT capabilities, the levels of causation in AI, and the ethical implications of the shift toward general AI.
Oct 30, 2023 • 12min

"We're Not Ready: thoughts on "pausing" and responsible scaling policies" by Holden Karnofsky

The podcast covers the author's concerns about the risks of transformative AI and the need for protective measures. It discusses the idea of pausing investment in AI, the potential outcomes of different types of pauses, and the benefits and challenges of advocating for a scaling pause, along with the difficulty of designing risk-reducing regulation.
