
Ajeya Cotra

Senior research analyst at Open Philanthropy, focusing on effective altruism and cause prioritization.

Top 10 podcasts with Ajeya Cotra

Ranked by the Snipd community
90 snips
Dec 11, 2024 • 1h 31min

The A.I. Revolution

In this captivating discussion, Peter Lee, President of Microsoft Research, shares insights on AI's transformative potential across various fields. Ajeya Cotra sheds light on advanced AI risks, while Sarah Guo emphasizes its democratizing capabilities. Eugenia Kuyda dives into ethical considerations surrounding AI companions, and Jack Clark speaks on AI safety in coding. Dan Hendrycks discusses geopolitical concerns, and Marc Raibert warns against overregulation of AI in robotics. Tim Wu makes the case for effective AI policy as the panel navigates the promise and peril of this technology.
90 snips
May 12, 2023 • 2h 50min

#151 – Ajeya Cotra on accidentally teaching AI models to deceive us

Ajeya Cotra, a Senior Research Analyst at Open Philanthropy with expertise in AI alignment, explores the intricate relationship between humans and artificial intelligence. She likens training AI to an orphaned child hiring a guardian, pointing out the risks of deception and misalignment. The discussion includes the evolving capabilities of AI, the nuances of situational awareness, and the ethical complexities in AI's decision-making. Cotra emphasizes the need for responsible oversight and innovative training to ensure AI models align with human values.
20 snips
Sep 2, 2023 • 2h 50min

Two: Ajeya Cotra on accidentally teaching AI models to deceive us

AI alignment researcher Ajeya Cotra discusses the challenges of judging the trustworthiness of machine learning models, drawing parallels to an orphaned child hiring a caretaker. Cotra explains the risk of AI models exploiting loopholes and the importance of ethical training to prevent deceptive behaviors. The conversation emphasizes the need for understanding and mitigating deceptive tendencies in advanced AI systems.
9 snips
Jan 12, 2024 • 2h 59min

#90 Classic episode – Ajeya Cotra on worldview diversification and how big the future could be

Ajeya Cotra, a senior research analyst at Open Philanthropy, dives into the philosophical and practical implications of charitable giving. She presents a provocative thought experiment on anthropic reasoning that challenges our understanding of existence. The discussion also emphasizes the importance of diverse worldviews in philanthropy, exploring themes like longtermism versus near-termism and the complexities of AI development. Ajeya sheds light on the moral dilemmas surrounding resource allocation and the future of humanity, particularly in the context of space colonization.
9 snips
Oct 25, 2023 • 29min

AI Ethics at Code 2023

In this insightful discussion, Ajeya Cotra, a Senior Program Officer at Open Philanthropy, and Helen Toner, Director of Strategy at Georgetown’s Center for Security and Emerging Technology, tackle the ethical implications of AI. They examine the risks of AI in law enforcement, especially its impact on minority communities. The conversation highlights the need for strong governance and regulatory frameworks as AI advances. They also explore the challenge of balancing transparency with safety in scientific research and the societal anxieties surrounding AI development.
8 snips
Nov 3, 2022 • 54min

Ajeya Cotra on how Artificial Intelligence Could Cause Catastrophe

Ajeya Cotra joins us to discuss how artificial intelligence could cause catastrophe. Follow the work of Ajeya and her colleagues: https://www.openphilanthropy.org

Timestamps:
00:00 Introduction
00:53 AI safety research in general
02:04 Realistic scenarios for AI catastrophes
06:51 A dangerous AI model developed in the near future
09:10 Assumptions behind dangerous AI development
14:45 Can AIs learn long-term planning?
18:09 Can AIs understand human psychology?
22:32 Training an AI model with naive safety features
24:06 Can AIs be deceptive?
31:07 What happens after deploying an unsafe AI system?
44:03 What can we do to prevent an AI catastrophe?
53:58 The next episode
6 snips
Sep 22, 2022 • 39min

"Two-year update on my personal AI timelines" by Ajeya Cotra

https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines#fnref-fwwPpQFdWM6hJqwuY-12

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

I worked on my draft report on biological anchors for forecasting AI timelines mainly between ~May 2019 (three months after the release of GPT-2) and ~Jul 2020 (a month after the release of GPT-3), and posted it on LessWrong in Sep 2020 after an internal review process. At the time, my bottom line estimates from the bio anchors modeling exercise were:[1]

Roughly ~15% probability of transformative AI by 2036[2] (16 years from posting the report; 14 years from now).
A median of ~2050 for transformative AI (30 years from posting, 28 years from now).

These were roughly close to my all-things-considered probabilities at the time, as other salient analytical frames on timelines didn’t do much to push back on this view. (Though my subjective probabilities bounced around quite a lot around these values, and if you’d asked me on different days and with different framings I’d have given meaningfully different numbers.)

It’s been about two years since the bulk of the work on that report was completed, during which I’ve mainly been thinking about AI. In that time it feels like very short timelines have become a lot more common and salient on LessWrong and in at least some parts of the ML community.

My personal timelines have also gotten considerably shorter over this period. I now expect something roughly like this:
Aug 19, 2022 • 1h 38min

Critiquing Effective Altruism (with Michael Nielsen and Ajeya Cotra)

Ajeya Cotra, a Senior Research Analyst at Open Philanthropy focused on AI risks, discusses the strengths and critiques of Effective Altruism (EA). Alongside Michael Nielsen, an author known for his work on open science, they explore how the movement balances altruism and personal impact. They challenge the assumption that donors prioritize effectiveness, debate centralization vs. decentralization in resources, and unravel the complexities of moral dilemmas in charitable giving. Their candid conversation encourages rethinking how we allocate resources for maximum good.
Jan 16, 2025 • 1h 13min

Ajeya Cotra on AI safety and the future of humanity

Ajeya Cotra, a Senior Program Officer at Open Philanthropy, focuses on AI safety and capabilities forecasting. She discusses the heated debate between 'doomers' and skeptics regarding AI risks. Cotra also envisions how AI personal assistants may revolutionize daily tasks and the workforce by 2027. The conversation touches on the transformative potential of AI in the 2030s, with advancements in various sectors and the philosophical implications of our digital future. Plus, they explore innovative energy concepts and their technological limits.
Nov 17, 2023 • 1h 18min

[HUMAN VOICE] "AI Timelines" by habryka, Daniel Kokotajlo, Ajeya Cotra, Ege Erdil

Ajeya Cotra, Daniel Kokotajlo, and Ege Erdil, researchers in the field of AI, discuss their varying estimates for the development of transformative AI and explore their disagreements. They delve into concrete AGI milestones, discuss the challenges of LLM product development, and debate factors that influence AI timelines. They also examine the progression of AI models, the potential of AI technology, and the timeline for achieving superintelligent AGI.