Considerations around the allocation of attention in AI risk involve both the ethics of attention, which asks how attention should be allocated and how it is permissible to influence others' attention, and the political philosophy of attention. The distribution of attention closely relates to concerns about AI risk and harm, calling for a nuanced approach. The discussion argues against treating attention as a zero-sum game and emphasizes the broader impact of how attention is allocated.
Attention allocation is discussed as a moral skill that can have independent ethical implications, separate from satisfying other norms. The example of spying on a partner to uncover infidelity illustrates how attention allocation can be ethically evaluated. The underlying view is that allocating attention well means responding appropriately to what demands a particular response, which gives the way attention is allocated intrinsic significance.
The meta-ethical question of how normative ethical theories are justified points towards coherentist justification through reflective equilibrium. Different moral theories are compared and contrasted to build compelling ethical frameworks. The discussion centers on deontological perspectives emphasizing moral equality and intrinsic values, contrasting with consequentialist views. The focus is on building theories rooted in coherence and explanatory power.
The interplay between safety research and AI capabilities is explored, highlighting how prioritizing safety can enhance capabilities. Examples like ChatGPT's functionality due to alignment research showcase how safety not only ensures ethical use but also improves performance. The evolution of AI models and the role of responsible AI practices in shaping technological advancements and societal impacts are emphasized.
The discourse on accountability, power dynamics, and decision-making in tech industry leadership is examined, drawing parallels with concerns over centralized decision-making in major tech companies. The importance of distributed decision-making and ethical considerations in technology development is emphasized to avoid unaccountable authority influencing societal transformations. The need for responsible AI practices and ethical oversight in tech innovation to prevent regulatory challenges and ensure societal benefit is highlighted.
The speaker discusses the importance of anticipating possible future technological changes when engaging in normative philosophy. They emphasize the significance of feasibility horizons in assessing the outcomes that can be expected from existing systems, such as integrating LLMs with other AI tools to address their limitations. Because scientific advances beyond the feasibility horizon are unpredictable, the speaker stresses prioritizing research within known boundaries to avoid wasted effort.
The speaker highlights the importance of building robust and resilient research communities, regulations, and norms to manage risks associated with future technologies. By advocating for a focus on technologies within the feasibility horizon and establishing structures to address risks from known technologies, the speaker suggests that preparing for potential risks involves creating a strong foundation of research and practices supported by collaborative efforts across disciplines.
The discussion delves into the necessity of integrating sociotechnical approaches in AI safety research. By emphasizing the limitations of narrowly technical solutions and the need for a broader system safety perspective, the speaker underscores the importance of understanding the societal impacts of AI systems. They advocate for a more comprehensive and interdisciplinary approach that considers the socio-technical implications of AI development.
The conversation highlights the challenges of legitimizing interdisciplinary work in academia and the importance of building credibility for such research. By engaging in interdisciplinary conferences, supporting multi-disciplinary research initiatives, and valuing cross-disciplinary collaborations, the speaker encourages scholars to contribute to the recognition and advancement of interdisciplinary studies. They emphasize the role of academic venues in fostering diverse research perspectives and creating pathways for early career researchers to pursue interdisciplinary work.
The discussion underscores the significance of public interest research in promoting ethical development within the AI field. The speaker encourages academics, engineers, and industry professionals to engage in interdisciplinary collaborations and support initiatives that prioritize societal impacts and ethical considerations in AI research. By advocating for transparent and accountable research practices, the speaker emphasizes the need to advance interdisciplinary approaches and create pathways for ethical AI development.
Episode 124
You may think you’re doing a priori reasoning, but actually you’re just over-generalizing from your current experience of technology.
I spoke with Professor Seth Lazar about:
* Why managing near-term and long-term risks isn’t always zero-sum
* How to think through axioms and systems in political philosophy
* Coordination problems, economic incentives, and other difficulties in developing publicly beneficial AI
Seth is Professor of Philosophy at the Australian National University, an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI. He has worked on the ethics of war, self-defense, and risk, and now leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research projects on the moral and political philosophy of AI.
Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Intro
* (00:54) Ad read — MLOps conference
* (01:32) The allocation of attention — attention, moral skill, and algorithmic recommendation
* (03:53) Attention allocation as an independent good (or bad)
* (08:22) Axioms in political philosophy
* (11:55) Explaining judgments, multiplying entities, parsimony, intuitive disgust
* (15:05) AI safety / catastrophic risk concerns
* (22:10) Superintelligence arguments, reasoning about technology
* (28:42) Attacking current and future harms from AI systems — does one draw resources from the other?
* (35:55) GPT-2, model weights, related debates
* (39:11) Power and economics—coordination problems, company incentives
* (50:42) Morality tales, relationship between safety and capabilities
* (55:44) Feasibility horizons, prediction uncertainty, and doing moral philosophy
* (1:02:28) What is a feasibility horizon?
* (1:08:36) Safety guarantees, speed of improvements, the “Pause AI” letter
* (1:14:25) Sociotechnical lenses, narrowly technical solutions
* (1:19:47) Experiments for responsibly integrating AI systems into society
* (1:26:53) Helpful/honest/harmless and antagonistic AI systems
* (1:33:35) Managing incentives conducive to developing technology in the public interest
* (1:40:27) Interdisciplinary academic work, disciplinary purity, power in academia
* (1:46:54) How we can help legitimize and support interdisciplinary work
* (1:50:07) Outro
Links:
* Resources
* Attention, moral skill, and algorithmic recommendation