

“AI Safety Law-a-thon: Turning Alignment Risks into Legal Strategy” by Katalina Hernandez, Kabir Kumar
[LessWrong Community event announcement: https://www.lesswrong.com/events/rRLPycsLdjFpZ4cKe/ai-safety-law-a-thon-we-need-more-technical-ai-safety]
Many talented lawyers don't contribute to AI Safety simply because they've never had a chance to work with AIS researchers, or don't know what the field entails.
I am hopeful that this can improve if we create more structured opportunities for cooperation. And this is the main motivation behind the upcoming AI Safety Law-a-thon, organised by AI-Plans:
A hackathon where every team pairs one lawyer with one technical AI safety researcher. Each pair will tackle challenges drawn from real legal bottlenecks and overlooked AI safety risks.
From my time in the tech industry, I suspect that if more senior counsel actually understood alignment risks, frontier AI deals would face far more scrutiny. Right now, most law firms advising their clients would focus on the more "obvious" contractual considerations, IP rights, or privacy clauses, not on whether [...]
---
Outline:
(01:16) Who's coming?
(02:33) The technical AI Safety challenge: What to expect if you join
(03:40) Logistics
---
First published:
September 10th, 2025
---
Narrated by TYPE III AUDIO.