David Bau is an Assistant Professor studying the structure and interpretation of deep networks, and a co-author of "Locating and Editing Factual Associations in GPT", which introduced Rank-One Model Editing (ROME), a method that lets users directly edit the weights of a GPT model, for instance forcing it to state that the Eiffel Tower is in Rome.
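For listeners curious about the mechanics before diving in: ROME's core move is a closed-form rank-one update to one MLP weight matrix, so that a chosen key vector (for the subject) maps to a new value vector (for the edited fact). Below is a minimal NumPy sketch of that idea; the variable names and the inverse-covariance term C_inv loosely follow the paper's formulation but are illustrative, not the paper's exact implementation.

```python
import numpy as np

def rank_one_edit(W, k_star, v_star, C_inv):
    """Rank-one update so that the edited matrix maps k_star to v_star.

    W      : (d_out, d_in) MLP projection weights being edited
    k_star : (d_in,)  key vector selecting the subject (e.g. "Eiffel Tower")
    v_star : (d_out,) value vector encoding the new fact (e.g. "... is in Rome")
    C_inv  : (d_in, d_in) inverse covariance of keys, used so the update
             disturbs other stored associations as little as possible
    """
    u = C_inv @ k_star                  # direction to write the new fact along
    residual = v_star - W @ k_star      # what the edit must add at k_star
    # W' = W + residual * u^T / (u . k_star), so W' @ k_star == v_star exactly
    return W + np.outer(residual, u) / (u @ k_star)
```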
David is a leading researcher in interpretability, with an interest in how it could help AI Safety. The main thesis of his lab is that understanding the rich internal structure of deep networks is a grand and fundamental research question with many practical implications. The lab aims to lay the groundwork for human-AI collaborative software engineering, in which humans and machine-learned models teach and learn from each other.
David's lab: https://baulab.info/
Patron: https://www.patreon.com/theinsideview
Twitter: https://twitter.com/MichaelTrazzi
Website: https://theinsideview.ai
TOC
[00:00] Intro
[01:16] Interpretability
[02:27] AI Safety, out-of-domain behavior
[04:23] It's difficult to predict which AI application might become dangerous or impactful
[06:00] ROME / Locating and Editing Factual Associations in GPT
[13:04] Background story for the ROME paper
[15:41] Twitter Q: where does the key-value abstraction break down in LLMs?
[19:03] Twitter Q: what are the tradeoffs in studying the largest models?
[20:22] Twitter Q: are there competitive and cleaner architectures than the transformer?
[21:15] Twitter Q: is the decoder-only architecture a contributor to the messiness, or is time-dependence beneficial?
[22:45] Twitter Q: how could ROME deal with superposition?
[23:30] Twitter Q: where is the Eiffel Tower actually located?