Interconnects

Stop "reinventing" everything to "solve" alignment

Apr 17, 2024
This episode covers bringing ideas from outside computer science into reinforcement learning for AI alignment: social choice theory as a framework for aggregating diverse human feedback, pluralistic alignment as a goal for building AI systems that serve many viewpoints, and the release of the OLMo 1.7 7B model, which pairs strong benchmark results with a fully open design.
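The episode doesn't walk through an implementation, but as a rough illustration of what "social choice theory for diverse human feedback" can mean in practice, here is a minimal Python sketch of one classic aggregation rule, Borda count, applied to annotator rankings of model responses. The response IDs and data are hypothetical, not from the episode or any cited paper.

```python
# A minimal sketch (illustrative, not from the episode): Borda count as one
# social choice rule for aggregating diverse human feedback. Each annotator
# ranks candidate model responses; points are awarded by rank position.

from collections import defaultdict

def borda_count(rankings: list[list[str]]) -> dict[str, int]:
    """Aggregate annotator rankings into a single score per response.

    Each ranking lists response IDs from most to least preferred; a response
    ranked i-th among n receives n - 1 - i points.
    """
    scores: dict[str, int] = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, response in enumerate(ranking):
            scores[response] += n - 1 - position
    return dict(scores)

# Three annotators with conflicting preferences over the same three responses.
rankings = [
    ["A", "B", "C"],
    ["B", "A", "C"],
    ["C", "A", "B"],
]
print(borda_count(rankings))  # {'A': 4, 'B': 3, 'C': 2}
```

The point of a rule like this is that it produces a single preference signal from disagreeing annotators without pretending they agree, which is exactly the kind of question social choice theory asks about RLHF-style feedback.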