The Importance of Interpretability in Deep Learning
I think interpretability is necessary, but not sufficient. We need to get interpretability to work so that we can then do alignment. I see this as mostly a positive thing: it's a trend in the direction of maybe being able to unwind these systems, so it should at least make me somewhat more optimistic that we're going to fix this stuff, or find a way to fix it in time, since we are trying. But it costs a lot of money to do this. There are roughly 300,000 neurons in GPT-2; think about how many times they had to run GPT-4 to explain each one. It would not be cheap to do it right.
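To make that cost argument concrete, here is a back-of-envelope sketch. The neuron count comes from the quote above; the tokens-per-call and per-token price are illustrative assumptions, not figures from the discussion:

```python
# Rough estimate of the cost of generating one GPT-4 explanation
# per GPT-2 neuron. The neuron count is from the quote; the other
# figures are assumed for illustration only.

NUM_NEURONS = 300_000       # approximate GPT-2 neuron count cited above
TOKENS_PER_CALL = 2_000     # assumed prompt + completion tokens per explanation
COST_PER_1K_TOKENS = 0.05   # assumed blended GPT-4 price, USD per 1K tokens

total_tokens = NUM_NEURONS * TOKENS_PER_CALL
total_cost = total_tokens / 1_000 * COST_PER_1K_TOKENS

print(f"~{total_tokens:,} tokens, roughly ${total_cost:,.0f}")
# ~600,000,000 tokens, roughly $30,000
```

Even under these modest assumptions, one call per neuron runs into the tens of thousands of dollars, and doing the same for a far larger model would scale that bill up accordingly, which is the point being made.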