How Many Questions Would the Model Generate for a Given Challenge Paragraph?
Most of the knowledge was grammatical and understandable, which is expected because we know that neural language models are very good at generating fluent text. But something that was interesting is that when we asked human evaluators to judge this generated knowledge, their judgments didn't always match the knowledge that was in practice helpful for the model. So I think there is also some limitation here in asking people to consider whether some kind of knowledge is helpful for making a decision. We're still thinking of ways to guide the generation toward helpful knowledge, or maybe ways to judge whether a piece of knowledge is going to be helpful or not.