Using Perplexity Score for Document Quality Assessment in Re-ranking
Implementing a re-ranker this way involves sending one chunk at a time, together with the initial query, to a large language model, obtaining the log-likelihood of each token, and computing perplexity, the exponential of the average negative log-likelihood of those tokens. High perplexity indicates the model is confused and uncertain, whereas low perplexity signifies confidence in generating the tokens. A low perplexity score therefore suggests a high-quality document, one that helps the language model produce a confident answer. This heuristic can be applied with a variety of models to judge document quality during re-ranking, because it scores perplexity rather than evaluating the generated answer itself.
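As a minimal sketch of this idea, the snippet below uses a small causal language model from Hugging Face Transformers to score each chunk by the perplexity of the query conditioned on that chunk, then sorts chunks from lowest to highest perplexity. The model choice (gpt2) and the prompt template are assumptions made for illustration; the discussion above doesn't prescribe a particular model or format.

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM exposing per-token log-likelihoods works for this heuristic;
# gpt2 is used here only as a small, widely available stand-in (an assumption).
MODEL_NAME = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def perplexity_of_query_given_chunk(query: str, chunk: str) -> float:
    """Score one chunk: perplexity of the query tokens conditioned on the chunk.

    The prompt template below is a hypothetical choice; any format that places
    the chunk before the query fits the heuristic described above.
    """
    prefix = f"Context:\n{chunk}\n\nQuestion: "
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    query_ids = tokenizer(query, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, query_ids], dim=1)

    # Mask the prefix with -100 so the loss (mean negative log-likelihood)
    # is computed only over the query tokens.
    labels = input_ids.clone()
    labels[:, : prefix_ids.shape[1]] = -100

    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss  # mean NLL per query token

    return math.exp(loss.item())  # perplexity = exp(mean NLL)


def rerank(query: str, chunks: list[str]) -> list[str]:
    """Sort chunks ascending by perplexity: lower perplexity = better context."""
    return sorted(chunks, key=lambda c: perplexity_of_query_given_chunk(query, c))
```

Note that each chunk requires a separate forward pass, so this re-ranking step is typically applied only to the top handful of candidates returned by a cheaper first-stage retriever.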