Reranking Passages with Coarse-to-Fine Neural Retriever Enhanced by List-Context Information

23 Aug 2023 · Hongyin Zhu

Passage reranking is a critical task in many applications, particularly those dealing with large volumes of documents. Existing neural architectures are limited in retrieving the most relevant passage for a given question: the semantics of segmented passages are often incomplete, and the question is typically matched against each passage individually, rarely taking into account contextual information from the other passages that could provide comparative and reference signals. This paper presents a list-context attention mechanism that augments each passage representation by incorporating list-context information from the other candidates. The proposed coarse-to-fine (C2F) neural retriever addresses the out-of-memory limitation of the passage attention mechanism by dividing the list-context modeling process into two sub-processes with a cache policy learning algorithm, enabling efficient encoding of context information from a large number of candidate answers. The method can encode context information from any number of candidates in a single pass. Unlike most multi-stage information retrieval architectures, the model integrates the coarse and fine rankers into a joint optimization process, allowing feedback between the two layers to update the model simultaneously. Experiments demonstrate the effectiveness of the proposed approach.
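The mechanism described in the abstract can be sketched in code. The PyTorch snippet below is a minimal, hypothetical illustration, not the authors' implementation: the class names ListContextAttention and CoarseToFineReranker, the keep_k parameter, and the bilinear coarse scorer are all assumptions chosen for clarity, and the cache policy learning algorithm is not modeled.

```python
# Hypothetical sketch (not the paper's released code): a list-context attention
# layer that augments each candidate passage vector by attending over the other
# candidates, wrapped in a coarse-to-fine reranker that first prunes the list
# with a cheap scorer and then rescores the survivors with the context-aware
# layer. All module and variable names are illustrative assumptions.
import torch
import torch.nn as nn


class ListContextAttention(nn.Module):
    """Augments each passage representation with information from the other
    candidates via multi-head self-attention over the candidate list."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, passages: torch.Tensor) -> torch.Tensor:
        # passages: (batch, num_candidates, dim)
        ctx, _ = self.attn(passages, passages, passages)
        return self.norm(passages + ctx)  # residual + list-context signal


class CoarseToFineReranker(nn.Module):
    """Two-stage scorer: a coarse bilinear match prunes the candidate list,
    then a fine scorer rescores the retained candidates using list-context
    attention, so attention memory is bounded by keep_k rather than the full
    candidate count."""

    def __init__(self, dim: int, keep_k: int = 8):
        super().__init__()
        self.keep_k = keep_k
        self.coarse = nn.Bilinear(dim, dim, 1)
        self.list_ctx = ListContextAttention(dim)
        self.fine = nn.Linear(2 * dim, 1)

    def forward(self, question: torch.Tensor, passages: torch.Tensor):
        # question: (batch, dim); passages: (batch, num_candidates, dim)
        b, n, d = passages.shape
        q = question.unsqueeze(1).expand(-1, n, -1)

        # Coarse stage: cheap per-passage relevance, keep the top-k candidates.
        coarse_scores = self.coarse(q.reshape(-1, d), passages.reshape(-1, d)).view(b, n)
        k = min(self.keep_k, n)
        top_scores, top_idx = coarse_scores.topk(k, dim=-1)

        # Fine stage: list-context attention over the retained candidates only.
        kept = torch.gather(passages, 1, top_idx.unsqueeze(-1).expand(-1, -1, d))
        kept = self.list_ctx(kept)
        q_k = question.unsqueeze(1).expand(-1, k, -1)
        fine_scores = self.fine(torch.cat([q_k, kept], dim=-1)).squeeze(-1)
        return coarse_scores, fine_scores, top_idx
```

A joint training objective could then apply a listwise loss to both coarse_scores and fine_scores so the two stages are optimized together, in the spirit of the joint optimization the abstract describes; the exact losses and the cache policy are left unspecified here.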
