Scene-Text Oriented Referring Expression Comprehension

Abstract—Referring expression comprehension (REC) aims to identify and localize a specific object in a visual scene referred to by a natural language expression. Existing REC studies focus only on basic visual attributes and neglect scene text. Because scene text serves to identify and disambiguate objects, it is naturally and frequently used to refer to them. However, existing methods do not explicitly recognize text in images and fail to align the scene text mentioned in expressions with the text shown in images, resulting in object localization errors. This study takes the first step toward addressing these limitations. First, we introduce a new task, scene-text oriented referring expression comprehension, which aims to align the visual cues and textual semantics of scene text with referring expressions and visual content. Second, we propose a scene text awareness network that bridges the gap between texts from the two modalities by grounding visual representations of expression-correlated scene text. Specifically, we propose a correlated text extraction module to address the lack of semantic understanding, and a correlated region activation module to address the fixed-alignment and absent-alignment problems. These modules ensure that the proposed method focuses on the local regions most relevant to scene text, thereby mitigating the misalignment of scene text with irrelevant regions. Third, to enable quantitative evaluation, we establish a new benchmark dataset called RefText. Experimental results demonstrate that the proposed method effectively comprehends scene-text oriented referring expressions and achieves excellent performance.

Index Terms—Referring expression comprehension, scene text representation, multimodal alignment.
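To make the core idea concrete, the sketch below shows one plausible way to ground expression-correlated scene text: OCR-token embeddings are scored against the expression, and candidate regions that spatially overlap the most relevant tokens are boosted. This is a minimal illustration under assumed inputs (pre-extracted OCR tokens with boxes, candidate regions with features); the class, dimensions, and the IoU-based activation are hypothetical and do not reproduce the paper's actual correlated text extraction or correlated region activation modules.

```python
# Illustrative sketch of scene-text aware grounding (not the paper's architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F


def box_iou(boxes_a: torch.Tensor, boxes_b: torch.Tensor) -> torch.Tensor:
    """Pairwise IoU between two sets of (x1, y1, x2, y2) boxes."""
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]) * (boxes_b[:, 3] - boxes_b[:, 1])
    lt = torch.max(boxes_a[:, None, :2], boxes_b[None, :, :2])
    rb = torch.min(boxes_a[:, None, 2:], boxes_b[None, :, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-6)


class SceneTextAwareGrounding(nn.Module):
    """Hypothetical alignment chain: expression -> OCR tokens -> candidate regions."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.expr_proj = nn.Linear(dim, dim)    # projects the pooled expression feature
        self.ocr_proj = nn.Linear(dim, dim)     # projects OCR-token embeddings
        self.region_scorer = nn.Linear(dim, 1)  # scores candidate regions visually

    def forward(self, expr_feat, ocr_feats, ocr_boxes, region_feats, region_boxes):
        # 1) Correlate OCR tokens with the expression via scaled dot-product attention.
        q = self.expr_proj(expr_feat)                                   # (dim,)
        k = self.ocr_proj(ocr_feats)                                    # (num_ocr, dim)
        text_relevance = F.softmax(k @ q / q.shape[0] ** 0.5, dim=0)    # (num_ocr,)

        # 2) Propagate token relevance to regions that spatially overlap those tokens.
        overlap = box_iou(region_boxes, ocr_boxes)                      # (num_regions, num_ocr)
        region_activation = overlap @ text_relevance                    # (num_regions,)

        # 3) Combine purely visual scores with the scene-text activation.
        visual_score = self.region_scorer(region_feats).squeeze(-1)     # (num_regions,)
        return visual_score + region_activation                         # final region scores
```

Propagating token relevance only to spatially overlapping regions reflects the goal stated in the abstract: activating the local regions most correlated with the mentioned scene text, rather than aligning that text with irrelevant regions.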

