Assessing the Verifiability of Attributions in News Text
When reporting the news, journalists rely on the statements of stakeholders, experts, and officials. The attribution of such a statement is verifiable if its fidelity to the source can be confirmed or denied. In this paper, we develop a new NLP task: determining the verifiability of an attribution based on linguistic cues. We operationalize the notion of verifiability as a score between 0 and 1 using human judgments in a comparison-based approach. Using crowdsourcing, we create a dataset of verifiability-scored attributions, and demonstrate a model that achieves an RMSE of 0.057 and Spearman's rank correlation of 0.95 to human-generated scores. We discuss the application of this technique to the analysis of mass media.
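The two reported evaluation metrics compare model-predicted verifiability scores against human-generated scores in [0, 1]. As a minimal sketch (the score values below are invented for illustration and are not from the paper's dataset), RMSE and Spearman's rank correlation could be computed as:

```python
import math

def rmse(preds, golds):
    """Root-mean-square error between predicted and human scores."""
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(preds, golds)) / len(preds))

def spearman(preds, golds):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    def ranks(xs):
        # Assign average ranks, handling ties
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        i = 0
        while i < len(xs):
            j = i
            while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg_rank
            i = j + 1
        return r
    rp, rg = ranks(preds), ranks(golds)
    n = len(rp)
    mp, mg = sum(rp) / n, sum(rg) / n
    cov = sum((a - mp) * (b - mg) for a, b in zip(rp, rg))
    sp = math.sqrt(sum((a - mp) ** 2 for a in rp))
    sg = math.sqrt(sum((b - mg) ** 2 for b in rg))
    return cov / (sp * sg)

# Invented example: verifiability scores in [0, 1]
human = [0.10, 0.35, 0.50, 0.80, 0.95]
model = [0.15, 0.30, 0.55, 0.75, 0.90]
print(round(rmse(model, human), 3))      # small value = predictions close to human scores
print(round(spearman(model, human), 2))  # near 1.0 when the rankings agree
```

A low RMSE indicates the predicted scores are numerically close to the human scores, while a high Spearman correlation indicates the model orders attributions by verifiability the same way annotators do; the paper reports both (0.057 and 0.95, respectively).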
IJCNLP 2017