ERQA: Edge-Restoration Quality Assessment for Video Super-Resolution

19 Oct 2021 · Anastasia Kirillova, Eugene Lyapustin, Anastasia Antsiferova, Dmitry Vatolin

Despite the growing popularity of video super-resolution (VSR), there is still no good way to assess the quality of the details restored in upscaled frames. Some SR methods may produce the wrong digit or an entirely different face, so whether a method's results are trustworthy depends on how well it restores truthful details. Single-image super-resolution can rely on natural-image distributions to produce a high-resolution image that is only somewhat similar to the real one, whereas VSR can exploit additional information from neighboring frames to restore details of the original scene. The ERQA metric, which we propose in this paper, aims to estimate a model's ability to restore real details using VSR. On the assumption that edges are significant for detail and character recognition, we chose edge fidelity as the foundation for this metric. Experimental validation of our work is based on the MSU Video Super-Resolution Benchmark, which includes the most difficult patterns for detail restoration and verifies the fidelity of details from the original frame. Code for the proposed metric is publicly available at https://github.com/msu-video-group/ERQA.
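To illustrate the edge-fidelity idea, the sketch below computes an F1 score over matched edge pixels between a restored frame and its ground-truth counterpart, with a small spatial tolerance for misalignment. This is a simplified, assumed formulation for illustration only: the Canny thresholds, the tolerance radius, and the function name are not taken from the paper, and the official implementation in the linked repository handles edge matching and shift compensation in its own way.

import cv2
import numpy as np


def edge_fidelity_f1(restored, reference, tolerance=2):
    """Toy edge-fidelity score: F1 over edge pixels of a restored frame
    versus the ground-truth frame. Not the official ERQA implementation.
    Both inputs are expected to be 8-bit grayscale frames."""
    # Detect edges in both frames (thresholds are illustrative).
    edges_restored = cv2.Canny(restored, 100, 200) > 0
    edges_reference = cv2.Canny(reference, 100, 200) > 0

    # Allow small spatial misalignment: dilate each edge map so an edge
    # pixel within `tolerance` pixels of a counterpart counts as matched.
    kernel = np.ones((2 * tolerance + 1, 2 * tolerance + 1), np.uint8)
    reference_dilated = cv2.dilate(edges_reference.astype(np.uint8), kernel) > 0
    restored_dilated = cv2.dilate(edges_restored.astype(np.uint8), kernel) > 0

    true_positives = np.count_nonzero(edges_restored & reference_dilated)
    false_positives = np.count_nonzero(edges_restored & ~reference_dilated)
    false_negatives = np.count_nonzero(edges_reference & ~restored_dilated)

    precision = true_positives / max(true_positives + false_positives, 1)
    recall = true_positives / max(true_positives + false_negatives, 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# Example usage (file names are placeholders):
# gt = cv2.imread("gt_frame.png", cv2.IMREAD_GRAYSCALE)
# sr = cv2.imread("sr_frame.png", cv2.IMREAD_GRAYSCALE)
# print(edge_fidelity_f1(sr, gt))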

Results

Task: Video Quality Assessment
Dataset: MSU SR-QA Dataset
Model: ERQA (full-reference, FR)

Metric   Value     Global Rank
SROCC    0.59345   #22
PLCC     0.60188   #16
KLCC     0.47785   #22
