Triple-view Convolutional Neural Networks for COVID-19 Diagnosis with Chest X-ray

27 Oct 2020 · Jianjia Zhang

The Coronavirus Disease 2019 (COVID-19) is affecting an increasingly large number of people worldwide, placing significant stress on health care systems. Early and accurate diagnosis of COVID-19 is critical for screening infected patients and breaking person-to-person transmission. Chest X-ray (CXR) based computer-aided diagnosis of COVID-19 using deep learning has become a promising solution to this end. However, the diverse radiographic features of COVID-19 make this task challenging, especially since each CXR scan typically generates only a single image. Data scarcity is another issue, since collecting a large-scale medical CXR data set is currently difficult. Therefore, extracting more informative and relevant features from the limited samples available becomes essential. To address these issues, unlike traditional methods that process each CXR image from a single view, this paper proposes triple-view convolutional neural networks for COVID-19 diagnosis with CXR images. Specifically, the proposed networks extract individual features from three views of each CXR image, i.e., the left lung view, the right lung view, and the overall view, in three streams, and then integrate them for joint diagnosis. The proposed network structure respects the anatomical structure of human lungs and is well aligned with the clinical diagnosis of COVID-19 in practice. In addition, labeling the views does not require experts' domain knowledge, which many existing methods need. The experimental results show that the proposed method achieves state-of-the-art performance, especially on the more challenging three-class classification task, and offers wide generality and high flexibility.
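To make the three-stream idea concrete, below is a minimal PyTorch sketch of a triple-view network. The abstract only specifies three streams (left lung, right lung, overall) whose features are integrated for joint diagnosis; the ResNet-18 backbones, the half-image crop used to approximate the lung views, and the concatenation-based fusion head are all assumptions for illustration, not the paper's actual design.

```python
# Sketch of a triple-view CNN: three independent streams, one per view,
# fused for joint classification. Backbone, view extraction, and fusion
# strategy are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18


def extract_views(cxr):
    """Split a batch of CXR images (B, C, H, W) into three views.

    Hypothetical view extraction: the left/right lung views are taken as
    the left and right halves of the image. A real system would likely use
    lung segmentation or bounding boxes to locate each lung.
    """
    _, _, _, w = cxr.shape
    left = cxr[:, :, :, : w // 2]
    right = cxr[:, :, :, w // 2:]
    overall = cxr
    return left, right, overall


class TripleViewCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # One independent CNN stream per view.
        self.streams = nn.ModuleList()
        for _ in range(3):
            backbone = resnet18(weights=None)
            backbone.fc = nn.Identity()  # keep the 512-d pooled features
            self.streams.append(backbone)
        # Fuse the three per-view feature vectors by concatenation, then
        # classify (e.g., normal vs. pneumonia vs. COVID-19).
        self.classifier = nn.Linear(3 * 512, num_classes)

    def forward(self, cxr):
        views = extract_views(cxr)
        feats = [stream(v) for stream, v in zip(self.streams, views)]
        return self.classifier(torch.cat(feats, dim=1))


if __name__ == "__main__":
    model = TripleViewCNN(num_classes=3)
    x = torch.randn(2, 3, 224, 224)  # batch of 2 RGB-replicated CXR images
    logits = model(x)
    print(logits.shape)  # torch.Size([2, 3])
```

Note that this view labeling requires no expert annotation, consistent with the paper's claim: the crops are derived mechanically from the image itself rather than from radiologist-drawn regions.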
