Weakly Supervised Semantic Segmentation of Echocardiography Videos via Multi-level Features Selection

Echocardiography can capture both the global and regional function of the heart. Being non-invasive, visual, and portable, it has become an indispensable technology for clinical evaluation of cardiac function. However, measurement uncertainty of ultrasound equipment and inter-reader variability are inevitable. To address this, researchers have proposed many deep-learning-based methods for cardiac function assessment. In this paper, we propose UDeep, an encoder-decoder model for left-ventricle segmentation in echocardiography that attends to both multi-scale high-level semantic information and multi-scale low-level fine-grained information. The model remains sensitive to semantic edges and can therefore segment the left ventricle accurately. The encoder extracts multi-scale high-level semantic features through a computationally efficient backbone, Separated Xception, together with an Atrous Spatial Pyramid Pooling module. A new decoder consisting of several Upsampling Fusion Modules (UPFMs) then fuses features from different levels. To improve the generalization of the model across echocardiography images, we propose a Pseudo-Segmentation Penalty loss function. Our model segments the left ventricle with a Dice Similarity Coefficient of 0.9290 on the test set of an echocardiography video dataset.
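The Atrous Spatial Pyramid Pooling module mentioned in the abstract is built on atrous (dilated) convolution applied in parallel at several rates. As a minimal 1-D sketch of the idea (illustrative only, not the paper's implementation; the kernel weights and rates here are arbitrary):

```python
def atrous_conv1d(x, w, rate):
    """1-D atrous (dilated) convolution with 'valid' padding.

    The taps of kernel w are spaced `rate` samples apart, so the
    receptive field grows with `rate` at no extra parameter cost.
    """
    span = (len(w) - 1) * rate
    return [sum(w[k] * x[i + k * rate] for k in range(len(w)))
            for i in range(len(x) - span)]

signal = [1, 2, 3, 4, 5, 6]
# ASPP idea: run the same input through several rates in parallel,
# then fuse the multi-scale responses (here we simply collect them).
pyramid = [atrous_conv1d(signal, [1, 1], r) for r in (1, 2, 3)]
print(pyramid)  # three responses, each seeing a wider context
```

In the real 2-D module the per-rate responses are concatenated and projected back to a single feature map; the sketch only shows how the dilation rate widens the context each branch observes.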

Results from the Paper


Task             Dataset           Model        Metric       Value   Global Rank
LV Segmentation  Echonet-Dynamic   SepXception  Test DSC     92.90   # 2
LV Segmentation  Echonet-Dynamic   SepXception  Test ES DSC  91.73   # 2
LV Segmentation  Echonet-Dynamic   SepXception  Test ED DSC  93.64   # 2
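The DSC values above are percentages of the Dice Similarity Coefficient, 2|A ∩ B| / (|A| + |B|) for predicted and ground-truth masks. A minimal sketch of the metric for binary masks (an illustrative helper, not the benchmark's evaluation code):

```python
def dice(pred, gt):
    """Dice Similarity Coefficient for two equal-length binary masks,
    given as flat sequences of 0/1 labels."""
    inter = sum(p * g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    # Convention: two empty masks are a perfect match.
    return 2.0 * inter / total if total else 1.0

# Toy example: 2 overlapping foreground pixels out of 3 + 3.
print(dice([1, 1, 1, 0], [0, 1, 1, 1]))  # 4/6 ≈ 0.667
```

ES and ED DSC score the same metric restricted to end-systolic and end-diastolic frames, respectively.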
