Self-Supervised Learning Based on Spatial Awareness for Medical Image Analysis

Medical image analysis is one of the research fields that has benefited greatly from deep learning in recent years. To achieve good performance, learning models require large-scale, fully annotated data. However, collecting a sufficient number of labeled examples for training is a significant burden. Since unlabeled data far outnumber labeled data in most medical applications, self-supervised learning has been used to improve performance. However, most current self-supervised methods attempt to capture only the semantic features of the data and do not fully exploit properties inherent in medical images. Specifically, in CT or MR images, the spatial and structural information contained in the dataset has not been fully considered. In this paper, we propose a novel self-supervised learning method for medical image analysis that exploits semantic and spatial features at the same time. The proposed method is evaluated on organ segmentation and intracranial hemorrhage detection, and the results demonstrate its effectiveness.
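The abstract does not specify the pretext tasks, so the following is only a minimal PyTorch sketch of the general idea it describes: a shared encoder trained with two self-supervised heads, a semantic head (here rotation prediction, a common choice) and a spatial head that regresses the normalized slice position of a 2D patch within its 3D CT/MR volume. The class name `SpatialAwareSSL`, the network layout, and the specific pretext tasks are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn

class SpatialAwareSSL(nn.Module):
    """Hypothetical two-head pretext model: a semantic head classifying
    which rotation was applied to a patch, and a spatial head regressing
    the patch's normalized slice index within its volume."""
    def __init__(self, num_rotations: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.semantic_head = nn.Linear(64, num_rotations)  # rotation class logits
        self.spatial_head = nn.Linear(64, 1)               # slice position in [0, 1]

    def forward(self, x):
        z = self.encoder(x)
        return self.semantic_head(z), torch.sigmoid(self.spatial_head(z))

# Toy training step on random stand-ins for grayscale CT/MR patches.
model = SpatialAwareSSL()
patches = torch.randn(8, 1, 64, 64)     # batch of single-channel patches
rot_labels = torch.randint(0, 4, (8,))  # which of 4 rotations was applied
pos_labels = torch.rand(8, 1)           # ground-truth normalized slice index

rot_logits, pos_pred = model(patches)
loss = nn.functional.cross_entropy(rot_logits, rot_labels) \
     + nn.functional.mse_loss(pos_pred, pos_labels)
loss.backward()
```

In this kind of setup, both labels come for free: the rotation label is created by the augmentation itself, and the slice index is read directly from the volume's geometry, so no manual annotation is needed during pretraining.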
