2 code implementations • 7 Feb 2021 • Chetan L. Srinidhi, Seung Wook Kim, Fu-Der Chen, Anne L. Martel
In this work, we overcome this challenge by leveraging both task-agnostic and task-specific unlabeled data based on two novel strategies: i) a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning; ii) a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data.
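The teacher-student consistency idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes an EMA-updated teacher and a mean-squared-error consistency loss between teacher and student predictions on the same unlabeled batch, and the helper names (`ema_update`, `consistency_loss`) are hypothetical.

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.9):
    # Teacher weights track an exponential moving average of the
    # student weights (a common choice in consistency training).
    return momentum * teacher_w + (1 - momentum) * student_w

def consistency_loss(student_pred, teacher_pred):
    # Penalize disagreement between student and teacher predictions
    # on the same (task-specific) unlabeled inputs.
    return float(np.mean((student_pred - teacher_pred) ** 2))

# Toy example: update the teacher, then score prediction consistency.
student_w = np.array([1.0, 2.0])
teacher_w = np.array([0.0, 0.0])
teacher_w = ema_update(teacher_w, student_w, momentum=0.9)   # -> [0.1, 0.2]
loss = consistency_loss(np.array([0.2, 0.8]), np.array([0.25, 0.75]))
```

In practice the student is trained on labeled data plus this consistency term, while the teacher is never updated by gradients, only by the EMA step.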
Histopathological Image Classification • Representation Learning • +1