no code implementations • 8 Oct 2023 • Zhong-Yu Li, Bo-Wen Yin, Yongxiang Liu, Li Liu, Ming-Ming Cheng
Thus, we propose Heterogeneous Self-Supervised Learning (HSSL), which enforces a base model to learn from an auxiliary head whose architecture is heterogeneous from the base model.
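The core idea above — a base model trained to agree with an auxiliary head of a different architecture — can be sketched as a toy alignment loss. This is a minimal illustration, not the paper's method: the linear "base model", the ReLU-MLP "auxiliary head", and the negative-cosine loss are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "base model": a single linear projection (stand-in for a backbone).
W_base = rng.normal(size=(64, 32))

# Toy "auxiliary head": a 2-layer ReLU MLP, i.e. an architecture
# heterogeneous from the linear base model.
W1, W2 = rng.normal(size=(64, 48)), rng.normal(size=(48, 32))

def base_embed(x):
    return x @ W_base

def aux_embed(x):
    return np.maximum(x @ W1, 0.0) @ W2

def alignment_loss(x):
    """Negative mean cosine similarity between the two embeddings.

    Minimizing this pushes the base model toward representations produced
    by the heterogeneous auxiliary head (hypothetical HSSL-style signal).
    """
    zb, za = base_embed(x), aux_embed(x)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    return -np.mean(np.sum(zb * za, axis=1))

x = rng.normal(size=(8, 64))
loss = alignment_loss(x)
```

In a real setup both branches would be deep networks and the loss would drive gradient updates of the base model; here only the loss computation is shown.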
2 code implementations • 14 Jun 2022 • ShangHua Gao, Zhong-Yu Li, Qi Han, Ming-Ming Cheng, Liang Wang
Our search scheme combines a global search, which finds coarse receptive field combinations, with a local search that further refines them.
Ranked #2 on Instance Segmentation on COCO 2017 val (AP metric)
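The global-then-local scheme described above can be illustrated with a generic coarse-to-fine grid search. Everything here is a toy stand-in: the `score` function replaces the expensive train-and-validate step the paper actually performs for each candidate receptive field combination.

```python
import itertools

def score(rates):
    # Stand-in for validation accuracy of a model built with these
    # dilation rates; the real scheme trains a network per candidate.
    target = (2, 5, 9)
    return -sum((r - t) ** 2 for r, t in zip(rates, target))

# Global search: evaluate coarse combinations on a sparse grid.
coarse = max(itertools.product([1, 4, 8], repeat=3), key=score)

# Local search: refine each rate within a +/-2 neighbourhood of the
# best coarse combination.
neighbourhood = [range(max(1, r - 2), r + 3) for r in coarse]
refined = max(itertools.product(*neighbourhood), key=score)
```

The coarse pass prunes the exponential search space cheaply; the local pass only enumerates a small neighbourhood around the surviving candidate, which is what makes the two-stage scheme tractable.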
1 code implementation • 10 Jun 2022 • Zhong-Yu Li, ShangHua Gao, Ming-Ming Cheng
Specifically, instead of conducting self-supervised learning solely on feature embeddings from multiple views, we utilize feature self-relations, i.e., spatial and channel self-relations, for self-supervised learning.
Ranked #2 on Semantic Segmentation on ImageNet-S
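Spatial and channel self-relations, as mentioned above, relate a feature map to itself along two axes: position-to-position and channel-to-channel. The sketch below shows one plausible formulation (softmax-normalized Gram matrices); the paper's exact normalization may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_relations(feat):
    """Compute toy spatial and channel self-relations of a feature map.

    feat: array of shape (C, H, W).
    Returns an (HW, HW) spatial relation (similarity of every position
    to every other position) and a (C, C) channel relation.
    """
    C, H, W = feat.shape
    f = feat.reshape(C, H * W)
    spatial = softmax(f.T @ f, axis=-1)  # (HW, HW)
    channel = softmax(f @ f.T, axis=-1)  # (C, C)
    return spatial, channel

feat = np.random.default_rng(0).normal(size=(8, 4, 4))
sp, ch = self_relations(feat)
```

Because the relations describe how features co-vary rather than the raw embeddings, they can serve as a learning target that is comparable across views.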
3 code implementations • 6 Jun 2021 • ShangHua Gao, Zhong-Yu Li, Ming-Hsuan Yang, Ming-Ming Cheng, Junwei Han, Philip Torr
In this work, we propose the new problem of large-scale unsupervised semantic segmentation (LUSS), together with a newly created benchmark dataset to facilitate research on it.
Ranked #1 on Unsupervised Semantic Segmentation on ImageNet-S-300
2 code implementations • CVPR 2021 • Shang-Hua Gao, Qi Han, Zhong-Yu Li, Pai Peng, Liang Wang, Ming-Ming Cheng
Our search scheme combines a global search, which finds coarse combinations, with a local search that further refines the receptive field combination patterns.
Ranked #20 on Action Segmentation on Breakfast