no code implementations • 28 May 2024 • Youngwan Lee, Jeffrey Ryan Willette, Jonghee Kim, Sung Ju Hwang
To further investigate why the self-supervised ViT generalizes better when trained with MAE (MAE-ViT), and to study the effect of RC-MAE's gradient correction from an optimization perspective, we visualize the loss landscapes of ViTs trained with MAE and RC-MAE and compare them with a supervised ViT (Sup-ViT).
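The loss-landscape comparison above can be sketched with the standard random-direction approach (perturbing converged weights along two normalized random directions and evaluating the loss on a grid). This is a minimal numpy sketch with a toy quadratic loss standing in for a trained ViT's training loss; `loss_fn`, `theta_star`, and the grid settings are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def loss_fn(theta):
    # Toy stand-in for the training loss of a converged model,
    # minimized at theta = 1 (an assumption for illustration).
    return float(np.sum((theta - 1.0) ** 2))

def loss_landscape(theta_star, n=21, radius=1.0, seed=0):
    """Evaluate the loss on a 2-D grid theta* + a*d1 + b*d2."""
    rng = np.random.default_rng(seed)
    d1 = rng.standard_normal(theta_star.shape)
    d2 = rng.standard_normal(theta_star.shape)
    # Scale the random directions to the weight norm so perturbations
    # are comparable across models (a simplified, global version of
    # the per-filter normalization used in landscape visualizations).
    d1 *= np.linalg.norm(theta_star) / np.linalg.norm(d1)
    d2 *= np.linalg.norm(theta_star) / np.linalg.norm(d2)
    alphas = np.linspace(-radius, radius, n)
    return np.array([[loss_fn(theta_star + a * d1 + b * d2)
                      for b in alphas] for a in alphas])

theta_star = np.ones(64)              # pretend converged weights
surface = loss_landscape(theta_star)  # 21 x 21 loss surface
```

Plotting `surface` as a contour or 3-D surface then shows how flat or sharp the region around the minimizer is, which is the qualitative comparison made between Sup-ViT, MAE-ViT, and RC-MAE.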
1 code implementation • 5 Oct 2022 • Youngwan Lee, Jeffrey Willette, Jonghee Kim, Juho Lee, Sung Ju Hwang
Masked image modeling (MIM) has become a popular strategy for self-supervised learning (SSL) of visual representations with Vision Transformers.
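The core of MIM as used by MAE-style methods is random patch masking: hide most patches and train the model to reconstruct them from the visible ones. A minimal numpy sketch of that masking step, assuming a ViT-B/16-style layout (196 patches of dimension 768) and the 75% mask ratio popularized by MAE; the function name and return layout are my own:

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, seed=0):
    """Keep a random subset of patches for the encoder.

    Returns the visible patches, a binary mask (1 = masked, 0 = visible),
    and the indices of the kept patches.
    """
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    ids_shuffle = np.argsort(rng.random(n))  # random permutation
    ids_keep = ids_shuffle[:n_keep]          # visible patch indices
    mask = np.ones(n)
    mask[ids_keep] = 0
    return patches[ids_keep], mask, ids_keep

# 196 patches (14 x 14 grid) of dim 768, as for a 224x224 input to ViT-B/16.
patches = np.random.randn(196, 768)
visible, mask, ids_keep = random_masking(patches)
```

The encoder sees only `visible`; a lightweight decoder reconstructs the masked patches, and the reconstruction loss is computed only on positions where `mask == 1`.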
3 code implementations • CVPR 2022 • Youngwan Lee, Jonghee Kim, Jeff Willette, Sung Ju Hwang
While Convolutional Neural Networks (CNNs) have been the dominant architectures for such tasks, recently introduced Vision Transformers (ViTs) aim to replace them as a backbone.
Ranked #38 on Instance Segmentation on COCO minival