1 code implementation • 26 Dec 2023 • Suho Park, SuBeen Lee, Sangeek Hyun, Hyun Seok Seong, Jae-Pil Heo
Based on these two scores, we define a query-background relevance score that captures the similarity between the backgrounds of the query and the support, and use it to scale support background features, adaptively restricting the impact of disruptive support backgrounds.
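A minimal PyTorch sketch of this kind of relevance-based scaling is given below; the pooled background prototypes, the mapping of cosine similarity to [0, 1], and every name in it are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def scale_support_background(query_bg_proto, support_bg_proto,
                             support_feats, support_bg_mask):
    """Scale support background features by query-support background similarity.

    query_bg_proto:   (C,)      pooled query background prototype (hypothetical)
    support_bg_proto: (C,)      pooled support background prototype (hypothetical)
    support_feats:    (C, H, W) support feature map
    support_bg_mask:  (1, H, W) binary background mask for the support image
    """
    # Query-background relevance score: cosine similarity mapped to [0, 1].
    rel = F.cosine_similarity(query_bg_proto, support_bg_proto, dim=0)
    rel = (rel + 1.0) / 2.0
    # Down-weight background regions of the support features when the query
    # and support backgrounds are dissimilar (rel close to 0); foreground
    # regions (1 - mask) pass through unchanged.
    return support_feats * (support_bg_mask * rel + (1.0 - support_bg_mask))
```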
2 code implementations • 15 Nov 2023 • WonJun Moon, Sangeek Hyun, SuBeen Lee, Jae-Pil Heo
Dummy tokens conditioned on the text query take a portion of the attention weights, preventing irrelevant video clips from being represented by the text query (see the sketch after this entry).
Ranked #1 on Highlight Detection on TvSum
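As a hedged illustration of how text-conditioned dummy tokens could be appended to a cross-attention module, here is a PyTorch sketch; the module name, the mean-pooled text conditioning, and the number of dummy tokens are all assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class DummyTokenCrossAttention(nn.Module):
    """Cross-attention with text-conditioned dummy tokens (hypothetical sketch).

    Video clip features attend over the text tokens plus a few dummy tokens;
    irrelevant clips can route attention mass to the dummies instead of being
    forced to be represented by the text query.
    """
    def __init__(self, dim, num_heads=8, num_dummies=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.dummies = nn.Parameter(torch.randn(1, num_dummies, dim))
        # Conditions the dummy tokens on the pooled text query (assumption).
        self.cond = nn.Linear(dim, dim)

    def forward(self, clip_feats, text_feats):
        # clip_feats: (B, T, C) video clips; text_feats: (B, L, C) text tokens
        B = clip_feats.size(0)
        text_ctx = self.cond(text_feats.mean(dim=1, keepdim=True))  # (B, 1, C)
        dummies = self.dummies.expand(B, -1, -1) + text_ctx         # (B, D, C)
        kv = torch.cat([text_feats, dummies], dim=1)                # (B, L+D, C)
        # attn has shape (B, T, L+D): the last D columns are the attention
        # mass absorbed by the dummy tokens.
        out, attn = self.attn(clip_feats, kv, kv)
        return out, attn
```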
1 code implementation • 28 Jul 2023 • SuBeen Lee, WonJun Moon, Hyun Seok Seong, Jae-Pil Heo
While TDM influences high-level feature maps through task-adaptive calibration of channel-wise importance, we further introduce the Instance Attention Module (IAM), an extension of QAM that operates in the intermediate layers of the feature extractor to highlight object-relevant channels for each instance.
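A minimal sketch of instance-wise channel reweighting in an intermediate layer might look as follows; the squeeze-and-excitation-style MLP and all names are illustrative assumptions, not the paper's IAM.

```python
import torch
import torch.nn as nn

class InstanceAttention(nn.Module):
    """Instance-wise channel reweighting for an intermediate layer (sketch).

    For each instance in the batch independently, spatially pooled features
    produce sigmoid channel weights that highlight object-relevant channels.
    """
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feats):
        # feats: (B, C, H, W); each instance gets its own channel weights.
        w = self.mlp(feats.mean(dim=(2, 3)))            # (B, C)
        return feats * w.unsqueeze(-1).unsqueeze(-1)    # rescaled feature map
```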
1 code implementation • CVPR 2023 • Hyun Seok Seong, WonJun Moon, SuBeen Lee, Jae-Pil Heo
Specifically, we add a loss that propagates to local hidden positives, i.e., semantically similar nearby patches, in proportion to predefined similarity scores (sketched after this entry).
Ranked #2 on Unsupervised Semantic Segmentation on Potsdam-3
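One hedged way to realize such similarity-weighted propagation in PyTorch is sketched below; the cosine objective, the detached similarity weights, and the function name are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def local_hidden_positive_loss(feats, sim, neighbor_idx):
    """Pull each anchor patch toward its local hidden positives (sketch).

    feats:        (N, C) patch features
    sim:          (N, K) predefined similarity scores to K nearby patches
    neighbor_idx: (N, K) long tensor of indices of those nearby patches
    """
    anchors = F.normalize(feats, dim=1)                 # (N, C)
    neighbors = anchors[neighbor_idx]                   # (N, K, C)
    # Cosine agreement between each anchor and its nearby patches.
    cos = (anchors.unsqueeze(1) * neighbors).sum(-1)    # (N, K)
    # The predefined similarity scores act as fixed propagation weights,
    # so the loss flows to each hidden positive in proportion to its score.
    w = sim.detach()
    return -(w * cos).sum() / w.sum().clamp(min=1e-6)
```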
1 code implementation • CVPR 2022 • SuBeen Lee, WonJun Moon, Jae-Pil Heo
Specifically, TDM learns task-specific channel weights based on two novel components: the Support Attention Module (SAM) and the Query Attention Module (QAM); a speculative sketch follows this entry.
Ranked #10 on Few-Shot Image Classification on CUB 200 5-way 5-shot (using extra training data)
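To make the two-branch idea concrete, here is a speculative PyTorch sketch in which a SAM-like branch scores channels by their inter-class spread over support prototypes and a QAM-like branch scores them from the query feature; the MLPs, the softmax normalization, and all names are assumptions, not the published TDM.

```python
import torch
import torch.nn as nn

class TaskChannelWeights(nn.Module):
    """Task-specific channel weighting with SAM/QAM-style branches (sketch).

    sam_net / qam_net are hypothetical small MLPs: the SAM-like branch scores
    channels by how well they separate the support classes, the QAM-like
    branch by their relevance to the current query.
    """
    def __init__(self, channels, hidden=64):
        super().__init__()
        self.sam_net = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(), nn.Linear(hidden, channels))
        self.qam_net = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(), nn.Linear(hidden, channels))

    def forward(self, support_protos, query_feat):
        # support_protos: (N_way, C) class prototypes; query_feat: (C,)
        inter_class = support_protos.var(dim=0)           # channel-wise spread
        w_sam = torch.softmax(self.sam_net(inter_class), dim=-1)
        w_qam = torch.softmax(self.qam_net(query_feat), dim=-1)
        return w_sam * w_qam                              # combined weights (C,)
```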