no code implementations • 22 Nov 2023 • Shijie Wang, Qi Zhao, Minh Quan Do, Nakul Agarwal, Kwonjoon Lee, Chen Sun
What makes good video representations for video understanding, such as anticipating future activities, or answering video-conditioned questions?
1 code implementation • 31 Oct 2023 • Ce Zhang, Changcheng Fu, Shijie Wang, Nakul Agarwal, Kwonjoon Lee, Chiho Choi, Chen Sun
To recognize and predict human-object interactions, we use a Transformer-based neural architecture which allows the "retrieval" of relevant objects for action anticipation at various time scales.
Ranked #2 on Long Term Action Anticipation on Ego4D
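The "retrieval" idea in the snippet above can be sketched as scaled dot-product attention: an anticipation query scores each object's features, and the most relevant objects dominate the retrieved context. All names and dimensions here are illustrative, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def retrieve_objects(query, object_feats):
    # query: (d,) anticipation query; object_feats: (N, d) per-object features
    scores = object_feats @ query / np.sqrt(query.shape[0])  # scaled dot product
    weights = softmax(scores)            # relevance of each object
    context = weights @ object_feats     # weighted "retrieved" summary
    return context, weights

rng = np.random.default_rng(0)
context, weights = retrieve_objects(rng.normal(size=64),
                                    rng.normal(size=(8, 64)))
```

Running the same retrieval with queries at different temporal horizons would give per-horizon relevance weights, which is one way to read "at various time scales".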
no code implementations • 9 Oct 2023 • Kaiwen Zhou, Kwonjoon Lee, Teruhisa Misu, Xin Eric Wang
We categorize the problem of VCR into visual commonsense understanding (VCU) and visual commonsense inference (VCI).
no code implementations • 31 Jul 2023 • Qi Zhao, Shijie Wang, Ce Zhang, Changcheng Fu, Minh Quan Do, Nakul Agarwal, Kwonjoon Lee, Chen Sun
We propose to formulate the LTA task from two perspectives: a bottom-up approach that predicts the next actions autoregressively by modeling temporal dynamics; and a top-down approach that infers the goal of the actor and plans the needed procedure to accomplish the goal.
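The two formulations contrasted above can be illustrated with a toy model: bottom-up rolls the next action forward from a (hypothetical) transition model of temporal dynamics, while top-down looks up the remaining steps of a (hypothetical) goal procedure. This is a sketch of the two perspectives, not the paper's implementation.

```python
import numpy as np

actions = ["wash", "cut", "cook", "serve"]

# Bottom-up: predict next actions autoregressively; P[i, j] is an
# illustrative P(next action = j | current action = i).
P = np.array([[0.1, 0.7, 0.1, 0.1],
              [0.1, 0.1, 0.7, 0.1],
              [0.1, 0.1, 0.1, 0.7],
              [0.7, 0.1, 0.1, 0.1]])

def bottom_up(current, steps):
    seq, i = [], actions.index(current)
    for _ in range(steps):
        i = int(P[i].argmax())   # most likely next action
        seq.append(actions[i])
    return seq

# Top-down: infer the actor's goal, then plan the remaining procedure.
procedures = {"make_dinner": ["wash", "cut", "cook", "serve"]}  # hypothetical

def top_down(goal, observed):
    plan = procedures[goal]
    return plan[len(observed):]  # steps still needed to accomplish the goal

print(bottom_up("wash", 3))                       # ['cut', 'cook', 'serve']
print(top_down("make_dinner", ["wash", "cut"]))   # ['cook', 'serve']
```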
no code implementations • CVPR 2023 • Hyung-gun Chi, Kwonjoon Lee, Nakul Agarwal, Yi Xu, Karthik Ramani, Chiho Choi
SALF is challenging because it requires understanding the underlying physics of video observations to predict future action locations accurately.
3 code implementations • ICLR 2022 • Kwonjoon Lee, Huiwen Chang, Lu Jiang, Han Zhang, Zhuowen Tu, Ce Liu
Recently, Vision Transformers (ViTs) have shown competitive performance on image recognition while requiring less vision-specific inductive biases.
Ranked #68 on Image Generation on CIFAR-10
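The "less vision-specific inductive bias" point above comes from the ViT input pipeline: instead of convolutions, an image is split into non-overlapping patches that are flattened into tokens. A minimal sketch of that patchify step (projection layer omitted):

```python
import numpy as np

def patchify(img, patch=4):
    # img: (H, W, C) -> (num_patches, patch * patch * C) flattened tokens
    H, W, C = img.shape
    grid = img.reshape(H // patch, patch, W // patch, patch, C)
    return grid.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)

tokens = patchify(np.zeros((32, 32, 3)))
print(tokens.shape)  # (64, 48): an 8x8 grid of 4x4x3 patches
```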
no code implementations • CVPR 2021 • Gaurav Parmar, Dacheng Li, Kwonjoon Lee, Zhuowen Tu
Our model, named dual contradistinctive generative autoencoder (DC-VAE), integrates an instance-level discriminative loss (maintaining the instance-level fidelity for the reconstruction/synthesis) with a set-level adversarial loss (encouraging the set-level fidelity for the reconstruction/synthesis), both being contradistinctive.
Ranked #2 on Image Generation on LSUN Bedroom 128 x 128
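The two contradistinctive terms described above can be illustrated side by side: an instance-level loss scores each reconstruction individually, while a set-level adversarial loss scores batches of real versus generated samples through a discriminator. The functional forms below are a toy sketch, not the paper's exact losses.

```python
import numpy as np

def instance_level_loss(x, x_rec):
    # per-instance reconstruction fidelity (illustrative squared error)
    return float(np.mean((x - x_rec) ** 2))

def set_level_adversarial_loss(d_real, d_fake, eps=1e-8):
    # standard GAN-style discriminator loss over sets of outputs in (0, 1)
    return float(-np.mean(np.log(d_real + eps))
                 - np.mean(np.log(1.0 - d_fake + eps)))

perfect = instance_level_loss(np.ones((4, 8)), np.ones((4, 8)))      # 0.0
confident = set_level_adversarial_loss(np.full(4, 0.99), np.full(4, 0.01))
```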
no code implementations • ICLR 2020 • Siyang Wang, Justin Lazarow, Kwonjoon Lee, Zhuowen Tu
We tackle the problem of modeling sequential visual phenomena.
1 code implementation • CVPR 2020 • Justin Lazarow, Kwonjoon Lee, Kunyu Shi, Zhuowen Tu
Panoptic segmentation requires segments of both "things" (countable object instances) and "stuff" (uncountable and amorphous regions) within a single output.
Ranked #22 on Panoptic Segmentation on COCO test-dev
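The "single output" requirement above can be pictured as merging a "stuff" semantic map with per-instance "thing" masks into one label map, where every countable instance receives a unique id. The merging heuristic below is a hypothetical sketch, not the paper's procedure.

```python
import numpy as np

def merge_panoptic(stuff_seg, instance_masks, thing_offset=1000):
    # stuff_seg: (H, W) semantic labels; instance_masks: list of (H, W) bools
    panoptic = stuff_seg.copy()
    for i, mask in enumerate(instance_masks):
        panoptic[mask] = thing_offset + i  # unique id per countable instance
    return panoptic

stuff = np.zeros((4, 4), dtype=int)   # e.g. label 0 = an amorphous region
car = np.zeros((4, 4), dtype=bool)
car[1:3, 1:3] = True                  # one "thing" instance
panoptic = merge_panoptic(stuff, [car])
```

Every pixel ends up with exactly one label, which is what distinguishes the panoptic output from separate semantic and instance predictions.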
7 code implementations • CVPR 2019 • Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, Stefano Soatto
We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks.
Ranked #12 on Few-Shot Image Classification on FC100 5-way (1-shot)
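A base learner of the kind described above can be sketched as a closed-form ridge regressor fit on a few-shot support set and applied to queries; the closed form is what makes it cheap and differentiable with respect to the features. The regularizer and toy episode below are illustrative, not the paper's exact solver.

```python
import numpy as np

def fit_ridge(support_feats, support_onehot, lam=1.0):
    # W = (X^T X + lam * I)^{-1} X^T Y
    d = support_feats.shape[1]
    A = support_feats.T @ support_feats + lam * np.eye(d)
    return np.linalg.solve(A, support_feats.T @ support_onehot)

def predict(query_feats, W):
    return (query_feats @ W).argmax(axis=1)

# Toy 2-way, 1-shot episode with well-separated features.
support = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.eye(2)                    # one-hot support labels
W = fit_ridge(support, labels, lam=0.1)
preds = predict(np.array([[0.9, 0.1], [0.1, 0.9]]), W)
print(preds)  # [0 1]
```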
no code implementations • 6 Dec 2017 • Zhiwei Jia, Haoshen Hong, Siyang Wang, Kwonjoon Lee, Zhuowen Tu
We study the intrinsic transformation of feature maps across convolutional network layers with explicit top-down control.
1 code implementation • CVPR 2018 • Kwonjoon Lee, Weijian Xu, Fan Fan, Zhuowen Tu
We present Wasserstein introspective neural networks (WINN) that are both a generator and a discriminator within a single model.
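The generator-and-discriminator-in-one idea above can be caricatured in a few lines: a single model assigns a score to samples (its discriminator role) and synthesizes new samples by gradient ascent on its own score (its generator role). The quadratic score below is a stand-in for a learned critic, purely for illustration.

```python
import numpy as np

target = np.array([2.0, -1.0])   # illustrative mode of the "data"

def score(x):
    return -np.sum((x - target) ** 2)   # stand-in for a learned critic

def score_grad(x):
    return -2.0 * (x - target)

def synthesize(x, steps=200, lr=0.05):
    for _ in range(steps):
        x = x + lr * score_grad(x)  # ascend the model's own score
    return x

x = synthesize(np.zeros(2))   # converges toward the high-score region
```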