no code implementations • 26 Mar 2024 • Mingfu Liang, Jong-Chyi Su, Samuel Schulter, Sparsh Garg, Shiyu Zhao, Ying Wu, Manmohan Chandraker
This necessitates an expensive process of continuously curating and annotating data with significant human effort.
1 code implementation • 28 Feb 2023 • Sangwoo Mo, Jong-Chyi Su, Chih-Yao Ma, Mido Assran, Ishan Misra, Licheng Yu, Sean Bell
Semi-supervised learning aims to train a model using limited labels.
1 code implementation • CVPR 2023 • Tsu-Jui Fu, Licheng Yu, Ning Zhang, Cheng-Yang Fu, Jong-Chyi Su, William Yang Wang, Sean Bell
Inspired by this, we introduce a novel task, text-guided video completion (TVC), which asks the model to generate a video from partial frames, guided by a text instruction.
Ranked #3 on Video Prediction on BAIR Robot Pushing
1 code implementation • 23 Nov 2021 • Jong-Chyi Su, Subhransu Maji
We propose techniques to incorporate coarse taxonomic labels to train image classifiers in fine-grained domains.
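A minimal sketch of one way coarse taxonomic labels can supplement fine-grained supervision: a shared backbone with separate fine and coarse heads trained jointly. The `fine_to_coarse` mapping, backbone, and loss weighting are illustrative assumptions, not the paper's exact training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical taxonomy: 10 fine-grained species mapped to 3 coarse taxa.
fine_to_coarse = torch.tensor([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])

class TwoHeadClassifier(nn.Module):
    """Shared backbone with a fine-grained head and a coarse-level head."""
    def __init__(self, feat_dim=64, num_fine=10, num_coarse=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU())
        self.fine_head = nn.Linear(feat_dim, num_fine)
        self.coarse_head = nn.Linear(feat_dim, num_coarse)

    def forward(self, x):
        feats = self.backbone(x)
        return self.fine_head(feats), self.coarse_head(feats)

model = TwoHeadClassifier()
images = torch.randn(8, 3, 32, 32)
fine_labels = torch.randint(0, 10, (8,))
coarse_labels = fine_to_coarse[fine_labels]  # coarse labels derived from the taxonomy

fine_logits, coarse_logits = model(images)
# Joint objective: fine-grained loss plus a coarse-level auxiliary loss.
loss = F.cross_entropy(fine_logits, fine_labels) + 0.5 * F.cross_entropy(coarse_logits, coarse_labels)
loss.backward()
```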
2 code implementations • 2 Jun 2021 • Jong-Chyi Su, Subhransu Maji
Semi-iNat is a challenging dataset for semi-supervised classification with a long-tailed distribution of classes, fine-grained categories, and domain shifts between labeled and unlabeled data.
1 code implementation • CVPR 2021 • Jong-Chyi Su, Zezhou Cheng, Subhransu Maji
We evaluate the effectiveness of semi-supervised learning (SSL) on a realistic benchmark where data exhibits considerable class imbalance and contains images from novel classes.
2 code implementations • 11 Mar 2021 • Jong-Chyi Su, Subhransu Maji
From this collection, we sample a subset of classes and their labels, while adding the images from the remaining classes to the unlabeled set of images.
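The split described above can be sketched as follows; the class counts and the `all_images` structure are placeholders, and the actual benchmark construction may differ in detail.

```python
import random

# Hypothetical collection: class name -> list of image paths.
all_images = {f"species_{i}": [f"species_{i}/{j}.jpg" for j in range(20)] for i in range(50)}

random.seed(0)
in_classes = set(random.sample(sorted(all_images), k=25))  # classes that keep their labels

labeled_set = [(path, cls) for cls in in_classes for path in all_images[cls]]
# Images from the remaining classes go into the unlabeled pool without labels.
unlabeled_set = [path for cls in all_images if cls not in in_classes for path in all_images[cls]]

print(len(labeled_set), len(unlabeled_set))
```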
1 code implementation • ICCV 2021 • Zezhou Cheng, Jong-Chyi Su, Subhransu Maji
Given a collection of images, humans are able to discover landmarks by modeling the shared geometric structure across instances.
2 code implementations • ECCV 2020 • Jong-Chyi Su, Subhransu Maji, Bharath Hariharan
We investigate the role of self-supervised learning (SSL) in the context of few-shot learning.
no code implementations • 17 Jun 2019 • Jong-Chyi Su, Subhransu Maji, Bharath Hariharan
We present a technique to improve the transferability of deep representations learned on small labeled datasets by introducing self-supervised tasks as auxiliary loss functions.
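A hedged sketch of the auxiliary-loss idea, using rotation prediction as the self-supervised task. The backbone, heads, and loss weight are illustrative placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
cls_head = nn.Linear(128, 5)   # supervised classifier head
rot_head = nn.Linear(128, 4)   # self-supervised head: predict rotation in {0, 90, 180, 270} degrees

images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 5, (8,))

# Build the self-supervised task: rotate each image by a random multiple of 90 degrees.
rot_targets = torch.randint(0, 4, (8,))
rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(images, rot_targets)])

sup_loss = F.cross_entropy(cls_head(backbone(images)), labels)
ssl_loss = F.cross_entropy(rot_head(backbone(rotated)), rot_targets)
loss = sup_loss + 1.0 * ssl_loss  # self-supervised task added as an auxiliary loss
loss.backward()
```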
no code implementations • 16 Apr 2019 • Jong-Chyi Su, Yi-Hsuan Tsai, Kihyuk Sohn, Buyu Liu, Subhransu Maji, Manmohan Chandraker
Our approach, active adversarial domain adaptation (AADA), explores a duality between two related problems: adversarial domain alignment and importance sampling for adapting models across domains.
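One plausible reading of that duality, sketched below: the domain discriminator's output yields an importance weight for each unlabeled target sample, and samples that look target-like yet have uncertain predictions are queried for annotation. The function name and exact scoring rule are assumptions for illustration and may differ from the paper's formulation.

```python
import torch

def aada_scores(domain_probs, class_probs, eps=1e-6):
    """Score unlabeled target samples for active labeling.

    domain_probs: discriminator's probability that a sample comes from the source domain.
    class_probs:  classifier's softmax output for each sample.
    Higher scores ~ target-like samples (high importance weight) with uncertain predictions.
    """
    importance = (1.0 - domain_probs) / (domain_probs + eps)        # importance-sampling weight
    entropy = -(class_probs * torch.log(class_probs + eps)).sum(1)  # predictive uncertainty
    return importance * entropy

# Toy example: 5 unlabeled target samples, 3 classes.
domain_probs = torch.tensor([0.9, 0.6, 0.3, 0.2, 0.1])
class_probs = torch.softmax(torch.randn(5, 3), dim=1)
scores = aada_scores(domain_probs, class_probs)
to_label = scores.topk(2).indices  # query the top-scoring samples for annotation
print(to_label)
```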
no code implementations • 7 Sep 2018 • Jong-Chyi Su, Matheus Gadelha, Rui Wang, Subhransu Maji
We investigate the role of representations and architectures for classifying 3D shapes in terms of their computational efficiency, generalization, and robustness to adversarial transformations.
no code implementations • ICCV 2017 • Jong-Chyi Su, Chenyun Wu, Huaizu Jiang, Subhransu Maji
We collect a large dataset of such phrases by asking annotators to describe several visual differences between a pair of instances within a category.
no code implementations • 1 Apr 2016 • Jong-Chyi Su, Subhransu Maji
Model compression and knowledge distillation have been successfully applied for cross-architecture and cross-domain transfer learning.
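For context, the standard knowledge-distillation objective (soft targets from a teacher combined with the usual hard-label loss) can be sketched as below. The networks, temperature, and weighting are placeholders; this is the generic formulation, not the paper's cross-domain distillation setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(32, 10)   # stand-in for a pretrained teacher network
student = nn.Linear(32, 10)   # stand-in for the student being trained

x = torch.randn(16, 32)
labels = torch.randint(0, 10, (16,))
T = 4.0  # softening temperature

with torch.no_grad():
    teacher_logits = teacher(x)
student_logits = student(x)

# Distillation term: match the student's softened distribution to the teacher's.
kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
              F.softmax(teacher_logits / T, dim=1),
              reduction="batchmean") * (T * T)
ce = F.cross_entropy(student_logits, labels)
loss = 0.5 * kd + 0.5 * ce
loss.backward()
```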