no code implementations • 8 Apr 2024 • Fengrui Tian, Yaoyao Liu, Adam Kortylewski, Yueqi Duan, Shaoyi Du, Alan Yuille, Angtian Wang
Instead of using manually annotated images, we leverage diffusion models (e.g., Zero-1-to-3) to generate a set of images under controlled pose differences and propose to learn our object pose estimator with those images.
no code implementations • 15 Dec 2023 • Qian Wang, Yaoyao Liu, Hefei Ling, Yingwei Li, Qihao Liu, Ping Li, Jiazhong Chen, Alan Yuille, Ning Yu
Because adversarial attacks against visual classifiers evolve rapidly, with new attacks appearing on a monthly basis, numerous defenses have been proposed that aim to generalize against as many known attacks as possible.
1 code implementation • 30 Nov 2023 • Ruxiao Duan, Yaoyao Liu, Jieneng Chen, Adam Kortylewski, Alan Yuille
Replay-based methods in class-incremental learning (CIL) have attained remarkable success, as replaying the exemplars of old classes can significantly mitigate catastrophic forgetting.
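The replay idea described above can be sketched as a fixed-budget exemplar memory that is updated after each incremental phase. This is a hypothetical minimal sketch, not the paper's method: real approaches typically select exemplars by herding (feature-mean distance), whereas here random sampling is used as a stand-in, and the `ExemplarMemory` class and its names are illustrative assumptions.

```python
import random

class ExemplarMemory:
    """Fixed-budget exemplar store for class-incremental replay (sketch).

    Hedged stand-in: herding-based selection is replaced by random
    sampling; the budget is split equally across all seen classes.
    """

    def __init__(self, budget):
        self.budget = budget          # total number of stored exemplars
        self.per_class = {}           # class id -> list of kept samples

    def update(self, new_data, rng=random):
        # new_data: dict mapping class id -> list of samples from this phase
        for cls, samples in new_data.items():
            self.per_class.setdefault(cls, list(samples))
        # Shrink every class to an equal share of the total budget.
        m = self.budget // max(1, len(self.per_class))
        for cls in self.per_class:
            pool = self.per_class[cls]
            if len(pool) > m:
                self.per_class[cls] = rng.sample(pool, m)

    def replay_batch(self):
        # Flatten stored exemplars to mix with the new-class training batch.
        return [(cls, x) for cls, pool in self.per_class.items() for x in pool]
```

With a budget of 10, after a phase adding two classes each class keeps 5 exemplars; after a later phase brings the total to five classes, each keeps 2, so old-class knowledge is retained within the same memory limit.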
no code implementations • 24 Oct 2023 • Yaoyao Liu, YingYing Li, Bernt Schiele, Qianru Sun
In experiments, we show that our method 1) is surprisingly effective even when there is no class overlap between placebos and the original old-class data, 2) does not require any additional supervision or memory budget, and 3) significantly outperforms a number of top-performing CIL methods, in particular under lower memory budgets for old-class exemplars, e.g., five exemplars per class.
no code implementations • 13 Jun 2023 • Wufei Ma, Qihao Liu, Jiahao Wang, Angtian Wang, Xiaoding Yuan, Yi Zhang, Zihao Xiao, Guofeng Zhang, Beijia Lu, Ruxiao Duan, Yongrui Qi, Adam Kortylewski, Yaoyao Liu, Alan Yuille
With explicit 3D geometry control, we can easily change the 3D structures of the objects in the generated images and obtain ground-truth 3D annotations automatically.
1 code implementation • 1 Jun 2023 • Yixiao Zhang, Xinyi Li, Huimiao Chen, Alan Yuille, Yaoyao Liu, Zongwei Zhou
The ability to dynamically extend a model to new data and classes is critical for multiple organ and tumor segmentation.
no code implementations • CVPR 2023 • Yaoyao Liu, Bernt Schiele, Andrea Vedaldi, Christian Rupprecht
Incremental object detection (IOD) aims to train an object detector in phases, each with annotations for new object categories.
Tasks: Class-Incremental Object Detection, Knowledge Distillation (+3 more)
1 code implementation • CVPR 2023 • Zilin Luo, Yaoyao Liu, Bernt Schiele, Qianru Sun
Exemplar-based class-incremental learning (CIL) finetunes the model with all samples of new classes but few-shot exemplars of old classes in each incremental phase, where the "few-shot" abides by the limited memory budget.
3 code implementations • NeurIPS 2021 • Yaoyao Liu, Bernt Schiele, Qianru Sun
Class-Incremental Learning (CIL) [40] trains classifiers under a strict memory budget: in each incremental phase, learning is done for new data, most of which is abandoned to free space for the next phase.
1 code implementation • 11 Jan 2023 • Yaoyao Liu, YingYing Li, Bernt Schiele, Qianru Sun
Class-incremental learning (CIL) aims to train a classification model while the number of classes increases phase-by-phase.
no code implementations • 29 Sep 2021 • Yaoyao Liu, Bernt Schiele, Qianru Sun
However, we empirically observe that this both harms the learning of new classes and underperforms at distilling old-class knowledge from the previous-phase model.
2 code implementations • CVPR 2021 • Yaoyao Liu, Bernt Schiele, Qianru Sun
Class-Incremental Learning (CIL) aims to learn a classification model with the number of classes increasing phase-by-phase.
2 code implementations • CVPR 2020 • Yaoyao Liu, Yu-Ting Su, An-An Liu, Bernt Schiele, Qianru Sun
However, there is an inherent trade-off to effectively learning new concepts without catastrophic forgetting of previous ones.
1 code implementation • 7 Oct 2019 • Qianru Sun, Yaoyao Liu, Zhaozheng Chen, Tat-Seng Chua, Bernt Schiele
In this paper, we propose a novel approach called meta-transfer learning (MTL) which learns to transfer the weights of a deep NN for few-shot learning tasks.
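A core idea in MTL is to keep the pretrained weights frozen and adapt only lightweight scaling-and-shifting parameters on each few-shot task. The sketch below illustrates that idea on a deliberately tiny 1-D regression "task"; the function name, the least-squares objective, and the default hyperparameters are assumptions of this sketch, not the paper's setup.

```python
def adapt_scale_shift(w_pre, xs, ys, lr=0.05, steps=500):
    """Fit y ~ (s * w_pre) * x + b by gradient descent on (s, b) only.

    w_pre plays the role of a frozen pretrained weight; s (scale) and
    b (shift) are the small set of parameters adapted per task.
    """
    s, b = 1.0, 0.0
    for _ in range(steps):
        gs = gb = 0.0
        for x, y in zip(xs, ys):
            err = (s * w_pre) * x + b - y
            gs += 2 * err * w_pre * x / len(xs)   # d(loss)/d(s)
            gb += 2 * err / len(xs)               # d(loss)/d(b)
        s -= lr * gs
        b -= lr * gb
    return s, b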
1 code implementation • NeurIPS 2019 • Xinzhe Li, Qianru Sun, Yaoyao Liu, Shibao Zheng, Qin Zhou, Tat-Seng Chua, Bernt Schiele
On each task, we train a few-shot model to predict pseudo labels for unlabeled data, and then iterate the self-training steps on labeled and pseudo-labeled data with each step followed by fine-tuning.
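The self-training loop described above can be sketched with a toy nearest-centroid classifier on 1-D points. This is a hedged simplification: the paper's method meta-learns a soft weight for each pseudo-labeled sample, while here every pseudo label is trusted equally, and the fine-tuning step is approximated by averaging back toward the labeled-only centroids.

```python
def centroids(data):
    # data: list of (x, label) pairs -> dict mapping label -> mean of its points
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(cents, x):
    # Assign x to the class whose centroid is nearest.
    return min(cents, key=lambda y: abs(cents[y] - x))

def self_train(labeled, unlabeled, steps=3):
    model = centroids(labeled)
    base = centroids(labeled)
    for _ in range(steps):
        # Predict pseudo labels for the unlabeled data...
        pseudo = [(x, predict(model, x)) for x in unlabeled]
        # ...retrain on labeled + pseudo-labeled data...
        model = centroids(labeled + pseudo)
        # ...then "fine-tune": pull centroids back toward labeled-only ones.
        model = {y: 0.5 * (model[y] + base[y]) for y in model}
    return model
```

On a toy split such as `labeled = [(0.0, 'a'), (1.0, 'a'), (10.0, 'b'), (11.0, 'b')]` with unlabeled points `[0.5, 10.5, 9.0]`, the loop absorbs the unlabeled points into the correct clusters while the fine-tuning step keeps the centroids anchored to the labeled data.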
1 code implementation • ECCV 2020 • Yaoyao Liu, Bernt Schiele, Qianru Sun
"Empirical" means that the hyperparameters, e.g., those used for learning and ensembling the epoch-wise models, are generated by hyperprior learners conditioned on task-specific data.
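The ensembling of epoch-wise models can be sketched as combining per-epoch snapshots with learned weights. In the sketch below the weights are fixed constants, which is an illustrative assumption; in the paper they are produced by hyperprior learners conditioned on the task.

```python
def ensemble_predict(snapshots, weights, x):
    """Weighted combination of epoch-wise model snapshots (sketch).

    snapshots: list of callables mapping an input x to a score, one per
               training epoch; weights: per-epoch coefficients summing to 1.
    """
    return sum(w * f(x) for f, w in zip(snapshots, weights))

# Illustrative usage with three toy "epoch snapshots":
snapshots = [lambda x: x, lambda x: 2 * x, lambda x: 3 * x]
weights = [0.2, 0.3, 0.5]
score = ensemble_predict(snapshots, weights, 1.0)  # 0.2 + 0.6 + 1.5 = 2.3
```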
2 code implementations • CVPR 2019 • Qianru Sun, Yaoyao Liu, Tat-Seng Chua, Bernt Schiele
In this paper, we propose a novel few-shot learning method called meta-transfer learning (MTL), which learns to adapt a deep NN for few-shot learning tasks.