no code implementations • 24 Mar 2024 • Libo Huang, Zhulin An, Yan Zeng, Chuanguang Yang, Xinqiang Yu, Yongjun Xu
Exemplar-Free Class Incremental Learning (efCIL) aims to continuously incorporate knowledge from new classes while retaining previously learned information, without storing any old-class exemplars (i.e., samples).
1 code implementation • 28 Sep 2023 • Ruiqi Liu, Boyu Diao, Libo Huang, Zhulin An, Yongjun Xu
In E2Net, we propose Representative Network Distillation, which identifies a representative core subnet by assessing its parameter count and its output similarity with the working network; distilling these analogous subnets within the working network mitigates reliance on rehearsal buffers and facilitates knowledge transfer across previous tasks.
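The distillation step described above matches a subnet's outputs to the working network's. A minimal sketch of the standard temperature-softened distillation loss that such output matching typically uses (the function name and NumPy formulation are illustrative assumptions, not E2Net's actual implementation):

```python
import numpy as np

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-softened KL divergence between teacher and student
    output distributions (classic knowledge-distillation loss).
    T > 1 flattens the distributions so 'dark knowledge' in the
    non-target classes contributes to the gradient."""
    def softmax(z):
        e = np.exp(z / T - np.max(z / T, axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
    p = softmax(teacher_logits)   # soft targets from the teacher
    q = softmax(student_logits)   # student predictions
    # KL(p || q), rescaled by T^2 to keep gradient magnitudes comparable
    return float((p * (np.log(p) - np.log(q))).sum(axis=1).mean() * T * T)
```

The loss is zero when student and teacher agree exactly and grows as their softened distributions diverge, so minimizing it pulls the subnet's behavior toward the working network's.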
1 code implementation • 24 Jul 2023 • Chuanguang Yang, Zhulin An, Libo Huang, Junyu Bi, Xinqiang Yu, Han Yang, Boyu Diao, Yongjun Xu
The unified method is applied to distill several student models trained on CC3M+12M.
no code implementations • 20 Apr 2023 • Libo Huang, Yan Zeng, Chuanguang Yang, Zhulin An, Boyu Diao, Yongjun Xu
Most successful CIL methods incrementally train a feature extractor with the aid of stored exemplars, or estimate the feature distribution with stored prototypes.
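The prototype-based route mentioned above usually reduces to nearest-class-mean classification: each class is summarized by the mean of its features, and a test sample is assigned to the closest mean. A minimal sketch under that assumption (function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def ncm_predict(features, prototypes):
    """Nearest-class-mean classifier over stored prototypes.

    features:   (n, d) array of extracted feature vectors
    prototypes: dict mapping class label -> (d,) mean feature vector
    Returns the predicted class label for each feature vector.
    """
    classes = sorted(prototypes)
    protos = np.stack([prototypes[c] for c in classes])        # (k, d)
    # Euclidean distance from every feature to every class prototype
    dists = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=2)
    return np.array([classes[i] for i in dists.argmin(axis=1)])
```

Because only one mean vector per class is kept, this sidesteps storing raw old-class exemplars, which is why prototype estimates are attractive in the exemplar-free setting.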
no code implementations • 10 Feb 2023 • Yan Zeng, Ruichu Cai, Fuchun Sun, Libo Huang, Zhifeng Hao
While Reinforcement Learning (RL) achieves tremendous success in sequential decision-making problems of many domains, it still faces key challenges of data inefficiency and the lack of interpretability.
1 code implementation • 28 Apr 2022 • Lianqing Zheng, Zhixiong Ma, Xichan Zhu, Bin Tan, Sen Li, Kai Long, Weiqi Sun, Sihan Chen, Lu Zhang, Mengyue Wan, Libo Huang, Jie Bai
The next-generation high-resolution automotive radar (4D radar) can provide additional elevation measurements and denser point clouds, which have great potential for 3D sensing in autonomous driving.
no code implementations • 17 Jan 2022 • Libo Huang, Zhulin An, Xiang Zhi, Yongjun Xu
Generative models often suffer from catastrophic forgetting when they are used to learn multiple tasks sequentially, i.e., lifelong generative learning.
no code implementations • 1 Nov 2021 • Gan Tong, Libo Huang
Convolution operators are the fundamental components of convolutional neural networks, and they are also the most time-consuming part of network training and inference.
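To see why convolution dominates the runtime, consider a naive direct implementation: every output pixel requires a full kernel-sized multiply-accumulate, giving O(H·W·Kh·Kw) work per channel pair. A single-channel sketch (illustrative only; real frameworks use im2col, FFT, or Winograd variants):

```python
import numpy as np

def conv2d_direct(x, w):
    """Naive direct 2-D convolution (cross-correlation convention)
    for one input channel and one filter, with no padding or stride.
    Cost is O(H * W * Kh * Kw) multiply-adds per output map."""
    H, W = x.shape
    Kh, Kw = w.shape
    out = np.zeros((H - Kh + 1, W - Kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # one multiply-accumulate over the whole kernel window
            out[i, j] = np.sum(x[i:i + Kh, j:j + Kw] * w)
    return out
```

Multiplying this per-map cost by the input-channel and output-channel counts of a modern network explains why convolution is the main optimization target for both training and inference.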
no code implementations • 3 Feb 2021 • Xiaogang Jia, Wei Chen, Zhengfa Liang, Mingfei Wu, Yusong Tan, Libo Huang
This is because different cost volumes play a crucial role in balancing speed and accuracy.
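A cost volume in stereo matching stores, for each pixel and each candidate disparity, how well the left and right views agree; its resolution and matching metric are what trade speed against accuracy. A minimal single-channel sketch using absolute-difference matching (an illustrative baseline, not the paper's construction):

```python
import numpy as np

def build_cost_volume(left, right, max_disp):
    """Absolute-difference cost volume for grayscale stereo images.

    cost[d, y, x] = |left[y, x] - right[y, x - d]| where defined;
    positions with no valid match at disparity d stay at +inf.
    The disparity minimizing cost[:, y, x] is the match estimate.
    """
    H, W = left.shape
    cost = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :W - d])
    return cost
```

Coarser disparity sampling or lower-resolution volumes shrink this tensor and speed up matching at the price of accuracy, which is the balance the excerpt refers to.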
1 code implementation • 20 Nov 2020 • Libo Huang, Lu Gan, Bingo Wing-Kuen Ling
Finally, by incorporating the best-performing clustering validity index into the proposed model, we derive an automatic spike sorting method.
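A clustering validity index scores a candidate partition of the spikes without ground truth, so the best-scoring partition can be selected automatically. A self-contained sketch of one common index, the mean silhouette coefficient (shown here as a generic illustration; the paper compares several indices, and this function name is an assumption):

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient, a clustering validity index.
    For each point: a = mean distance to its own cluster,
    b = mean distance to the nearest other cluster;
    s = (b - a) / max(a, b). Higher means tighter, better-separated
    clusters; values near 1 indicate a good partition."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    s = np.zeros(n)
    for i in range(n):
        same = labels == labels[i]
        others = same & (np.arange(n) != i)
        a = D[i, others].mean() if others.any() else 0.0
        b = min(D[i, labels == c].mean()
                for c in set(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b)
    return float(s.mean())
```

Running a clusterer with different cluster counts and keeping the labeling with the highest index score is the usual way such an index turns spike sorting into an automatic procedure.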
Signal Processing