no code implementations • 26 Nov 2023 • Quyen Tran, Lam Tran, Khoat Than, Toan Tran, Dinh Phung, Trung Le
Drawing inspiration from prompt tuning techniques applied to Large Language Models, recent methods based on pre-trained ViT networks have achieved remarkable results in the field of Continual Learning.
no code implementations • 16 Nov 2023 • Ngoc N. Tran, Lam Tran, Hoang Phan, Anh Bui, Tung Pham, Toan Tran, Dinh Phung, Trung Le
Contrastive learning (CL) is a self-supervised training paradigm that allows us to extract meaningful features without any label information.
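The idea behind contrastive learning can be illustrated with a minimal InfoNCE-style loss sketch (this is a generic illustration of the paradigm, not the method proposed in the paper; all names are illustrative):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss between two batches of embeddings.

    z1[i] and z2[i] are two views of the same example (the positive
    pair); every other pairing in the batch is treated as a negative.
    """
    # L2-normalize so similarity is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs lie on the diagonal; minimize their negative log-prob
    return -np.mean(np.diag(log_probs))
```

No labels are needed: the training signal comes entirely from which embeddings are views of the same input.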
no code implementations • 29 May 2023 • Anh T Nguyen, Lam Tran, Anh Tong, Tuan-Duy H. Nguyen, Toan Tran
In this paper, we propose a novel conditional adversarial support alignment (CASA) method that minimizes the conditional symmetric support divergence between the feature representation distributions of the source and target domains, yielding representations better suited to the classification task.
no code implementations • 24 Nov 2022 • Hoang Phan, Lam Tran, Ngoc N. Tran, Nhat Ho, Dinh Phung, Trung Le
Multi-Task Learning (MTL) is a widely used and powerful learning paradigm for training deep neural networks that allows multiple objectives to be learned with a single backbone.
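The hard-parameter-sharing layout the sentence describes can be sketched in a few lines (a toy illustration with hypothetical shapes, not the paper's architecture): one shared backbone feeds several task-specific heads, and the total loss sums the per-task losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared backbone parameters (used by every task)
W_shared = rng.normal(scale=0.1, size=(4, 8))
# Task-specific heads
W_task_a = rng.normal(scale=0.1, size=(8, 3))   # e.g. 3-class task
W_task_b = rng.normal(scale=0.1, size=(8, 1))   # e.g. regression task

def forward(x):
    h = np.maximum(x @ W_shared, 0.0)   # shared features (ReLU)
    return h @ W_task_a, h @ W_task_b   # one output per task

x = rng.normal(size=(5, 4))
out_a, out_b = forward(x)
# The backbone is trained on a (possibly weighted) sum of task losses;
# squared outputs stand in for real losses here.
total_loss = np.mean(out_a**2) + np.mean(out_b**2)
```

Because the backbone gradient is the sum of gradients from all heads, the tasks can either reinforce or conflict with each other, which is what MTL methods try to manage.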
no code implementations • 5 Aug 2021 • Lam Tran, Thuc Nguyen, Hyunil Kim, Deokjai Choi
However, most existing approaches store the enrolled gait pattern insecurely for matching against the validation pattern, posing critical security and privacy issues.
1 code implementation • 26 Jul 2021 • Quyen Tran, Lam Tran, Linh Chu Hai, Linh Ngo Van, Khoat Than
In addition, we start from the hypothesis that a user's preference at any given time is a combination of long-term and short-term interests.
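That hypothesis can be illustrated with a trivial sketch (the function name, vectors, and mixing weight are all hypothetical, not taken from the paper): the current preference is modeled as a convex combination of a slowly-changing long-term interest vector and a recent short-term one.

```python
import numpy as np

def combine_interests(long_term, short_term, alpha=0.7):
    """Convex combination of interest vectors; alpha weights the
    long-term component (an illustrative choice, not the paper's)."""
    return alpha * long_term + (1.0 - alpha) * short_term

long_term = np.array([0.9, 0.1, 0.0])   # e.g. genres the user always likes
short_term = np.array([0.0, 0.2, 0.8])  # e.g. what the user browsed today
preference = combine_interests(long_term, short_term)
```

Real models typically learn both components and their mixing from interaction history rather than fixing them by hand.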
no code implementations • CVPR 2016 • Haichuan Yang, Yijun Huang, Lam Tran, Ji Liu, Shuai Huang
In this paper, we propose a general bilevel exclusive sparsity formulation that pursues diversity by restricting both the overall sparsity and the sparsity within each group.
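A common way to encode exclusive (within-group) sparsity is the squared group-wise L1 penalty, sketched below as a generic illustration of the idea rather than the paper's exact formulation:

```python
import numpy as np

def exclusive_sparsity(w, groups):
    """Exclusive-sparsity penalty: sum over groups of the squared
    L1 norm of each group's weights. Penalizing the squared L1 norm
    within a group makes features in the same group compete, which
    encourages diversity across groups."""
    return sum(np.sum(np.abs(w[g]))**2 for g in groups)

w = np.array([1.0, -2.0, 3.0])
penalty = exclusive_sparsity(w, [[0, 1], [2]])  # (1+2)^2 + 3^2
```

Combining such a within-group penalty with an overall sparsity constraint gives the bilevel structure the abstract refers to.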