no code implementations • 2 May 2024 • Jingyao Wang, Wenwen Qiang, Changwen Zheng
It can learn high-quality representations from unlabeled data and achieve promising empirical performance on multiple downstream tasks.
no code implementations • 18 Apr 2024 • Jingyao Wang, Yunhan Tian, Yuxuan Yang, Xiaoxin Chen, Changwen Zheng, Wenwen Qiang
Micro-expressions (MEs) are involuntary movements that reveal people's hidden feelings, and they have attracted considerable interest for their objectivity in emotion detection.
1 code implementation • 16 Apr 2024 • Jianqi Zhang, Jingyao Wang, Wenwen Qiang, Fanjiang Xu, Changwen Zheng, Fuchun Sun, Hui Xiong
Motivated by these findings, we introduce two new PEs: Temporal Position Encoding (T-PE) for temporal tokens and Variable Positional Encoding (V-PE) for variable tokens.
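A minimal sketch of how two separate positional encodings could be attached to temporal and variable tokens of a multivariate time series; the tensor layout, the learnable-embedding choice, and the module name `DualPositionalEncoding` are illustrative assumptions, not the paper's exact T-PE/V-PE design.

```python
import torch
import torch.nn as nn

class DualPositionalEncoding(nn.Module):
    """Hypothetical sketch: add a temporal PE along the time axis and a
    variable PE along the channel axis of a multivariate series."""

    def __init__(self, num_steps: int, num_vars: int, d_model: int):
        super().__init__()
        # T-PE: one learnable vector per time step (assumed learnable here)
        self.t_pe = nn.Parameter(torch.zeros(1, num_steps, 1, d_model))
        # V-PE: one learnable vector per variable/channel
        self.v_pe = nn.Parameter(torch.zeros(1, 1, num_vars, d_model))
        nn.init.trunc_normal_(self.t_pe, std=0.02)
        nn.init.trunc_normal_(self.v_pe, std=0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_steps, num_vars, d_model)
        return tokens + self.t_pe + self.v_pe

# Usage: a batch of 32 series with 96 time steps and 7 variables
pe = DualPositionalEncoding(num_steps=96, num_vars=7, d_model=64)
x = torch.randn(32, 96, 7, 64)
out = pe(x)  # same shape, now position-aware along both dimensions
```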
no code implementations • 18 Mar 2024 • Hang Gao, Jiaguo Yuan, Jiangmeng Li, Chengyu Yao, Fengge Wu, Junsuo Zhao, Changwen Zheng
Partial label learning (PLL) is a critical weakly supervised learning problem in which each training instance is associated with a set of candidate labels containing both the true label and additional noisy labels.
1 code implementation • 25 Jan 2024 • Jiangmeng Li, Fei Song, Yifan Jin, Wenwen Qiang, Changwen Zheng, Fuchun Sun, Hui Xiong
From the perspective of distribution analysis, we show that the intrinsic issues behind this phenomenon are the over-abundant conceptual knowledge contained in PLMs and the limited knowledge of the target downstream domains, which jointly cause PLMs to mis-locate the knowledge distributions corresponding to the target domains in the universal knowledge embedding space.
no code implementations • 19 Jan 2024 • Chuxiong Sun, Zehua Zang, Jiabao Li, Jiangmeng Li, Xiao Xu, Rui Wang, Changwen Zheng
This process enables agents to collectively use evidence garnered from multiple perspectives, fostering trusted and cooperative behaviors.
1 code implementation • 21 Dec 2023 • Jiangmeng Li, Yifan Jin, Hang Gao, Wenwen Qiang, Changwen Zheng, Fuchun Sun
To this end, we propose a novel hierarchical topology isomorphism expertise embedded graph contrastive learning method, which introduces knowledge distillation to empower GCL models to learn hierarchical topology isomorphism expertise at both the graph tier and the subgraph tier.
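One way to picture the distillation idea: a fixed topology descriptor acts as the teacher signal, and an auxiliary loss pulls the GCL graph embedding toward it. The degree-histogram teacher and the KL-based head below are stand-in assumptions, not the paper's graph-tier/subgraph-tier formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def degree_histogram(adj: torch.Tensor, max_degree: int = 16) -> torch.Tensor:
    """Toy graph-tier 'expertise' signal: a normalized degree histogram
    (a stand-in for a real topology-isomorphism descriptor such as WL labels)."""
    degrees = adj.sum(dim=-1).clamp(max=max_degree - 1).long()   # (num_nodes,)
    hist = torch.bincount(degrees, minlength=max_degree).float()
    return hist / hist.sum()

class TopologyDistillationHead(nn.Module):
    """Projects a GCL graph embedding so it can mimic the teacher signal."""
    def __init__(self, d_model: int, d_teacher: int = 16):
        super().__init__()
        self.proj = nn.Linear(d_model, d_teacher)

    def distill_loss(self, graph_emb: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # graph_emb: (d_model,) embedding of one graph from the GCL encoder
        teacher = degree_histogram(adj).to(graph_emb.device)
        student = F.log_softmax(self.proj(graph_emb), dim=-1)
        # KL divergence pulls the learned embedding toward the topology expertise
        return F.kl_div(student, teacher, reduction="sum")
```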
1 code implementation • 16 Dec 2023 • Qirui Ji, Jiangmeng Li, Jie Hu, Rui Wang, Changwen Zheng, Fanjiang Xu
To explore the intrinsic rationale of graphs, we propose to capture the dimensional rationale from graphs, an aspect that has not received sufficient attention in the literature.
1 code implementation • 15 Dec 2023 • Hang Gao, Chengyu Yao, Jiangmeng Li, Lingyu Si, Yifan Jin, Fengge Wu, Changwen Zheng, Huaping Liu
In order to comprehensively analyze various GNN models from a causal learning perspective, we construct a synthetic dataset with known and controllable causal relationships between data and labels.
1 code implementation • 10 Dec 2023 • Jingyao Wang, Yi Ren, Zeen Song, Jianqi Zhang, Changwen Zheng, Wenwen Qiang
However, our experiments reveal an unexpected result: there is negative knowledge transfer between tasks, affecting generalization performance.
no code implementations • 28 Aug 2023 • Jingyao Wang, Zeen Song, Wenwen Qiang, Changwen Zheng
The long-term goal of machine learning is to learn general visual representations from a small amount of data without supervision, mimicking three advantages of human cognition: i) no need for labels, ii) robustness to data scarcity, and iii) learning from experience.
no code implementations • 21 Aug 2023 • Jiangmeng Li, Hang Gao, Wenwen Qiang, Changwen Zheng
To this end, we rethink the existing multi-view learning paradigm from the perspective of information theory and then propose a novel information theoretical framework for generalized multi-view learning.
no code implementations • 18 Jul 2023 • Jingyao Wang, Luntian Mou, Changwen Zheng, Wen Gao
In this paper, we propose a novel Contrastive Self-Supervised Learning framework for Robust Handwriting Authentication (CSSL-RHA) to address these issues.
1 code implementation • 18 Jul 2023 • Jingyao Wang, Wenwen Qiang, Xingzhe Su, Changwen Zheng, Fuchun Sun, Hui Xiong
We obtain three conclusions: (i) there is no universal task sampling strategy that can guarantee the optimal performance of meta-learning models; (ii) over-constraining task diversity may incur the risk of under-fitting or over-fitting during training; and (iii) the generalization performance of meta-learning models is affected by task diversity, task entropy, and task difficulty.
no code implementations • 18 Jul 2023 • Zeen Song, Xingzhe Su, Jingyao Wang, Wenwen Qiang, Changwen Zheng, Fuchun Sun
In recent years, self-supervised learning (SSL) has emerged as a promising approach for extracting valuable representations from unlabeled data.
no code implementations • 17 Jul 2023 • Xingzhe Su, Daixi Jia, Fengge Wu, Junsuo Zhao, Changwen Zheng, Wenwen Qiang
In response, we propose a plug-and-play method named Manifold Guidance Sampling, which is also the first unsupervised method to mitigate the bias issue in DDPMs.
no code implementations • 28 Jun 2023 • Lingyu Si, Hongwei Dong, Wenwen Qiang, Junzhi Yu, Wenlong Zhai, Changwen Zheng, Fanjiang Xu, Fuchun Sun
To address this issue, in this paper we uncover the correlation between feature discriminability and dimensional structure (DS) by analyzing features extracted from simple and hard tasks.
no code implementations • 31 May 2023 • Xingzhe Su, Changwen Zheng, Wenwen Qiang, Fengge Wu, Junsuo Zhao, Fuchun Sun, Hui Xiong
This study identifies a previously overlooked issue: GANs are particularly susceptible to overfitting on remote sensing images. To address this challenge, we analyze the characteristics of remote sensing images and propose manifold constraint regularization, the first approach designed to tackle overfitting of GANs on remote sensing images.
no code implementations • 9 Mar 2023 • Xingzhe Su, Wenwen Qiang, Jie Hu, Fengge Wu, Changwen Zheng, Fuchun Sun
Based on this SCM, we theoretically prove that the quality of generated images is positively correlated with the amount of feature information.
no code implementations • 20 Jan 2023 • Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Xingzhe Su, Fengge Wu, Changwen Zheng, Fuchun Sun
By further examining the effects of introducing expertise logic into graph representation learning, we conclude that guiding GNNs to learn human expertise can improve model performance.
no code implementations • 25 Nov 2022 • Gang Li, Heliang Zheng, Chaoyue Wang, Chang Li, Changwen Zheng, DaCheng Tao
Text-guided diffusion models have shown superior performance in image/video generation and editing.
2 code implementations • 16 Sep 2022 • Jiangmeng Li, Wenwen Qiang, Changwen Zheng, Bing Su, Farid Razzak, Ji-Rong Wen, Hui Xiong
To this end, we propose a methodology, specifically the consistency and complementarity network (CoCoNet), which leverages strict global inter-view consistency and local cross-view complementarity-preserving regularization to comprehensively learn representations from multiple views.
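A rough sketch of a two-term multi-view objective in the spirit of the description above; the cosine-alignment consistency term and the cross-correlation complementarity term are illustrative assumptions rather than CoCoNet's actual losses.

```python
import torch
import torch.nn.functional as F

def multiview_loss(z1: torch.Tensor, z2: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views of the same samples."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)

    # Global inter-view consistency: matched samples should align across views.
    consistency = (1 - (z1 * z2).sum(dim=1)).mean()

    # Cross-view complementarity (illustrative): encourage the two views to keep
    # non-redundant feature dimensions by penalizing off-diagonal cross-correlation.
    batch = z1.shape[0]
    cross_corr = (z1.T @ z2) / batch                      # (dim, dim)
    off_diag = cross_corr - torch.diag(torch.diagonal(cross_corr))
    complementarity = (off_diag ** 2).sum()

    return consistency + lam * complementarity

# Usage
z_a, z_b = torch.randn(128, 64), torch.randn(128, 64)
loss = multiview_loss(z_a, z_b)
```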
2 code implementations • 16 Sep 2022 • Jiangmeng Li, Wenwen Qiang, Yanan Zhang, Wenyi Mo, Changwen Zheng, Bing Su, Hui Xiong
As a successful approach to self-supervised learning, contrastive learning aims to learn invariant information shared among distortions of the input sample.
1 code implementation • 26 Aug 2022 • Jiangmeng Li, Yanan Zhang, Wenwen Qiang, Lingyu Si, Chengbo Jiao, Xiaohui Hu, Changwen Zheng, Fuchun Sun
To understand the reasons behind this phenomenon, we revisit the learning paradigm of knowledge distillation on the few-shot object detection task from the causal theoretic standpoint, and accordingly, develop a Structural Causal Model.
1 code implementation • 18 Aug 2022 • Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Bing Xu, Changwen Zheng, Fuchun Sun
This observation reveals that confounders exist in graphs, which may interfere with the model's learning of semantic information, and that current graph representation learning methods have not eliminated their influence.
no code implementations • 29 Jun 2022 • Wenwen Qiang, Jiangmeng Li, Changwen Zheng, Bing Su, Hui Xiong
Contrastive learning (CL)-based self-supervised learning models learn visual representations in a pairwise manner.
1 code implementation • 21 Jun 2022 • Gang Li, Heliang Zheng, Daqing Liu, Chaoyue Wang, Bing Su, Changwen Zheng
In this paper, we explore a potential visual analogue of words, i.e., semantic parts, and we integrate semantic information into the training process of MAE by proposing a Semantic-Guided Masking strategy.
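A small sketch of what part-wise masking might look like: instead of sampling patches uniformly at random, whole semantic parts are dropped until the target ratio is reached. The part-label source and the masking rule below are assumptions for illustration, not the paper's exact strategy.

```python
import torch

def semantic_guided_mask(part_ids: torch.Tensor, mask_ratio: float = 0.75) -> torch.Tensor:
    """part_ids: (num_patches,) integer semantic-part label per patch
    (e.g., from an attention map or an off-the-shelf part segmenter).
    Returns a boolean mask (True = masked) that drops whole parts at a time."""
    parts = part_ids.unique()
    shuffled = parts[torch.randperm(len(parts))]
    mask = torch.zeros_like(part_ids, dtype=torch.bool)
    target = int(mask_ratio * len(part_ids))
    for p in shuffled:
        if mask.sum() >= target:
            break
        mask |= (part_ids == p)      # mask every patch belonging to this part
    return mask

# Usage: 196 patches (14x14), pretending patches fall into 8 semantic parts
part_ids = torch.randint(0, 8, (196,))
mask = semantic_guided_mask(part_ids)
visible_patches = (~mask).nonzero(as_tuple=True)[0]   # fed to the MAE encoder
```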
no code implementations • 23 May 2022 • Jiangmeng Li, Wenyi Mo, Wenwen Qiang, Bing Su, Changwen Zheng
Vision-language models are pre-trained by aligning image-text pairs in a common space so that the models can deal with open-set visual concepts by learning semantic information from textual labels.
2 code implementations • 10 Mar 2022 • Jiangmeng Li, Wenwen Qiang, Changwen Zheng, Bing Su, Hui Xiong
We employ a meta-learning technique to build the augmentation generator, which updates its network parameters by considering the performance of the encoder.
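A compact, first-order sketch of the alternating idea: the encoder is trained on the generated augmentation, and the generator's parameters are then updated using the encoder's current performance. The toy modules, losses, and optimizers below are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for the real augmentation generator and encoder.
generator = nn.Sequential(nn.Linear(128, 128), nn.Tanh())   # produces an augmented view
encoder = nn.Sequential(nn.Linear(128, 64))
gen_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
enc_opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def contrastive_loss(z1, z2, temperature=0.2):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

x = torch.randn(256, 128)            # a batch of input features

# Inner step: train the encoder on the generated augmentation.
enc_opt.zero_grad()
loss_enc = contrastive_loss(encoder(x), encoder(generator(x).detach()))
loss_enc.backward()
enc_opt.step()

# Outer (meta) step: update the generator based on how the encoder now performs.
gen_opt.zero_grad()
loss_gen = contrastive_loss(encoder(x), encoder(generator(x)))
loss_gen.backward()
gen_opt.step()
```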
no code implementations • 8 Mar 2022 • Wenwen Qiang, Jiangmeng Li, Changwen Zheng, Bing Su, Hui Xiong
We conduct a theoretical analysis of the robustness of the proposed RLPGA and prove that the robust information-theoretic loss and the local preserving module are beneficial for reducing the empirical risk of the target domain.
1 code implementation • 11 Jan 2022 • Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Fuchun Sun, Changwen Zheng
To this end, we propose a novel approach to learning a graph augmenter that can generate an augmentation with uniformity and informativeness.
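A hedged sketch of how the two desiderata could be scored: a uniformity term that spreads augmented-graph embeddings over the hypersphere, and an informativeness proxy that keeps them predictive of the original graph. Both terms are illustrative choices, not the paper's objective.

```python
import torch
import torch.nn.functional as F

def uniformity(z: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Uniformity of embeddings on the hypersphere (Wang & Isola style);
    lower values mean the embeddings are more uniformly spread."""
    z = F.normalize(z, dim=1)
    sq_dists = torch.cdist(z, z).pow(2)
    return torch.log(torch.exp(-t * sq_dists).mean())

def informativeness(z_aug: torch.Tensor, z_orig: torch.Tensor) -> torch.Tensor:
    """Illustrative proxy: the augmented graph should stay close to the
    original one, measured by cosine similarity of their embeddings."""
    return 1 - F.cosine_similarity(z_aug, z_orig, dim=1).mean()

def augmenter_loss(z_aug: torch.Tensor, z_orig: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # The augmenter is trained to keep augmentations informative while
    # spreading them uniformly in the embedding space.
    return informativeness(z_aug, z_orig) + alpha * uniformity(z_aug)
```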
2 code implementations • 24 Dec 2021 • Gang Li, Di Xu, Xing Cheng, Lingyu Si, Changwen Zheng
Although vision Transformers have achieved excellent performance as backbone models in many vision tasks, most of them aim to capture the global relations among all tokens in an image or a window, which disrupts the inherent spatial and local correlations between patches in the 2D structure.
no code implementations • 29 Sep 2021 • Wenwen Qiang, Jiangmeng Li, Jie Hu, Bing Su, Changwen Zheng, Hui Xiong
In this paper, we analyze the existing representation learning framework of unsupervised domain adaptation and show that the learned feature representations of the source domain samples possess discriminability, compressibility, and transferability.
no code implementations • 6 Sep 2021 • Jiangmeng Li, Wenwen Qiang, Hang Gao, Bing Su, Farid Razzak, Jie Hu, Changwen Zheng, Hui Xiong
To this end, we rethink the existing multi-view learning paradigm from the information theoretical perspective and then propose a novel information theoretical framework for generalized multi-view learning.