1 code implementation • 18 Jul 2023 • Jingyao Wang, Wenwen Qiang, Xingzhe Su, Changwen Zheng, Fuchun Sun, Hui Xiong
We obtain three conclusions: (i) no universal task sampling strategy can guarantee optimal performance of meta-learning models; (ii) over-constraining task diversity may incur the risk of under-fitting or over-fitting during training; and (iii) the generalization performance of meta-learning models is affected by task diversity, task entropy, and task difficulty.
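To make conclusion (iii) concrete, here is a minimal illustrative sketch of how task entropy and task diversity could be quantified for sampled few-shot tasks. These definitions (Shannon entropy over a task's class distribution, and the fraction of distinct class combinations across a task batch) are common-sense proxies chosen for illustration, not necessarily the exact metrics used in the paper:

```python
import math
from collections import Counter

def task_entropy(task_labels):
    """Shannon entropy (bits) of the class distribution within one sampled task.

    Higher entropy means the task's examples cover its classes more
    uniformly; lower entropy means a skewed class mix.
    """
    counts = Counter(task_labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def task_diversity(tasks):
    """Fraction of distinct class sets across a batch of sampled tasks.

    1.0 means every task draws a different class combination; values
    near 1/len(tasks) mean the sampler keeps reusing the same classes.
    """
    class_sets = {frozenset(labels) for labels in tasks}
    return len(class_sets) / len(tasks)

# A balanced 2-way task has maximal entropy (1 bit); a skewed one has less.
print(task_entropy(["cat", "cat", "dog", "dog"]))  # 1.0
print(task_entropy(["cat", "cat", "cat", "dog"]))  # ~0.811
print(task_diversity([["cat", "dog"], ["cat", "dog"], ["fox", "owl"]]))
```

A task sampler could log these two quantities per batch to check whether it is over- or under-constraining diversity, which is the failure mode conclusion (ii) warns about.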
no code implementations • 18 Jul 2023 • Zeen Song, Xingzhe Su, Jingyao Wang, Wenwen Qiang, Changwen Zheng, Fuchun Sun
In recent years, self-supervised learning (SSL) has emerged as a promising approach for extracting valuable representations from unlabeled data.
no code implementations • 17 Jul 2023 • Xingzhe Su, Daixi Jia, Fengge Wu, Junsuo Zhao, Changwen Zheng, Wenwen Qiang
In response, we propose a plug-and-play method named Manifold Guidance Sampling, which is also the first unsupervised method to mitigate the bias issue in DDPMs.
no code implementations • 31 May 2023 • Xingzhe Su, Changwen Zheng, Wenwen Qiang, Fengge Wu, Junsuo Zhao, Fuchun Sun, Hui Xiong
This study identifies a previously overlooked issue: GANs exhibit a heightened susceptibility to overfitting on remote sensing images. To address this challenge, this paper analyzes the characteristics of remote sensing images and proposes manifold constraint regularization, a novel approach that tackles the overfitting of GANs on remote sensing images for the first time.
no code implementations • 9 Mar 2023 • Xingzhe Su, Wenwen Qiang, Jie Hu, Fengge Wu, Changwen Zheng, Fuchun Sun
Based on this SCM, we theoretically prove that the quality of generated images is positively correlated with the amount of feature information.
no code implementations • 20 Jan 2023 • Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Xingzhe Su, Fengge Wu, Changwen Zheng, Fuchun Sun
By further observing the ramifications of introducing expertise logic into graph representation learning, we conclude that leading GNNs to learn human expertise can improve model performance.