1 code implementation • 31 May 2023 • Yi Luo, Guangchun Luo, Ke Qin, Aiguo Chen
In industrial settings, node classifiers are required to jointly reduce prediction errors, training resources, and inference latency.
2 code implementations • 19 Nov 2022 • Yi Luo, Guiduo Duan, Guangchun Luo, Aiguo Chen
The unification facilitates exchange between the two subdomains and is expected to inspire further studies.
1 code implementation • 18 May 2022 • Liang Liu, Peng Chen, Guangchun Luo, Zhao Kang, Yonggang Luo, Sanchu Han
With the explosive growth of multi-source data, multi-view clustering has attracted great attention in recent years.
1 code implementation • Mathematics 2022 • Yi Luo, Guangchun Luo, Ke Yan, Aiguo Chen
Following the application of Deep Learning to graph data, Graph Neural Networks (GNNs) have become the dominant method for Node Classification on graphs in recent years.
Ranked #1 on Node Classification on Amazon Photo
1 code implementation • 10 Aug 2021 • Changshu Liu, Liangjian Wen, Zhao Kang, Guangchun Luo, Ling Tian
Self-supervised loss is designed to maximize the agreement of the embeddings of the same node in the topology graph and the feature graph.
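The agreement objective described above can be sketched as a contrastive loss over the two views. This is a minimal illustration, not the paper's actual formulation: it assumes an InfoNCE-style loss in which a node's embedding in the topology graph and in the feature graph form a positive pair and all other nodes serve as negatives; the function name and temperature `tau` are hypothetical.

```python
import numpy as np

def agreement_loss(z_topo, z_feat, tau=0.5):
    """Hypothetical InfoNCE-style sketch: maximize agreement between the
    two embeddings of the same node (positive pair on the diagonal),
    contrasted against all other nodes in the batch."""
    # L2-normalize so dot products are cosine similarities
    z1 = z_topo / np.linalg.norm(z_topo, axis=1, keepdims=True)
    z2 = z_feat / np.linalg.norm(z_feat, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                     # (n, n) cross-view similarities
    sim -= sim.max(axis=1, keepdims=True)       # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # positives sit on the diagonal: same node in both views
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pushes each node's two embeddings together relative to the rest of the batch, which is the stated goal of maximizing cross-view agreement.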
1 code implementation • 18 Jun 2021 • Zhengrui Ma, Zhao Kang, Guangchun Luo, Ling Tian
The success of subspace clustering depends on the assumption that the data can be separated into different subspaces.
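The separability assumption above is usually operationalized through self-expressiveness: each sample is reconstructed from the other samples, and large coefficients should link samples lying in the same subspace. The sketch below is a generic ridge-regularized illustration of that idea, not the method proposed in the paper; the closed-form solution and regularizer `lam` are chosen purely for clarity.

```python
import numpy as np

def self_expressive_coeffs(X, lam=0.1):
    """Sketch of the self-expressiveness model behind subspace clustering:
    solve min_C ||X - X C||_F^2 + lam ||C||_F^2 in closed form, where the
    columns of X are samples. (The usual diag(C) = 0 constraint is
    relaxed here for simplicity.)"""
    n = X.shape[1]
    G = X.T @ X                       # Gram matrix of the samples
    C = np.linalg.solve(G + lam * np.eye(n), G)
    return C
```

When the data do lie in independent subspaces, the recovered `C` is block-diagonal, so an affinity such as `|C| + |C|.T` fed to spectral clustering separates the subspaces; when the assumption fails, cross-subspace coefficients appear and the clustering degrades.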
1 code implementation • 30 Aug 2020 • Kaiyang Li, Guangchun Luo, Yang Ye, Wei Li, Shihao Ji, Zhipeng Cai
In this paper, we propose Adversarial Privacy Graph Embedding (APGE), a graph adversarial training framework that integrates the disentangling and purging mechanisms to remove users' private information from learned node representations.
1 code implementation • 21 Jul 2020 • Jie Wu, Tianshui Chen, Hefeng Wu, Zhi Yang, Guangchun Luo, Liang Lin
This is primarily due to (i) the conservative characteristic of traditional training objectives that drives the model to generate correct but hardly discriminative captions for similar images and (ii) the uneven word distribution of the ground-truth captions, which encourages generating highly frequent words/phrases while suppressing the less frequent but more concrete ones.
1 code implementation • 13 Sep 2018 • Jean-Paul Ainam, Ke Qin, Guisong Liu, Guangchun Luo
Finally, we assign a non-uniform label distribution to the generated samples and define a regularized loss function for training.
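A loss against a non-uniform label distribution can be written as cross-entropy with soft targets, generalizing uniform label smoothing. This is a hedged sketch of that generic form, not the paper's specific regularized objective; the function name and example distributions are illustrative.

```python
import numpy as np

def soft_label_loss(logits, target_dist):
    """Cross-entropy against a soft, possibly non-uniform label
    distribution: each row of target_dist sums to 1 but may spread
    mass unevenly across classes (cf. label smoothing)."""
    logits = logits - logits.max(axis=1, keepdims=True)   # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean((target_dist * log_probs).sum(axis=1))
```

Compared with a hard one-hot target, a soft target distribution penalizes over-confident predictions and lets the assigned label mass act as a regularizer during training.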