no code implementations • 15 Mar 2024 • Ruihao Zhang, Zhengyu Chen, Teng Xiao, Yueyang Wang, Kun Kuang
We propose a novel Invariant Neighborhood Pattern Learning (INPL) framework to alleviate the distribution shift problem on non-homophilous graphs.
no code implementations • 13 Mar 2024 • Teng Xiao, Chao Cui, Huaisheng Zhu, Vasant G. Honavar
Recent advancements in biology and chemistry have leveraged multi-modal learning, integrating molecules and their natural language descriptions to enhance drug discovery.
1 code implementation • 11 Mar 2024 • Huaisheng Zhu, Teng Xiao, Vasant G. Honavar
However, practical applications call for methods that generate diverse, and ideally novel, molecules with the desired properties.
1 code implementation • 3 Mar 2024 • Shiqi Chen, Miao Xiong, Junteng Liu, Zhengxuan Wu, Teng Xiao, Siyang Gao, Junxian He
Large language models (LLMs) frequently hallucinate and produce factual errors, yet our understanding of why they make these errors remains limited.
no code implementations • 17 Jan 2024 • Teng Xiao, Suhang Wang
Probabilistic learning to rank (LTR) has been the dominant approach to optimizing ranking metrics, but it cannot maximize long-term rewards.
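To make the probabilistic-LTR setting concrete, here is a minimal sketch of a ListNet-style top-one loss — the classic example of optimizing a ranking metric via softmax cross-entropy. This is a generic illustration of the paradigm the snippet refers to, not the authors' proposed method; the function name and inputs are hypothetical.

```python
import math

def listnet_loss(scores, relevances):
    """ListNet-style top-one probability loss: cross-entropy between the
    softmax of predicted scores and the softmax of relevance labels.
    Lower loss means the predicted ranking better matches the labels."""
    def softmax(xs):
        m = max(xs)  # subtract max for numerical stability
        exps = [math.exp(x - m) for x in xs]
        s = sum(exps)
        return [e / s for e in exps]

    p = softmax(relevances)  # target top-one distribution
    q = softmax(scores)      # predicted top-one distribution
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

A myopic per-query loss like this is exactly what "cannot maximize long-term rewards" points at: it scores each ranked list in isolation, with no notion of downstream user interaction.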
no code implementations • 19 Dec 2023 • Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, Fei Wu
In this paper, we study the problem of the generalization ability of GNNs in Out-Of-Distribution (OOD) settings.
1 code implementation • NeurIPS 2023 • Teng Xiao, Huaisheng Zhu, Zhengyu Chen, Suhang Wang
Experimental results show that the simple GraphACL significantly outperforms state-of-the-art graph contrastive learning and self-supervised learning methods on homophilic and heterophilic graphs.
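For readers unfamiliar with the baseline family GraphACL is compared against, a generic neighbor-contrastive (InfoNCE-style) loss on node embeddings can be sketched as follows. This illustrates standard graph contrastive learning only — it is not the GraphACL objective itself, and the function name and edge-list format are assumptions for illustration.

```python
import numpy as np

def graph_infonce(Z, edges, tau=0.5):
    """Generic neighbor-contrastive loss (illustrative, not GraphACL):
    for each edge (i, j), node j is the positive for node i and all
    other nodes act as negatives in a temperature-scaled softmax."""
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # cosine similarities
    S = Z @ Z.T / tau
    losses = []
    for i, j in edges:
        logits = np.delete(S[i], i)  # exclude self-similarity
        target = S[i, j]
        losses.append(np.log(np.exp(logits).sum()) - target)
    return float(np.mean(losses))
```

Note that treating neighbors as positives implicitly assumes homophily, which is precisely why heterophilic graphs are a challenging setting for contrastive methods.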
1 code implementation • NeurIPS 2023 • Minhua Lin, Teng Xiao, Enyan Dai, Xiang Zhang, Suhang Wang
Extensive experiments on real-world datasets demonstrate that our proposed method provides certifiable robustness and enhances the robustness of any GCL model.
no code implementations • 1 Oct 2023 • Teng Xiao, Donglin Wang
This paper studies the problem of learning interactive recommender systems from logged feedback without any exploration in online environments.
1 code implementation • 1 Oct 2023 • Teng Xiao, Zhengyu Chen, Donglin Wang, Suhang Wang
To compensate for this, we present Learning to Propagate, a general framework that not only learns the GNN parameters for prediction but, more importantly, explicitly learns interpretable and personalized propagation strategies for different nodes and various types of graphs.
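The idea of per-node propagation strategies can be illustrated with a minimal sketch: each node holds a weight vector over propagation depths and mixes the corresponding feature representations. This is a simplified, hypothetical illustration of the general concept, assuming learned per-node weights — not the authors' exact formulation.

```python
import numpy as np

def personalized_propagation(A_hat, X, node_weights):
    """Mix K+1 propagation depths with per-node weights (illustrative).
    A_hat: normalized adjacency, shape (n, n).
    X: node features, shape (n, d).
    node_weights: shape (n, K+1), each row summing to 1 — node i's
    learned preference for how deeply to propagate."""
    n, d = X.shape
    K = node_weights.shape[1] - 1
    H, out = X.copy(), np.zeros((n, d))
    for k in range(K + 1):
        out += node_weights[:, k:k + 1] * H  # depth-k contribution per node
        H = A_hat @ H                        # one more propagation step
    return out
```

Uniform weights recover a fixed-depth averaging scheme; letting the weights differ per node is what makes the propagation strategy "personalized."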
1 code implementation • 10 Jul 2023 • Zhimeng Guo, Jialiang Li, Teng Xiao, Yao Ma, Suhang Wang
Despite their great performance in modeling graphs, recent works show that GNNs tend to inherit and amplify the bias from training data, raising concerns about the adoption of GNNs in high-stakes scenarios.
no code implementations • 19 Jun 2023 • Huaisheng Zhu, Guoji Fu, Zhimeng Guo, Zhiwei Zhang, Teng Xiao, Suhang Wang
Graph Neural Networks (GNNs) have shown great power in various domains.
1 code implementation • 3 Apr 2023 • Zhimeng Guo, Teng Xiao, Zongyu Wu, Charu Aggarwal, Hui Liu, Suhang Wang
To facilitate the development of this promising direction, in this survey, we categorize and comprehensively review papers on graph counterfactual learning.
no code implementations • 7 Jun 2022 • Teng Xiao, Zhengyu Chen, Suhang Wang
In this paper, we provide a theoretical understanding of why existing unbiased learning objectives work for unbiased recommendation.
no code implementations • 7 Jun 2022 • Teng Xiao, Zhengyu Chen, Zhimeng Guo, Zeyang Zhuang, Suhang Wang
This paper studies the problem of conducting self-supervised learning for node representation learning on graphs.
1 code implementation • NeurIPS 2019 • Zaiqiao Meng, Shangsong Liang, Jinyuan Fang, Teng Xiao
Deep generative models (DGMs) have achieved remarkable advances.
no code implementations • 12 Oct 2018 • Teng Xiao, Shangsong Liang, Hong Shen, Zaiqiao Meng
Specifically, we make both the generative processes of users and items and the priors over their latent factors side-information-specific, which enables our model to alleviate matrix sparsity and learn better latent representations of users and items.