1 code implementation • 15 Dec 2023 • Youhua Li, Hanwen Du, Yongxin Ni, Pengpeng Zhao, Qi Guo, Fajie Yuan, Xiaofang Zhou
To align the cross-modal item representations, we propose a novel next-item enhanced cross-modal contrastive learning objective, which is equipped with both inter- and intra-modality negative samples and explicitly incorporates the transition patterns of user behaviors into the item encoders.
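The objective described above follows the general shape of an InfoNCE-style contrastive loss in which, for each anchor item in one modality, the denominator pools negatives from both the other modality (inter) and the same modality (intra). A minimal numpy sketch under that assumption — function and variable names are illustrative, not the paper's:

```python
import numpy as np

def cross_modal_contrastive_loss(text_emb, image_emb, temperature=0.1):
    """InfoNCE-style loss aligning two modality views of the same items.

    For anchor i (text view), the positive is image_emb[i]; negatives are
    the other image embeddings (inter-modality) and the other text
    embeddings (intra-modality).
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    t, v = normalize(text_emb), normalize(image_emb)
    n = t.shape[0]
    losses = []
    for i in range(n):
        pos = t[i] @ v[i] / temperature                # positive pair logit
        inter = t[i] @ v.T / temperature               # vs. all image embeddings
        intra = np.delete(t @ t[i], i) / temperature   # vs. other text embeddings
        logits = np.concatenate([inter, intra])
        losses.append(-(pos - np.log(np.exp(logits).sum())))
    return float(np.mean(losses))
```

Well-aligned modality pairs drive the positive logit up relative to both negative pools, so the loss falls as the two item encoders agree.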
1 code implementation • 22 Oct 2023 • Xiuyuan Qin, Huanhuan Yuan, Pengpeng Zhao, Guanfeng Liu, Fuzhen Zhuang, Victor S. Sheng
In this paper, Intent contrastive learning with Cross Subsequences for sequential Recommendation (ICSRec) is proposed to model users' latent intentions.
no code implementations • 21 Oct 2023 • Yongjing Hao, Pengpeng Zhao, Junhua Fang, Jianfeng Qu, Guanfeng Liu, Fuzhen Zhuang, Victor S. Sheng, Xiaofang Zhou
In this paper, we propose a Meta-optimized Seq2Seq Generator and Contrastive Learning (Meta-SGCL) framework for sequential recommendation, which applies a meta-optimized two-step training strategy to adaptively generate contrastive views.

1 code implementation • 7 May 2023 • Xinyu Du, Huanhuan Yuan, Pengpeng Zhao, Junhua Fang, Guanfeng Liu, Yanchi Liu, Victor S. Sheng, Xiaofang Zhou
Sequential recommendation (SR) aims to model user preferences by capturing behavior patterns from their historical item-interaction data.
1 code implementation • 28 Apr 2023 • Hanwen Du, Huanhuan Yuan, Pengpeng Zhao, Fuzhen Zhuang, Guanfeng Liu, Lei Zhao, Victor S. Sheng
Our framework adopts multiple parallel networks as an ensemble of sequence encoders and recommends items based on the output distributions of all these networks.
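Recommending from the combined output distributions of parallel networks can be sketched as averaging each encoder's softmax over the item vocabulary and ranking by the mean probability. A minimal illustration under that assumption (not the paper's exact aggregation):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_recommend(per_network_logits, top_k=3):
    """Average the output distributions of parallel sequence encoders
    and rank items by the mean probability."""
    probs = np.stack([softmax(l) for l in per_network_logits]).mean(axis=0)
    return np.argsort(-probs)[:top_k], probs
```

Averaging probabilities (rather than logits) keeps each network's contribution on the same scale regardless of its logit magnitudes.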
1 code implementation • 22 Apr 2023 • Huanhuan Yuan, Pengpeng Zhao, Xuefeng Xian, Guanfeng Liu, Victor S. Sheng, Lei Zhao
To better capture the uncertainty and evolution of user tastes, SR-PLR embeds users and items with a probabilistic method and conducts probabilistic logical reasoning on users' interaction patterns.
1 code implementation • 18 Apr 2023 • Xinyu Du, Huanhuan Yuan, Pengpeng Zhao, Jianfeng Qu, Fuzhen Zhuang, Guanfeng Liu, Victor S. Sheng
However, many recent studies show that current self-attention-based models act as low-pass filters and are inadequate for capturing high-frequency information.
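The low-pass behavior can be illustrated numerically: a softmax-normalized attention matrix is row-stochastic, so applying it replaces each token with a weighted average of all tokens, which preserves the DC component but damps high-frequency variation. A minimal numpy demonstration (a single random attention mixing step, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(42)
seq_len, dim = 64, 8
x = rng.normal(size=(seq_len, dim))  # a random token sequence

# One self-attention mixing step with random queries/keys.
q, k = rng.normal(size=(seq_len, dim)), rng.normal(size=(seq_len, dim))
scores = q @ k.T / np.sqrt(dim)
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # rows sum to 1
y = attn @ x

def high_freq_energy(signal):
    """Spectral energy in the upper half of the FFT band, summed over channels."""
    spec = np.abs(np.fft.fft(signal, axis=0)) ** 2
    return float(spec[seq_len // 4 : 3 * seq_len // 4].sum())

print(high_freq_energy(x), high_freq_energy(y))
```

The high-frequency energy of the mixed sequence `y` is substantially lower than that of the input `x`, which is the filtering effect the cited studies point to.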
1 code implementation • 16 Apr 2023 • Xiuyuan Qin, Huanhuan Yuan, Pengpeng Zhao, Junhua Fang, Fuzhen Zhuang, Guanfeng Liu, Victor Sheng
By applying both data augmentation and learnable model augmentation operations, this work extends the standard CL framework by contrasting data-augmented and model-augmented views, adaptively capturing the informative features hidden in stochastic data augmentation.
no code implementations • 10 Apr 2023 • Hanwen Du, Huanhuan Yuan, Zhen Huang, Pengpeng Zhao, Xiaofang Zhou
Generative models, such as Variational Auto-Encoder (VAE) and Generative Adversarial Network (GAN), have been successfully applied in sequential recommendation.
1 code implementation • 3 Sep 2022 • Yufeng Zhang, Weiqing Wang, Hongzhi Yin, Pengpeng Zhao, Wei Chen, Lei Zhao
A more challenging scenario is that emerging KGs consist of only unseen entities, called disconnected emerging KGs (DEKGs).
1 code implementation • 8 Aug 2022 • Hanwen Du, Hui Shi, Pengpeng Zhao, Deqing Wang, Victor S. Sheng, Yanchi Liu, Guanfeng Liu, Lei Zhao
Contrastive learning with Transformer-based sequence encoders has become predominant in sequential recommendation.
no code implementations • 21 Apr 2022 • Yongjing Hao, Pengpeng Zhao, Xuefeng Xian, Guanfeng Liu, Deqing Wang, Lei Zhao, Yanchi Liu, Victor S. Sheng
To this end, in this paper, we propose a Learnable Model Augmentation self-supervised learning method for sequential Recommendation (LMA4Rec).
1 code implementation • 29 Jan 2022 • Jiaan Wang, Beiqi Zou, Zhixu Li, Jianfeng Qu, Pengpeng Zhao, An Liu, Lei Zhao
Story ending generation is an interesting and challenging task, which aims to generate a coherent and reasonable ending given a story context.
no code implementations • 31 Dec 2021 • Dongbo Xi, Fuzhen Zhuang, Yanchi Liu, HengShu Zhu, Pengpeng Zhao, Chang Tan, Qing He
To this end, in this paper, we propose a novel neural network approach to identify the missing POI categories by integrating both bi-directional global non-personal transition patterns and personal preferences of users.
no code implementations • 20 Nov 2021 • Yaxing Fang, Pengpeng Zhao, Guanfeng Liu, Yanchi Liu, Victor S. Sheng, Lei Zhao, Xiaofang Zhou
Graph Convolution Network (GCN) has been widely applied in recommender systems for its representation learning capability on user and item embeddings.
no code implementations • 20 Nov 2021 • Yunyi Li, Pengpeng Zhao, Guanfeng Liu, Yanchi Liu, Victor S. Sheng, Jiajie Xu, Xiaofang Zhou
In this paper, we propose an Edge-Enhanced Global Disentangled Graph Neural Network (EGD-GNN) model to capture the relation information between items for global item representation and local user intention learning.
no code implementations • 12 Jul 2020 • Dongbo Xi, Fuzhen Zhuang, Yongchun Zhu, Pengpeng Zhao, Xiangliang Zhang, Qing He
In this paper, we propose a Graph Factorization Machine (GFM) which utilizes the popular Factorization Machine to aggregate multi-order interactions from the neighborhood for recommendation.
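The Factorization Machine core that such a model builds on scores pairwise feature interactions via latent factors, and the classic sum-of-squares identity computes this in linear time: sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * ((sum_i x_i v_i)^2 - sum_i (x_i v_i)^2), summed over factor dimensions. A sketch of just this FM term (how GFM aggregates the graph neighborhood is the paper's contribution and is not reproduced here):

```python
import numpy as np

def fm_pairwise(x, V):
    """Second-order FM term: sum over i<j of <V[i], V[j]> * x[i] * x[j],
    computed in O(n*k) via the sum-of-squares identity instead of O(n^2)."""
    xv = x[:, None] * V  # (n, k): each row is x_i * v_i
    return 0.5 * float(((xv.sum(axis=0) ** 2) - (xv ** 2).sum(axis=0)).sum())
```

The identity avoids enumerating feature pairs, which is what makes FMs practical on sparse high-dimensional inputs.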
no code implementations • 13 Nov 2019 • Xuesong Shi, Dongjiang Li, Pengpeng Zhao, Qinbin Tian, Yuxin Tian, Qiwei Long, Chunhao Zhu, Jingwei Song, Fei Qiao, Le Song, Yangquan Guo, Zhigang Wang, Yimin Zhang, Baoxing Qin, Wei Yang, Fangshi Wang, Rosa H. M. Chan, Qi She
We also design benchmarking metrics for lifelong SLAM, with which the robustness and accuracy of pose estimation are evaluated separately.
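A standard accuracy measure for pose estimation of the kind evaluated here is the absolute trajectory error (ATE); a minimal RMSE sketch over aligned translation components (the benchmark's actual metrics may differ):

```python
import numpy as np

def ate_rmse(est_positions, gt_positions):
    """Root-mean-square absolute trajectory error between estimated and
    ground-truth positions, assumed already time-associated and aligned."""
    err = np.linalg.norm(est_positions - gt_positions, axis=1)
    return float(np.sqrt((err ** 2).mean()))
```

In practice the estimated trajectory is first rigidly aligned to the ground truth (e.g. by a least-squares fit) before this error is computed.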
no code implementations • 29 May 2019 • Jian Liu, Pengpeng Zhao, Yanchi Liu, Victor S. Sheng, Fuzheng Zhuang, Jiajie Xu, Xiaofang Zhou, Hui Xiong
Then, we integrate the aesthetic features into a cross-domain network to transfer users' domain-independent aesthetic preferences.
no code implementations • 18 Jun 2018 • Pengpeng Zhao, Haifeng Zhu, Yanchi Liu, Zhixu Li, Jiajie Xu, Victor S. Sheng
Furthermore, to reduce the number of parameters and improve efficiency, we integrate coupled input and forget gates into our proposed model.
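Coupling the gates follows the CIFG variant of the LSTM: the forget gate is tied to the input gate as f_t = 1 - i_t, so the forget gate's weight matrices disappear and the cell update becomes a convex combination of the old state and the candidate. A minimal single-step sketch under that assumption (names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cifg_step(x, h, c, params):
    """One step of an LSTM with coupled input/forget gates (f_t = 1 - i_t).
    params holds weights for the input gate, candidate, and output gate
    only -- the separate forget-gate parameters are gone."""
    Wi, Ui, Wg, Ug, Wo, Uo = params
    i = sigmoid(Wi @ x + Ui @ h)   # input gate
    f = 1.0 - i                    # coupled forget gate, no extra parameters
    g = np.tanh(Wg @ x + Ug @ h)   # candidate cell state
    c_new = f * c + i * g          # convex combination of old state and candidate
    h_new = sigmoid(Wo @ x + Uo @ h) * np.tanh(c_new)
    return h_new, c_new
```

Because the update is a convex combination of `c` and a tanh-bounded candidate, a cell state initialized inside [-1, 1] stays there, which also tends to stabilize training.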