no code implementations • 18 Jul 2023 • Menghan Wang, Jinming Yang, Yuchen Guo, Yuming Shen, Mengying Zhu, Yanlin Wang
Inspired by recent advances in differentiable sorting, in this paper we propose a novel multi-task framework that leverages the order of user behaviors to predict user post-click conversion in an end-to-end manner.
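The core building block mentioned here, differentiable sorting, can be illustrated with a common soft-rank relaxation via pairwise sigmoid comparisons. This is a generic sketch of the idea, not the paper's exact formulation:

```python
import numpy as np

def soft_rank(scores, tau=0.1):
    """Soft (differentiable) descending ranks via pairwise sigmoid comparisons.

    As tau -> 0 the output approaches the hard ranks (highest score -> rank 1),
    while remaining differentiable in `scores` for any tau > 0.
    """
    diff = scores[None, :] - scores[:, None]     # diff[i, j] = s_j - s_i
    wins = 1.0 / (1.0 + np.exp(-diff / tau))     # soft indicator of s_j > s_i
    # subtract the diagonal's sigmoid(0) = 0.5 so an item never outranks itself
    return 1.0 + wins.sum(axis=1) - 0.5
```

Because the ranks are smooth in the scores, a rank-based loss over ordered user behaviors can be optimized with ordinary gradient descent.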
no code implementations • 9 May 2023 • Hanqi Yan, Lin Gui, Menghan Wang, Kun Zhang, Yulan He
Explainable recommender systems can explain their recommendation decisions, enhancing user trust in the systems.
no code implementations • 3 Feb 2023 • Fanglan Zheng, Menghan Wang, Kun Li, Jiang Tian, Xiaojia Xiang
In this manuscript, we propose causal-inference-based single-branch ensemble trees for uplift modeling, named CIET.
no code implementations • 7 Sep 2022 • Danru Xu, Erdun Gao, Wei Huang, Menghan Wang, Andy Song, Mingming Gong
Learning the underlying Bayesian Networks (BNs), represented by directed acyclic graphs (DAGs), of the concerned events from purely-observational data is a crucial part of evidential reasoning.
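A standard ingredient in learning DAGs from observational data is a smooth acyclicity measure over a weighted adjacency matrix. The polynomial variant of the NOTEARS-style constraint is sketched below; this illustrates the general technique, not necessarily this paper's method:

```python
import numpy as np

def acyclicity_h(W):
    """Smooth acyclicity measure for a weighted adjacency matrix W (d x d).

    h(W) >= 0, and h(W) == 0 exactly when W encodes a DAG. This is the
    polynomial form tr((I + (W*W)/d)^d) - d used in continuous
    structure-learning formulations.
    """
    d = W.shape[0]
    M = np.eye(d) + (W * W) / d           # elementwise square keeps entries >= 0
    return np.trace(np.linalg.matrix_power(M, d)) - d
```

Adding h(W) as a penalty lets gradient-based optimizers search over all graphs while being driven toward acyclic ones.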
1 code implementation • 12 Jun 2022 • Zongyuan Huang, Shengyuan Xu, Menghan Wang, Hansi Wu, Yanyan Xu, Yaohui Jin
Next location prediction is a key task in individual human mobility modeling; it is usually framed as sequence modeling and solved with Markov or RNN-based methods.
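The Markov baseline the abstract refers to can be sketched in a few lines: count observed transitions between locations and predict the most frequent successor. A minimal illustration (not the paper's model):

```python
from collections import Counter, defaultdict

def fit_markov(trajectories):
    """First-order Markov model: transition counts between consecutive locations."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for cur, nxt in zip(traj, traj[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, location):
    """Most frequent successor of `location`, or None if it was never seen."""
    if location not in counts:
        return None
    return counts[location].most_common(1)[0][0]
```

RNN-based methods replace the count table with a learned hidden state, which lets them capture longer histories than a first-order chain.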
no code implementations • 18 Apr 2022 • Menghan Wang, Yuchen Guo, Zhenqi Zhao, Guangzheng Hu, Yuming Shen, Mingming Gong, Philip Torr
To alleviate the influence of the annotation bias, we perform a momentum update to ensure a consistent item representation.
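A momentum update of this kind is typically an exponential moving average that tracks the current (online) representation slowly, keeping the target representation consistent across training steps. A minimal sketch, with the coefficient `m` as an assumed hyperparameter:

```python
import numpy as np

def momentum_update(target, online, m=0.99):
    """Exponential-moving-average (momentum) update of a target representation.

    With m close to 1, `target` changes slowly, smoothing out noisy or biased
    per-batch updates of the online representation.
    """
    return m * target + (1.0 - m) * online
```

Repeated applications converge toward the online representation while damping step-to-step fluctuations.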
no code implementations • 31 Jan 2022 • Jiaguo Yu, Yuming Shen, Menghan Wang, Haofeng Zhang, Philip H. S. Torr
In this paper, we tackle this problem by introducing Naturally-Sorted Hashing (NSH).
no code implementations • NeurIPS 2021 • Yuming Shen, Ziyi Shen, Menghan Wang, Jie Qin, Philip H. S. Torr, Ling Shao
On one hand, with the corresponding assignment variables as weights, a weighted aggregation over the data points implements the set representation of a cluster.
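The weighted aggregation described above can be sketched as a soft-assignment-weighted mean: each cluster's representation is the average of all points, weighted by how strongly each point is assigned to that cluster. A generic illustration, not the paper's exact architecture:

```python
import numpy as np

def cluster_set_repr(X, gamma):
    """Set representation of each cluster via assignment-weighted aggregation.

    X:     (n, d) data points.
    gamma: (n, K) soft assignment weights, gamma[i, k] >= 0.
    Returns a (K, d) matrix whose k-th row is the gamma-weighted mean of X.
    """
    totals = gamma.sum(axis=0, keepdims=True).T   # (K, 1) normalizers
    return (gamma.T @ X) / totals
```

With hard one-hot assignments this reduces to the ordinary per-cluster centroid.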
1 code implementation • 20 May 2020 • Menghan Wang, Yujie Lin, Guli Lin, Keping Yang, Xiao-Ming Wu
Most existing methods can be categorized as \emph{multi-view representation fusion}; they first build one graph and then integrate multi-view data into a single compact representation for each node in the graph.
no code implementations • 12 Dec 2019 • Menghan Wang, Kun Zhang, Gulin Li, Keping Yang, Luo Si
We generalize the propagation strategies of current GCNs as a \emph{"Sink$\to$Source"} mode, which seems to be an underlying cause of the two challenges.
7 code implementations • 16 May 2019 • Yufei Feng, Fuyu Lv, Weichen Shen, Menghan Wang, Fei Sun, Yu Zhu, Keping Yang
Easy-to-use, modular and extensible package of deep-learning based CTR models, including DeepFM, Deep Interest Network (DIN), Deep Interest Evolution Network (DIEN), Deep & Cross Network (DCN), Attentional Factorization Machine (AFM), Neural Factorization Machine (NFM), AutoInt, and Deep Session Interest Network (DSIN).
no code implementations • NeurIPS 2018 • Menghan Wang, Mingming Gong, Xiaolin Zheng, Kun Zhang
Recent studies modeled \emph{exposure}, a latent missingness variable which indicates whether an item is missing to a user, to give each missing entry a confidence of being negative feedback.
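The exposure idea can be illustrated with a confidence-weighted reconstruction loss for matrix factorization: observed interactions get full weight, while missing entries are down-weighted by an exposure probability (higher exposure means a missing entry is more plausibly a true negative). This is a hypothetical sketch of the general scheme, with `c0` an assumed base confidence, not the paper's exact objective:

```python
import numpy as np

def exposure_weighted_loss(R, O, U, V, c0=0.1):
    """Confidence-weighted squared loss for implicit feedback.

    R: (m, n) binary interaction matrix (1 = observed click/purchase).
    O: (m, n) exposure probabilities for the missing entries.
    U: (m, k) user factors; V: (n, k) item factors.
    Observed entries get weight 1; missing entries get weight c0 * O,
    so low-exposure missing entries barely count as negatives.
    """
    pred = U @ V.T
    W = np.where(R > 0, 1.0, c0 * O)
    return np.sum(W * (R - pred) ** 2)
```

Minimizing this loss treats "missing because never shown" differently from "shown but ignored", which is the point of modeling exposure.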
no code implementations • 30 Nov 2017 • Menghan Wang, Xiaolin Zheng, Yang Yang, Kun Zhang
We assume that people get information about products from their online friends, and that these friends do not have to share similar preferences; this assumption is less restrictive and closer to reality.
no code implementations • 1 Mar 2014 • Menghan Wang
First, the thesis discusses the characteristics and optimal-policy finding of Markov Decision Processes (MDPs), and gives a brief introduction to dynamic Bayesian decision networks, which are inherently equivalent to MDPs.
no code implementations • 28 Feb 2014 • Meera Sitharam, Mohamad Tarifi, Menghan Wang
We study the Dictionary Learning (aka Sparse Coding) problem of obtaining a sparse representation of data points, by learning \emph{dictionary vectors} upon which the data points can be written as sparse linear combinations.
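The sparse-coding half of this problem — writing a data point as a sparse linear combination of dictionary vectors — is commonly solved greedily with Orthogonal Matching Pursuit. A minimal sketch of that standard algorithm (for a fixed dictionary with unit-norm columns), not the thesis's specific construction:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedy k-sparse code of x over dictionary D.

    D: (d, n) dictionary with (assumed) unit-norm columns; x: (d,) signal.
    Repeatedly picks the column most correlated with the residual, then
    re-fits the coefficients on the selected support by least squares.
    """
    residual, support = x.astype(float), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code
```

Dictionary learning alternates such a sparse-coding step with an update of the dictionary vectors themselves.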