no code implementations • 25 May 2023 • Zheyan Shen, Han Yu, Peng Cui, Jiashuo Liu, Xingxuan Zhang, Linjun Zhou, Furui Liu
Moreover, we propose a Meta Adaptive Task Sampling (MATS) procedure to differentiate base tasks according to their semantic and domain-shift similarity to the novel task.
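The sampling step can be caricatured as scoring each base task by a mixture of semantic similarity and domain-shift similarity to the novel task, then sampling in proportion to those scores. Everything below (the function name, the cosine similarities, the softmax, the `alpha` mixing weight) is an illustrative assumption, not the actual MATS procedure:

```python
import numpy as np

def sample_base_tasks(base_feats, base_styles, novel_feat, novel_style,
                      n_samples, alpha=0.5, rng=None):
    """Sample base-task indices with probability proportional to a softmax
    over a mixture of semantic similarity (feature space) and domain-shift
    similarity (style space). Illustrative sketch only."""
    if rng is None:
        rng = np.random.default_rng(0)

    def cos_sim(A, b):
        A = A / np.linalg.norm(A, axis=1, keepdims=True)
        return A @ (b / np.linalg.norm(b))

    score = alpha * cos_sim(base_feats, novel_feat) \
            + (1.0 - alpha) * cos_sim(base_styles, novel_style)
    probs = np.exp(score - score.max())   # stable softmax over base tasks
    probs /= probs.sum()
    return rng.choice(len(base_feats), size=n_samples, replace=False, p=probs)
```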
no code implementations • 2 Dec 2022 • Han Yu, Peng Cui, Yue He, Zheyan Shen, Yong Lin, Renzhe Xu, Xingxuan Zhang
The problem of covariate-shift generalization has attracted intensive research attention.
2 code implementations • CVPR 2023 • Xingxuan Zhang, Yue He, Renzhe Xu, Han Yu, Zheyan Shen, Peng Cui
Most current evaluation methods for domain generalization (DG) adopt the leave-one-out strategy as a compromise given the limited number of available domains.
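The protocol being critiqued is easy to state concretely: each domain in turn is held out as the unseen test domain while the rest form the training set. A minimal sketch, with domain names chosen purely for illustration:

```python
def leave_one_domain_out(domains):
    """Enumerate leave-one-domain-out splits: each domain in turn serves
    as the unseen test domain, the remaining domains as the training set.
    Illustrative of the evaluation protocol, not code from the paper."""
    return [([d for d in domains if d != held_out], held_out)
            for held_out in domains]
```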
1 code implementation • 9 Feb 2022 • Renzhe Xu, Xingxuan Zhang, Peng Cui, Bo Li, Zheyan Shen, Jiazheng Xu
Personalized pricing is a business strategy to charge different prices to individual consumers based on their characteristics and behaviors.
no code implementations • NeurIPS 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. testing data.
1 code implementation • 3 Nov 2021 • Renzhe Xu, Xingxuan Zhang, Zheyan Shen, Tong Zhang, Peng Cui
Afterward, we prove that under ideal conditions, independence-driven importance weighting algorithms could identify the variables in this set.
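One way to build intuition for independence-driven importance weighting is a toy scheme that learns sample weights pushing the weighted covariates toward pairwise decorrelation (a tractable proxy for independence). The objective, the approximate gradient, and all hyperparameters below are our own simplifications, not the algorithms the paper analyzes:

```python
import numpy as np

def decorrelating_weights(X, n_iter=300, lr=0.05):
    """Learn nonnegative sample weights that shrink the off-diagonal
    entries of the weighted covariance matrix of X toward zero --
    pairwise decorrelation as a proxy for the independence condition.
    Rough sketch with an approximate gradient, not the paper's method."""
    n, _ = X.shape
    w = np.ones(n)
    for _ in range(n_iter):
        wn = w / w.sum()
        Xc = X - wn @ X                     # weighted centering
        cov = (Xc * wn[:, None]).T @ Xc     # weighted covariance matrix
        off = cov - np.diag(np.diag(cov))   # off-diagonal part only
        # approximate gradient of the sum of squared off-diagonal covariances
        grad = 2.0 * np.einsum('ij,jk,ik->i', Xc, off, Xc) / w.sum()
        w = np.clip(w - lr * grad, 1e-3, None)
    return w / w.sum()
```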
1 code implementation • 24 Oct 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. testing data.
no code implementations • 31 Aug 2021 • Jiashuo Liu, Zheyan Shen, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, Peng Cui
This paper presents the first comprehensive, systematic review of OOD generalization, spanning problem definition, methodological development, and evaluation procedures, as well as the implications and future directions of the field.
no code implementations • CVPR 2022 • Xingxuan Zhang, Linjun Zhou, Renzhe Xu, Peng Cui, Zheyan Shen, Haoxin Liu
Domain generalization (DG) aims to help models trained on a set of source domains generalize better on unseen target domains.
no code implementations • 30 Jun 2021 • Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li
In this paper, we propose a novel Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set and conduct differentiated robustness optimization, where covariates are differentiated according to the stability of their correlations with the target.
1 code implementation • 9 May 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
In this paper, we propose the Heterogeneous Risk Minimization (HRM) framework to jointly learn the latent heterogeneity among the data and the invariant relationship, which leads to stable prediction despite distributional shifts.
2 code implementations • CVPR 2021 • Xingxuan Zhang, Peng Cui, Renzhe Xu, Linjun Zhou, Yue He, Zheyan Shen
Approaches based on deep neural networks have achieved striking performance when testing data and training data share a similar distribution, but may fail significantly otherwise.
Ranked #28 on Domain Generalization on VLCS
no code implementations • 1 Jan 2021 • Xingxuan Zhang, Peng Cui, Renzhe Xu, Yue He, Linjun Zhou, Zheyan Shen
We propose to address this problem by removing the dependencies between features via reweighting training samples, which yields a more balanced distribution and helps deep models discard spurious correlations and, in turn, concentrate on the true connection between features and labels.
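As a toy instance of the reweighting idea, one can break the dependence between a binary feature and a discretized covariate with inverse cell-frequency weights, so that every combination of values carries equal total mass after reweighting. The function below is an illustrative sketch, not the paper's estimator:

```python
import numpy as np

def balance_weights(treat, confounder_bins):
    """Inverse-frequency weights that break the dependence between a
    binary feature `treat` and a discretized feature `confounder_bins`:
    after reweighting, every nonempty (treat, bin) cell carries equal
    total mass, so their weighted covariance vanishes. Illustrative of
    the reweighting idea, not the paper's method."""
    w = np.zeros(len(treat), dtype=float)
    for t in np.unique(treat):
        for b in np.unique(confounder_bins):
            mask = (treat == t) & (confounder_bins == b)
            if mask.any():
                w[mask] = 1.0 / mask.sum()   # equalize cell mass
    return w / w.sum()
```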
no code implementations • NeurIPS 2020 • Hao Zou, Peng Cui, Bo Li, Zheyan Shen, Jianxin Ma, Hongxia Yang, Yue He
Estimating counterfactual outcome of different treatments from observational data is an important problem to assist decision making in a variety of fields.
1 code implementation • 18 Jun 2020 • Renzhe Xu, Peng Cui, Kun Kuang, Bo Li, Linjun Zhou, Zheyan Shen, Wei Cui
In practice, there frequently exists a certain set of variables, which we term fair variables: pre-decision covariates such as users' choices.
no code implementations • 8 Jun 2020 • Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li, Yishi Lin
Machine learning algorithms trained with empirical risk minimization are vulnerable to distributional shifts because they greedily adopt all the correlations found in the training data.
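A toy experiment makes this failure mode concrete: fit ordinary least squares on data where a low-noise spurious feature agrees with the label 90% of the time, then evaluate where that correlation reverses. ERM leans on the spurious feature and collapses at test time. The setup and all numbers below are hypothetical, not from the paper:

```python
import numpy as np

def make_env(n, rho, rng):
    """y in {-1,+1}; x1 is a stable but noisy causal feature; x2 is a
    low-noise spurious feature that agrees with y with probability rho."""
    y = rng.choice([-1.0, 1.0], size=n)
    x1 = y + rng.normal(0.0, 1.0, n)
    sign = np.where(rng.random(n) < rho, 1.0, -1.0)
    x2 = sign * y + rng.normal(0.0, 0.1, n)
    return np.c_[x1, x2], y

rng = np.random.default_rng(0)
Xtr, ytr = make_env(5000, 0.9, rng)   # training: spurious correlation holds
Xte, yte = make_env(5000, 0.1, rng)   # testing: spurious correlation reversed

beta = np.linalg.lstsq(Xtr, ytr, rcond=None)[0]   # plain ERM (least squares)
train_acc = np.mean(np.sign(Xtr @ beta) == ytr)
test_acc = np.mean(np.sign(Xte @ beta) == yte)
```

The gap between `train_acc` and `test_acc` is driven entirely by the model's reliance on the unstable correlation carried by `x2`.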
no code implementations • 28 Nov 2019 • Zheyan Shen, Peng Cui, Tong Zhang, Kun Kuang
We consider the problem of learning linear prediction models with model misspecification bias.
no code implementations • 7 Jun 2019 • Yue He, Zheyan Shen, Peng Cui
The experimental results demonstrate that NICO can well support the training of a ConvNet model from scratch, and that a batch balancing module can help ConvNets perform better in Non-I.I.D. settings.
no code implementations • 22 Aug 2017 • Zheyan Shen, Peng Cui, Kun Kuang, Bo Li, Peixuan Chen
However, this ideal assumption is often violated in real applications, where selection bias may arise between the training and testing processes.