no code implementations • 19 May 2023 • Yivan Zhang, Masashi Sugiyama
Disentangling the explanatory factors in complex data is a promising approach for generalizable and data-efficient representation learning.
no code implementations • 11 May 2023 • Yivan Zhang, Masashi Sugiyama
Disentangling the factors of variation in data is a fundamental concept in machine learning and has been studied in various ways by different researchers, leading to a multitude of definitions.
no code implementations • 3 Aug 2022 • Yivan Zhang, Jindong Wang, Xing Xie, Masashi Sugiyama
To formally analyze this issue, we provide a unique algebraic formulation of the combination shift problem based on the concepts of homomorphism, equivariance, and a refined definition of disentanglement.
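For readers unfamiliar with the terminology, the standard (textbook) notion of equivariance invoked here can be stated in one line; the paper's refined definition of disentanglement builds on this, but is not reproduced in this sketch:

```latex
% A map f : X -> Z is G-equivariant if the group action commutes with f:
f(g \cdot x) = g \cdot f(x) \quad \text{for all } g \in G,\ x \in X.
```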
1 code implementation • 25 Mar 2021 • Yivan Zhang, Masashi Sugiyama
Label noise in multiclass classification is a major obstacle to the deployment of learning systems.
1 code implementation • 4 Feb 2021 • Yivan Zhang, Gang Niu, Masashi Sugiyama
To estimate the transition matrix from noisy data, existing methods often need to estimate the noisy class-posterior, which could be unreliable due to the overconfidence of neural networks.
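As background, the classic "forward correction" recipe shows why the transition matrix matters; the sketch below is illustrative of that general family of methods, not this paper's algorithm, and all names and numbers in it are ours.

```python
import numpy as np

# Hypothetical sketch of forward loss correction with a known noise
# transition matrix T, where T[i, j] = P(noisy label = j | clean label = i).
# This illustrates the general transition-matrix approach, not the
# estimation method proposed in the paper.

def forward_corrected_nll(clean_posterior, T, noisy_label):
    """Negative log-likelihood of the observed noisy label under the
    clean class-posterior pushed forward through T."""
    noisy_posterior = clean_posterior @ T  # model of P(noisy y | x)
    return -np.log(noisy_posterior[noisy_label])

# 3-class example: each label is kept with prob. 0.8, otherwise flipped
# uniformly to one of the other two classes (rows of T sum to 1).
T = np.full((3, 3), 0.1)
np.fill_diagonal(T, 0.8)
```

Minimizing such a corrected loss on noisy data targets the clean distribution when T is known; the difficulty the sentence above points at is estimating T itself without leaning on an overconfident noisy class-posterior.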
no code implementations • 22 Oct 2020 • Nontawat Charoenphakdee, Zhenghang Cui, Yivan Zhang, Masashi Sugiyama
The goal of classification with rejection is to avoid risky misclassification in error-critical applications such as medical diagnosis and product inspection.
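The simplest baseline for this setting is confidence-based rejection (Chow's rule): abstain whenever the classifier's maximum class-posterior is below a threshold. A minimal sketch, with an illustrative threshold rather than anything from the paper:

```python
import numpy as np

# Minimal sketch of Chow's rule, a classical baseline for classification
# with rejection; the function name and threshold are illustrative only.

def predict_with_rejection(posteriors, threshold=0.7):
    """Return the predicted class index, or None (reject) when the
    maximum class-posterior falls below the confidence threshold."""
    confidence = posteriors.max()
    return int(posteriors.argmax()) if confidence >= threshold else None
```

A confident prediction such as `[0.9, 0.05, 0.05]` is accepted, while an ambiguous one such as `[0.4, 0.35, 0.25]` is rejected and can be deferred to a human expert, which is exactly the behavior wanted in error-critical applications.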
1 code implementation • NeurIPS 2020 • Yivan Zhang, Nontawat Charoenphakdee, Zhenguo Wu, Masashi Sugiyama
We study the problem of learning from aggregate observations where supervision signals are given to sets of instances instead of individual instances, while the goal is still to predict labels of unseen individuals.
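One concrete instance of an aggregate observation is pairwise similarity: the supervision says only whether two instances share a class, never what the class is. The sketch below scores that aggregate signal under per-instance class-posteriors; it is our illustration of the setting, not the paper's framework or API.

```python
import numpy as np

# Illustrative sketch: the supervision is an aggregate signal over a set
# (here, a pair) of instances -- "these two share a label" -- and we
# evaluate its likelihood under per-instance class-posteriors.
# All names here are ours, not the paper's.

def similarity_log_likelihood(p1, p2, similar):
    """Log-likelihood of the pairwise signal, assuming the two labels
    are conditionally independent given their inputs."""
    p_same = float(np.dot(p1, p2))  # P(y1 == y2) = sum_k p1[k] * p2[k]
    return np.log(p_same) if similar else np.log(1.0 - p_same)
```

Maximizing such an aggregate likelihood trains an individual-level classifier even though no individual label is ever observed, which matches the stated goal of still predicting labels of unseen individuals.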
1 code implementation • 10 Oct 2019 • Yivan Zhang, Nontawat Charoenphakdee, Masashi Sugiyama
Weakly-supervised learning is a paradigm for alleviating the scarcity of labeled data by leveraging lower-quality but larger-scale supervision signals.