no code implementations • 29 May 2024 • Robi Bhattacharjee, Nick Rittler, Kamalika Chaudhuri
Instead of relying on the discrepancy, we adopt an Invariant-Risk-Minimization (IRM)-like assumption connecting the distributions, and characterize conditions under which data from a source distribution is sufficient for accurate classification of the target.
no code implementations • 19 Nov 2022 • Nick Rittler, Kamalika Chaudhuri
$k$-nearest neighbor classification is a popular non-parametric method because of desirable properties like automatic adaptation to distributional scale changes.
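A minimal NumPy sketch of the $k$-nearest neighbor classifier under discussion; the toy dataset and choice of $k$ are illustrative, not the paper's experimental setup. Because prediction depends only on relative distances, rescaling all coordinates by a common factor leaves the output unchanged, which is the kind of scale adaptation the abstract refers to.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Predict labels for X_query by majority vote among the k nearest
    training points under Euclidean distance."""
    preds = []
    for x in X_query:
        dists = np.linalg.norm(X_train - x, axis=1)  # distance to every training point
        nearest = np.argsort(dists)[:k]              # indices of the k closest
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        preds.append(labels[np.argmax(counts)])      # majority vote
    return np.array(preds)

# Toy example: two well-separated clusters.
X = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
queries = np.array([[0.05, 0.05], [5.05, 5.0]])
print(knn_predict(X, y, queries, k=3))  # → [0 1]
```

Note that predicting on a uniformly rescaled copy of the data, e.g. `knn_predict(2 * X, y, 2 * queries)`, returns the same labels, since the distance ordering is unchanged.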
1 code implementation • 23 Mar 2022 • Nick Rittler, Carlo Graziani, Jiali Wang, Rao Kotamarthi
We discuss an approach to probabilistic forecasting based on two chained machine-learning steps. A dimensional reduction step learns a map from predictor information to a low-dimensional space, designed to preserve information about the forecast quantities; a density estimation step then uses the probabilistic machine learning technique of normalizing flows to compute the joint probability density of the reduced predictors and forecast quantities.
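A schematic sketch of the two-step pipeline on synthetic data. PCA (via SVD) stands in for the paper's learned reduction map and a Gaussian kernel density estimate stands in for the normalizing-flow density model; both substitutions, and all names below, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic high-dimensional predictors X and a forecast quantity y that
# depends on one latent direction of the predictors.
n, d = 500, 20
latent = rng.normal(size=n)
X = np.outer(latent, rng.normal(size=d)) + 0.1 * rng.normal(size=(n, d))
y = latent + 0.1 * rng.normal(size=n)

# Step 1: dimensional reduction (here, PCA to 2 components via SVD).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T  # reduced predictors, shape (n, 2)

# Step 2: joint density estimation over (reduced predictors, forecast quantity).
joint = np.vstack([Z.T, y])  # shape (3, n)
kde = gaussian_kde(joint)

# The joint density can then score (reduced predictor, forecast) pairs,
# which is the object a conditional forecast distribution is derived from.
print(kde(joint[:, :3]))  # density values at the first three samples
```

In the paper's setting the reduction map is trained specifically to preserve forecast-relevant information and the density model is a normalizing flow, which, unlike a KDE, also supports exact sampling and likelihood evaluation in high dimensions.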