Search Results for author: Ruicheng Xian

Found 6 papers, 3 papers with code

Optimal Group Fair Classifiers from Linear Post-Processing

1 code implementation • 7 May 2024 • Ruicheng Xian, Han Zhao

We propose a post-processing algorithm for fair classification that mitigates model bias under a unified family of group fairness criteria covering statistical parity, equal opportunity, and equalized odds, applicable to multi-class problems and both attribute-aware and attribute-blind settings.

Attribute • Fairness
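As a rough illustration of what post-processing for statistical parity can look like (not the paper's optimal linear post-processing algorithm), the sketch below picks one decision threshold per group so that positive prediction rates roughly match across groups; the quantile-based threshold rule and data layout are assumptions introduced here for illustration.

```python
import numpy as np

def statistical_parity_thresholds(scores, groups, target_rate=None):
    """Pick one threshold per group so that each group's positive
    prediction rate matches a common target rate (statistical parity).

    Illustrative sketch only; not the paper's post-processing algorithm.
    """
    scores = np.asarray(scores)
    groups = np.asarray(groups)
    if target_rate is None:
        # Default target: the overall positive rate at threshold 0.5.
        target_rate = float(np.mean(scores >= 0.5))
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        # Threshold at the (1 - target_rate) quantile so that roughly
        # target_rate of this group's scores fall above it.
        thresholds[g] = float(np.quantile(s, 1.0 - target_rate))
    return thresholds

# Example usage with synthetic scores and two groups.
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)
groups = rng.integers(0, 2, size=1000)
thr = statistical_parity_thresholds(scores, groups)
preds = np.array([scores[i] >= thr[groups[i]] for i in range(len(scores))])
```

This corresponds to the attribute-aware setting, since group membership is consulted at prediction time; an attribute-blind variant would have to post-process scores without access to the group.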

Differentially Private Post-Processing for Fair Regression

1 code implementation • 7 May 2024 • Ruicheng Xian, Qiaobo Li, Gautam Kamath, Han Zhao

This paper describes a differentially private post-processing algorithm for learning fair regressors satisfying statistical parity, addressing privacy concerns of machine learning models trained on sensitive data, as well as fairness concerns of their potential to propagate historical biases.

Density Estimation • Fairness • +1
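A hedged sketch of one ingredient such an approach could use: releasing group-conditional histograms of the regressor's outputs under the Laplace mechanism, so that any subsequent fairness remapping is computed only from privatized statistics. The binning, the sensitivity accounting, and the later remapping step are assumptions here, not the paper's algorithm.

```python
import numpy as np

def dp_group_histograms(preds, groups, bins, epsilon):
    """Differentially private histograms of regression outputs, one per
    group, via the Laplace mechanism (L1 sensitivity 1 when each person
    contributes a single prediction to a single group).

    Illustrative sketch only; the paper's method and accounting may differ.
    """
    preds = np.asarray(preds)
    groups = np.asarray(groups)
    hists = {}
    for g in np.unique(groups):
        counts, _ = np.histogram(preds[groups == g], bins=bins)
        noisy = counts + np.random.laplace(scale=1.0 / epsilon, size=counts.shape)
        # Clip negatives and renormalize to get a usable distribution estimate.
        noisy = np.clip(noisy, 0, None)
        hists[g] = noisy / max(noisy.sum(), 1e-12)
    return hists
```

From such privatized per-group distributions, one could then remap each group's predictions toward a common target distribution to enforce statistical parity; that downstream step is only gestured at here.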

Fair and Optimal Classification via Post-Processing

1 code implementation • 3 Nov 2022 • Ruicheng Xian, Lang Yin, Han Zhao

To mitigate the bias exhibited by machine learning models, fairness criteria can be integrated into the training process to ensure fair treatment across all demographics, but this often comes at the expense of model performance.

Attribute • Classification • +3
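For reference, the statistical parity constraint that recurs across these post-processing papers, and the fairness-accuracy trade-off it induces, can be written as follows; the notation is generic rather than taken from the paper.

```latex
% Statistical parity for a classifier \hat{Y} = h(X) with group attribute A:
% the prediction distribution must not depend on the group.
\Pr(\hat{Y} = y \mid A = a) = \Pr(\hat{Y} = y \mid A = a') \quad \forall\, y,\, a,\, a'.
% The fair-optimal classifier minimizes error subject to this constraint,
% which is the source of the performance cost mentioned above:
h^{\star} \in \arg\min_{h}\ \Pr\bigl(h(X) \neq Y\bigr)
\quad \text{s.t. } h \text{ satisfies statistical parity.}
```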

Learning Invariant Representations on Multilingual Language Models for Unsupervised Cross-Lingual Transfer

no code implementations • ICLR 2022 • Ruicheng Xian, Heng Ji, Han Zhao

Recent advances in neural modeling have produced deep multilingual language models capable of extracting cross-lingual knowledge from non-parallel texts, as evidenced by their decent zero-shot transfer performance.

Cross-Lingual Transfer • Domain Adaptation
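One generic way to encourage language-invariant representations during fine-tuning (not necessarily the procedure used in the paper) is to add an alignment penalty between the encoder's source-language and target-language representations, for instance the squared distance between batch means. The encoder interface, batch layout, and loss weight below are assumptions made for the sketch.

```python
import torch

def invariance_penalty(src_repr, tgt_repr):
    """Squared distance between mean-pooled source- and target-language
    batch representations; a simple stand-in for distribution alignment."""
    return (src_repr.mean(dim=0) - tgt_repr.mean(dim=0)).pow(2).sum()

def training_step(encoder, classifier, task_loss_fn, src_batch, tgt_batch, lam=0.1):
    # Labeled source-language batch drives the task loss; the unlabeled
    # target-language batch only contributes to the alignment penalty.
    src_repr = encoder(src_batch["input_ids"])  # assumed: returns pooled vectors
    tgt_repr = encoder(tgt_batch["input_ids"])
    loss = task_loss_fn(classifier(src_repr), src_batch["labels"])
    loss = loss + lam * invariance_penalty(src_repr, tgt_repr)
    return loss
```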

Neural tangent kernels, transportation mappings, and universal approximation

no code implementations • ICLR 2020 • Ziwei Ji, Matus Telgarsky, Ruicheng Xian

This paper establishes rates of universal approximation for the shallow neural tangent kernel (NTK): network weights are only allowed microscopic changes from random initialization, which entails that activations are mostly unchanged, and the network is nearly equivalent to its linearization.
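The linearization referred to in the abstract is the standard first-order expansion of the network around its random initialization; in the usual notation (not necessarily the paper's),

```latex
f(x; w) \;\approx\; f(x; w_0) + \bigl\langle \nabla_w f(x; w_0),\, w - w_0 \bigr\rangle,
\qquad
k_{\mathrm{NTK}}(x, x') \;=\; \bigl\langle \nabla_w f(x; w_0),\, \nabla_w f(x'; w_0) \bigr\rangle.
```

Because the weights move only microscopically from $w_0$, the network behaves like a linear model in the random features $\nabla_w f(\cdot\,; w_0)$, and approximation rates can be stated in terms of this kernel.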

Approximation power of random neural networks

no code implementations • 18 Jun 2019 • Bolton Bailey, Ziwei Ji, Matus Telgarsky, Ruicheng Xian

This paper investigates the approximation power of three types of random neural networks: (a) infinite width networks, with weights following an arbitrary distribution; (b) finite width networks obtained by subsampling the preceding infinite width networks; (c) finite width networks obtained by starting with standard Gaussian initialization, and then adding a vanishingly small correction to the weights.
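A minimal sketch of the finite-width random-feature setting in (b)/(c): hidden weights are sampled once and frozen, and only the output layer is fit. The Gaussian sampling, ReLU activation, width, and ridge solver here are illustrative choices, not the paper's construction.

```python
import numpy as np

def random_relu_features(X, width, rng):
    """Frozen random hidden layer: Gaussian weights and biases, ReLU activation."""
    d = X.shape[1]
    W = rng.standard_normal((d, width)) / np.sqrt(d)
    b = rng.standard_normal(width)
    return np.maximum(X @ W + b, 0.0)

def fit_output_layer(X, y, width=512, ridge=1e-3, seed=0):
    """Fit only the output layer on top of the random features (ridge regression)."""
    rng = np.random.default_rng(seed)
    H = random_relu_features(X, width, rng)
    # Closed-form ridge solution for the output weights.
    A = H.T @ H + ridge * np.eye(width)
    return np.linalg.solve(A, H.T @ y)
```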
