Search Results for author: Qijia Jiang

Found 7 papers, 1 paper with code

From Estimation to Sampling for Bayesian Linear Regression with Spike-and-Slab Prior

no code implementations · 9 Jul 2023 · Qijia Jiang

We consider Bayesian linear regression with sparsity-inducing prior and design efficient sampling algorithms leveraging posterior contraction properties.

Regression
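For intuition about the prior involved, here is a minimal sketch (not from the paper; the function name and parameters are illustrative) of drawing a coefficient vector from a spike-and-slab prior, where each coefficient is exactly zero with probability 1 - q (the spike) and Gaussian with standard deviation tau otherwise (the slab):

```python
import numpy as np

def sample_spike_and_slab(d, q=0.1, tau=1.0, rng=None):
    """Draw one coefficient vector from a spike-and-slab prior:
    beta_j = 0 with probability 1 - q, else beta_j ~ N(0, tau^2)."""
    rng = np.random.default_rng(rng)
    active = rng.random(d) < q            # inclusion indicators
    beta = np.zeros(d)
    beta[active] = rng.normal(0.0, tau, size=active.sum())
    return beta

beta = sample_spike_and_slab(d=1000, q=0.05, rng=0)
print((beta != 0).mean())   # fraction of nonzeros, roughly q on average
```

The induced sparsity (roughly a q fraction of nonzero coefficients) is the structural property that posterior contraction arguments typically exploit.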

On the Dissipation of Ideal Hamiltonian Monte Carlo Sampler

no code implementations · 15 Sep 2022 · Qijia Jiang

We report on what seems to be an intriguing connection between variable integration time and partial velocity refreshment of Ideal Hamiltonian Monte Carlo samplers, both of which can be used for reducing the dissipative behavior of the dynamics.
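The paper concerns the ideal (continuous-time) sampler; purely for intuition, here is an unadjusted discrete-time sketch (no Metropolis correction; all parameter choices are illustrative) of partial velocity refreshment, where the velocity between leapfrog trajectories is updated as v <- alpha * v + sqrt(1 - alpha^2) * xi rather than being fully resampled:

```python
import numpy as np

def leapfrog(x, v, grad_logp, step, n_steps):
    """Standard leapfrog integrator for Hamiltonian dynamics."""
    v = v + 0.5 * step * grad_logp(x)
    for _ in range(n_steps - 1):
        x = x + step * v
        v = v + step * grad_logp(x)
    x = x + step * v
    v = v + 0.5 * step * grad_logp(x)
    return x, v

def hmc_partial_refresh(x0, grad_logp, n_iter=500, step=0.1,
                        n_steps=10, alpha=0.7, rng=None):
    """Unadjusted HMC chain where the velocity is only partially
    refreshed between trajectories:
    v <- alpha * v + sqrt(1 - alpha^2) * xi,  xi ~ N(0, I)."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    v = rng.normal(size=x.shape)
    samples = []
    for _ in range(n_iter):
        v = alpha * v + np.sqrt(1 - alpha**2) * rng.normal(size=x.shape)
        x, v = leapfrog(x, v, grad_logp, step, n_steps)
        samples.append(x.copy())
    return np.array(samples)

# Standard Gaussian target: log p(x) = -||x||^2 / 2, so grad log p(x) = -x.
draws = hmc_partial_refresh(np.zeros(2), lambda x: -x, rng=1)
print(draws.mean(axis=0), draws.var(axis=0))
```

With alpha = 0 this recovers full velocity refreshment; alpha close to 1 retains momentum across trajectories, which is one way of reducing the dissipative behavior of the dynamics.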

Mirror Langevin Monte Carlo: the Case Under Isoperimetry

no code implementations · NeurIPS 2021 · Qijia Jiang

Motivated by the connection between sampling and optimization, we study a mirror descent analogue of Langevin dynamics and analyze three different discretization schemes, giving nonasymptotic convergence rate under functional inequalities such as Log-Sobolev in the corresponding metric.
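For illustration only (the mirror map, target, and step sizes below are toy choices, not taken from the paper), one Euler-type discretization of mirror Langevin dynamics steps in the dual variable y = grad phi(x), with noise scaled by the Hessian of the mirror map, and maps back through the inverse of grad phi:

```python
import numpy as np

def mirror_langevin_gaussian(n_iter=5000, h=0.05, rng=None):
    """Toy Euler-type discretization of mirror Langevin dynamics.
    Mirror map: phi(x) = cosh(x), so grad phi = sinh, Hess phi = cosh,
    and the inverse map is arcsinh.  Target: standard Gaussian,
    potential f(x) = x^2 / 2 with grad f(x) = x."""
    rng = np.random.default_rng(rng)
    x = 0.0
    out = np.empty(n_iter)
    for k in range(n_iter):
        # dual-space step: y = grad_phi(x) - h * grad_f(x) + noise,
        # with noise covariance 2h * Hess_phi(x)
        y = np.sinh(x) - h * x + np.sqrt(2 * h * np.cosh(x)) * rng.normal()
        x = np.arcsinh(y)        # back to primal space via (grad phi)^{-1}
        out[k] = x
    return out

draws = mirror_langevin_gaussian(rng=0)
print(draws.mean(), draws.var())   # near 0 and 1, up to discretization bias
```

Different ways of placing the drift and noise in the dual versus primal variables give different discretization schemes with different bias properties, which is the kind of comparison the paper carries out.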

Learning the Truth From Only One Side of the Story

no code implementations · 8 Jun 2020 · Heinrich Jiang, Qijia Jiang, Aldo Pacchiano

Learning under one-sided feedback (i.e., where we only observe the labels for examples we predicted positively on) is a fundamental problem in machine learning -- applications include lending and recommendation systems.

Recommendation Systems

Optimizing Black-box Metrics with Adaptive Surrogates

no code implementations · ICML 2020 · Qijia Jiang, Olaoluwa Adigun, Harikrishna Narasimhan, Mahdi Milani Fard, Maya Gupta

We address the problem of training models with black-box and hard-to-optimize metrics by expressing the metric as a monotonic function of a small number of easy-to-optimize surrogates.
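A minimal sketch of the general idea (hypothetical setup, not the paper's algorithm): probe the black-box metric with finite differences in surrogate space to estimate local weights, then descend the induced weighted combination of surrogates via the chain rule:

```python
import numpy as np

def adaptive_surrogate_descent(theta, surrogates, grads, metric,
                               lr=0.01, eps=1e-4, n_iter=500):
    """Minimize a black-box metric M(s_1(theta), ..., s_k(theta)) that is
    monotone in easy-to-optimize surrogates s_j.  Each step: estimate the
    local weights dM/ds_j by finite differences, then take a gradient step
    on the induced weighted surrogate."""
    for _ in range(n_iter):
        s = np.array([f(theta) for f in surrogates])
        base = metric(s)
        w = np.array([(metric(s + eps * np.eye(len(s))[j]) - base) / eps
                      for j in range(len(s))])          # local metric weights
        g = sum(wj * gj(theta) for wj, gj in zip(w, grads))  # chain rule
        theta -= lr * g
    return theta

# Toy example: two quadratic surrogates; metric = max is monotone in each.
surrogates = [lambda t: (t - 1.0) ** 2, lambda t: (t + 2.0) ** 2]
grads = [lambda t: 2 * (t - 1.0), lambda t: 2 * (t + 2.0)]
theta = adaptive_surrogate_descent(2.0, surrogates, grads, metric=max)
print(theta)   # near -0.5, where the two surrogates balance
```

The point of the construction is that the metric itself is never differentiated directly; only its local response to the surrogates is estimated.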

Complexity of Highly Parallel Non-Smooth Convex Optimization

no code implementations · NeurIPS 2019 · Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford

We consider optimization algorithms interacting with a highly parallel gradient oracle, that is, one that can answer $\mathrm{poly}(d)$ gradient queries in parallel.

Subgradient Descent Learns Orthogonal Dictionaries

1 code implementation · ICLR 2019 · Yu Bai, Qijia Jiang, Ju Sun

This paper concerns dictionary learning, i.e., sparse coding, a fundamental representation learning problem.

Dictionary Learning · Representation Learning
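The underlying nonsmooth formulation minimizes an l1 objective over the sphere; below is a minimal sketch of projected subgradient descent for that objective (the data setup, function name, and step-size schedule are illustrative choices):

```python
import numpy as np

def sphere_subgradient_l1(Y, n_iter=500, eta=0.1, rng=None):
    """Projected subgradient descent for min ||Y^T q||_1 over ||q|| = 1,
    the nonsmooth objective used to recover one row of an orthogonal
    dictionary from sparsely generated data Y."""
    rng = np.random.default_rng(rng)
    q = rng.normal(size=Y.shape[0])
    q /= np.linalg.norm(q)
    for _ in range(n_iter):
        g = Y @ np.sign(Y.T @ q) / Y.shape[1]   # subgradient of the l1 objective
        q = q - eta * g
        q /= np.linalg.norm(q)                  # project back onto the sphere
        eta *= 0.99                             # diminishing step size
    return q

# Toy data: sparse coefficients under an identity (orthogonal) dictionary.
rng = np.random.default_rng(0)
Y = rng.normal(size=(10, 5000)) * (rng.random((10, 5000)) < 0.3)
q = sphere_subgradient_l1(Y, rng=0)
```

With data generated this way, the iterate is expected to align with a signed coordinate axis, i.e., with a row of the (identity) dictionary; repeating with deflation or multiple restarts recovers the remaining rows.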
