no code implementations • 1 Feb 2024 • Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, Jose Miguel Hernandez Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A. Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang
In the current landscape of deep learning research, there is a predominant emphasis on achieving high predictive accuracy in supervised tasks involving large image and language datasets.
no code implementations • 6 Apr 2023 • Edric Tam, David Dunson
We propose to use the Fiedler value of the neural network's underlying graph as a tool for regularization.
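The Fiedler value is the second-smallest eigenvalue of a graph's Laplacian, also known as its algebraic connectivity. As a minimal sketch of how it is computed (not the paper's regularizer itself, and the path-graph adjacency below is purely illustrative):

```python
import numpy as np

def fiedler_value(adjacency: np.ndarray) -> float:
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
    return float(eigenvalues[1])

# Path graph on 3 nodes; its Laplacian eigenvalues are 0, 1, 3
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(fiedler_value(A))  # → 1.0
```

Larger Fiedler values indicate better-connected graphs, which is what makes the quantity a plausible handle for regularizing a network's connectivity structure.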
no code implementations • 1 Feb 2023 • Tao Tang, Simon Mak, David Dunson
A widely-used emulator is the Gaussian process (GP), which provides a flexible framework for efficient prediction and uncertainty quantification.
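A textbook GP emulator conditions a Gaussian prior over functions on observed simulator runs, yielding both a predictive mean and a variance. A self-contained sketch with a squared-exponential kernel (the kernel choice and lengthscale here are illustrative assumptions, not the paper's):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel between two sets of points."""
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at X_test."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss - v.T @ v)
    return mean, var

X_train = np.linspace(0.0, 5.0, 8)[:, None]
y_train = np.sin(X_train).ravel()
mean, var = gp_predict(X_train, y_train, np.array([[2.5]]))
```

The variance term is what gives the GP its built-in uncertainty quantification: it shrinks to (near) zero at training inputs and grows away from them.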
1 code implementation • 8 Dec 2022 • Yizi Zhang, Meimei Liu, Zhengwu Zhang, David Dunson
We applied the proposed model to data from the Adolescent Brain Cognitive Development (ABCD) study and the Human Connectome Project (HCP) to investigate how our motion-invariant connectomes facilitate understanding of the brain network and its relationship with cognition.
1 code implementation • 10 Oct 2022 • Haoming Yang, Steven Winter, Zhengwu Zhang, David Dunson
One of the central problems in neuroscience is understanding how brain structure relates to function.
1 code implementation • 28 Jan 2022 • Edric Tam, David Dunson
For comparing graphs that have no ambiguities due to basis symmetries (i.e., the spectra are simple), we show that the ELD becomes a natural pseudo-metric enjoying desirable properties such as invariance under graph isomorphism.

1 code implementation • 9 Jul 2021 • Ruda Zhang, Simon Mak, David Dunson
In PROM, each parameter point can be associated with a subspace, which is used for Petrov-Galerkin projections of large system matrices.
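A Petrov-Galerkin projection compresses a large linear system A x = b onto a low-dimensional subspace: a trial basis V carries the reduced solution and a (generally different) test basis W enforces the residual conditions. A hedged sketch, with randomly generated A, b, and basis dimensions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 10
A = rng.normal(size=(n, n)) + n * np.eye(n)  # large, well-conditioned system matrix
b = rng.normal(size=n)

# Trial basis V (orthonormal columns); test basis W spanning A @ V,
# a common Petrov-Galerkin choice (W = V would be plain Galerkin)
V, _ = np.linalg.qr(rng.normal(size=(n, r)))
W, _ = np.linalg.qr(A @ V)

# Project the n x n system down to r x r, solve, and lift back
A_r = W.T @ A @ V
b_r = W.T @ b
x_r = np.linalg.solve(A_r, b_r)
x_approx = V @ x_r  # approximate full-space solution
```

The expensive n-dimensional solve is replaced by an r-dimensional one; in a PROM, the subspace (here a random V) would instead be adapted to the parameter point.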
no code implementations • 23 Oct 2020 • Sean Plummer, Shuang Zhou, Anirban Bhattacharya, David Dunson, Debdeep Pati
More recently, transformation-based models have been used in variational inference (VI) to construct flexible implicit families of variational distributions.
no code implementations • 18 Aug 2020 • Deborshee Sen, Theodore Papamarkou, David Dunson
We attempt to solve these problems by deploying Markov chain Monte Carlo (MCMC) sampling algorithms for Bayesian inference in ANN models with latent variables.
no code implementations • 17 Aug 2020 • Debolina Paul, Saptarshi Chakraborty, Didong Li, David Dunson
In a rich variety of real data clustering applications, PEA is shown to do as well as k-means for simple datasets, while dramatically improving performance in more complex settings.
1 code implementation • 10 Apr 2020 • Austin Talbot, David Dunson, Kafui Dzirasa, David Carlson
Targeted stimulation of the brain has the potential to treat mental illnesses.
no code implementations • ICML 2020 • Edric Tam, David Dunson
We propose to use the Fiedler value of the neural network's underlying graph as a tool for regularization.
1 code implementation • 27 Dec 2019 • Kelly R. Moran, David Dunson, Amy H. Herring
Our model also enables the prediction of chemical dose-response profiles based on chemical structure (that is, without in vivo or in vitro testing) by taking advantage of a large database of chemicals that have already been tested for toxicity in HTS programs.
Applications
1 code implementation • 20 Aug 2019 • Kelly R. Moran, Elizabeth L. Turner, David Dunson, Amy H. Herring
In low-resource settings where vital registration of death is not routine, it is often of critical interest to determine and study the cause of death (COD) for individuals and the cause-specific mortality fraction (CSMF) for populations.
Applications
no code implementations • 24 Apr 2019 • Xu Zhu, David Dunson
To the best of our knowledge, this is the first analysis in the model-free setting whose established regret matches the lower bound up to a logarithmic factor.
no code implementations • 21 May 2018 • Jieren Xu, Yitong Li, Haizhao Yang, David Dunson, Ingrid Daubechies
This paper proposes a novel kernel-based optimization scheme to handle analysis tasks such as signal spectral estimation and single-channel source separation of 1D non-stationary oscillatory data.
no code implementations • 15 Feb 2018 • Jun Lu, Meng Li, David Dunson
Dirichlet process mixture (DPM) models tend to produce many small clusters regardless of whether they are needed to accurately characterize the data; this is particularly true for large data sets.
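The small-cluster behavior is visible already at the level of the DPM's clustering prior, which can be simulated via the Chinese restaurant process: each new observation joins an existing cluster with probability proportional to its size, or starts a new one with probability proportional to the concentration alpha. A hedged prior-level sketch (the n and alpha values are illustrative, not from the paper):

```python
import numpy as np

def crp_partition(n, alpha, rng):
    """Simulate cluster sizes under a Chinese restaurant process prior."""
    counts = []
    for _ in range(n):
        # Join cluster k w.p. proportional to counts[k]; new cluster w.p. ∝ alpha
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)
        else:
            counts[table] += 1
    return counts

rng = np.random.default_rng(1)
counts = crp_partition(10_000, alpha=1.0, rng=rng)
print(len(counts), sorted(counts, reverse=True)[:3])
```

The expected number of clusters grows like alpha * log(n), and the size distribution is heavily skewed: a few large clusters plus a tail of tiny ones, which is the phenomenon the snippet above describes.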
no code implementations • 3 Jan 2018 • Mu Niu, Pokman Cheung, Lizhen Lin, Zhenwen Dai, Neil Lawrence, David Dunson
in-GPs respect the potentially complex boundary or interior conditions as well as the intrinsic geometry of the spaces.
1 code implementation • 23 May 2017 • Akihiko Nishimura, David Dunson, Jianfeng Lu
Hamiltonian Monte Carlo has emerged as a standard tool for posterior computation.
Computation
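The standard HMC recipe alternates a momentum refresh, a leapfrog simulation of Hamiltonian dynamics, and a Metropolis accept/reject step. A minimal generic sketch targeting a standard normal (step size and trajectory length are illustrative defaults, not tuned values from the paper):

```python
import numpy as np

def hmc_step(q, log_prob, log_prob_grad, rng, step_size=0.2, n_leapfrog=10):
    """One Hamiltonian Monte Carlo transition from state q."""
    p = rng.normal(size=q.shape)  # refresh momentum
    q_new, p_new = q.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics
    p_new += 0.5 * step_size * log_prob_grad(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += step_size * p_new
        p_new += step_size * log_prob_grad(q_new)
    q_new += step_size * p_new
    p_new += 0.5 * step_size * log_prob_grad(q_new)
    # Metropolis correction on the Hamiltonian (potential + kinetic energy)
    h_old = -log_prob(q) + 0.5 * p @ p
    h_new = -log_prob(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < h_old - h_new else q

log_prob = lambda q: -0.5 * q @ q   # standard normal target
grad = lambda q: -q

rng = np.random.default_rng(0)
q, samples = np.zeros(1), []
for _ in range(2000):
    q = hmc_step(q, log_prob, grad, rng)
    samples.append(q[0])
```

The gradient-informed trajectories are what let HMC make large, high-acceptance moves in high dimensions, which is why it became the default for posterior computation.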
no code implementations • 26 Jan 2017 • Shuang Zhou, Debdeep Pati, Anirban Bhattacharya, David Dunson
In this article, we study rates of posterior contraction in univariate density estimation for a class of non-linear latent variable models where unobserved U(0, 1) latent variables are related to the response variables via a random non-linear regression with an additive error.
Statistics Theory
no code implementations • NeurIPS 2016 • Xiangyu Wang, David Dunson, Chenlei Leng
The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space).
no code implementations • 7 Jun 2015 • Xiangyu Wang, David Dunson, Chenlei Leng
Ordinary least squares (OLS) is the default method for fitting linear models, but is not applicable for problems with dimensionality larger than the sample size.
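The failure mode is concrete: with p > n the Gram matrix X'X has rank at most n, so the normal equations have no unique solution. A short sketch of the breakdown and of the standard ridge remedy (dimensions and the penalty value are illustrative; this is not the paper's proposed method):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 50  # more features than samples
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# X'X is 50 x 50 but has rank at most n = 10, so OLS is not identifiable
gram_rank = np.linalg.matrix_rank(X.T @ X)
print(gram_rank)  # → 10

# Ridge regression: adding lam * I restores a unique, solvable system
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print(beta_ridge.shape)  # → (50,)
```

Any method for this regime must either regularize as above or screen features down below the sample size before fitting.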
no code implementations • NeurIPS 2014 • Xiangyu Wang, Peichao Peng, David Dunson
For massive data sets, efficient computation commonly relies on distributed algorithms that store and process subsets of the data on different machines, minimizing communication costs.
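The communication-cost point can be made with the simplest possible example: when a statistic decomposes into per-shard summaries, each machine ships a few numbers rather than its data. A toy sketch for the global mean (the shard count and data are illustrative; the paper's setting is of course more involved):

```python
import numpy as np

def distributed_mean(shards):
    """Combine per-machine (sum, count) summaries into a global mean.

    Only two numbers per machine cross the network, not the raw data.
    """
    total = sum(float(s.sum()) for s in shards)
    count = sum(s.size for s in shards)
    return total / count

rng = np.random.default_rng(0)
data = rng.normal(size=1000)
shards = np.array_split(data, 4)  # simulate 4 machines holding subsets
result = distributed_mean(shards)
```

Statistics without such a decomposition (quantiles, posteriors) are where the algorithmic difficulty, and most of the literature, lies.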
no code implementations • 22 Apr 2013 • Bruno Cornelis, Yun Yang, Joshua T. Vogelstein, Ann Dooms, Ingrid Daubechies, David Dunson
The preservation of our cultural heritage is of paramount importance.