1 code implementation • 9 Feb 2024 • Michael S. Yao, Yimeng Zeng, Hamsa Bastani, Jacob Gardner, James C. Gee, Osbert Bastani
To address this limitation, we propose Generative Adversarial Bayesian Optimization (GABO) with adaptive source critic regularization: a task-agnostic Bayesian optimization framework that uses a Lipschitz-bounded source critic model to constrain the optimization trajectory to regions where the surrogate function is reliable.
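The abstract describes the idea but includes no code here; a minimal sketch of source-critic-regularized acquisition might look like the following, where all function names are hypothetical and a norm-clipped linear critic stands in for the trained neural-network critic the paper actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(x):
    # Stand-in surrogate objective (hypothetical quadratic).
    return -np.sum((x - 0.5) ** 2, axis=-1)

def critic(x, w, b):
    # 1-Lipschitz linear critic: higher scores mean "looks in-distribution".
    # Clipping the weight norm bounds the Lipschitz constant, WGAN-style.
    w = w / max(1.0, np.linalg.norm(w))
    return x @ w + b

def regularized_acquisition(x, w, b, alpha=0.1):
    # Reward candidates the critic considers close to the offline data,
    # discouraging steps into regions where the surrogate is unreliable.
    return surrogate(x) + alpha * critic(x, w, b)

# Score a pool of candidates in [0, 1]^2 and pick the best one.
candidates = rng.uniform(0, 1, size=(256, 2))
w, b = rng.normal(size=2), 0.0
scores = regularized_acquisition(candidates, w, b)
best = candidates[np.argmax(scores)]
```

The key design choice this illustrates is that the critic enters the acquisition as a penalty/reward term rather than a hard constraint, so the trade-off is tunable via `alpha`.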
2 code implementations • 15 Feb 2023 • Alexander Shypula, Aman Madaan, Yimeng Zeng, Uri Alon, Jacob Gardner, Milad Hashemi, Graham Neubig, Parthasarathy Ranganathan, Osbert Bastani, Amir Yazdanbakhsh
Next, we propose a broad range of adaptation strategies for code optimization: for prompting, retrieval-based few-shot prompting and chain-of-thought; for finetuning, performance-conditioned generation and synthetic data augmentation based on self-play.
1 code implementation • 9 Feb 2023 • Yinjun Wu, Adam Stein, Jacob Gardner, Mayur Naik
In this paper, we study how to learn to identify such a meta sample set from a large, imperfect training set; the identified set is subsequently cleaned and used to optimize performance in the meta re-weighting setting.
1 code implementation • 8 Feb 2023 • Natalie Maus, Patrick Chao, Eric Wong, Jacob Gardner
Prompting interfaces allow users to quickly adjust the output of generative models in both vision and language.
1 code implementation • 20 Oct 2022 • Natalie Maus, Kaiwen Wu, David Eriksson, Jacob Gardner
Bayesian optimization (BO) is a popular approach for sample-efficient optimization of black-box objective functions.
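For readers unfamiliar with the baseline this abstract refers to, a minimal sketch of a standard BO loop (GP surrogate plus expected improvement) on a hypothetical 1-D test function follows; all names are illustrative and not from the paper's code:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def f(x):
    # Hypothetical black-box objective to maximize; optimum at x = 0.3.
    return -(x - 0.3) ** 2

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # GP posterior mean and standard deviation at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    # EI acquisition for maximization.
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

X = rng.uniform(0, 1, 3)          # initial design
y = f(X)
grid = np.linspace(0, 1, 200)     # candidate pool
for _ in range(10):               # BO loop: fit, acquire, evaluate
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
```

The sample efficiency the abstract mentions comes from the loop spending each expensive evaluation `f(x_next)` at the point the acquisition function deems most promising, rather than sampling the domain uniformly.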
no code implementations • 31 May 2019 • Martin Jankowiak, Jacob Gardner
We construct flexible likelihoods for multi-output Gaussian process models that leverage neural networks as components.
2 code implementations • CVPR 2017 • Paul Upchurch, Jacob Gardner, Geoff Pleiss, Robert Pless, Noah Snavely, Kavita Bala, Kilian Weinberger
We propose Deep Feature Interpolation (DFI), a new data-driven baseline for automatic high-resolution image transformation.
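The core DFI operation — shifting an input's features along the difference between the mean feature vectors of two attribute sets — can be sketched as below; a trivial flattening map stands in for the deep convolutional features (and their inversion) that the paper actually uses, and all data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

def phi(imgs):
    # Stand-in feature extractor (hypothetical); DFI uses deep CNN features.
    return imgs.reshape(len(imgs), -1)

# Toy "images": the source set lacks the target attribute, the target set has it.
source_set = rng.normal(0.0, 1.0, size=(50, 8, 8))
target_set = rng.normal(1.0, 1.0, size=(50, 8, 8))

# Attribute direction: difference of mean feature vectors of the two sets.
w = phi(target_set).mean(axis=0) - phi(source_set).mean(axis=0)

# Interpolate the input's features along w, then map back to image space.
x = rng.normal(0.0, 1.0, size=(1, 8, 8))
alpha = 1.0                                   # interpolation strength
edited_features = phi(x)[0] + alpha * w
edited_image = edited_features.reshape(8, 8)  # trivial inverse of phi here
```

In the actual method the inverse mapping from edited deep features back to pixels is the hard part and is solved by optimization; the sketch only shows the interpolation step itself.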
no code implementations • NeurIPS 2015 • Jacob Gardner, Gustavo Malkomes, Roman Garnett, Kilian Q. Weinberger, Dennis Barbour, John P. Cunningham
Using this and a previously published model for healthy responses, the proposed method is shown to be capable of diagnosing the presence or absence of NIHL with drastically fewer samples than existing approaches.