no code implementations • 6 May 2024 • Xingyou Song, Yingtao Tian, Robert Tjarko Lange, Chansoo Lee, Yujin Tang, Yutian Chen
Their incorporation has been rapid and transformative, marking a significant paradigm shift in the field of machine learning research.
1 code implementation • 22 Feb 2024 • Xingyou Song, Oscar Li, Chansoo Lee, Bangding Yang, Daiyi Peng, Sagi Perel, Yutian Chen
Across the broad landscape of experimental design, regression has been a powerful tool for accurately predicting the outcome metrics of a system or model given a set of parameters, but it has traditionally been restricted to methods applicable only to a specific task.
no code implementations • 26 Aug 2022 • Jonathan Lorraine, Nihesh Anderson, Chansoo Lee, Quentin de Laroussilhe, Mehadi Hassen
However, we cannot test the changes on production tasks.
1 code implementation • 27 Jul 2022 • Xingyou Song, Sagi Perel, Chansoo Lee, Greg Kochanski, Daniel Golovin
Vizier is the de facto blackbox and hyperparameter optimization service across Google, having optimized some of Google's largest products and research efforts.
1 code implementation • 7 Jul 2022 • Zi Wang, George E. Dahl, Kevin Swersky, Chansoo Lee, Zelda Mariet, Zachary Nado, Justin Gilmer, Jasper Snoek, Zoubin Ghahramani
Contrary to a common belief that BO is suited to optimizing black-box functions, it actually requires domain knowledge of the characteristics of those functions to deploy BO successfully.
1 code implementation • 26 May 2022 • Yutian Chen, Xingyou Song, Chansoo Lee, Zi Wang, Qiuyi Zhang, David Dohan, Kazuya Kawakami, Greg Kochanski, Arnaud Doucet, Marc'Aurelio Ranzato, Sagi Perel, Nando de Freitas
Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution.
4 code implementations • 16 Sep 2021 • Zi Wang, George E. Dahl, Kevin Swersky, Chansoo Lee, Zachary Nado, Justin Gilmer, Jasper Snoek, Zoubin Ghahramani
Contrary to a common expectation that BO is suited to optimizing black-box functions, it actually requires domain knowledge about those functions to deploy BO successfully.
no code implementations • ICLR 2020 • Daniel Golovin, John Karro, Greg Kochanski, Chansoo Lee, Xingyou Song, Qiuyi Zhang
Zeroth-order optimization is the process of minimizing an objective $f(x)$, given oracle access to evaluations at adaptively chosen inputs $x$.
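To illustrate the oracle model: a zeroth-order method may query $f(x)$ at chosen points but never sees gradients. A minimal sketch is finite-difference gradient estimation driving descent (the function names here are illustrative, not from the paper, which studies a broader class of such methods):

```python
import numpy as np

def fd_gradient(f, x, h=1e-5):
    """Estimate the gradient of f at x with central finite differences,
    using only zeroth-order (function-value) oracle queries."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def zeroth_order_descent(f, x0, lr=0.1, steps=200):
    """Gradient descent driven entirely by function evaluations."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= lr * fd_gradient(f, x)
    return x

# Minimize a simple quadratic whose minimum is at (1, -2).
quadratic = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_star = zeroth_order_descent(quadratic, [5.0, 5.0])
```

Each descent step costs $2d$ oracle calls for a $d$-dimensional input, which is why query efficiency is the central concern in this setting.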
no code implementations • NeurIPS 2019 • Jacob Abernethy, Young Hun Jung, Chansoo Lee, Audra McMillan, Ambuj Tewari
In this paper, we use differential privacy as a lens to examine online learning in both full and partial information settings.
no code implementations • NeurIPS 2015 • Jacob Abernethy, Chansoo Lee, Ambuj Tewari
We define a novel family of algorithms for the adversarial multi-armed bandit problem, and provide a simple analysis technique based on convex smoothing.
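As a representative instance of this viewpoint: the classical EXP3 algorithm can be derived by smoothing the max function with an entropic regularizer, yielding a softmax distribution over arms. A minimal EXP3 sketch (shown for illustration; the paper's family of smoothings is more general, and the parameter values here are arbitrary):

```python
import numpy as np

def exp3(rewards, eta=0.01, gamma=0.05, rng=None):
    """EXP3 for the adversarial multi-armed bandit.

    `rewards` is a (T, K) array with entries in [0, 1]; only the pulled
    arm's reward is revealed each round.  The softmax step below is an
    instance of convex (entropic) smoothing of the max function.
    """
    rng = rng or np.random.default_rng(0)
    T, K = rewards.shape
    estimates = np.zeros(K)  # importance-weighted cumulative reward estimates
    total = 0.0
    for t in range(T):
        w = np.exp(eta * (estimates - estimates.max()))
        p = (1.0 - gamma) * w / w.sum() + gamma / K  # mix in exploration
        arm = rng.choice(K, p=p)
        r = rewards[t, arm]
        estimates[arm] += r / p[arm]  # unbiased estimator under bandit feedback
        total += r
    return total

# Arm 1 has the highest mean reward; EXP3 should concentrate on it.
rng = np.random.default_rng(1)
T, K = 5000, 3
means = np.array([0.3, 0.8, 0.5])
rewards = (rng.random((T, K)) < means).astype(float)
total_reward = exp3(rewards)
```

The importance-weighted estimates keep the update unbiased even though only one arm's reward is observed per round, and the exploration floor `gamma / K` bounds the variance of those estimates.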
no code implementations • NeurIPS 2016 • Satyen Kale, Chansoo Lee, Dávid Pál
We show that several online combinatorial optimization problems that admit efficient no-regret algorithms become computationally hard in the sleeping setting where a subset of actions becomes unavailable in each round.
no code implementations • 10 Jul 2015 • Jacob Abernethy, Chansoo Lee, Ambuj Tewari
Smoothing the maximum eigenvalue function is important for applications in semidefinite optimization and online learning.
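One classical way to smooth $\lambda_{\max}$, shown here for illustration (the paper's own construction, which involves random perturbations, may differ): for a symmetric matrix $X \in \mathbb{R}^{n \times n}$ and $\mu > 0$, define
$$ f_\mu(X) = \mu \log \operatorname{tr} \exp(X/\mu). $$
Since $\operatorname{tr}\exp(X/\mu) = \sum_i e^{\lambda_i/\mu}$ lies between $e^{\lambda_{\max}(X)/\mu}$ and $n\, e^{\lambda_{\max}(X)/\mu}$, this satisfies
$$ \lambda_{\max}(X) \;\le\; f_\mu(X) \;\le\; \lambda_{\max}(X) + \mu \log n, $$
and $f_\mu$ is smooth with gradient $\nabla f_\mu(X) = \exp(X/\mu) / \operatorname{tr}\exp(X/\mu)$, a density matrix. Smaller $\mu$ means a tighter approximation but a less smooth function, the trade-off at the heart of such schemes.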
no code implementations • 23 May 2014 • Jacob Abernethy, Chansoo Lee, Abhinav Sinha, Ambuj Tewari
We present a new optimization-theoretic approach to analyzing Follow-the-Leader style algorithms, particularly in the setting where perturbations are used as a tool for regularization.
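A minimal Follow-the-Perturbed-Leader sketch for prediction with expert advice, illustrating the perturbation-as-regularization idea (function names and the perturbation scale are illustrative choices, not the paper's specific construction):

```python
import numpy as np

def ftpl(losses, scale=10.0, rng=None):
    """Follow-the-Perturbed-Leader for prediction with expert advice.

    Each round, play the expert whose perturbed cumulative loss is
    smallest; the fresh random perturbation plays the role of an
    implicit regularizer, stabilizing the leader choice.
    """
    rng = rng or np.random.default_rng(0)
    T, K = losses.shape
    cumulative = np.zeros(K)
    total = 0.0
    for t in range(T):
        noise = rng.exponential(scale, size=K)    # fresh perturbation per round
        expert = int(np.argmin(cumulative - noise))
        total += losses[t, expert]                # suffer this round's loss
        cumulative += losses[t]                   # then observe all losses
    return total

# Expert 0 has the lowest mean loss; FTPL should incur small regret.
rng = np.random.default_rng(2)
T, K = 2000, 4
means = np.array([0.2, 0.5, 0.5, 0.5])
losses = (rng.random((T, K)) < means).astype(float)
alg_loss = ftpl(losses)
best_loss = losses.sum(axis=0).min()
```

Unperturbed Follow-the-Leader can switch experts every round on adversarial sequences and suffer linear regret; the perturbation makes switching rare once one expert pulls ahead, which is exactly the stabilizing effect a regularizer provides.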