1 code implementation • 23 Dec 2020 • Vinod Nair, Sergey Bartunov, Felix Gimeno, Ingrid von Glehn, Pawel Lichocki, Ivan Lobov, Brendan O'Donoghue, Nicolas Sonnerat, Christian Tjandraatmadja, Pengming Wang, Ravichandra Addanki, Tharindi Hapuarachchi, Thomas Keck, James Keeling, Pushmeet Kohli, Ira Ktena, Yujia Li, Oriol Vinyals, Yori Zwols
Our approach constructs two neural-network-based components, Neural Diving and Neural Branching, for use in a base MIP solver such as SCIP.
1 code implementation • NeurIPS 2020 • Arthur Delarue, Ross Anderson, Christian Tjandraatmadja
We develop a framework for value-function-based deep reinforcement learning with a combinatorial action space, in which the action selection problem is explicitly formulated as a mixed-integer optimization problem.
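To make the action-selection problem concrete, here is a toy sketch (not from the paper; the function name, additive value model, and budget constraint are all illustrative assumptions) of picking the best binary action vector under a cardinality budget by enumeration. The paper's contribution is to solve this argmax as a mixed-integer program instead, which scales to combinatorial action spaces far too large to enumerate.

```python
from itertools import product

def select_action(q_values, budget):
    """Hypothetical illustration: choose the binary action vector that
    maximizes an additive per-component value estimate, subject to a
    budget on how many components may be active. Enumeration is only
    feasible for tiny action spaces; a MIP formulation replaces it."""
    n = len(q_values)
    best, best_val = None, float("-inf")
    for a in product([0, 1], repeat=n):
        if sum(a) <= budget:
            val = sum(q * ai for q, ai in zip(q_values, a))
            if val > best_val:
                best, best_val = a, val
    return best, best_val
```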
no code implementations • NeurIPS 2020 • Christian Tjandraatmadja, Ross Anderson, Joey Huchette, Will Ma, Krunal Patel, Juan Pablo Vielma
We improve the effectiveness of propagation- and linear-optimization-based neural network verification algorithms with a new tightened convex relaxation for ReLU neurons.
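For context, the classical "triangle" relaxation of a single ReLU neuron $y = \max(0, x)$ with pre-activation bounds $L < 0 < U$, which such verification algorithms typically start from and which this work tightens by exploiting the multivariate input to the neuron, is:

$$
y \ge 0, \qquad y \ge x, \qquad y \le \frac{U\,(x - L)}{U - L}.
$$

These three linear inequalities describe the convex hull of the ReLU graph over $[L, U]$ when $x$ is treated as a single variable; the paper's tighter relaxation goes beyond this single-variable view.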
no code implementations • ICLR 2020 • Moonkyung Ryu, Yin-Lam Chow, Ross Anderson, Christian Tjandraatmadja, Craig Boutilier
Value-based reinforcement learning (RL) methods like Q-learning have shown success in a variety of domains.
no code implementations • 20 Nov 2018 • Ross Anderson, Joey Huchette, Christian Tjandraatmadja, Juan Pablo Vielma
We present an ideal mixed-integer programming (MIP) formulation for a rectified linear unit (ReLU) appearing in a trained neural network.
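As background (a standard encoding, not the paper's ideal formulation itself), a ReLU $y = \max(0, w^\top x + b)$ with pre-activation bounds $L \le w^\top x + b \le U$ and $L < 0 < U$ admits the big-$M$ MIP formulation with a binary activation indicator $z$:

$$
y \ge w^\top x + b, \quad y \ge 0, \quad y \le w^\top x + b - L(1 - z), \quad y \le U z, \quad z \in \{0, 1\}.
$$

This formulation is valid but not ideal: its linear relaxation can be loose. The paper strengthens it into an ideal formulation whose relaxation is as tight as possible.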
1 code implementation • 5 Nov 2018 • Ross Anderson, Joey Huchette, Christian Tjandraatmadja, Juan Pablo Vielma
We present strong convex relaxations for high-dimensional piecewise linear functions that correspond to trained neural networks.
no code implementations • 17 Jun 2018 • Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam
The holy grail of deep learning is to come up with an automatic method to design optimal architectures for different applications.
no code implementations • 6 Nov 2017 • Thiago Serra, Christian Tjandraatmadja, Srikumar Ramalingam
We investigate the complexity of deep neural networks (DNN) that represent piecewise linear (PWL) functions.
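One common complexity measure in this line of work is the number of linear regions of the PWL function a ReLU network computes. The following toy sketch (illustrative only; the function name and sampling scheme are assumptions, not from the paper) lower-bounds that count for a one-hidden-layer network by counting distinct hidden-unit on/off patterns over sample inputs, since each activation pattern corresponds to one linear piece.

```python
def count_activation_patterns(weights, biases, grid):
    """Hypothetical illustration: lower-bound the number of linear
    regions of a one-hidden-layer ReLU network by counting distinct
    hidden-unit activation patterns over a set of sample inputs."""
    patterns = set()
    for x in grid:
        pattern = tuple(
            int(sum(w_i * x_i for w_i, x_i in zip(w, x)) + b > 0)
            for w, b in zip(weights, biases)
        )
        patterns.add(pattern)
    return len(patterns)
```

For two hidden units whose hyperplanes are the coordinate axes in 2D, sampling one point per quadrant recovers the maximum of four regions.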