1 code implementation • 6 Dec 2023 • Zheqing Zhu, Rodrigo de Salvo Braz, Jalaj Bhandari, Daniel Jiang, Yi Wan, Yonathan Efroni, Liyuan Wang, Ruiyang Xu, Hongbo Guo, Alex Nikulkov, Dmytro Korenkevych, Urun Dogan, Frank Cheng, Zheng Wu, Wanqiao Xu
Reinforcement Learning (RL) offers a versatile framework for achieving long-term goals.
no code implementations • 13 Oct 2023 • Dmytro Korenkevych, Frank Cheng, Artsiom Balakir, Alex Nikulkov, Lingnan Gao, Zhihao Cen, Zuobing Xu, Zheqing Zhu
We use a hybrid agent architecture that combines arbitrary base policies with deep neural networks, where only the optimized base policy parameters are eventually deployed, and the neural network part is discarded after training.
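A minimal sketch of the hybrid-architecture idea described above, under the assumption (not taken from the paper) that the base policy is a simple linear parametric rule and the neural network supplies an additive correction used only during training; the class and method names are hypothetical.

```python
import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    def __init__(self, obs_dim: int, base_params: torch.Tensor):
        super().__init__()
        # Interpretable base-policy parameters: the only part that would be deployed.
        self.base_params = nn.Parameter(base_params.clone())
        # Neural-network correction: used during training, discarded afterwards.
        self.correction = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def base_action(self, obs: torch.Tensor) -> torch.Tensor:
        # Hypothetical linear base policy: action = <base_params, obs>.
        return obs @ self.base_params

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Training-time action = base policy output + learned correction.
        return self.base_action(obs) + self.correction(obs).squeeze(-1)

policy = HybridPolicy(obs_dim=4, base_params=torch.zeros(4))
obs = torch.randn(8, 4)
train_action = policy(obs)                 # used while optimizing
deployed_action = policy.base_action(obs)  # only the base-policy part ships
```

Here only `base_params` would be exported after optimization, mirroring the idea that the neural-network component is a training-time aid rather than part of the deployed policy.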
no code implementations • 23 May 2023 • Ruiyang Xu, Jalaj Bhandari, Dmytro Korenkevych, Fan Liu, Yuchen He, Alex Nikulkov, Zheqing Zhu
Auction-based recommender systems are prevalent in online advertising platforms, but they are typically optimized to allocate recommendation slots based on immediate expected return metrics, neglecting the downstream effects of recommendations on user behavior.
1 code implementation • 27 Mar 2019 • Dmytro Korenkevych, A. Rupam Mahmood, Gautham Vasan, James Bergstra
We introduce a family of stationary autoregressive (AR) stochastic processes to facilitate exploration in continuous control domains.
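A minimal sketch of one simple member of such a family, an AR(1) exploration-noise process; the parameter names (`alpha`, `sigma`) are illustrative, not the paper's notation. Setting `alpha = 0` recovers i.i.d. Gaussian noise, while values near 1 give smooth, temporally correlated noise with the same stationary variance.

```python
import numpy as np

class AR1Noise:
    def __init__(self, dim: int, alpha: float = 0.8, sigma: float = 1.0, seed: int = 0):
        self.alpha = alpha
        self.sigma = sigma
        self.rng = np.random.default_rng(seed)
        # Start from the stationary distribution N(0, sigma^2 I).
        self.state = self.rng.normal(0.0, sigma, size=dim)

    def sample(self) -> np.ndarray:
        # Scaling the innovation by sqrt(1 - alpha^2) keeps the marginal
        # variance equal to sigma^2 at every step, i.e. the process is stationary.
        innovation = self.rng.normal(0.0, self.sigma, size=self.state.shape)
        self.state = self.alpha * self.state + np.sqrt(1.0 - self.alpha**2) * innovation
        return self.state

noise = AR1Noise(dim=2, alpha=0.9)
exploration_noise = [noise.sample() for _ in range(5)]  # smooth, correlated samples
```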
2 code implementations • 20 Sep 2018 • A. Rupam Mahmood, Dmytro Korenkevych, Gautham Vasan, William Ma, James Bergstra
The research community is now able to reproduce, analyze, and quickly build on these results thanks to open-source implementations of learning algorithms and simulated benchmark tasks.
2 code implementations • 19 Mar 2018 • A. Rupam Mahmood, Dmytro Korenkevych, Brent J. Komer, James Bergstra
Reinforcement learning is a promising approach to developing hard-to-engineer adaptive solutions for complex and diverse robotic tasks.
no code implementations • 14 Nov 2016 • Dmytro Korenkevych, Yanbo Xue, Zhengbing Bian, Fabian Chudak, William G. Macready, Jason Rolfe, Evgeny Andriyash
We argue that this relates to the fact that we are training a quantum rather than classical Boltzmann distribution in this case.