no code implementations • 7 Mar 2023 • David Berthelot, Arnaud Autef, Jierui Lin, Dian Ang Yap, Shuangfei Zhai, Siyuan Hu, Daniel Zheng, Walter Talbot, Eric Gu
Denoising diffusion models have demonstrated their proficiency in generative sampling.
no code implementations • 22 Jun 2022 • Chuan Wen, Jianing Qian, Jierui Lin, Jiaye Teng, Dinesh Jayaraman, Yang Gao
Across applications spanning supervised classification and sequential control, deep learning has been reported to find "shortcut" solutions that fail catastrophically under minor changes in the data distribution.
no code implementations • 29 Sep 2021 • Chuan Wen, Jianing Qian, Jierui Lin, Dinesh Jayaraman, Yang Gao
When operating in partially observed settings, it is important for a control policy to fuse information from a history of observations.
no code implementations • 11 Jun 2021 • Chuan Wen, Jierui Lin, Jianing Qian, Yang Gao, Dinesh Jayaraman
Imitation learning trains control policies by mimicking pre-recorded expert demonstrations.
no code implementations • NeurIPS 2020 • Chuan Wen, Jierui Lin, Trevor Darrell, Dinesh Jayaraman, Yang Gao
Imitation learning trains policies to map from input observations to the actions that an expert would choose.
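In its simplest form (behavioral cloning), this observation-to-action mapping reduces to supervised regression on expert data. A minimal sketch, assuming a toy linear policy and a hypothetical `behavioral_cloning` helper not taken from the paper:

```python
import numpy as np

def behavioral_cloning(observations, actions):
    """Fit a linear policy a = o @ W by least squares on expert (o, a) pairs."""
    W, *_ = np.linalg.lstsq(observations, actions, rcond=None)
    return W

# Toy expert demonstrations: the expert's action is 2x the observation.
obs = np.array([[1.0], [2.0], [3.0]])
acts = np.array([[2.0], [4.0], [6.0]])

W = behavioral_cloning(obs, acts)
policy = lambda o: o @ W
print(policy(np.array([[4.0]])))  # imitates the expert: ~[[8.0]]
```

The cloned policy simply reproduces the expert's mapping on the training distribution; the shortcut-learning failures studied in these papers arise precisely because such a policy may latch onto spurious input features rather than the expert's true decision rule.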
1 code implementation • 17 Jun 2020 • Jiayun Wang, Jierui Lin, Qian Yu, Runtao Liu, Yubei Chen, Stella X. Yu
Additionally, we propose a sketch standardization module to handle different sketch distortions and styles.
1 code implementation • 28 Nov 2019 • Jierui Lin, Min Du, Jian Liu
Although the incentive model for federated learning has not been fully developed, participants are expected to receive rewards, or the privilege of using the final global model, as compensation for the effort of training it.