no code implementations • 27 Feb 2024 • Daiqing Li, Aleks Kamko, Ehsan Akhgari, Ali Sabet, Linmiao Xu, Suhail Doshi
In this work, we share three insights for achieving state-of-the-art aesthetic quality in text-to-image generative models.
no code implementations • 10 Dec 2023 • Zelong Liu, Alexander Zhou, Arnold Yang, Alara Yilmaz, Maxwell Yoo, Mikey Sullivan, Catherine Zhang, James Grant, Daiqing Li, Zahi A. Fayad, Sean Huver, Timothy Deyer, Xueyan Mei
We show that synthetic auto-labeled data from RadImageGAN can significantly improve performance on four diverse downstream segmentation datasets, whether used to augment real training data or to provide pre-trained weights for fine-tuning.
no code implementations • ICCV 2023 • Daiqing Li, Huan Ling, Amlan Kar, David Acuna, Seung Wook Kim, Karsten Kreis, Antonio Torralba, Sanja Fidler
In this work, we introduce DreamTeacher, a self-supervised feature representation learning framework that uses generative networks to pre-train downstream image backbones.
no code implementations • CVPR 2023 • Seung Wook Kim, Bradley Brown, Kangxue Yin, Karsten Kreis, Katja Schwarz, Daiqing Li, Robin Rombach, Antonio Torralba, Sanja Fidler
We first train a scene auto-encoder to express a set of image and pose pairs as a neural field, represented as density and feature voxel grids that can be projected to produce novel views of the scene.
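The projection step above can be pictured as volume rendering over the voxel grids. The following is a minimal sketch, not the paper's implementation: it alpha-composites per-voxel features along a single camera ray through a density grid and a feature grid; the grid contents and the ray are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 16, 8                                    # grid resolution, feature channels
density = rng.uniform(size=(D, D, D))           # per-voxel density
features = rng.uniform(size=(D, D, D, C))       # per-voxel feature vectors

def render_ray(samples, step=1.0):
    """Composite features at integer voxel coordinates along one ray."""
    i, j, k = samples.T
    sigma = density[i, j, k]
    feat = features[i, j, k]
    alpha = 1.0 - np.exp(-sigma * step)           # per-sample opacity
    trans = np.cumprod(1.0 - alpha + 1e-10)       # accumulated transmittance
    trans = np.concatenate([[1.0], trans[:-1]])   # light reaching each sample
    weights = alpha * trans
    return (weights[:, None] * feat).sum(axis=0)  # rendered pixel feature

# A straight ray through the middle of the grid along the z axis.
z = np.arange(D)
ray = np.stack([np.full(D, D // 2), np.full(D, D // 2), z], axis=1)
pixel_feature = render_ray(ray)
```

Rendering features rather than colors is what lets a decoder network turn the projected grid into novel views of the scene.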
3 code implementations • 22 Sep 2022 • Jun Gao, Tianchang Shen, Zian Wang, Wenzheng Chen, Kangxue Yin, Daiqing Li, Or Litany, Zan Gojcic, Sanja Fidler
As several industries are moving towards modeling massive 3D virtual worlds, the need for content creation tools that can scale in terms of the quantity, quality, and diversity of 3D content is becoming evident.
no code implementations • CVPR 2022 • Rafid Mahmood, James Lucas, David Acuna, Daiqing Li, Jonah Philion, Jose M. Alvarez, Zhiding Yu, Sanja Fidler, Marc T. Law
Given a small training data set and a learning algorithm, how much more data is necessary to reach a target validation or test performance?
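A common way to frame this question, which is not necessarily the paper's exact estimator, is to fit a power law to validation error as a function of training-set size and invert it. The measurements below are hypothetical.

```python
import numpy as np

# Hypothetical measurements: validation error at a few training-set sizes.
sizes = np.array([1_000, 2_000, 4_000, 8_000])
errors = np.array([0.30, 0.24, 0.19, 0.15])

# Power-law model error ≈ a * n^b (b < 0): a linear fit in log-log space.
b, log_a = np.polyfit(np.log(sizes), np.log(errors), 1)
a = np.exp(log_a)

def predicted_error(n):
    return a * n ** b  # fitted exponent b is negative

# Invert the fit to estimate the data needed to reach a target error.
target = 0.10
n_needed = (target / a) ** (1.0 / b)
```

Extrapolating a fit like this beyond the measured sizes is exactly where such estimates get risky, which is part of what makes the question nontrivial.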
no code implementations • CVPR 2022 • Seung Wook Kim, Karsten Kreis, Daiqing Li, Antonio Torralba, Sanja Fidler
Modern image generative models show remarkable sample quality when trained on a single domain or class of objects.

no code implementations • CVPR 2022 • Daiqing Li, Huan Ling, Seung Wook Kim, Karsten Kreis, Adela Barriuso, Sanja Fidler, Antonio Torralba
By training an effective feature segmentation architecture on top of BigGAN, we turn BigGAN into a labeled dataset generator.
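The dataset-generator idea can be sketched in miniature: label a handful of generated pixels by hand, fit a light per-pixel head on the generator's features, then sample freely and auto-label at scale. This is an illustrative toy, not the paper's code; random features stand in for BigGAN's intermediate activations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat = 32
w_true = rng.normal(size=n_feat)        # hidden direction defining part labels

def generate(n):
    """Stand-in for sampling the GAN: per-pixel features plus true labels."""
    feats = rng.normal(size=(n, n_feat))
    labels = (feats @ w_true > 0).astype(float)
    return feats, labels

def fit_head(X, y, lr=0.1, steps=500):
    """Tiny logistic-regression segmentation head, trained by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Step 1: annotate only a handful of generated pixels by hand.
few_feats, few_labels = generate(64)
# Step 2: train the lightweight head on those few annotations.
w = fit_head(few_feats, few_labels)
# Step 3: sample freely and auto-label, yielding a synthetic labeled dataset.
new_feats, true_labels = generate(2000)
auto_labels = (new_feats @ w > 0).astype(float)
accuracy = (auto_labels == true_labels).mean()
```

The point of the construction is that the expensive asset, dense ground-truth labels, is replaced by a cheap head over features the generator already computes.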
1 code implementation • NeurIPS 2021 • Huan Ling, Karsten Kreis, Daiqing Li, Seung Wook Kim, Antonio Torralba, Sanja Fidler
EditGAN builds on a GAN framework that jointly models images and their semantic segmentations and requires only a handful of labeled examples, making it a scalable tool for editing.
no code implementations • CVPR 2021 • Daiqing Li, Junlin Yang, Karsten Kreis, Antonio Torralba, Sanja Fidler
Training deep networks with limited labeled data while achieving a strong generalization ability is key in the quest to reduce human annotation efforts.
no code implementations • 1 Sep 2020 • Daiqing Li, Amlan Kar, Nishant Ravikumar, Alejandro F. Frangi, Sanja Fidler
Since the model of geometry and material is disentangled from the imaging sensor, it can effectively be trained across multiple medical centers.
no code implementations • ICCV 2019 • Hang Chu, Daiqing Li, David Acuna, Amlan Kar, Maria Shugrina, Xinkai Wei, Ming-Yu Liu, Antonio Torralba, Sanja Fidler
We propose Neural Turtle Graphics (NTG), a novel generative model for spatial graphs, and demonstrate its applications in modeling city road layouts.
no code implementations • CVPR 2018 • Hang Chu, Daiqing Li, Sanja Fidler
The decoder consists of two layers: the first generates the verbal response and coarse facial expressions, while the second fills in subtle gestures, making the generated output smoother and more natural.