no code implementations • 1 Apr 2024 • Han Cai, Muyang Li, Zhuoyang Zhang, Qinsheng Zhang, Ming-Yu Liu, Song Han
In parallel to prior conditional control methods, CAN controls the image generation process by dynamically manipulating the weight of the neural network.
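The idea of conditioning through weights rather than activations can be sketched with a toy hypernetwork. All names below are illustrative, not from the paper's code: a condition embedding is mapped to the weight matrix of a layer, so different conditions change the computation itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_weight_generator(cond_dim, out_features, in_features):
    # Hypothetical generator: a single linear map from the condition
    # embedding to a full weight matrix for one layer.
    W = rng.standard_normal((cond_dim, out_features * in_features)) * 0.02
    def generate(cond):
        return (cond @ W).reshape(out_features, in_features)
    return generate

gen = make_weight_generator(cond_dim=8, out_features=4, in_features=16)
cond_a = rng.standard_normal(8)
cond_b = rng.standard_normal(8)
x = rng.standard_normal(16)

# The same input is processed with different, condition-generated
# weights, so the condition steers the layer's parameters rather than
# being concatenated to its activations.
y_a = gen(cond_a) @ x
y_b = gen(cond_b) @ x
```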
1 code implementation • 29 Feb 2024 • Muyang Li, Tianle Cai, Jiaxin Cao, Qinsheng Zhang, Han Cai, Junjie Bai, Yangqing Jia, Ming-Yu Liu, Kai Li, Song Han
To overcome this dilemma, we observe the high similarity between inputs at adjacent diffusion steps and propose displaced patch parallelism, which exploits the sequential nature of the diffusion process: pre-computed feature maps from the previous timestep are reused to provide context for the current step.
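A minimal 1-D sketch of the idea, with an illustrative `blur_with_context` standing in for a real network layer: each patch is processed independently (as if on its own device), while the boundary context it needs from neighboring patches comes from the previous timestep's cached activations instead of fresh communication.

```python
import numpy as np

def blur_with_context(patch, left_ctx, right_ctx):
    # Stand-in for one network layer that needs neighboring pixels.
    padded = np.concatenate([left_ctx, patch, right_ctx])
    return 0.25 * padded[:-2] + 0.5 * padded[1:-1] + 0.25 * padded[2:]

def step_displaced(patches, cache):
    # Each "device" processes its own patch, using the PREVIOUS step's
    # boundary activations of its neighbors as context (the displacement).
    out = []
    for i, p in enumerate(patches):
        left = cache[i - 1][-1:] if i > 0 else p[:1]
        right = cache[i + 1][:1] if i < len(patches) - 1 else p[-1:]
        out.append(blur_with_context(p, left, right))
    return out

x = np.arange(8, dtype=float)
patches = [x[:4], x[4:]]
cache = [p.copy() for p in patches]   # activations from the previous timestep
new = step_displaced(patches, cache)
```

Because adjacent diffusion steps see nearly identical inputs, the stale context is a good approximation, which is the observation the snippet above relies on.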
1 code implementation • 22 Feb 2024 • Yujia Huang, Adishree Ghatare, Yuanzhe Liu, Ziniu Hu, Qinsheng Zhang, Chandramouli S Sastry, Siddharth Gururani, Sageev Oore, Yisong Yue
We propose Stochastic Control Guidance (SCG), a novel guidance method that only requires forward evaluation of rule functions that can work with pre-trained diffusion models in a plug-and-play way, thus achieving training-free guidance for non-differentiable rules for the first time.
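The forward-only selection idea can be sketched as best-of-K sampling against a non-differentiable rule. This is a simplification of the paper's stochastic-control formulation, and all names here are illustrative:

```python
import random

def rule_loss(x):
    # Illustrative non-differentiable rule: penalize values whose
    # integer part is odd (a stand-in for a symbolic music rule).
    return int(x) % 2

def guided_step(x, step_fn, n_candidates=8):
    # Draw several stochastic next states from the sampler and keep
    # the one the rule prefers -- only forward evaluations of
    # rule_loss are needed, never its gradient.
    candidates = [step_fn(x) for _ in range(n_candidates)]
    return min(candidates, key=rule_loss)

random.seed(0)
noisy_step = lambda x: x + random.uniform(-1.0, 1.0)  # toy stochastic sampler step
x = 10.0
for _ in range(5):
    x = guided_step(x, noisy_step)
```

Because only forward evaluations of the rule are required, the same loop works unchanged for rules that are lookups, parsers, or other black boxes.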
1 code implementation • 4 Aug 2023 • Qinsheng Zhang, Jiaming Song, Yongxin Chen
By reformulating the differential equations in DMs and capitalizing on the theory of exponential integrators, we propose refined EI solvers that fulfill all the order conditions, which we designate as Refined Exponential Solver (RES).
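For intuition, a first-order exponential integrator (exponential Euler) for dx/dt = λx + N(x) integrates the linear part exactly and freezes the nonlinearity over the step; RES refines the higher-order versions of this idea. A minimal sketch, not the paper's solver:

```python
import math

def exp_euler_step(x, h, nonlinear, lam=-1.0):
    # First-order exponential integrator for dx/dt = lam*x + N(x):
    # the linear part is integrated exactly, N is frozen over the step.
    e = math.exp(lam * h)
    return e * x + (e - 1.0) / lam * nonlinear(x)

# For a constant nonlinearity the scheme reproduces the exact solution,
# which is why exponential integrators tolerate large step sizes.
x = 5.0
for _ in range(10):
    x = exp_euler_step(x, h=0.5, nonlinear=lambda _: 2.0)
# exact solution of dx/dt = -x + 2 from x(0)=5 at t=5 is 2 + 3*exp(-5)
```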
no code implementations • CVPR 2023 • Qinsheng Zhang, Jiaming Song, Xun Huang, Yongxin Chen, Ming-Yu Liu
We present DiffCollage, a compositional diffusion model that can generate large content by leveraging diffusion models trained on generating pieces of the large content.
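The composition rule can be sketched on a 1-D "image": add the scores of overlapping pieces and subtract the scores of the shared overlap regions so they are not double counted. This is a simplification of the paper's factor-graph formulation, and the names are illustrative:

```python
import numpy as np

def collage_score(x, score, pieces, overlaps):
    # Assemble the score of a large sample from scores of overlapping
    # pieces; subtracting the overlap scores avoids double counting.
    total = np.zeros_like(x)
    for sl in pieces:
        total[sl] += score(x[sl])
    for sl in overlaps:
        total[sl] -= score(x[sl])
    return total

x = np.ones(6)
pieces = [slice(0, 4), slice(2, 6)]    # two overlapping pieces
overlaps = [slice(2, 4)]               # their shared region
s = collage_score(x, lambda p: -p, pieces, overlaps)  # toy score: -x
```

Each piece-level `score` call only ever sees a piece-sized input, which is what lets a model trained on small pieces drive generation of arbitrarily large content.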
2 code implementations • 2 Nov 2022 • Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, Tero Karras, Ming-Yu Liu
Therefore, in contrast to existing works, we propose to train an ensemble of text-to-image diffusion models specialized for different synthesis stages.
Ranked #14 on Text-to-Image Generation on MS COCO
1 code implementation • 11 Jun 2022 • Qinsheng Zhang, Molei Tao, Yongxin Chen
On CLD, a diffusion model that augments the diffusion process with velocity, our algorithm achieves an FID score of 2.26 on CIFAR-10 with only 50 score function evaluations (NFEs) and an FID score of 2.86 with only 27 NFEs.
4 code implementations • 29 Apr 2022 • Qinsheng Zhang, Yongxin Chen
Our goal is to develop a fast sampling method for DMs that uses far fewer steps while retaining high sample quality.
1 code implementation • 4 Dec 2021 • Jiaojiao Fan, Qinsheng Zhang, Amirhossein Taghvaei, Yongxin Chen
Wasserstein gradient flow has emerged as a promising approach to solve optimization problems over the space of probability distributions.
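One classical instance: the Wasserstein gradient flow of KL(ρ ‖ e^{-V}) is the Fokker-Planck equation, which Langevin particles simulate. The sketch below shows only this particle view; it is not the variational scheme proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def langevin_step(x, grad_V, h):
    # Particle update whose population density follows the Wasserstein
    # gradient flow of KL(rho || exp(-V)), i.e. the Fokker-Planck equation.
    return x - h * grad_V(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)

# Flow toward a standard Gaussian target: V(x) = x^2 / 2, grad_V(x) = x.
x = rng.standard_normal(5000) * 3.0    # overdispersed initial particles
for _ in range(400):
    x = langevin_step(x, lambda z: z, h=0.01)
# the particle variance relaxes from 9 toward the target variance of 1
```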
1 code implementation • ICLR 2022 • Qinsheng Zhang, Yongxin Chen
The PIS is built on the Schrödinger bridge problem, which aims to recover the most likely evolution of a diffusion process given its initial distribution and terminal distribution.
1 code implementation • NeurIPS 2021 • Qinsheng Zhang, Yongxin Chen
Our method is closely related to normalizing flow and diffusion probabilistic models and can be viewed as a combination of the two.
no code implementations • 23 Nov 2020 • Rahul Singh, Qinsheng Zhang, Yongxin Chen
This problem arises when only population-level counts of individuals at each time step are available, from which one seeks to learn the individual hidden Markov model.
no code implementations • 4 Nov 2020 • Qinsheng Zhang, Rahul Singh, Yongxin Chen
We consider a class of filtering problems for large populations where each individual is modeled by the same hidden Markov model (HMM).
no code implementations • 26 Jun 2020 • Rahul Singh, Isabel Haasler, Qinsheng Zhang, Johan Karlsson, Yongxin Chen
We consider incremental inference problems from aggregate data for collective dynamics.
3 code implementations • 25 Jun 2020 • Isabel Haasler, Rahul Singh, Qinsheng Zhang, Johan Karlsson, Yongxin Chen
We study multi-marginal optimal transport problems from a probabilistic graphical model perspective.
no code implementations • L4DC 2020 • Rahul Singh, Qinsheng Zhang, Yongxin Chen
One major obstacle that precludes the success of reinforcement learning in real-world applications is the lack of robustness, either to model uncertainties or external disturbances, of the trained policies.
no code implementations • 31 Mar 2020 • Rahul Singh, Isabel Haasler, Qinsheng Zhang, Johan Karlsson, Yongxin Chen
Consequently, the celebrated Sinkhorn/iterative scaling algorithm for multi-marginal optimal transport can be leveraged together with the standard belief propagation algorithm to establish an efficient inference scheme which we call Sinkhorn belief propagation (SBP).
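The two-marginal building block, plain Sinkhorn iterative scaling, can be sketched as follows; SBP interleaves such scalings with belief-propagation messages on the graphical model, which this sketch omits.

```python
import numpy as np

def sinkhorn(C, mu, nu, eps=1.0, iters=100):
    # Entropy-regularized optimal transport via iterative scaling:
    # alternately rescale rows and columns of the Gibbs kernel so the
    # coupling's marginals match mu and nu.
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]   # coupling matrix

mu = np.array([0.5, 0.5])                 # source marginal
nu = np.array([0.3, 0.7])                 # target marginal
C = np.array([[0.0, 1.0], [1.0, 0.0]])    # cost matrix
P = sinkhorn(C, mu, nu)
```

Each iteration only needs matrix-vector products with the kernel, which is exactly the structure that lets the multi-marginal version piggyback on belief-propagation message passing.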