no code implementations • 22 May 2024 • Jiali Cui, Tian Han
Such a prior model can be limited in modelling expressivity, which results in a gap between the generator posterior and the prior model, known as the prior hole problem.
no code implementations • 4 Mar 2024 • Cong Geng, Tian Han, Peng-Tao Jiang, Hao Zhang, Jinwei Chen, Søren Hauberg, Bo Li
Generative models have shown strong generation ability, while efficient likelihood estimation remains less explored.
no code implementations • NeurIPS 2023 • Jiali Cui, Tian Han
To address this issue, we present a joint learning framework that interweaves maximum likelihood learning of both the EBM and its complementary generator model.
no code implementations • ICCV 2023 • Jiali Cui, Ying Nian Wu, Tian Han
In this paper, we propose a joint latent space EBM prior model with multi-layer latent variables for effective hierarchical representation learning.
no code implementations • CVPR 2023 • Jiali Cui, Ying Nian Wu, Tian Han
To tackle this issue and learn more expressive prior models, we propose an energy-based model (EBM) on the joint latent space over all layers of latent variables with the multi-layer generator as its backbone.
1 code implementation • 9 Jun 2023 • Deqian Kong, Bo Pang, Tian Han, Ying Nian Wu
To search for molecules with desired properties, we propose a sampling with gradual distribution shifting (SGDS) algorithm. After the model is initially learned on training data of existing molecules and their properties, the algorithm gradually shifts the model distribution towards the region supported by molecules with the desired property values.
no code implementations • 20 Sep 2022 • Hanao Li, Tian Han
In this paper, we present a new unsupervised learning method to enforce sparsity on the latent space for the generator model with a gradually sparsified spike and slab distribution as our prior.
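The gradually sparsified spike-and-slab prior described above can be sketched numerically. The sampler below is a hypothetical stand-in, not the paper's implementation: the function name, dimensions, variances, and annealing schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_slab_sample(pi, n, d, slab_std=1.0, spike_std=0.01):
    """Draw latent codes from a spike-and-slab prior: each coordinate comes
    from a broad 'slab' Gaussian with probability pi, otherwise from a
    narrow 'spike' concentrated near zero."""
    on = rng.random((n, d)) < pi
    return np.where(on,
                    rng.normal(0.0, slab_std, (n, d)),
                    rng.normal(0.0, spike_std, (n, d)))

# Gradual sparsification: anneal the slab probability downwards, so an
# increasing fraction of latent coordinates collapses onto the spike.
sparsities = []
for pi in [0.9, 0.5, 0.2]:
    z = spike_slab_sample(pi, n=1000, d=16)
    sparsities.append(np.mean(np.abs(z) < 0.05))   # fraction near zero
```

As `pi` shrinks, the measured fraction of near-zero coordinates grows, which is the sparsity-enforcing effect the prior is designed to have on the latent space.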
no code implementations • 19 Sep 2022 • Zhisheng Xiao, Tian Han
Instead, we propose to use noise contrastive estimation (NCE) to discriminatively learn the EBM through density ratio estimation between the latent prior density and latent posterior density.
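A minimal sketch of the density-ratio idea behind NCE: a binary classifier trained to separate samples from two densities recovers their log density ratio in its logit. The 1-D Gaussians below are hypothetical stand-ins for the latent prior and posterior; sample sizes, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D stand-ins for the latent prior and latent posterior.
z_prior = rng.normal(0.0, 1.0, size=5000)   # "noise" samples
z_post  = rng.normal(2.0, 1.0, size=5000)   # "data" samples

z = np.concatenate([z_post, z_prior])
y = np.concatenate([np.ones(5000), np.zeros(5000)])

# Logistic regression: at the optimum its logit w*z + b estimates
# log p_post(z) - log p_prior(z), i.e., the density ratio NCE needs.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * z + b)))  # sigmoid of the logit
    g = y - p                               # Bernoulli log-likelihood gradient
    w += lr * np.mean(g * z)
    b += lr * np.mean(g)

log_ratio = lambda x: w * x + b             # learned log density ratio
```

For these two Gaussians the true log ratio is 2z - 2, so the learned `w` should approach 2 and the score should be higher near the posterior mode than near the prior mode.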
1 code implementation • 12 Dec 2021 • Yizhou Zhao, Liang Qiu, Pan Lu, Feng Shi, Tian Han, Song-Chun Zhu
Current pre-training methods in computer vision focus on natural images in the daily-life context.
1 code implementation • 9 Dec 2021 • Chang Lu, Tian Han, Yue Ning
We further define three diagnosis roles in each visit based on the variation of node properties to model disease transition processes.
no code implementations • 16 Jul 2021 • Quanshi Zhang, Tian Han, Lixin Fan, Zhanxing Zhu, Hang Su, Ying Nian Wu, Jie Ren, Hao Zhang
This workshop pays a special interest in theoretic foundations, limitations, and new application trends in the scope of XAI.
1 code implementation • 15 Jul 2021 • Feng Shi, Chonghan Lee, Liang Qiu, Yizhou Zhao, Tianyi Shen, Shivran Muralidhar, Tian Han, Song-Chun Zhu, Vijaykrishnan Narayanan
Research on modelling human action and behavior has entered the deep learning regime, and the advent of Graph Convolutional Networks in particular has transformed the field in recent years.
1 code implementation • EACL 2021 • Bo Pang, Erik Nijkamp, Tian Han, Ying Nian Wu
It is initialized from the prior distribution of the latent variable and then runs a small number (e.g., 20) of Langevin dynamics steps guided by its posterior distribution.
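The short-run sampler described above — initialize from the prior, then take a small number of Langevin steps toward the posterior — can be sketched as follows. The toy Gaussian posterior, step size, and chain count are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post_grad(z):
    """Gradient of a toy log-posterior, here N(3, 0.5^2) as a stand-in
    for the (intractable) posterior of the latent variable."""
    return -(z - 3.0) / 0.25

# Initialize from the prior N(0, 1), then run K = 20 Langevin steps:
# z <- z + (s^2 / 2) * grad log p(z | x) + s * eps,  eps ~ N(0, 1).
z = rng.normal(0.0, 1.0, size=1000)
s = 0.3                                  # Langevin step size
for _ in range(20):
    z = z + 0.5 * s**2 * log_post_grad(z) + s * rng.normal(size=z.shape)
```

Even with only 20 steps, the chains move from the prior (mean 0) into the high-probability region of the posterior (mean 3), which is what makes short-run dynamics a practical inference engine.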
no code implementations • 1 Jan 2021 • Feng Shi, Chen Li, Shijie Bian, Yiqiao Jin, Ziheng Xu, Tian Han, Song-Chun Zhu
The Propositional Satisfiability Problem (SAT) and, more generally, the Constraint Satisfaction Problem (CSP) are mathematical problems that ask for an assignment to a set of objects satisfying a series of constraints.
no code implementations • NeurIPS Workshop ICBINB 2020 • Bo Pang, Erik Nijkamp, Jiali Cui, Tian Han, Ying Nian Wu
This paper proposes a latent space energy-based prior model for semi-supervised learning.
no code implementations • 19 Oct 2020 • Bo Pang, Tian Han, Ying Nian Wu
Deep generative models have recently been applied to molecule design.
no code implementations • NeurIPS Workshop DL-IG 2020 • Tian Han, Jun Zhang, Ying Nian Wu
This paper reviews the em-projections in information geometry and the recent understanding of the variational auto-encoder, and explains that they share a common formulation: joint minimization of the Kullback-Leibler divergence between two manifolds of probability distributions. This joint minimization can be implemented by alternating projections or alternating gradient descent.
1 code implementation • NeurIPS 2020 • Bo Pang, Tian Han, Erik Nijkamp, Song-Chun Zhu, Ying Nian Wu
Due to the low dimensionality of the latent space and the expressiveness of the top-down network, a simple EBM in latent space can capture regularities in the data effectively, and MCMC sampling in latent space is efficient and mixes well.
no code implementations • CVPR 2020 • Tian Han, Erik Nijkamp, Linqi Zhou, Bo Pang, Song-Chun Zhu, Ying Nian Wu
This paper proposes a joint training method to learn both the variational auto-encoder (VAE) and the latent energy-based model (EBM).
no code implementations • ECCV 2020 • Erik Nijkamp, Bo Pang, Tian Han, Linqi Zhou, Song-Chun Zhu, Ying Nian Wu
Learning such a generative model requires inferring the latent variables for each training example based on the posterior distribution of these latent variables.
no code implementations • 19 Nov 2019 • Dandan Zhu, Tian Han, Linqi Zhou, Xiaokang Yang, Ying Nian Wu
We propose the clustered generator model for clustering which contains both continuous and discrete latent variables.
no code implementations • CVPR 2019 • Tian Han, Erik Nijkamp, Xiaolin Fang, Mitch Hill, Song-Chun Zhu, Ying Nian Wu
This paper proposes the divergence triangle as a framework for joint training of a generator model, energy-based model and inference model.
2 code implementations • 29 Mar 2019 • Erik Nijkamp, Mitch Hill, Tian Han, Song-Chun Zhu, Ying Nian Wu
On the other hand, ConvNet potentials learned with non-convergent MCMC do not have a valid steady-state and cannot be considered approximate unnormalized densities of the training data because long-run MCMC samples differ greatly from observed images.
1 code implementation • 28 Dec 2018 • Tian Han, Erik Nijkamp, Xiaolin Fang, Mitch Hill, Song-Chun Zhu, Ying Nian Wu
This paper proposes the divergence triangle as a framework for joint training of a generator model, energy-based model and inference model.
no code implementations • 9 Oct 2018 • Ying Nian Wu, Ruiqi Gao, Tian Han, Song-Chun Zhu
In this paper, we review three families of probability models, namely, the discriminative models, the descriptive models, and the generative models.
2 code implementations • 16 Jun 2018 • Xianglei Xing, Ruiqi Gao, Tian Han, Song-Chun Zhu, Ying Nian Wu
We present a deformable generator model to disentangle the appearance and geometric information for both image and video data in a purely unsupervised manner.
no code implementations • 14 May 2018 • Tian Han, Jiawen Wu, Ying Nian Wu
A recent Cell paper [Chang and Tsao, 2017] reports an interesting discovery.
no code implementations • 28 Jun 2016 • Tian Han, Yang Lu, Song-Chun Zhu, Ying Nian Wu
This paper proposes an alternating back-propagation algorithm for learning the generator network model.
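The alternating back-propagation scheme — Langevin inference of the latent variables, then a gradient update of the generator parameters, repeated — can be sketched with a linear generator as a toy stand-in. The linear form, dimensions, noise level, and step sizes below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a hypothetical linear "generator": x = W_true @ z + noise.
d, k, n = 10, 3, 500
W_true = rng.normal(size=(d, k))
X = W_true @ rng.normal(size=(k, n)) + 0.1 * rng.normal(size=(d, n))

W = 0.1 * rng.normal(size=(d, k))   # generator parameters to learn
Z = np.zeros((k, n))                # persistent latent variables, one per example
s, sigma2 = 0.1, 1.0                # Langevin step size, observation variance

for _ in range(200):
    # Inference step: Langevin dynamics on Z under log p(Z | X, W),
    # i.e. likelihood gradient back-propagated through the generator
    # plus the N(0, I) prior gradient.
    for _ in range(5):
        grad_z = W.T @ (X - W @ Z) / sigma2 - Z
        Z = Z + 0.5 * s**2 * grad_z + s * rng.normal(size=Z.shape)
    # Learning step: gradient ascent on log p(X | Z, W) w.r.t. W,
    # using the currently inferred latent variables.
    grad_w = (X - W @ Z) @ Z.T / (sigma2 * n)
    W = W + 0.1 * grad_w

err = np.mean((X - W @ Z) ** 2)     # reconstruction error after training
```

The two back-propagation passes share the same computation through the generator — one updates the latent variables, the other updates the parameters — which is the sense in which the algorithm "alternates".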