no code implementations • 12 Feb 2024 • Changhao Shi, Gal Mishne
We establish statistical consistency for the penalized maximum likelihood estimation (MLE) of a Cartesian product Laplacian, and propose an efficient algorithm to solve the problem.
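As a rough illustration of the object being estimated (not the authors' algorithm), the Laplacian of a Cartesian product graph is the Kronecker sum of the factor Laplacians. A minimal NumPy sketch, using two toy factor graphs as stand-ins:

```python
import numpy as np

def laplacian(A):
    """Combinatorial Laplacian L = D - A of an adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

# Two small factor graphs (a 3-node path and a triangle) -- toy examples only.
A1 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]], dtype=float)
A2 = np.ones((3, 3)) - np.eye(3)

L1, L2 = laplacian(A1), laplacian(A2)
n1, n2 = L1.shape[0], L2.shape[0]

# Cartesian product graph Laplacian = Kronecker sum of the factors:
# L = L1 kron I + I kron L2.
L = np.kron(L1, np.eye(n2)) + np.kron(np.eye(n1), L2)

# Its eigenvalues are all pairwise sums of the factor eigenvalues,
# which is what makes product-structured estimation tractable.
pairwise = np.add.outer(np.linalg.eigvalsh(L1), np.linalg.eigvalsh(L2)).ravel()
assert np.allclose(np.sort(np.linalg.eigvalsh(L)), np.sort(pairwise))
```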
no code implementations • 14 Jun 2023 • Changhao Shi, Gal Mishne
A common challenge in applying graph machine learning methods is that the underlying graph of a system is often unknown.
no code implementations • 25 Apr 2023 • Changhao Shi, Haomiao Ni, Kai Li, Shaobo Han, Mingfu Liang, Martin Renqiang Min
We show that this paradigm based on latent classifier guidance is agnostic to pre-trained generative models, and present competitive results for both image generation and sequential manipulation of real and synthetic images.
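For orientation only, classifier guidance steers a diffusion sampler with the gradient of a classifier's log-probability with respect to the current sample; the sketch below applies that single guidance step to a latent, with the noise predictor, latent classifier, and guidance scale all placeholders rather than the models used in the paper.

```python
import torch

def guided_eps(eps_model, classifier, z_t, t, y, scale=1.0):
    """One classifier-guided denoising step on a latent z_t (generic sketch).

    eps_model(z_t, t)  -> predicted noise
    classifier(z_t, t) -> class logits for the noisy latent
    The sqrt(1 - alpha_bar_t) factor of standard classifier guidance is
    folded into `scale` here for brevity.
    """
    z_t = z_t.detach().requires_grad_(True)
    logits = classifier(z_t, t)
    log_prob = torch.log_softmax(logits, dim=-1)[torch.arange(len(y)), y].sum()
    grad = torch.autograd.grad(log_prob, z_t)[0]
    # Shift the noise prediction along the classifier gradient,
    # pushing the sample toward the target class y.
    return eps_model(z_t, t) - scale * grad
```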
1 code implementation • CVPR 2023 • Haomiao Ni, Changhao Shi, Kai Li, Sharon X. Huang, Martin Renqiang Min
In this paper, we propose an approach for cI2V using novel latent flow diffusion models (LFDM) that synthesize an optical flow sequence in the latent space based on the given condition to warp the given image.
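One ingredient of this pipeline is warping a given image (or its latent) with a predicted flow field. A minimal version of that warping step using torch.nn.functional.grid_sample is sketched below; the flow tensor is a placeholder rather than the output of the diffusion model, and this is not the LFDM implementation.

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Backward-warp `image` (B, C, H, W) with a dense flow field (B, 2, H, W).

    flow[:, 0] and flow[:, 1] are horizontal/vertical displacements in pixels.
    Generic warping utility for illustration.
    """
    B, _, H, W = image.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(image.device)  # (2, H, W)
    coords = base.unsqueeze(0) + flow                             # (B, 2, H, W)
    # Normalize to [-1, 1] as required by grid_sample.
    x = 2.0 * coords[:, 0] / max(W - 1, 1) - 1.0
    y = 2.0 * coords[:, 1] / max(H - 1, 1) - 1.0
    grid = torch.stack((x, y), dim=-1)                            # (B, H, W, 2)
    return F.grid_sample(image, grid, align_corners=True)
```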
1 code implementation • NeurIPS 2021 • Changhao Shi, Sivan Schwartz, Shahar Levy, Shay Achvat, Maisan Abboud, Amir Ghanayim, Jackie Schiller, Gal Mishne
To understand the relationship between behavior and neural activity, experiments in neuroscience often include an animal performing a repeated behavior such as a motor task.
no code implementations • 12 Sep 2021 • Jiayu Ding, Yuchen Cao, Changhao Shi
We find that the cause of vulnerability to cropping is not the loss of information at the image edges, but the shift in the watermark's position.
no code implementations • 23 Jan 2021 • Changhao Shi, Chester Holtz, Gal Mishne
To the best of our knowledge, our paper is the first to generalize the idea of using self-supervised signals to perform online test-time purification.
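As a generic illustration of the idea (not the paper's exact procedure), test-time purification can be phrased as a few gradient steps on the input itself that minimize an auxiliary self-supervised loss before classification; the self-supervised loss below is a placeholder.

```python
import torch

def purify(x, ssl_loss_fn, steps=5, lr=0.1):
    """Purify a (possibly adversarial) input at test time by descending a
    self-supervised loss with respect to the input. Generic sketch; the
    actual self-supervised signal and update rule in the paper may differ.
    """
    x = x.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ssl_loss_fn(x)  # e.g. rotation prediction or contrastive loss
        loss.backward()
        opt.step()
    return x.detach()
```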
no code implementations • 1 Jan 2021 • Chester Holtz, Changhao Shi, Gal Mishne
Recent work has demonstrated that neural networks are vulnerable to small, adversarial perturbations of their input.
no code implementations • ICLR 2021 • Changhao Shi, Chester Holtz, Gal Mishne
Deep neural networks are known to be vulnerable to adversarial examples, where a perturbation in the input space leads to an amplified shift in the latent network representation.