Search Results for author: Shao-Yu Chang

Found 4 papers, 0 papers with code

Preserving Image Properties Through Initializations in Diffusion Models

no code implementations · 4 Jan 2024 · Jeffrey Zhang, Shao-Yu Chang, Kedan Li, David Forsyth

The usual practice of training the denoiser on very noisy images but starting inference from a sample of pure noise creates a train/inference mismatch that leads to inconsistent generated images.
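As a hedged illustration of that mismatch (a sketch of the standard DDPM setup, not the paper's method): under the widely used linear beta schedule, the forward process at the final timestep T still retains a small but nonzero signal coefficient on the clean image, so the noisiest training inputs are never exactly the pure Gaussian noise that inference starts from.

```python
import numpy as np

# Sketch (assumed standard DDPM hyperparameters, not taken from the paper):
# linear beta schedule from 1e-4 to 0.02 over T = 1000 steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)   # \bar{alpha}_t = prod_{s<=t} (1 - beta_s)

# q(x_T | x_0) = sqrt(abar_T) * x_0 + sqrt(1 - abar_T) * eps
signal_coef = np.sqrt(alphas_bar[-1])      # multiplies x_0: small but nonzero
noise_coef = np.sqrt(1.0 - alphas_bar[-1])  # multiplies eps: just under 1

print(f"signal coefficient at T: {signal_coef:.4f}")
print(f"noise  coefficient at T: {noise_coef:.4f}")
```

Because `signal_coef` never reaches exactly zero, the most-noised training samples still carry a trace of the original image's low-frequency statistics, whereas inference begins from noise with none, which is one way to see the inconsistency the abstract describes.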

DiffusionAtlas: High-Fidelity Consistent Diffusion Video Editing

no code implementations · 5 Dec 2023 · Shao-Yu Chang, Hwann-Tzong Chen, Tyng-Luh Liu

Despite the success in image editing, diffusion models still encounter significant hindrances when it comes to video editing due to the challenge of maintaining spatiotemporal consistency in the object's appearance across frames.

Object · Video Editing

Wearing the Same Outfit in Different Ways -- A Controllable Virtual Try-on Method

no code implementations · 29 Nov 2022 · Kedan Li, Jeffrey Zhang, Shao-Yu Chang, David Forsyth

However, no current method can both control how the garment is worn -- including tucked or untucked, opened or closed, high or low on the waist, etc. -- and generate realistic images that accurately preserve the properties of the original garment.

Virtual Try-on
