Search Results for author: Yao-Chih Lee

Found 7 papers, 2 papers with code

VividDream: Generating 3D Scene with Ambient Dynamics

no code implementations • 30 May 2024 • Yao-Chih Lee, Yi-Ting Chen, Andrew Wang, Ting-Hsuan Liao, Brandon Y. Feng, Jia-Bin Huang

An ensemble of animated videos is then generated using video diffusion models with quality refinement techniques and conditioned on renderings of the static 3D scene from the sampled camera trajectories.
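As a loose illustration of the pipeline this snippet describes, the sketch below renders a static scene along sampled camera trajectories and hands the renderings to a video "animator". Every name here (sample_trajectories, render_scene, animate) is a placeholder stub, not the authors' code.

```python
# Hypothetical sketch of the pipeline described above: render a static 3D
# scene along sampled camera paths, then condition a video model on those
# renderings. All functions are toy stubs, not the authors' API.
import torch

def sample_trajectories(n_paths: int, n_frames: int) -> torch.Tensor:
    """Sample simple forward-moving camera positions (placeholder)."""
    t = torch.linspace(0, 1, n_frames)
    return torch.stack([torch.stack([torch.zeros_like(t),
                                     torch.zeros_like(t),
                                     t + 0.1 * i], dim=-1)
                        for i in range(n_paths)])  # (n_paths, n_frames, 3)

def render_scene(scene, cams: torch.Tensor) -> torch.Tensor:
    """Render the static scene from each camera (stub returning noise)."""
    n_frames = cams.shape[0]
    return torch.rand(n_frames, 3, 64, 64)

def animate(diffusion_model, condition_frames: torch.Tensor) -> torch.Tensor:
    """Stand-in for a video diffusion model conditioned on static renderings."""
    return condition_frames + 0.05 * torch.randn_like(condition_frames)

scene = None  # the optimized static 3D scene would go here
trajectories = sample_trajectories(n_paths=4, n_frames=16)
ensemble = [animate(None, render_scene(scene, cams)) for cams in trajectories]
print(len(ensemble), ensemble[0].shape)  # 4 videos of shape (16, 3, 64, 64)
```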

Fast View Synthesis of Casual Videos

no code implementations • 4 Dec 2023 • Yao-Chih Lee, Zhoutong Zhang, Kevin Blackburn-Matzen, Simon Niklaus, Jianming Zhang, Jia-Bin Huang, Feng Liu

Specifically, we build a global static scene model using an extended plane-based scene representation to synthesize temporally coherent novel videos (a toy compositing sketch follows the task tag below).

Novel View Synthesis
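The plane-based representation above suggests a classic layered rendering step: warp each plane's texture into the target view and alpha-composite back to front. Below is a minimal, self-contained sketch of just the compositing; the per-plane colors and opacities are random stand-ins for the paper's warped layers, not its actual representation.

```python
# Minimal back-to-front alpha compositing over a stack of planes,
# in the spirit of the snippet above. Inputs are toy placeholders.
import torch

def composite_planes(colors: torch.Tensor, alphas: torch.Tensor) -> torch.Tensor:
    """Alpha-composite per-plane RGBA layers ordered far-to-near.

    colors: (n_planes, 3, H, W), alphas: (n_planes, 1, H, W), values in [0, 1].
    """
    out = torch.zeros_like(colors[0])
    for rgb, a in zip(colors, alphas):   # far -> near
        out = rgb * a + out * (1.0 - a)  # standard "over" operator
    return out

n_planes, H, W = 8, 64, 64
colors = torch.rand(n_planes, 3, H, W)  # warped per-plane textures (stub)
alphas = torch.rand(n_planes, 1, H, W)  # warped per-plane opacities (stub)
novel_view = composite_planes(colors, alphas)
print(novel_view.shape)  # (3, 64, 64)
```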

Text-driven Visual Synthesis with Latent Diffusion Prior

no code implementations • 16 Feb 2023 • Ting-Hsuan Liao, Songwei Ge, Yiran Xu, Yao-Chih Lee, Badour AlBahar, Jia-Bin Huang

There has been tremendous progress in large-scale text-to-image synthesis driven by diffusion models, enabling versatile downstream applications such as 3D object synthesis from text, image editing, and customized generation (a toy sketch of using such a prior follows the task tags below).

Decoder · Image Generation +1
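To make the idea of a latent diffusion prior concrete, here is a hedged, score-distillation-style toy: noise the latent of the current image, measure a frozen denoiser's prediction error, and backpropagate into the image. The encoder and denoiser are stubs (text conditioning omitted), and the paper's actual objective may differ.

```python
# Toy "latent diffusion prior" loop: the prior's denoising error steers
# the optimized image. encode/denoise are stand-ins, not a real VAE/UNet.
import torch
import torch.nn.functional as F

def encode(img: torch.Tensor) -> torch.Tensor:
    """Stand-in for a frozen VAE encoder: downsample to a 'latent'."""
    return F.avg_pool2d(img, 8)

def denoise(z_t: torch.Tensor) -> torch.Tensor:
    """Stand-in for the frozen latent-diffusion denoiser."""
    return 0.5 * z_t  # keeps the gradient path to the image alive

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # synthesis being optimized
opt = torch.optim.Adam([image], lr=1e-2)

for step in range(50):
    z = encode(image)
    eps = torch.randn_like(z)
    z_t = z + 0.1 * eps                    # toy one-level noising schedule
    eps_pred = denoise(z_t)
    loss = ((eps_pred - eps) ** 2).mean()  # prior's denoising error as guidance
    opt.zero_grad()
    loss.backward()
    opt.step()
print(image.min().item(), image.max().item())
```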

Globally Consistent Video Depth and Pose Estimation with Efficient Test-Time Training

1 code implementation • 4 Aug 2022 • Yao-Chih Lee, Kuan-Wei Tseng, Guan-Sheng Chen, Chu-Song Chen

It can improve the robustness of learning-based methods with flow-guided keyframes and a well-established depth prior (a keyframe-selection sketch follows the task tags below).

Optical Flow Estimation · Pose Estimation
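The flow-guided keyframe idea can be sketched as: start a new keyframe whenever accumulated motion since the last one exceeds a threshold, then run test-time training on those keyframes. The sketch below uses a crude frame-difference proxy in place of a real optical-flow estimator; it is not the released implementation.

```python
# Flow-guided keyframe selection for test-time training (hedged sketch).
import torch

def flow_magnitude(frame_a: torch.Tensor, frame_b: torch.Tensor) -> float:
    """Mean motion magnitude between two frames (crude proxy for real flow)."""
    return (frame_a - frame_b).abs().mean().item()

def select_keyframes(video: torch.Tensor, threshold: float = 0.1) -> list:
    """Start a new keyframe when motion since the last keyframe is large."""
    keyframes, last = [0], 0
    for i in range(1, len(video)):
        if flow_magnitude(video[last], video[i]) > threshold:
            keyframes.append(i)
            last = i
    return keyframes

video = torch.rand(30, 3, 64, 64)  # a short casual video (random stub)
kf = select_keyframes(video)
print(kf)  # frame indices the test-time training would fine-tune on
```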
