no code implementations • 5 Feb 2024 • Maham Tanveer, Yizhi Wang, Ruiqi Wang, Nanxuan Zhao, Ali Mahdavi-Amiri, Hao Zhang
We present AnaMoDiff, a novel diffusion-based method for 2D motion analogies that is applied to raw, unannotated videos of articulated characters.
no code implementations • 3 Dec 2023 • Yizhi Wang, Wallace Lira, Wenqi Wang, Ali Mahdavi-Amiri, Hao Zhang
Our key observation is that object slicing is more advantageous than altering views to reveal occluded structures.
no code implementations • 11 Oct 2023 • Yizhi Wang, Shichuan Xue, Yaxuan Wang, Jiangfang Ding, Weixu Shi, Dongyang Wang, Yong Liu, Yingwen Liu, Xiang Fu, Guangyao Huang, Anqi Huang, Mingtang Deng, Junjie Wu
Our work opens up a vista of utilizing QNG in photonics to implement practical near-term quantum applications.
no code implementations • 1 Oct 2023 • Yizhi Wang, Shichuan Xue, Yaxuan Wang, Yong Liu, Jiangfang Ding, Weixu Shi, Dongyang Wang, Yingwen Liu, Xiang Fu, Guangyao Huang, Anqi Huang, Mingtang Deng, Junjie Wu
Quantum Generative Adversarial Networks (QGANs), an intersection of quantum computing and machine learning, have attracted widespread attention due to their potential advantages over classical analogs.
no code implementations • 29 May 2023 • Dingdong Yang, Yizhi Wang, Ali Mahdavi-Amiri, Hao Zhang
Our key codes and feature grids are jointly trained continuously with well-defined gradient flows, leading to high usage rates of the feature grids and improved generative modeling compared to discrete Vector Quantization (VQ).
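For contrast with the continuous, jointly trained codes described above, the discrete Vector Quantization (VQ) baseline amounts to a nearest-neighbor argmin over a fixed codebook, whose non-differentiability is what continuous alternatives avoid. A minimal sketch of that VQ lookup (function and codebook names are illustrative, not from the paper):

```python
def vector_quantize(z, codebook):
    """Discrete VQ: return the index of the nearest codebook vector (L2).

    The argmin below is non-differentiable, so gradients cannot flow
    through the code selection -- the motivation for continuous codes.
    """
    best_idx, best_dist = 0, float("inf")
    for k, code in enumerate(codebook):
        dist = sum((a - b) ** 2 for a, b in zip(z, code))
        if dist < best_dist:
            best_idx, best_dist = k, dist
    return best_idx
```

For example, quantizing `[0.9, 1.2]` against the codebook `[[0.0, 0.0], [1.0, 1.0]]` selects index 1, the closer entry.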
1 code implementation • CVPR 2023 • Yuqing Wang, Yizhi Wang, Longhui Yu, Yuesheng Zhu, Zhouhui Lian
First, we adopt Transformers instead of RNNs to process sequential data and design a relaxation representation for vector outlines, markedly improving the model's capability and stability in synthesizing long and complex outlines.
1 code implementation • ICCV 2023 • Maham Tanveer, Yizhi Wang, Ali Mahdavi-Amiri, Hao Zhang
We introduce a novel method to automatically generate an artistic typography by stylizing one or more letter fonts to visually convey the semantics of an input word, while ensuring that the output remains readable.
1 code implementation • CVPR 2023 • Yizhi Wang, Zeyu Huang, Ariel Shamir, Hui Huang, Hao Zhang, Ruizhen Hu
We introduce anchored radial observations (ARO), a novel shape encoding for learning implicit field representation of 3D shapes that is category-agnostic and generalizable amid significant shape variations.
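As background for the implicit-field setting (this is not the ARO encoding itself), a shape can be represented as a function that maps a query point to a signed distance: negative inside, positive outside. A minimal sketch for a sphere, with all names assumed for illustration:

```python
import math

def sphere_sdf(p, radius=1.0):
    """Signed distance from 3D point p to a sphere centered at the origin.

    Negative inside the shape, zero on the surface, positive outside --
    the convention learned implicit fields typically follow.
    """
    return math.sqrt(sum(c * c for c in p)) - radius

def is_inside(p, radius=1.0):
    """Occupancy decision derived from the signed distance."""
    return sphere_sdf(p, radius) < 0.0
```

Learned implicit representations replace such analytic functions with a neural network conditioned on a shape encoding; ARO is one such encoding.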
1 code implementation • CVPR 2022 • Yizhi Wang, Guo Pu, Wenhan Luo, Yexin Wang, Pengfei Xiong, Hongwen Kang, Zhouhui Lian
To train and evaluate our approach, we construct a dataset named TextLogo3K, consisting of about 3,500 text logo images and their pixel-level annotations.
2 code implementations • 13 Oct 2021 • Yizhi Wang, Zhouhui Lian
Automatic font generation based on deep learning has attracted considerable interest over the last decade.
no code implementations • 16 Sep 2020 • Yizhi Wang, Zhouhui Lian
Scene text recognition (STR) has been extensively studied in the last few years.
2 code implementations • 16 May 2020 • Yizhi Wang, Yue Gao, Zhouhui Lian
To the best of our knowledge, our model is the first in the literature capable of generating glyph images in new font styles according to given values of specified font attributes, rather than retrieving existing fonts.
2 code implementations • NeurIPS 2019 • Congchao Wang, Yizhi Wang, Yinxue Wang, Chiung-Ting Wu, Guoqiang Yu
Min-cost flow has been a widely used paradigm for solving data association problems in multi-object tracking (MOT).
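To make the min-cost-flow view of data association concrete, here is a minimal, self-contained sketch: detections in two frames become a unit-capacity graph from a source through candidate matches to a sink, and a successive-shortest-paths solver recovers the cheapest assignment. The graph layout, costs, and function names are illustrative, not the paper's formulation:

```python
def min_cost_flow(n, edges, s, t, flow_target):
    """Successive shortest paths (Bellman-Ford) on a unit-capacity graph.

    edges: (u, v, capacity, cost) tuples. Returns (flow, total_cost).
    """
    graph = [[] for _ in range(n)]  # adjacency: [to, cap, cost, rev_index]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])  # residual edge

    flow, total_cost = 0, 0
    while flow < flow_target:
        INF = float("inf")
        dist, prev = [INF] * n, [None] * n
        dist[s] = 0
        for _ in range(n - 1):  # Bellman-Ford (residual costs may be negative)
            updated = False
            for u in range(n):
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v], prev[v] = dist[u] + cost, (u, i)
                        updated = True
            if not updated:
                break
        if dist[t] == INF:
            break  # no augmenting path left
        v = t
        while v != s:  # push one unit of flow along the shortest path
            u, i = prev[v]
            graph[u][i][1] -= 1
            graph[v][graph[u][i][3]][1] += 1
            v = u
        total_cost += dist[t]
        flow += 1
    return flow, total_cost

# Toy association: source 0, frame-t detections 1-2, frame-(t+1)
# detections 3-4, sink 5; match costs stand in for bbox distances.
edges = [
    (0, 1, 1, 0), (0, 2, 1, 0),
    (1, 3, 1, 1), (1, 4, 1, 5),
    (2, 3, 1, 5), (2, 4, 1, 2),
    (3, 5, 1, 0), (4, 5, 1, 0),
]
flow, cost = min_cost_flow(6, edges, s=0, t=5, flow_target=2)
```

The solver matches detection 1 to 3 and detection 2 to 4 (total cost 3), the globally optimal association for these toy costs.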
1 code implementation • 2 Nov 2019 • Congchao Wang, Yizhi Wang, Guoqiang Yu
When our method served as a sub-module for global data association methods using higher-order constraints, similar efficiency improvements were attained.
1 code implementation • 12 Jul 2019 • Yizhi Wang, Zhouhui Lian, Yingmin Tang, Jianguo Xiao
In this paper, we propose a novel methodology for boosting scene character recognition by learning canonical forms of glyphs, based on the observation that characters appearing in scene images all derive from their corresponding canonical forms.
no code implementations • NeurIPS 2016 • Yizhi Wang, David J. Miller, Kira Poskanzer, Yue Wang, Lin Tian, Guoqiang Yu
We name the proposed approach graphical time warping (GTW), emphasizing the graphical nature of the solution and that the dependency structure of the warping functions can be represented by a graph.
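GTW builds on pairwise dynamic time warping, coupling the warping functions of many sequence pairs through a graph. As background, a minimal sketch of the classic DTW recurrence that underlies each pairwise alignment (plain DTW only, not the paper's graph-coupled formulation):

```python
def dtw(x, y):
    """Dynamic time warping distance between two 1D sequences.

    D[i][j] = local cost + min over the three allowed predecessor
    cells (match, insertion, deletion).
    """
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip a sample of x
                                 D[i][j - 1],      # skip a sample of y
                                 D[i - 1][j - 1])  # match the two samples
    return D[n][m]
```

For instance, `dtw([1, 2, 3], [1, 2, 2, 3])` is 0: the repeated `2` is absorbed by the warping, which is exactly the flexibility GTW regularizes across neighboring warping functions.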