1 code implementation • 30 Apr 2024 • Chenqi Guo, Shiwei Zhong, Xiaofeng Liu, Qianli Feng, Yinglong Ma
As data augmentation strength increases, our key finding is a decrease in the Intersection over Union (IoU) of attention maps between teacher models, which leads to reduced student overfitting and decreased fidelity.
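The IoU between two attention maps can be computed by binarizing each map at a threshold and comparing the resulting masks. A minimal sketch, assuming a simple fixed-threshold binarization (the paper's exact procedure may differ):

```python
import numpy as np

def attention_iou(attn_a, attn_b, threshold=0.5):
    """IoU between two attention maps, binarized at `threshold`.

    attn_a, attn_b: 2-D arrays of attention weights in [0, 1].
    Hypothetical sketch; thresholding scheme is an assumption.
    """
    mask_a = attn_a >= threshold
    mask_b = attn_b >= threshold
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    # Two empty masks are treated as identical (IoU = 1).
    return intersection / union if union > 0 else 1.0
```

With this definition, teachers that attend to the same regions score near 1, and disjoint attention scores 0.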
no code implementations • 8 Mar 2023 • Chenqi Guo, Fabian Benitez-Quiroz, Qianli Feng, Aleix Martinez
Our experiments on imbalanced image classification show that the validation accuracy improvement from such a re-balancing method is related to the image similarity between classes.
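A common re-balancing scheme weights each class inversely to its frequency in the training set; a minimal sketch of that idea (the paper's exact re-balancing method is not specified here, so this is an assumption for illustration):

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class sampling weights inversely proportional to class frequency.

    Returns a dict mapping class -> weight, normalized so that a
    perfectly balanced dataset yields weight 1.0 for every class.
    Hypothetical sketch; not the paper's specific method.
    """
    counts = Counter(labels)
    total = len(labels)
    num_classes = len(counts)
    return {c: total / (num_classes * n) for c, n in counts.items()}
```

Minority classes receive weights above 1 and majority classes below 1, which can then drive a weighted loss or a weighted sampler.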
no code implementations • CVPR 2023 • Qianli Feng, Raghudeep Gadde, Wentong Liao, Eduard Ramon, Aleix Martinez
We derive a method that yields highly accurate semantic segmentation maps without the use of any additional neural network, layers, manually annotated training data, or supervised training.
1 code implementation • ICCV 2021 • Qianli Feng, Chenqi Guo, Fabian Benitez-Quiroz, Aleix Martinez
With empirical evidence from BigGAN and StyleGAN2 on the CelebA, Flower and LSUN-bedroom datasets, we show that dataset size and complexity play an important role in GAN replication and in the perceptual quality of the generated images.
no code implementations • 23 Feb 2022 • Qianli Feng, Viraj Shah, Raghudeep Gadde, Pietro Perona, Aleix Martinez
To edit a real photo using Generative Adversarial Networks (GANs), we need a GAN inversion algorithm to identify the latent vector that perfectly reproduces it.
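In its simplest form, GAN inversion searches for a latent vector z minimizing the reconstruction error ||G(z) - x||^2. A toy sketch using finite-difference gradient descent on a black-box generator (real inversion methods use autodiff and perceptual losses; the function names here are illustrative assumptions):

```python
import numpy as np

def invert(generator, target, z_dim, steps=500, lr=0.1, seed=0):
    """Toy GAN inversion: gradient descent on the squared
    reconstruction error, with gradients estimated by forward
    finite differences so any black-box generator works.
    Minimal sketch, not a practical inversion algorithm.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(z_dim)
    eps = 1e-4
    for _ in range(steps):
        base = np.sum((generator(z) - target) ** 2)
        grad = np.zeros_like(z)
        for i in range(z_dim):
            z_eps = z.copy()
            z_eps[i] += eps
            grad[i] = (np.sum((generator(z_eps) - target) ** 2) - base) / eps
        z -= lr * grad
    return z
```

For a trivially invertible "generator" such as `lambda z: 2 * z`, the recovered latent matches the true one closely; for a real GAN, the loss landscape is far less forgiving, which is why a near-perfect reconstruction is hard to guarantee.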
1 code implementation • 12 Apr 2021 • Jeffrey Wen, Fabian Benitez-Quiroz, Qianli Feng, Aleix Martinez
Leveraging the learned structure of the latent space, we find that moving in this direction corrects many image artifacts and makes the image noticeably more realistic.
no code implementations • ICCV 2021 • Raghudeep Gadde, Qianli Feng, Aleix M. Martinez
Generative models can synthesize photo-realistic images of a single object.
1 code implementation • 12 Nov 2020 • Stuart Synakowski, Qianli Feng, Aleix Martinez
In this paper, we derive an algorithm that infers whether an agent's behavior in a scene is intentional or unintentional from its 3D kinematics, using knowledge of self-propelled motion, Newtonian motion, and the relationship between the two.
no code implementations • 3 Mar 2017 • C. Fabian Benitez-Quiroz, Ramprakash Srinivasan, Qianli Feng, Yan Wang, Aleix M. Martinez
The second track tested the algorithms' ability to recognize emotion categories in images of facial expressions.