Search Results for author: Chenqi Guo

Found 3 papers, 2 papers with code

Why does Knowledge Distillation Work? Rethink its Attention and Fidelity Mechanism

1 code implementation • 30 Apr 2024 • Chenqi Guo, Shiwei Zhong, Xiaofeng Liu, Qianli Feng, Yinglong Ma

Our key finding is that, as data augmentation strength increases, the Intersection over Union (IoU) of attention maps between teacher models decreases, leading to reduced student overfitting and decreased fidelity (a toy IoU computation is sketched below).

Data Augmentation • Knowledge Distillation • +1
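
As a rough illustration of the attention-IoU quantity mentioned in the snippet, here is a minimal Python sketch that binarizes two attention maps at a quantile threshold and computes their IoU. The thresholding scheme, map shapes, and random toy data are assumptions for illustration, not the paper's exact protocol.

import numpy as np

def attention_iou(att_a: np.ndarray, att_b: np.ndarray, quantile: float = 0.5) -> float:
    """Binarize two attention maps at a quantile threshold and return their IoU."""
    mask_a = att_a >= np.quantile(att_a, quantile)
    mask_b = att_b >= np.quantile(att_b, quantile)
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / max(float(union), 1.0)

# Example: two random 7x7 maps standing in for two teachers' attention
# (e.g., Grad-CAM outputs); real maps would come from the models.
rng = np.random.default_rng(0)
print(attention_iou(rng.random((7, 7)), rng.random((7, 7))))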

Structural Similarity: When to Use Deep Generative Models on Imbalanced Image Dataset Augmentation

no code implementations • 8 Mar 2023 • Chenqi Guo, Fabian Benitez-Quiroz, Qianli Feng, Aleix Martinez

Our experiments on imbalanced image dataset classification show that the validation accuracy improvement from such a re-balancing method is related to the image similarity between different classes (a toy SSIM comparison is sketched below).

Data Augmentation • SSIM
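
To illustrate the kind of cross-class similarity measurement the snippet refers to, here is a minimal sketch using scikit-image's SSIM, averaged over all image pairs from two classes. The pairwise averaging, the grayscale toy arrays, and the mean_cross_class_ssim helper are illustrative assumptions; the paper's actual measurement protocol may differ.

import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_cross_class_ssim(class_a: np.ndarray, class_b: np.ndarray) -> float:
    """Average SSIM over all pairs drawn from two image stacks of shape (N, H, W)."""
    scores = [
        ssim(img_a, img_b, data_range=img_a.max() - img_a.min())
        for img_a in class_a
        for img_b in class_b
    ]
    return float(np.mean(scores))

# Toy grayscale data standing in for two classes of equal-size images.
rng = np.random.default_rng(0)
a = rng.random((4, 32, 32))
b = rng.random((4, 32, 32))
print(mean_cross_class_ssim(a, b))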

When do GANs replicate? On the choice of dataset size

1 code implementation • ICCV 2021 • Qianli Feng, Chenqi Guo, Fabian Benitez-Quiroz, Aleix Martinez

With empirical evidence from BigGAN and StyleGAN2 on the CelebA, Flower, and LSUN-Bedroom datasets, we show that dataset size and complexity play an important role in GAN replication and in the perceptual quality of the generated images.
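
As a rough sketch of how replication of training data might be flagged, the following computes each generated sample's nearest-neighbor L2 distance to the training set in flattened pixel space. The pixel-space metric, the 0.5 threshold, and the toy arrays are illustrative assumptions, not the paper's replication criterion.

import numpy as np

def nearest_train_distance(generated: np.ndarray, train: np.ndarray) -> np.ndarray:
    """For each generated image, L2 distance to its closest training image."""
    g = generated.reshape(len(generated), -1)
    t = train.reshape(len(train), -1)
    # Pairwise squared distances via ||g - t||^2 = ||g||^2 - 2 g.t + ||t||^2.
    d2 = (g**2).sum(1)[:, None] - 2.0 * g @ t.T + (t**2).sum(1)[None, :]
    return np.sqrt(np.clip(d2.min(axis=1), 0.0, None))

rng = np.random.default_rng(0)
train = rng.random((100, 8, 8))
gen = rng.random((10, 8, 8))
dists = nearest_train_distance(gen, train)
print((dists < 0.5).mean())  # fraction of samples suspiciously close to training data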
