1 code implementation • 9 Oct 2023 • Ziyao Guo, Kai Wang, George Cazenavette, Hui Li, Kaipeng Zhang, Yang You
The ultimate goal of Dataset Distillation is to synthesize a small synthetic dataset such that a model trained on this synthetic set performs as well as a model trained on the full, real dataset.
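A minimal sketch of that objective as bilevel optimization, assuming a toy linear model and made-up shapes (the names `syn_x`, `syn_y`, and the five inner SGD steps are illustrative, not the paper's setup): the inner loop trains a fresh model on the synthetic images with differentiable SGD, and the outer loop updates the synthetic pixels so the inner-trained model fits real data.

```python
import torch
import torch.nn.functional as F

# Hypothetical setup: 10 learnable synthetic images for a 10-class toy problem.
syn_x = torch.randn(10, 1, 28, 28, requires_grad=True)  # learnable pixels
syn_y = torch.arange(10)                                # one image per class
opt = torch.optim.Adam([syn_x], lr=0.01)

def forward(params, x):
    # Tiny linear classifier; real distillation work uses ConvNets.
    return x.flatten(1) @ params["w"] + params["b"]

for outer_step in range(100):            # optimize the synthetic set
    params = {"w": torch.zeros(784, 10, requires_grad=True),
              "b": torch.zeros(10, requires_grad=True)}
    # Inner loop: a few differentiable SGD steps on the synthetic data.
    for _ in range(5):
        loss = F.cross_entropy(forward(params, syn_x), syn_y)
        grads = torch.autograd.grad(loss, list(params.values()),
                                    create_graph=True)
        params = {k: p - 0.1 * g
                  for (k, p), g in zip(params.items(), grads)}
    # Outer loss: the inner-trained model should fit real data (random
    # stand-ins here; a real run would sample a batch from the true dataset).
    real_x = torch.randn(64, 1, 28, 28)
    real_y = torch.randint(0, 10, (64,))
    meta_loss = F.cross_entropy(forward(params, real_x), real_y)
    opt.zero_grad(); meta_loss.backward(); opt.step()
```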
2 code implementations • CVPR 2023 • George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu
Dataset Distillation aims to distill an entire dataset's knowledge into a few synthetic images.
5 code implementations • CVPR 2022 • George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu
To efficiently obtain the initial and target network parameters for large-scale datasets, we pre-compute and store training trajectories of expert networks trained on the real dataset.
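A hedged sketch of that pre-computation step, assuming a toy MNIST expert (the actual expert architectures, schedules, and file layout differ): train the expert normally and snapshot its parameters after every epoch, so later runs can draw initial and target parameters from the saved trajectory instead of retraining.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumed tiny expert; the paper's experts are ConvNets sized to the dataset.
expert = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
opt = torch.optim.SGD(expert.parameters(), lr=0.01, momentum=0.9)
loader = DataLoader(datasets.MNIST(".", train=True, download=True,
                                   transform=transforms.ToTensor()),
                    batch_size=256, shuffle=True)

trajectory = [[p.detach().clone() for p in expert.parameters()]]  # epoch 0
for epoch in range(20):
    for x, y in loader:
        loss = F.cross_entropy(expert(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    # Snapshot after every epoch; these checkpoints later supply the
    # initial and target parameters for trajectory matching.
    trajectory.append([p.detach().clone() for p in expert.parameters()])

torch.save(trajectory, "expert_trajectory_0.pt")  # hypothetical file name
```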
1 code implementation • 28 May 2021 • George Cazenavette, Manuel Ladron De Guevara
While attention-based transformer networks achieve unparalleled success in nearly all language tasks, the large number of tokens (pixels) found in images, coupled with their quadratic activation memory usage, makes them prohibitively expensive for problems in computer vision.
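A back-of-envelope illustration of the quadratic memory claim (the sizes are assumptions for illustration): the attention-score matrix alone grows as N² in the token count, so moving from a few hundred text tokens to one token per pixel is catastrophic.

```python
# Quadratic attention-memory arithmetic (assumed sizes, fp32 scores).
tokens_text = 512                    # typical sentence length in NLP
tokens_image = 224 * 224             # one token per pixel = 50,176 tokens
bytes_per_score = 4                  # fp32

def attn_matrix_bytes(n):
    return n * n * bytes_per_score   # one head, one layer, batch of 1

print(attn_matrix_bytes(tokens_text) / 2**20)    # ~1 MiB
print(attn_matrix_bytes(tokens_image) / 2**30)   # ~9.4 GiB per head per layer
```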
no code implementations • 28 May 2021 • George Cazenavette, Simon Lucey
Self-supervised feature learning for visual tasks has also seen state-of-the-art success by borrowing the extremely deep, isotropic networks of the transformer models that revolutionized the field of natural language processing.
no code implementations • 10 Mar 2021 • Calvin Murdock, George Cazenavette, Simon Lucey
In comparison to classical shallow representation learning techniques, deep neural networks have achieved superior performance in nearly every application benchmark.
no code implementations • CVPR 2021 • George Cazenavette, Calvin Murdock, Simon Lucey
Despite their unmatched performance, deep neural networks remain susceptible to targeted attacks by nearly imperceptible levels of adversarial noise.
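For illustration only, a minimal FGSM attack (Goodfellow et al., 2015), not this paper's method, showing how such nearly imperceptible noise is generated: a single signed-gradient step of a small budget `eps`; the model and input here are placeholders.

```python
import torch
import torch.nn.functional as F

# Hypothetical classifier and input; any differentiable model works here.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
model.eval()
x = torch.rand(1, 1, 28, 28)   # stand-in image with pixels in [0, 1]
y = torch.tensor([3])          # its true label

eps = 8 / 255                  # "nearly imperceptible" perturbation budget
x_adv = x.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_adv), y)
loss.backward()
# One signed-gradient step, then clamp back to the valid pixel range.
x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```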