1 code implementation • 4 Mar 2024 • Lin Li, Haoyan Guan, Jianing Qiu, Michael Spratling
This work studies the adversarial robustness of VLMs from the novel perspective of the text prompt instead of the extensively studied model weights (frozen in this work).
1 code implementation • 22 Oct 2023 • Jinlai Ning, Michael Spratling
Additionally, we propose a bottom-heavy version of the backbone, which further improves the performance of tiny object detection while also reducing the required number of parameters by almost half.
1 code implementation • 19 Oct 2023 • Lin Li, Yifei Wang, Chawin Sitawarin, Michael Spratling
Based on this, we are able to predict the upper limit of OOD robustness for existing robust training schemes.
no code implementations • 25 Jul 2023 • Maxime Fontana, Michael Spratling, Miaojing Shi
Second, it presents the challenges arising from such a multi-objective optimisation scheme.
no code implementations • 12 Jun 2023 • Lin Li, Jianing Qiu, Michael Spratling
This allows our method to efficiently explore a large search space for a more effective DA policy and evolve the policy as training progresses.
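Evolving a policy as training progresses can be illustrated with a simple evolutionary loop: keep a population of candidate policies, score them, and refill from mutated survivors. A minimal sketch over a one-parameter augmentation policy (a magnitude in [0, 1]); this is illustrative only, not the method of the paper above, and `score_fn` stands in for whatever validation signal guides the search:

```python
import random

def evolve_policy(score_fn, generations=30, pop_size=8, seed=0):
    """Minimal evolutionary search over a one-parameter augmentation
    policy (a magnitude in [0, 1]). score_fn maps a magnitude to a
    fitness score, e.g. validation accuracy of a model trained with it."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score_fn, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the best half
        # refill the population by mutating survivors with small noise
        pop = survivors + [
            min(1.0, max(0.0, m + rng.gauss(0, 0.05))) for m in survivors
        ]
    return max(pop, key=score_fn)
```

Because the population is re-scored every generation, the same loop naturally tracks a score function that changes as the underlying model trains.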
1 code implementation • 24 Mar 2023 • Lin Li, Michael Spratling
We find that during training, the overall adversarial loss is reduced by sacrificing a considerable proportion of training samples, leaving them more vulnerable to adversarial attack; this results in an uneven distribution of adversarial vulnerability across the data.
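Per-sample adversarial vulnerability is easy to quantify for a linear model, where the worst-case L-infinity adversarial loss has a closed form: the optimal attack shifts the logit by eps times the L1 norm of the weights, against the label. A toy sketch (not the paper's measurement procedure; the function name is illustrative):

```python
import math

def adv_loss(w, x, y, eps):
    """Worst-case logistic loss of a linear model on sample (x, y)
    under an L-infinity perturbation of size eps. Exact for a linear
    model: the attack shifts the logit by eps * ||w||_1 against y."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    shift = eps * sum(abs(wi) for wi in w)
    z_worst = z - shift if y == 1 else z + shift
    # logistic loss evaluated at the worst-case logit
    if y == 1:
        return math.log(1 + math.exp(-z_worst))
    return math.log(1 + math.exp(z_worst))
```

Evaluating this per sample makes the unevenness visible directly: low-margin samples carry a far larger worst-case loss than high-margin ones under the same perturbation budget.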
no code implementations • 20 Mar 2023 • Jinlai Ning, Haoyan Guan, Michael Spratling
Tiny object detection has become an active area of research because images with tiny targets are common in several important real-world scenarios.
1 code implementation • 24 Jan 2023 • Lin Li, Michael Spratling
Data augmentation, which is effective at preventing overfitting in standard training, has been observed by many previous works to be ineffective in mitigating overfitting in adversarial training.
1 code implementation • 9 Dec 2022 • Lin Li, Michael Spratling
Adversarial training is widely used to improve the robustness of deep neural networks to adversarial attack.
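The core adversarial training loop alternates two steps: craft a perturbed version of each training example, then update the model on that perturbed example. A minimal sketch using a one-step FGSM attack on a toy logistic-regression model (function names are illustrative; practice uses multi-step attacks such as PGD on deep networks):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_wrt_input(w, x, y):
    # d(loss)/dx for the logistic loss: (p - y) * w_i per feature
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [(p - y) * wi for wi in w]

def fgsm(w, x, y, eps):
    # one-step attack: move each feature by eps in the sign of the gradient
    g = grad_wrt_input(w, x, y)
    return [xi + eps * (1 if gi >= 0 else -1) for xi, gi in zip(x, g)]

def adv_train(data, eps=0.1, lr=0.5, epochs=200):
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            x_adv = fgsm(w, x, y, eps)           # attack step
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)))
            # update step: gradient descent on the adversarial loss
            w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x_adv)]
    return w
```

Training on the perturbed inputs rather than the clean ones is what forces the learned decision boundary to keep a margin of at least eps around the data.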
1 code implementation • 27 Oct 2022 • Nikolay Manchev, Michael Spratling
Initialising the synaptic weights of artificial neural networks (ANNs) with orthogonal matrices is known to alleviate vanishing and exploding gradient problems.
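Orthogonal matrices preserve vector norms, so a signal (or gradient) passed through an orthogonal weight matrix neither shrinks nor blows up. A minimal sketch of one way to build such an initialisation, via Gram-Schmidt on a random Gaussian matrix (illustrative only; deep-learning libraries typically use a QR decomposition instead):

```python
import random

def orthogonal_init(n, seed=0):
    """Return an n x n matrix with orthonormal rows, built by applying
    Gram-Schmidt to a random Gaussian matrix."""
    rng = random.Random(seed)
    a = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    q = []
    for v in a:
        # subtract projections onto the previously accepted directions
        for u in q:
            dot = sum(vi * ui for vi, ui in zip(v, u))
            v = [vi - dot * ui for vi, ui in zip(v, u)]
        norm = sum(vi * vi for vi in v) ** 0.5
        q.append([vi / norm for vi in v])
    return q
```

The resulting matrix W satisfies W Wᵀ = I, which is exactly the property that keeps repeated matrix products from vanishing or exploding.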
no code implementations • 21 Oct 2022 • Haoyan Guan, Michael Spratling
To associate prototypes with class labels, and to extract a background prototype capable of predicting a mask for the background regions of the image, the machinery for extracting and using foreground prototypes is induced to become more discriminative between different classes.
no code implementations • 21 Oct 2022 • Haoyan Guan, Michael Spratling
To overcome this issue, we propose CobNet which utilises information about the background that is extracted from the query images without annotations of those images.
1 code implementation • 15 Jul 2022 • Chaoqin Huang, Haoyan Guan, Aofan Jiang, Ya Zhang, Michael Spratling, Yan-Feng Wang
Inspired by how humans detect anomalies, i.e., by comparing an image in question to normal images, we here leverage registration, an image alignment task that is inherently generalizable across categories, as the proxy task to train a category-agnostic anomaly detection model.
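The compare-to-normal idea can be made concrete with a toy registration step: align the test image to a normal exemplar over a small set of translations and score it by the best-aligned difference. A deliberately simple sketch (translation-only alignment on small grayscale grids; the actual model learns a far richer registration):

```python
def anomaly_score(test, normal, max_shift=2):
    """Score `test` against a `normal` exemplar: search small integer
    translations, and return the lowest mean absolute difference over
    the overlapping region. High score = poorly explained by `normal`."""
    h, w = len(test), len(test[0])
    best = float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            total, count = 0.0, 0
            for i in range(h):
                for j in range(w):
                    i2, j2 = i + dy, j + dx
                    if 0 <= i2 < h and 0 <= j2 < w:
                        total += abs(test[i][j] - normal[i2][j2])
                        count += 1
            best = min(best, total / count)
    return best
```

Because the score only asks "can this image be aligned to a normal one?", the same procedure applies unchanged to any object category, which is the property that makes registration a useful category-agnostic proxy task.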
Ranked #72 on Anomaly Detection on MVTec AD