no code implementations • 9 Apr 2024 • Thomas L. Lee, Sigrid Passano Hellan, Linus Ericsson, Elliot J. Crowley, Amos Storkey
In continual learning (CL) -- where a learner trains on a stream of data -- standard hyperparameter optimisation (HPO) cannot be applied, as the learner never has access to all of the data at once.
1 code implementation • 26 Mar 2024 • Chenhongyi Yang, Zehui Chen, Miguel Espinosa, Linus Ericsson, Zhenyu Wang, Jiaming Liu, Elliot J. Crowley
In this paper, we further adapt the selective scanning process of Mamba to the visual domain, enhancing its ability to learn features from two-dimensional images by (i) a continuous 2D scanning process that improves spatial continuity by ensuring adjacency of tokens in the scanning sequence, and (ii) direction-aware updating which enables the model to discern the spatial relations of tokens by encoding directional information.
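The continuous 2D scanning idea above can be illustrated with a minimal sketch (not the paper's implementation): a "snake" traversal of the image grid alternates row direction, so that consecutive tokens in the scanning sequence are always spatially adjacent.

```python
def continuous_2d_scan(height, width):
    """Return a scan order over an image grid in which every pair of
    consecutive tokens is adjacent (a boustrophedon / snake traversal).
    This is an illustrative sketch of the spatial-continuity property,
    not the model's actual scanning code."""
    order = []
    for row in range(height):
        # Traverse even rows left-to-right and odd rows right-to-left.
        cols = range(width) if row % 2 == 0 else range(width - 1, -1, -1)
        for col in cols:
            order.append((row, col))
    return order

seq = continuous_2d_scan(3, 4)
# Every step in the sequence moves exactly one grid cell, so spatial
# continuity holds across row boundaries as well.
assert all(abs(r1 - r2) + abs(c1 - c2) == 1
           for (r1, c1), (r2, c2) in zip(seq, seq[1:]))
```

A plain row-major raster scan would violate this property at every row boundary, where the sequence jumps from the last column back to the first.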
no code implementations • 15 Nov 2023 • Cian Eastwood, Julius von Kügelgen, Linus Ericsson, Diane Bouchacourt, Pascal Vincent, Bernhard Schölkopf, Mark Ibrahim
Self-supervised representation learning often uses data augmentations to induce some invariance to "style" attributes of the data.
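One way to make the invariance notion concrete is to compare a representation of an input with that of its augmented ("style"-changed) view. The sketch below is a generic illustration, not the paper's method; the embedding values are made up.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors; values near 1
    indicate the representation barely changes under the augmentation,
    i.e. it is (approximately) invariant to that style attribute."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

z_original  = [0.9, 0.1, 0.4]   # embedding of an image (hypothetical values)
z_augmented = [0.8, 0.2, 0.4]   # embedding of a colour-jittered view (hypothetical)
similarity = cosine_similarity(z_original, z_augmented)
```

In practice such similarities would be averaged over a dataset and a distribution of augmentations to score the invariance of a trained encoder.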
no code implementations • 7 Sep 2023 • Linus Ericsson, Da Li, Timothy M. Hospedales
However, the domain shift scenario raises a second more subtle challenge: the difficulty of performing hyperparameter optimisation (HPO) for these adaptation algorithms without access to a labelled validation set.
no code implementations • 14 May 2023 • Raman Dutt, Linus Ericsson, Pedro Sanchez, Sotirios A. Tsaftaris, Timothy Hospedales
We present a comprehensive evaluation of Parameter-Efficient Fine-Tuning (PEFT) techniques for diverse medical image analysis tasks.
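The core idea of PEFT can be shown with a toy illustration (not the evaluated techniques themselves): the large pretrained backbone is frozen, and only a small set of extra parameters is updated, so the trainable fraction of the model is tiny. All parameter counts below are made up for illustration.

```python
# Hypothetical parameter counts: a frozen backbone and a small
# trainable adapter, in the spirit of parameter-efficient fine-tuning.
backbone = {"encoder.weight": 1_000_000, "encoder.bias": 1_000}  # frozen
adapter = {"adapter.down": 4_096, "adapter.up": 4_096}           # trainable

trainable = sum(adapter.values())
total = sum(backbone.values()) + trainable

# Only a fraction of a percent of the parameters are updated.
print(f"trainable fraction: {trainable / total:.4%}")
```

In a real framework this corresponds to disabling gradients on the backbone (e.g. `requires_grad = False` in PyTorch) and passing only the adapter parameters to the optimiser.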
no code implementations • 16 Nov 2022 • Nanqing Dong, Linus Ericsson, Yongxin Yang, Ales Leonardis, Steven McDonagh
In this work, we propose a simple pretext task that provides an effective pre-training for the RPN, towards efficiently improving downstream object detection performance.
1 code implementation • 22 Nov 2021 • Linus Ericsson, Henry Gouk, Timothy M. Hospedales
We show that learned invariances strongly affect downstream task performance and confirm that different downstream tasks benefit from polar opposite (in)variances, leading to performance loss when the standard augmentation strategy is used.
no code implementations • 18 Oct 2021 • Linus Ericsson, Henry Gouk, Chen Change Loy, Timothy M. Hospedales
Self-supervised representation learning methods aim to provide powerful deep feature learning without requiring large annotated datasets, thus alleviating the annotation bottleneck, one of the main barriers to the practical deployment of deep learning today.
1 code implementation • CVPR 2021 • Linus Ericsson, Henry Gouk, Timothy M. Hospedales
We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks, including many-shot and few-shot recognition, object detection, and dense prediction.
no code implementations • 22 Jun 2020 • Linus Ericsson
We show that by learning Bayesian instance weights for the unlabelled data, we can improve the downstream classification accuracy by prioritising the most useful instances.
Ranked #88 on Image Classification on STL-10
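The instance-weighting idea above can be sketched minimally (this is an illustration, not the paper's Bayesian formulation): each unlabelled example gets a learnable scalar weight, and the training objective becomes a weighted average of per-example losses, so useful instances can be prioritised.

```python
import math

def weighted_loss(losses, weight_logits):
    """Weighted average of per-example losses. The instance weights are
    obtained by a softmax over learnable logits so they stay positive
    and sum to one. A hypothetical sketch of instance weighting."""
    m = max(weight_logits)
    exps = [math.exp(l - m) for l in weight_logits]
    z = sum(exps)
    weights = [e / z for e in exps]
    return sum(w * loss for w, loss in zip(weights, losses))

losses = [0.2, 1.5, 0.7]
# Equal logits recover the plain average loss.
uniform = weighted_loss(losses, [0.0, 0.0, 0.0])
# Raising the first logit prioritises the first instance instead.
prioritised = weighted_loss(losses, [5.0, 0.0, 0.0])
```

Here the logits stand in for learnable parameters; in the paper's setting they would be inferred rather than set by hand.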