no code implementations • 30 Apr 2024 • Zhangyong Tang, Tianyang Xu, ZhenHua Feng, XueFeng Zhu, He Wang, Pengcheng Shao, Chunyang Cheng, Xiao-Jun Wu, Muhammad Awais, Sara Atito, Josef Kittler
We propose a new method based on a mixture of experts, namely MoETrack, as a baseline fusion strategy.
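No code accompanies this entry, so as a rough illustration of the general idea only, here is a minimal, generic mixture-of-experts fusion sketch in PyTorch. This is not the authors' MoETrack; the linear experts and the gating network are illustrative placeholders:

    import torch
    import torch.nn as nn

    class MoEFusion(nn.Module):
        """Generic mixture-of-experts fusion: a gating network weights per-expert outputs."""
        def __init__(self, dim, num_experts=3):
            super().__init__()
            self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
            self.gate = nn.Linear(dim, num_experts)

        def forward(self, x):
            weights = torch.softmax(self.gate(x), dim=-1)            # (B, E) gating weights
            outputs = torch.stack([e(x) for e in self.experts], 1)   # (B, E, D) expert outputs
            return (weights.unsqueeze(-1) * outputs).sum(dim=1)      # (B, D) fused representation

    fused = MoEFusion(dim=256)(torch.randn(4, 256))                  # -> shape (4, 256)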
1 code implementation • 31 Mar 2024 • Jiantao Wu, Shentong Mo, Sara Atito, ZhenHua Feng, Josef Kittler, Muhammad Awais
Recently, masked image modeling (MIM), an important self-supervised learning (SSL) method, has drawn attention for its effectiveness in learning data representation from unlabeled data.
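For readers unfamiliar with MIM, the core masking step is easy to sketch. The following is a generic illustration, not this paper's pipeline; the 75% mask ratio is a common choice assumed here, and zeroing masked patches stands in for the learned [MASK] token used in practice:

    import torch

    def random_patch_mask(tokens, mask_ratio=0.75):
        """Mask a uniform random subset of patch tokens, as in generic MIM pipelines."""
        B, N, _ = tokens.shape
        num_mask = int(N * mask_ratio)
        ranks = torch.rand(B, N).argsort(dim=1).argsort(dim=1)  # random per-sample ranking
        mask = ranks < num_mask                                 # True where a patch is masked
        corrupted = tokens.clone()
        corrupted[mask] = 0.0   # a learned [MASK] token would be used in practice
        return corrupted, mask

    tokens = torch.randn(2, 196, 768)             # e.g. 14x14 ViT patch embeddings
    corrupted, mask = random_patch_mask(tokens)
    # training target: reconstruct tokens[mask] from corrupted, e.g. with an MSE loss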
no code implementations • 22 Feb 2024 • Abhijeet Parida, Daniel Capellan-Martin, Sara Atito, Muhammad Awais, Maria J. Ledesma-Carbayo, Marius G. Linguraru, Syed Muhammad Anwar
In this context, we introduce Diverse Concept Modeling (DiCoM), a novel self-supervised training paradigm that leverages a student-teacher framework to learn diverse concepts and, hence, an effective representation of the CXR data.
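As background, student-teacher SSL paradigms of this kind typically keep the teacher as an exponential moving average (EMA) of the student. A minimal sketch of that generic mechanism follows; the momentum value is an illustrative assumption, not taken from the paper:

    import copy
    import torch

    @torch.no_grad()
    def ema_update(teacher, student, momentum=0.996):
        """The teacher tracks the student as an exponential moving average of its weights."""
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(momentum).add_(s, alpha=1.0 - momentum)

    student = torch.nn.Linear(16, 16)
    teacher = copy.deepcopy(student)   # the teacher starts as a gradient-free copy of the student
    for p in teacher.parameters():
        p.requires_grad_(False)
    ema_update(teacher, student)       # called once per training step, after the student update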
no code implementations • 2 Dec 2023 • Jiantao Wu, Shentong Mo, Sara Atito, Josef Kittler, ZhenHua Feng, Muhammad Awais
Recently, self-supervised metric learning has attracted attention for its potential to learn a generic distance function.
no code implementations • 13 Nov 2023 • Umar Marikkar, Sara Atito, Muhammad Awais, Adam Mahdi
Vision Transformers (ViTs) are widely adopted in medical imaging tasks, and some existing efforts have been directed towards vision-language training for Chest X-rays (CXRs).
no code implementations • 11 Sep 2023 • Cong Wu, Xiao-Jun Wu, Josef Kittler, Tianyang Xu, Sara Atito, Muhammad Awais, ZhenHua Feng
Contrastive learning has achieved great success in skeleton-based action recognition.
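For reference, the InfoNCE loss at the heart of most contrastive methods can be written in a few lines. This is the generic formulation, not the skeleton-specific method of the paper; the temperature value is an illustrative assumption:

    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.1):
        """InfoNCE loss: matched rows (z1[i], z2[i]) are positives, all other rows negatives."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature       # (B, B) cosine-similarity matrix
        targets = torch.arange(z1.size(0))       # positives lie on the diagonal
        return F.cross_entropy(logits, targets)

    loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))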
no code implementations • 22 Aug 2023 • Jiantao Wu, Shentong Mo, Muhammad Awais, Sara Atito, ZhenHua Feng, Josef Kittler
Self-supervised pretraining (SSP) has emerged as a popular technique in machine learning, enabling the extraction of meaningful feature representations without labelled data.
1 code implementation • 22 Mar 2023 • Jiantao Wu, Shentong Mo, Muhammad Awais, Sara Atito, Xingshen Zhang, Lin Wang, Xiang Yang
One major challenge of disentanglement learning with variational autoencoders is the trade-off between disentanglement and reconstruction fidelity.
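The classic beta-VAE objective makes this trade-off explicit: the beta weight on the KL term buys disentanglement at the cost of reconstruction fidelity. A minimal sketch of that generic objective (not the method proposed in the paper; beta=4 is a common illustrative value):

    import torch
    import torch.nn.functional as F

    def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
        """beta-VAE objective: beta > 1 pushes towards disentanglement, hurting reconstruction."""
        recon = F.mse_loss(x_recon, x, reduction='sum')                  # reconstruction fidelity
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())     # KL to the unit Gaussian prior
        return recon + beta * kl

    x = torch.rand(4, 784)
    loss = beta_vae_loss(x, x.clone(), torch.zeros(4, 10), torch.zeros(4, 10))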
1 code implementation • 23 Nov 2022 • Sara Atito, Muhammad Awais, Wenwu Wang, Mark D Plumbley, Josef Kittler
Transformers, which were originally developed for natural language processing, have recently generated significant interest in the computer vision and audio communities due to their flexibility in learning long-range relationships.
no code implementations • 23 Nov 2022 • Syed Muhammad Anwar, Abhijeet Parida, Sara Atito, Muhammad Awais, Gustavo Nino, Josef Kittler, Marius George Linguraru
However, traditional diagnostic tool design methods based on supervised learning are burdened by the need for training data annotations, which must be of high quality to achieve good clinical outcomes.
Ranked #1 on Semantic Segmentation on Montgomery County X-ray Set
no code implementations • 29 Aug 2022 • Sara Atito, Syed Muhammad Anwar, Muhammad Awais, Josef Kittler
The availability of large-scale data with high-quality ground-truth labels is a challenge when developing supervised machine learning solutions for the healthcare domain.
1 code implementation • 30 May 2022 • Sara Atito, Muhammad Awais, Josef Kittler
This has motivated research into self-supervised transformer pretraining, which does not need to decode the semantic information conveyed by labels and link it to image properties, but instead focuses directly on extracting a concise representation of the image data that reflects the notion of similarity and is invariant to nuisance factors.
no code implementations • 30 Nov 2021 • Sara Atito, Muhammad Awais, Ammarah Farooq, ZhenHua Feng, Josef Kittler
In this respect, the proposed SSL framework MC-SSL0.0 is a step towards Multi-Concept Self-Supervised Learning (MC-SSL), which goes beyond modelling a single dominant label in an image to effectively utilise the information from all the concepts present in it.
2 code implementations • 8 Apr 2021 • Sara Atito, Muhammad Awais, Josef Kittler
We also observed that SiT performs well in few-shot learning, and showed that it learns useful representations by simply training a linear classifier on top of the learned SiT features.
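Linear probing of this kind is straightforward to sketch. Below is a generic example with a hypothetical frozen backbone standing in for a pretrained SiT encoder; the shapes and learning rate are illustrative assumptions:

    import torch
    import torch.nn as nn

    # Hypothetical frozen backbone standing in for a pretrained SiT encoder (768-d features).
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 768))
    for p in backbone.parameters():
        p.requires_grad_(False)

    probe = nn.Linear(768, 10)                       # only this linear head is trained
    optimizer = torch.optim.SGD(probe.parameters(), lr=0.01)

    images = torch.randn(4, 3, 32, 32)
    labels = torch.randint(0, 10, (4,))
    with torch.no_grad():
        feats = backbone(images)                     # the pretrained features stay fixed
    loss = nn.functional.cross_entropy(probe(feats), labels)
    loss.backward()
    optimizer.step()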