Feature-based Transformer with Incomplete Multimodal Brain Images for Diagnosis of Neurodegenerative Diseases

Benefiting from complementary information, multimodal brain imaging analysis has distinct advantages over single-modal methods for the diagnosis of neurodegenerative diseases such as Alzheimer’s disease. However, in clinical practice multimodal brain images are often incomplete, with data missing due to issues such as patient motion, medical costs, and scanner availability. Most existing methods attempt to build machine learning models that directly estimate the missing images. However, because brain images are high-dimensional, accurate and efficient estimation of missing data is quite challenging, and not all voxels in a brain image are associated with the disease. In this paper, we propose a multimodal feature-based transformer that imputes multimodal brain features from incomplete data for the diagnosis of neurodegenerative disease. The proposed method consists of a feature regression subnetwork and a transformer-based multimodal fusion subnetwork, which together complete the features of missing modalities and perform multimodal disease diagnosis. Unlike previous methods that generate the missing images themselves, our method imputes high-level, disease-related features for multimodal classification. Experiments on the ADNI database with 1,364 subjects show that our method outperforms state-of-the-art methods in disease diagnosis with missing multimodal data.
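To make the two-subnetwork idea concrete, the following is a minimal NumPy sketch of the pipeline described above: a feature regression step estimates the features of a missing modality from an available one, and a simple single-head attention layer fuses the two modality feature tokens for classification. All names, dimensions, and parameters here are illustrative assumptions (initialized randomly rather than learned), not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # illustrative per-modality feature dimension

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def impute_missing(mri_feat, W, b):
    """Feature regression subnetwork (sketched as one linear map):
    estimate the missing modality's features from the available MRI features."""
    return mri_feat @ W + b

def attention_fuse(tokens, Wq, Wk, Wv):
    """Single-head self-attention over the modality tokens, then mean-pool,
    standing in for the paper's transformer-based fusion subnetwork."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    A = softmax(Q @ K.T / np.sqrt(D), axis=-1)
    return (A @ V).mean(axis=0)

# Hypothetical parameters; in the actual method these would be trained.
W_reg, b_reg = 0.1 * rng.normal(size=(D, D)), np.zeros(D)
Wq, Wk, Wv = (0.1 * rng.normal(size=(D, D)) for _ in range(3))
w_cls, b_cls = 0.1 * rng.normal(size=D), 0.0

def diagnose(mri_feat, pet_feat=None):
    """Classify from MRI features plus (possibly imputed) PET features."""
    if pet_feat is None:  # missing modality: impute its features, not its image
        pet_feat = impute_missing(mri_feat, W_reg, b_reg)
    tokens = np.stack([mri_feat, pet_feat])  # (2, D) modality tokens
    fused = attention_fuse(tokens, Wq, Wk, Wv)
    return 1.0 / (1.0 + np.exp(-(fused @ w_cls + b_cls)))  # disease probability
```

The key design point the sketch mirrors is that imputation happens in the low-dimensional feature space (here a D-dimensional vector) rather than in voxel space, which is what makes completing the missing modality tractable.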
