Early detection of Alzheimer's Disease (AD) is crucial to ensure timely interventions and optimize treatment outcomes for patients. While integrating multi-modal neuroimaging data, such as MRI and PET, has shown great promise, little research has addressed how to effectively handle incomplete multi-modal image datasets during integration. To this end, we propose a deep learning-based framework that employs Mutual Knowledge Distillation (MKD) to jointly model different sub-cohorts based on their respective available image modalities. In MKD, the model with more modalities (e.g., MRI and PET) is treated as a teacher, while the model with fewer modalities (e.g., MRI only) is treated as a student. Our proposed MKD framework includes three key components. First, we design a student-oriented teacher model, the Student-oriented Multi-modal Teacher (SMT), through multi-modal information disentanglement. Second, we train the student model by not only minimizing its classification errors but also learning from the SMT teacher. Third, we update the teacher model via transfer learning from the student's feature extractor, because the student model is trained on more samples. Evaluations on Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets highlight the effectiveness of our method. Our work demonstrates the potential of AI to address the challenges of incomplete multi-modal neuroimage datasets, opening new avenues for advancing early AD detection and treatment strategies.
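The abstract does not spell out the distillation objective, but the second component (the student minimizing its classification error while also learning from the SMT teacher) follows the standard knowledge-distillation recipe: a cross-entropy term on hard labels plus a temperature-softened KL term against the teacher's outputs. The sketch below is a minimal NumPy illustration of that generic combined loss, not the paper's exact formulation; the function names, temperature `T`, and weight `alpha` are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic KD loss: (1 - alpha) * CE(student, labels)
    + alpha * T^2 * KL(teacher_soft || student_soft).

    The T^2 factor keeps the soft-target gradients on the same
    scale as the hard-label gradients (standard KD practice).
    """
    n = len(labels)
    # Hard-label cross-entropy on the student's predictions
    p_student = softmax(student_logits)
    ce = -np.log(p_student[np.arange(n), labels] + 1e-12).mean()
    # Soft-label KL divergence between softened distributions
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1).mean()
    return (1 - alpha) * ce + alpha * (T ** 2) * kl
```

When the teacher and student agree exactly, the KL term vanishes and only the hard-label cross-entropy remains; as `alpha` increases, the student relies more on the teacher's soft targets than on the labels.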