IEEE Sensors Journal (Volume 22), Date of Publication: 19 September 2022

Multimodal Multitask Neural Network for Motor Imagery Classification With EEG and fNIRS Signals

Qun He; Lufeng Feng; Guoqian Jiang*; and Ping Xie*

Abstract

Brain–computer interfaces (BCIs) based on motor imagery (MI) can control external applications by decoding brain physiological signals such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Traditional unimodal MI decoding methods cannot achieve satisfactory classification performance because of the limited representation ability of EEG or fNIRS signals alone. Different brain signals are typically complementary, with different sensitivities to different MI patterns. To improve the recognition rate and generalization ability of MI decoding, we propose a novel end-to-end multimodal multitask neural network (M2NN) model that fuses EEG and fNIRS signals. The M2NN model integrates a spatial–temporal feature extraction module, a multimodal feature fusion module, and a multitask learning (MTL) module. Specifically, the MTL module comprises two learning tasks: a main classification task for MI and an auxiliary task based on deep metric learning. The approach was evaluated on a public multimodal dataset, and the experimental results show that M2NN improved classification accuracy by 8.92%, 6.97%, and 8.62% over the multitask unimodal EEG signal model (MEEG), the multitask unimodal HbR signal model (MHbR), and the multimodal single-task model (MDNN), respectively. The classification accuracies of the multitask methods MEEG, MHbR, and M2NN were 4.8%, 4.37%, and 8.62% higher than those of the single-task methods EEG, HbR, and MDNN, respectively. M2NN achieved the best classification performance of the six methods, with an average accuracy across 29 subjects of 82.11% ± 7.25%. These results verify the effectiveness of multimodal fusion and MTL, and show that M2NN outperforms the baseline and state-of-the-art (SOTA) methods.
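The multitask objective described in the abstract (a main MI classification loss combined with a deep-metric-learning auxiliary loss) can be sketched as a weighted sum of the two task losses. The abstract does not specify the exact loss functions or weighting, so the contrastive form of the metric loss and the weight `lam` below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cross_entropy(logits, labels):
    # Main task: softmax cross-entropy over MI class logits.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def contrastive_metric_loss(embeddings, labels, margin=1.0):
    # Auxiliary task (illustrative): pull same-class embeddings together,
    # push different-class embeddings at least `margin` apart.
    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(embeddings[i] - embeddings[j])
            if labels[i] == labels[j]:
                total += d ** 2          # same class: penalize distance
            else:
                total += max(0.0, margin - d) ** 2  # different class: enforce margin
            count += 1
    return total / max(count, 1)

def multitask_loss(logits, embeddings, labels, lam=0.1):
    # Weighted sum of the main classification loss and the auxiliary
    # metric-learning loss; `lam` is a hypothetical trade-off weight.
    return cross_entropy(logits, labels) + lam * contrastive_metric_loss(embeddings, labels)
```

In an end-to-end model such as M2NN, both losses would be computed on the fused EEG/fNIRS features and backpropagated jointly, so the shared feature extractor is shaped by the classification target and the embedding geometry at the same time.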
