Abstract
Motor imagery (MI)-based electroencephalography (EEG) is a prominent paradigm in the brain-computer interface (BCI) field and is frequently applied in neural rehabilitation and gaming due to its accessibility and reliability. Despite extensive research on MI EEG classification algorithms, a notable deficiency remains: their performance is often optimal only in subject-specific or dataset-specific scenarios, which undermines their generalization capability and restricts the practical utility of BCI systems in real-world contexts. To address this limitation, this study introduces a discriminative adversarial network based on spatial-temporal-graph fusion (STG-DAN), which aims to learn features that are not only class-discriminative but also domain-invariant. Specifically, the feature extraction module ensures feature discriminativeness by fusing spatial-temporal and graph-related features, while the domain alignment module aligns both the global domain and local subdomains. The two modules are integrated into a single adversarial learning framework to facilitate the acquisition of domain-invariant features. Evaluations on two publicly available datasets, BCI Competition IV 2a and OpenBMI, confirm the superiority of the proposed model (average accuracy of 62.94% and 73.01%, respectively, in the cross-subject setting). In the cross-dataset setting, it also outperforms several state-of-the-art algorithms, attesting to the effectiveness of STG-DAN.
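To make the abstract's "adversarial learning framework" concrete, the sketch below shows the standard gradient-reversal (DANN-style) pattern that such frameworks typically build on: a shared feature extractor feeds both a class-discriminative classifier head and a domain discriminator, with gradients from the discriminator reversed so the extractor learns domain-invariant features. All module names, layer sizes, and the `lamb` trade-off parameter here are illustrative assumptions, not the actual STG-DAN architecture described in the paper.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on the
    backward pass, so the feature extractor is trained to *confuse* the
    domain discriminator while the discriminator tries to tell domains apart."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None


class AdversarialNet(nn.Module):
    """Hypothetical minimal adversarial network: not the paper's model."""

    def __init__(self, in_dim=64, feat_dim=32, n_classes=4, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        # Stand-in for the spatial-temporal-graph feature extraction module
        self.extractor = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Class-discriminative head (e.g., the four MI classes in BCI IV 2a)
        self.classifier = nn.Linear(feat_dim, n_classes)
        # Domain discriminator: source vs. target domain
        self.discriminator = nn.Linear(feat_dim, 2)

    def forward(self, x):
        feat = self.extractor(x)
        class_logits = self.classifier(feat)
        # Reversed gradients flow back into the extractor from this branch
        domain_logits = self.discriminator(GradReverse.apply(feat, self.lamb))
        return class_logits, domain_logits
```

In training, the classification loss on labeled source data and the domain-discrimination loss on source plus target data are summed and backpropagated together; the reversal layer is what turns the discriminator's objective into a domain-alignment signal for the extractor.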