Quantifying spatial domain explanations in BCI using Earth mover's distance

dc.contributor.author Rajpura, Param
dc.contributor.author Cecotti, Hubert
dc.contributor.author Meena, Yogesh Kumar
dc.contributor.other International Joint Conference on Neural Networks (IJCNN 2024)
dc.coverage.spatial Japan
dc.date.accessioned 2024-10-08T15:06:55Z
dc.date.available 2024-10-08T15:06:55Z
dc.date.issued 2024-06-30
dc.identifier.citation Rajpura, Param; Cecotti, Hubert and Meena, Yogesh Kumar, "Quantifying spatial domain explanations in BCI using Earth mover's distance", in the International Joint Conference on Neural Networks (IJCNN 2024), Yokohama, JP, Jun. 30-Jul. 5, 2024.
dc.identifier.uri https://doi.org/10.1109/IJCNN60899.2024.10650619
dc.identifier.uri https://repository.iitgn.ac.in/handle/123456789/10646
dc.description.abstract Brain-computer interface (BCI) systems facilitate unique communication between humans and computers, benefiting severely disabled individuals. Despite decades of research, BCIs are not fully integrated into clinical and commercial settings. It is crucial to assess and explain BCI performance, offering clear explanations to potential users so as to avoid frustration when the system does not work as expected. This work investigates the efficacy of different deep learning and Riemannian geometry-based classification models for motor imagery (MI) based BCI using electroencephalography (EEG). We then propose an optimal transport theory-based approach using the earth mover's distance (EMD) to quantify how well a feature relevance map agrees with domain knowledge from neuroscience. To this end, we utilize explainable AI (XAI) techniques to generate feature relevance in the spatial domain and identify the channels important for model outcomes. Three state-of-the-art models are implemented: 1) a Riemannian geometry-based classifier, 2) EEGNet, and 3) EEG Conformer. The observed trend in model accuracy across these architectures on the dataset correlates with the proposed feature relevance metric. Models with diverse architectures perform significantly better when trained on channels relevant to motor imagery than on channels chosen by data-driven selection. This work highlights the need for interpretability and for metrics beyond accuracy, and underscores the value of combining domain knowledge with data-driven approaches and quantified model interpretations to create reliable and robust brain-computer interfaces (BCIs).
dc.description.statementofresponsibility by Param Rajpura, Hubert Cecotti and Yogesh Kumar Meena
dc.language.iso en_US
dc.publisher Institute of Electrical and Electronics Engineers (IEEE)
dc.subject Explainable AI
dc.subject Brain-computer interface
dc.subject Motor imagery
dc.subject Optimal transport theory
dc.title Quantifying spatial domain explanations in BCI using Earth mover's distance
dc.type Conference Paper
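
The abstract describes using the earth mover's distance to compare a model's spatial feature relevance map with neuroscience domain knowledge. Below is a minimal illustrative sketch (not the authors' implementation), assuming the POT library (Python Optimal Transport) and hypothetical channel positions, relevance values, and motor-imagery prior, of how such a comparison could be computed:

# Minimal sketch: EMD between an XAI relevance map over EEG channels and a
# neuroscience prior. Channel names, positions, and values are hypothetical.
import numpy as np
import ot  # Python Optimal Transport (pip install pot)

# Hypothetical 2-D scalp positions (x, y) for a few EEG channels.
channels = ["C3", "Cz", "C4", "Fz", "Pz"]
positions = np.array([[-0.5, 0.0], [0.0, 0.0], [0.5, 0.0],
                      [0.0, 0.5], [0.0, -0.5]])

# Feature relevance per channel from an XAI method (e.g., absolute saliency),
# normalised to a probability distribution.
relevance = np.array([0.30, 0.25, 0.28, 0.10, 0.07])
relevance = relevance / relevance.sum()

# Domain-knowledge prior: relevance expected over motor-cortex channels for MI.
prior = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
prior = prior / prior.sum()

# Ground cost = Euclidean distance between channel locations on the scalp.
cost = ot.dist(positions, positions, metric="euclidean")

# Earth mover's distance between the two distributions; smaller values mean
# the explanation places relevance closer to the expected motor-imagery sites.
emd = ot.emd2(relevance, prior, cost)
print(f"EMD between relevance map and MI prior: {emd:.4f}")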