Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space


dc.contributor.author Rajpura, Param
dc.contributor.author Cecotti, Hubert
dc.contributor.author Meena, Yogesh Kumar
dc.coverage.spatial United Kingdom
dc.date.accessioned 2024-08-23T10:25:16Z
dc.date.available 2024-08-23T10:25:16Z
dc.date.issued 2024-08
dc.identifier.citation Rajpura, Param; Cecotti, Hubert and Meena, Yogesh Kumar, "Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space", Journal of Neural Engineering, DOI: 10.1088/1741-2552/ad6593, vol. 21, no. 4, Aug. 2024.
dc.identifier.issn 1741-2560
dc.identifier.issn 1741-2552
dc.identifier.uri https://doi.org/10.1088/1741-2552/ad6593
dc.identifier.uri https://repository.iitgn.ac.in/handle/123456789/10362
dc.description.abstract Objective. This review paper provides an integrated perspective of Explainable Artificial Intelligence (XAI) techniques applied to Brain–Computer Interfaces (BCIs). BCIs use predictive models to interpret brain signals for various high-stake applications. However, achieving explainability in these complex models is challenging, as it often comes at the cost of accuracy. Trust in these models can be established by incorporating reasoning or causal relationships from domain experts. The field of XAI has emerged to address the need for explainability across various stakeholders, but there is a lack of an integrated perspective in the XAI for BCI (XAI4BCI) literature. It is necessary to differentiate key concepts like explainability, interpretability, and understanding, which are often used interchangeably in this context, and to formulate a comprehensive framework. Approach. To understand the need for XAI in BCI, we pose six key research questions for a systematic review and meta-analysis, encompassing its purposes, applications, usability, and technical feasibility. We employ the PRISMA (preferred reporting items for systematic reviews and meta-analyses) methodology to review (n = 1246) and analyse (n = 84) studies published from 2015 onwards for key insights. Main results. The results highlight that current research primarily focuses on interpretability for developers and researchers, aiming to justify outcomes and enhance model performance. We discuss the unique approaches, advantages, and limitations of XAI4BCI from the literature, drawing insights from philosophy, psychology, and the social sciences. We propose a design space for XAI4BCI, considering the evolving need to visualise and investigate predictive model outcomes customised for various stakeholders in the BCI development and deployment lifecycle. Significance. This paper is the first to focus solely on reviewing XAI4BCI research articles. The findings of this systematic review and meta-analysis, together with the proposed design space, prompt important discussions on establishing standards for BCI explanations, highlighting current limitations and guiding the future of XAI in BCI.
dc.description.statementofresponsibility by Param Rajpura, Hubert Cecotti and Yogesh Kumar Meena
dc.format.extent vol. 21, no. 4
dc.language.iso en_US
dc.publisher IOP Publishing
dc.title Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space
dc.type Article
dc.relation.journal Journal of Neural Engineering


Files in this item


There are no files associated with this item.
