Learning robust deep visual representations from EEG brain recordings

dc.contributor.author Singh, Prajwal
dc.contributor.author Dalal, Dwip
dc.contributor.author Vashishtha, Gautam
dc.contributor.author Miyapuram, Krishna Prasad
dc.contributor.author Raman, Shanmuganathan
dc.coverage.spatial United States of America
dc.date.accessioned 2023-11-08T15:16:15Z
dc.date.available 2023-11-08T15:16:15Z
dc.date.issued 2023-10
dc.identifier.citation Singh, Prajwal; Dalal, Dwip; Vashishtha, Gautam; Miyapuram, Krishna Prasad and Raman, Shanmuganathan, "Learning robust deep visual representations from EEG brain recordings", arXiv, Cornell University Library, DOI: 10.48550/arXiv.2310.16532, Oct. 2023.
dc.identifier.issn 2331-8422
dc.identifier.uri https://doi.org/10.48550/arXiv.2310.16532
dc.identifier.uri https://repository.iitgn.ac.in/handle/123456789/9407
dc.description.abstract Decoding the human brain has been a long-standing goal of neuroscientists and Artificial Intelligence researchers alike. Reconstruction of visual images from brain Electroencephalography (EEG) signals has garnered considerable interest due to its applications in brain-computer interfacing. This study proposes a two-stage method: the first stage obtains EEG-derived features for robust learning of deep representations, and the second utilizes the learned representations for image generation and classification. We demonstrate the generalizability of our feature extraction pipeline across three different datasets using deep-learning architectures with supervised and contrastive learning methods. We have performed a zero-shot EEG classification task to further support the generalizability claim. We observed that a subject-invariant, linearly separable visual representation learned from EEG data alone in a unimodal setting gives better k-means accuracy than a joint representation learned between EEG and images. Finally, we propose a novel framework to transform unseen images into the EEG space and reconstruct them approximately, showcasing the potential for image reconstruction from EEG signals. Our proposed EEG-based image synthesis method shows 62.9% and 36.13% inception score improvements on the EEGCVPR40 and ThoughtViz datasets respectively, surpassing state-of-the-art GAN-based performance.
dc.description.statementofresponsibility by Prajwal Singh, Dwip Dalal, Gautam Vashishtha, Krishna Prasad Miyapuram and Shanmuganathan Raman
dc.publisher Cornell University Library
dc.title Learning robust deep visual representations from EEG brain recordings
dc.type Article
dc.relation.journal arXiv
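The abstract above outlines a two-stage pipeline: contrastive (or supervised) learning of EEG representations, followed by downstream evaluation (k-means clustering, classification) and image synthesis. The sketch below is purely illustrative and is not the authors' released code; the LSTM encoder design, the triplet-margin objective, the 128-channel/440-sample input shape, and the 40-class evaluation are assumptions chosen only to mirror an EEGCVPR40-style setup.

# Illustrative sketch (assumptions noted above, not the paper's implementation):
# a contrastive EEG encoder trained with a triplet-margin objective, plus a
# k-means clustering evaluation of the learned features.
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans


class EEGEncoder(nn.Module):
    """Maps an EEG window (time x channels) to a normalized feature vector."""

    def __init__(self, n_channels=128, hidden=128, feat_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=1, batch_first=True)
        self.proj = nn.Linear(hidden, feat_dim)

    def forward(self, x):            # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)     # h: (num_layers, batch, hidden)
        return nn.functional.normalize(self.proj(h[-1]), dim=-1)


def kmeans_accuracy(features, labels, n_clusters):
    """Cluster features with k-means and score against ground-truth labels
    using the optimal (Hungarian) cluster-to-class assignment."""
    pred = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    cost = np.zeros((n_clusters, n_clusters), dtype=np.int64)
    for p, t in zip(pred, labels):
        cost[p, t] += 1
    row, col = linear_sum_assignment(-cost)   # maximise matched counts
    return cost[row, col].sum() / len(labels)


if __name__ == "__main__":
    torch.manual_seed(0)
    encoder = EEGEncoder()
    criterion = nn.TripletMarginLoss(margin=0.2)
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    # Placeholder batch: anchor/positive would share an image class, negative would not.
    anchor = torch.randn(32, 440, 128)       # (batch, time, channels)
    positive = torch.randn(32, 440, 128)
    negative = torch.randn(32, 440, 128)

    loss = criterion(encoder(anchor), encoder(positive), encoder(negative))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Evaluate the (here untrained) features with k-means accuracy.
    with torch.no_grad():
        feats = encoder(torch.randn(200, 440, 128)).numpy()
    fake_labels = np.random.randint(0, 40, size=200)   # e.g. 40 classes, as in EEGCVPR40
    print("k-means accuracy:", kmeans_accuracy(feats, fake_labels, n_clusters=40))

On real data, the anchor/positive pairs would come from EEG trials of images sharing a class, and the same learned features would feed the image-generation and classification stages; the random tensors here only verify that the sketch runs end to end.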

