MultiFusionNet: multilayer multimodal fusion of deep neural networks for chest X-ray image classification

dc.contributor.author Agarwal, Saurabh
dc.contributor.author Arya, K. V.
dc.contributor.author Meena, Yogesh Kumar
dc.coverage.spatial United Kingdom
dc.date.accessioned 2024-08-09T10:31:53Z
dc.date.available 2024-08-09T10:31:53Z
dc.date.issued 2024-07
dc.identifier.citation Agarwal, Saurabh; Arya, K. V. and Meena, Yogesh Kumar, "MultiFusionNet: multilayer multimodal fusion of deep neural networks for chest X-ray image classification", Soft Computing, DOI: 10.1007/s00500-024-09901-x, Jul. 2024.
dc.identifier.issn 1432-7643
dc.identifier.issn 1433-7479
dc.identifier.uri https://doi.org/10.1007/s00500-024-09901-x
dc.identifier.uri https://repository.iitgn.ac.in/handle/123456789/10294
dc.description.abstract Chest X-ray imaging is a critical diagnostic tool for identifying pulmonary diseases. However, manual interpretation of these images is time-consuming and error-prone. Automated systems utilizing convolutional neural networks (CNNs) have shown promise in improving the accuracy and efficiency of chest X-ray image classification. While previous work has mainly focused on using feature maps from the final convolution layer, there is a need to explore the benefits of leveraging additional layers for improved disease classification. Extracting robust features from limited medical image datasets remains a critical challenge. In this paper, we propose a novel deep learning-based multilayer multimodal fusion model that emphasizes extracting features from different layers and fusing them. Our disease detection model considers the discriminatory information captured by each layer. Furthermore, we propose the fusion of different-sized feature maps (FDSFM) module to effectively merge feature maps from diverse layers. The proposed model achieves significantly higher accuracies of 97.21% for three-class and 99.60% for two-class classification. The proposed multilayer multimodal fusion model, along with the FDSFM module, holds promise for accurate disease classification and can also be extended to other diseases visible in chest X-ray images.
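The abstract describes merging feature maps of different spatial sizes drawn from multiple CNN layers. The record does not disclose how the FDSFM module is implemented, so the following is only a minimal sketch of one plausible reading: average-pool each map down to the smallest spatial size present, then concatenate along the channel axis. All function names, layer shapes, and the pooling choice here are assumptions for illustration, not the authors' actual module.

```python
import numpy as np

def avg_pool_to(fmap, out_h, out_w):
    """Downsample a (C, H, W) feature map to (C, out_h, out_w) by
    averaging equal-sized blocks (assumes H and W are multiples of
    out_h and out_w, as is typical between CNN stages)."""
    c, h, w = fmap.shape
    fh, fw = h // out_h, w // out_w
    return fmap.reshape(c, out_h, fh, out_w, fw).mean(axis=(2, 4))

def fuse_feature_maps(fmaps):
    """Pool every map to the smallest spatial size in the list, then
    concatenate along the channel axis -- one hypothetical way to fuse
    different-sized feature maps from several layers."""
    target_h = min(f.shape[1] for f in fmaps)
    target_w = min(f.shape[2] for f in fmaps)
    pooled = [avg_pool_to(f, target_h, target_w) for f in fmaps]
    return np.concatenate(pooled, axis=0)

# Feature maps from three hypothetical CNN stages (channels grow as
# spatial resolution shrinks, as in most backbone architectures).
shallow = np.random.rand(64, 32, 32)
middle = np.random.rand(128, 16, 16)
deep = np.random.rand(256, 8, 8)

fused = fuse_feature_maps([shallow, middle, deep])
print(fused.shape)  # (448, 8, 8): 64 + 128 + 256 channels at 8x8
```

A classifier head could then operate on the fused tensor, letting the shallow layers' fine-grained texture cues and the deep layers' semantic features contribute jointly, which is the motivation the abstract gives for multilayer fusion.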
dc.description.statementofresponsibility by Saurabh Agarwal, K. V. Arya and Yogesh Kumar Meena
dc.language.iso en_US
dc.publisher Springer
dc.subject Medical image processing
dc.subject Convolutional neural network (CNN)
dc.subject Multilayer fusion model
dc.subject Multimodal fusion model
dc.subject Disease classifications
dc.subject Chest X-ray image
dc.title MultiFusionNet: multilayer multimodal fusion of deep neural networks for chest X-ray image classification
dc.type Article
dc.relation.journal Soft Computing

