Source printer identification from document images acquired using smartphone

dc.contributor.author Joshi, Sharad
dc.contributor.author Saxena, Suraj
dc.contributor.author Khanna, Nitin
dc.coverage.spatial United States of America
dc.date.accessioned 2024-06-27T12:49:35Z
dc.date.available 2024-06-27T12:49:35Z
dc.date.issued 2024-08
dc.identifier.citation Joshi, Sharad; Saxena, Suraj and Khanna, Nitin, "Source printer identification from document images acquired using smartphone", Journal of Information Security and Applications, DOI: 10.1016/j.jisa.2024.103804, vol. 84, Aug. 2024.
dc.identifier.issn 2214-2126
dc.identifier.uri https://doi.org/10.1016/j.jisa.2024.103804
dc.identifier.uri https://repository.iitgn.ac.in/handle/123456789/10168
dc.description.abstract Vast volumes of printed documents continue to be used for a wide range of applications, both important and trivial. Such applications often rely on information provided in the form of printed text documents, whose integrity verification poses a challenge due to time constraints and lack of resources. Source printer identification provides essential information about the origin and integrity of a printed document in a fast and cost-effective manner. Even when fraudulent documents are identified, information about their origin can help prevent future fraud. If a smartphone camera replaces the scanner in the document acquisition process, document forensics becomes more economical, user-friendly, and even faster in many applications where remote and distributed analysis is beneficial. Building on existing methods, we propose to learn a single CNN model from the fusion of letter images and their printer-specific noise residuals. In the absence of any publicly available dataset, we created a new dataset consisting of 2250 document images of text documents printed by eighteen printers and acquired by a smartphone camera under five acquisition settings. The proposed method achieves 98.42% document classification accuracy using images of the letter ‘e’ under a 5 × 2 cross-validation approach. Further, when tested on about half a million letters of all types, it achieves 90.33% letter and 98.01% document classification accuracy, respectively, highlighting its ability to learn a discriminative model without dependence on a single letter type. Classification accuracies also remain encouraging under various acquisition settings, including low illumination and changes in the angle between the document and camera planes.
dc.description.statementofresponsibility by Sharad Joshi, Suraj Saxena and Nitin Khanna
dc.format.extent vol. 84
dc.language.iso en_US
dc.publisher Elsevier
dc.title Source printer identification from document images acquired using smartphone
dc.type Article
dc.relation.journal Journal of Information Security and Applications

