Connecting the dots: Toward accountable machine-learning printer attribution methods
Published in: Journal of Visual Communication and Image Representation, Vol. 53, pp. 257–272
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.05.2018
Summary:
• A novel human-interpretable method for laser printer source attribution is proposed.
• The method automatically indicates regions of interest on an investigated document.
• Denser and more discriminative features are probabilistically selected for each printer.
• Features are ranked, reduced, and dropped efficiently using feature importance.
• The paper outlines multiple directions for further research.
Digital forensics is rapidly evolving as a direct consequence of the adoption of machine-learning methods combined with ever-growing amounts of data. Although these methods yield more consistent and accurate results, they may face adoption hindrances in practice if their results are not presented in a human-interpretable form. In this paper, we show how human-interpretable (a.k.a. accountable) extensions can enhance existing algorithms to aid human experts, by introducing a new method for the source printer attribution problem. We leverage the ability of the recently proposed Convolutional Texture Gradient Filter (CTGF) algorithm to capture local printing imperfections, introducing a new method that maps and highlights important attribution features directly onto the investigated printed document. Supported by Random Forest classifiers, we isolate and rank the features that are pivotal for differentiating one printer from the others, and back-project those features onto the investigated document, giving analysts further evidence about the attribution process.
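The ranking step the abstract describes can be sketched with a Random Forest: train a one-vs-rest classifier for a given printer and sort the features by impurity-based importance. The snippet below is a minimal illustration only, using scikit-learn and simulated data in place of real CTGF features; the dimensions, the synthetic signal, and the one-vs-rest setup are assumptions for demonstration, not the authors' actual pipeline.

```python
# Hypothetical sketch of per-printer feature ranking, not the paper's
# exact method. CTGF feature extraction is replaced by simulated data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_docs, n_features = 200, 30

# Simulated feature vectors for documents printed on 4 printers.
X = rng.normal(size=(n_docs, n_features))
printer = rng.integers(0, 4, size=n_docs)

# Make the first three features informative for printer 0, standing in
# for real, printer-specific printing imperfections.
X[printer == 0, :3] += 1.5

# One-vs-rest: which features separate printer 0 from all others?
y = (printer == 0).astype(int)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank features by importance; top-ranked features would be the
# candidates to back-project onto the investigated document.
ranking = np.argsort(clf.feature_importances_)[::-1]
print(ranking[:5])
```

In this sketch, the informative features receive importance well above the uniform baseline of 1/`n_features`, so they surface at the top of the ranking; in the actual method, such top-ranked features are the ones mapped back onto regions of the printed document.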
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2018.04.002