Cautious classification with nested dichotomies and imprecise probabilities

Bibliographic Details
Published in: Soft computing (Berlin, Germany), Vol. 21, no. 24, pp. 7447-7462
Main Authors: Yang, Gen; Destercke, Sébastien; Masson, Marie-Hélène
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg, 01.12.2017 (Springer Nature B.V; Springer Verlag)
Summary: In some applications of machine learning and information retrieval (e.g. medical diagnosis, image recognition, pre-classification), it can be preferable to provide less informative but more reliable predictions. This can be done by making partial predictions in the form of class subsets when the available information is insufficient to identify a single class reliably. Imprecise probabilistic approaches offer convenient tools for learning models from which such cautious predictions can be produced. However, the learning and inference processes of these models are computationally harder than their precise counterparts. In this paper, we introduce and study a particular binary decomposition strategy, nested dichotomies, which offers computational advantages in both learning (due to the binarization process) and inference (due to the decomposition strategy). We show experimentally that these computational advantages do not lower the performance of the classifiers, and can even improve it when the class space has some structure.
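As a rough illustration of the nested-dichotomies idea mentioned in the summary (a sketch of the general decomposition scheme, not the authors' imprecise-probabilistic method): the class set is split recursively into two subsets, a binary model scores each split, and the probability of a class is the product of the branch probabilities along the path from the root to that class. The tree shape and branch probabilities below are illustrative stand-ins for trained binary classifiers.

```python
def class_probabilities(node, p=1.0, out=None):
    """Accumulate P(class) = product of branch probabilities along the path."""
    if out is None:
        out = {}
    if isinstance(node, str):          # leaf: a single class label
        out[node] = p
        return out
    left, right, p_left = node         # internal node: (left, right, P(left branch))
    class_probabilities(left, p * p_left, out)
    class_probabilities(right, p * (1.0 - p_left), out)
    return out

# A nested dichotomy over four classes {a, b, c, d}:
# the root splits {a, b} vs {c, d}, then each pair is split again.
tree = (("a", "b", 0.7), ("c", "d", 0.4), 0.6)

probs = class_probabilities(tree)
print(probs)  # P(a) = 0.6 * 0.7 = 0.42, and the four values sum to 1
```

Because each class appears at exactly one leaf, inference needs only the binary models on one root-to-leaf path per class, which is the source of the computational advantage the abstract refers to.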
ISSN: 1432-7643, 1433-7479
DOI: 10.1007/s00500-016-2287-7