Confidence Measures for Deep Learning in Domain Adaptation

Bibliographic Details
Published in: Applied Sciences, Vol. 9; no. 11; p. 2192
Main Authors: Bonechi, Simone; Andreini, Paolo; Bianchini, Monica; Pai, Akshay; Scarselli, Franco
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.06.2019
Summary: In recent years, Deep Neural Networks (DNNs) have led to impressive results in a wide variety of machine learning tasks, typically relying on the existence of huge amounts of supervised data. However, in many applications (e.g., biomedical image analysis), gathering large sets of labeled data can be very difficult and costly. Unsupervised domain adaptation exploits data from a source domain, where annotations are available, to train a model that also generalizes to a target domain, where labels are unavailable. Recent research has shown that Generative Adversarial Networks (GANs) can be successfully employed for domain adaptation, although deciding when to stop training is a major concern for GANs. In this work, we propose confidence measures that can be used to stop GAN training early, and we show how such measures can be employed to predict the reliability of the network output. The effectiveness of the proposed approach has been tested on two domain adaptation tasks, with very promising results.
ISSN: 2076-3417
DOI: 10.3390/app9112192
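
Note: the summary above does not spell out the paper's specific confidence measures, so the following is only a minimal sketch of one plausible criterion under that assumption: the mean max-softmax confidence of the adapted classifier on unlabeled target-domain samples, combined with a patience rule that stops GAN training when the measure stops improving. The names mean_max_confidence and ConfidenceEarlyStopper, and the patience heuristic itself, are illustrative assumptions, not the authors' method.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mean_max_confidence(logits):
    """Mean of the highest softmax probability per sample; a high value
    suggests the adapted classifier is confident on (unlabeled) target data."""
    return float(softmax(logits).max(axis=-1).mean())

class ConfidenceEarlyStopper:
    """Signals a stop when the confidence measure has not improved
    for `patience` consecutive epochs."""
    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("-inf")
        self.stale = 0

    def update(self, confidence):
        if confidence > self.best:
            self.best, self.stale = confidence, 0
        else:
            self.stale += 1
        return self.stale >= self.patience  # True -> stop GAN training

# Toy usage: stand-in logits whose scale plateaus, mimicking adaptation
# that improves for a while and then stalls (hypothetical data, not the paper's).
rng = np.random.default_rng(0)
stopper = ConfidenceEarlyStopper(patience=3)
for epoch in range(50):
    scale = min(1.0 + 0.2 * epoch, 3.0)
    target_logits = scale * rng.normal(size=(128, 10))
    if stopper.update(mean_max_confidence(target_logits)):
        print(f"Stopping at epoch {epoch}; best confidence {stopper.best:.3f}")
        break
```

The same per-sample max-softmax score can double as the reliability predictor the summary mentions: outputs whose confidence falls below a threshold can be flagged as unreliable rather than trusted.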