Incorporating Prior Domain Knowledge into Deep Neural Networks

Bibliographic Details
Published in: 2018 IEEE International Conference on Big Data (Big Data), pp. 36-45
Main Authors: Muralidhar, Nikhil; Islam, Mohammad Raihanul; Marwah, Manish; Karpatne, Anuj; Ramakrishnan, Naren
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2018

More Information
Summary: In recent years, the abundance of labeled data has helped steer research toward methods that use minimal domain knowledge, e.g., in deep neural network research. However, in many situations data is limited and of poor quality. Can domain knowledge be useful in such a setting? In this paper, we propose domain adapted neural networks (DANN) to explore how domain knowledge can be integrated into model training for deep networks. In particular, we incorporate loss terms for knowledge available as monotonicity constraints and approximation constraints. We evaluate our model on both synthetic data generated using the popular Bohachevsky function and a real-world dataset for predicting oxygen solubility in water. In both settings, we find that our DANN model outperforms its domain-agnostic counterpart, yielding an overall mean performance improvement of 19.5%, with worst- and best-case improvements of 4% and 42.7%, respectively.
DOI: 10.1109/BigData.2018.8621955
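
Code sketch: the loss formulation described in the summary can be illustrated with a minimal PyTorch example that augments a standard regression loss with penalty terms for monotonicity and approximation (bound) constraints. This is a sketch under assumptions, not the paper's exact formulation; the class name DomainAdaptedLoss, the weights lambda_mono and lambda_approx, and the hinge-style penalty forms are illustrative.

    import torch
    import torch.nn as nn

    class DomainAdaptedLoss(nn.Module):
        """Hypothetical combined loss: data-fit MSE plus domain-knowledge penalties.

        This illustrates the general idea of constraint-based loss terms; the
        paper's actual formulation may differ.
        """

        def __init__(self, lambda_mono=1.0, lambda_approx=1.0):
            super().__init__()
            self.mse = nn.MSELoss()
            self.lambda_mono = lambda_mono      # weight on the monotonicity penalty
            self.lambda_approx = lambda_approx  # weight on the approximation penalty

        def forward(self, y_pred, y_true, y_pred_shifted=None, lower=None, upper=None):
            loss = self.mse(y_pred, y_true)
            # Monotonicity constraint: if the target is known to decrease as some
            # input grows (e.g., oxygen solubility vs. temperature), predictions at
            # the perturbed, larger input (y_pred_shifted) should not exceed y_pred.
            if y_pred_shifted is not None:
                loss = loss + self.lambda_mono * torch.relu(y_pred_shifted - y_pred).mean()
            # Approximation constraint: penalize predictions that leave a known
            # physically plausible range [lower, upper].
            if lower is not None:
                loss = loss + self.lambda_approx * torch.relu(lower - y_pred).mean()
            if upper is not None:
                loss = loss + self.lambda_approx * torch.relu(y_pred - upper).mean()
            return loss

In a training loop, y_pred_shifted would come from a second forward pass on inputs perturbed along the feature with the known monotonic relationship, while lower and upper are scalars or tensors encoding the approximation bounds.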