Learning Multi-Domain Adversarial Neural Networks for Text Classification
Published in | IEEE Access, Vol. 7, pp. 40323-40332 |
---|---|
Main Authors | , , , , , |
Format | Journal Article |
Language | English |
Published | Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2019 |
Summary: | Deep neural networks have been applied to learn transferable features for adapting text classification models from a source domain to a target domain. Conventional domain adaptation adapts a model from a single specific domain with sufficient labeled data to another single specific target domain with little or no labeled data. In this paradigm, however, we lose sight of the correlations among different domains, where common knowledge could be shared to improve the performance of both the source and the target domains. Multi-domain learning instead learns sharable features from multiple source domains and the target domain. Previous work, however, mainly focuses on improving target-domain performance and lacks an effective mechanism to ensure that the shared feature space is not contaminated by domain-specific features. In this paper, we use an adversarial training strategy and orthogonality constraints to guarantee that the private and shared features do not collide with each other, which improves the performance of both the source domains and the target domain. The experimental results, on a standard sentiment domain adaptation dataset and a consumption intention identification dataset labeled by us, show that our approach substantially outperforms state-of-the-art baselines and is general enough to be applied to further scenarios. |
---|---|
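The orthogonality constraint mentioned in the summary is commonly implemented as a squared Frobenius-norm penalty on the product of the shared and private feature matrices, which is zero exactly when the two subspaces are orthogonal. A minimal sketch of that idea follows; the function name, matrix shapes, and example values are illustrative assumptions, not details taken from the paper itself.

```python
import numpy as np

def orthogonality_penalty(shared, private):
    """Squared Frobenius norm of S^T P (an assumed form of the constraint).

    Penalizes overlap between shared and domain-private feature matrices;
    it is 0 when every shared column is orthogonal to every private column.
    """
    return float(np.linalg.norm(shared.T @ private, ord="fro") ** 2)

# Orthogonal shared/private features incur no penalty:
S = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])   # hypothetical shared features (3-dim, 2 columns)
P = np.array([[0.0],
              [0.0],
              [1.0]])        # hypothetical private feature (orthogonal to S)
print(orthogonality_penalty(S, P))   # 0.0

# A private feature that lies inside the shared subspace is penalized:
P2 = np.array([[1.0],
               [0.0],
               [0.0]])
print(orthogonality_penalty(S, P2))  # 1.0
```

In training, such a penalty would be added to the classification and adversarial losses so that gradient descent pushes the private encoder away from the shared subspace.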
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2019.2904858 |