Learning Transferable Conceptual Prototypes for Interpretable Unsupervised Domain Adaptation
Main Authors:
Format: Journal Article
Language: English
Published: 12.10.2023
Subjects:
Online Access: Get full text
Summary: Despite the great progress of unsupervised domain adaptation (UDA) with deep neural networks, current UDA models are opaque and cannot provide convincing explanations, which limits their application in scenarios that require safe and controllable model decisions. At present, a surge of work focuses on designing deep interpretable methods with adequate data annotations, and only a few methods consider the distributional shift problem. Most existing interpretable UDA methods are post-hoc ones, which cannot facilitate the model learning process for performance enhancement. In this paper, we propose an inherently interpretable method, named Transferable Conceptual Prototype Learning (TCPL), which can simultaneously interpret and improve the processes of knowledge transfer and decision-making in UDA. To achieve this goal, we design a hierarchically prototypical module that transfers categorical basic concepts from the source domain to the target domain and learns domain-shared prototypes that explain the underlying reasoning process. With the learned transferable prototypes, a self-predictive consistent pseudo-label strategy, which fuses confidence, predictions, and prototype information, is designed to select suitable target samples for pseudo-annotation and gradually narrow the domain gap. Comprehensive experiments show that the proposed method not only provides effective and intuitive explanations but also outperforms previous state-of-the-art methods.
DOI: 10.48550/arxiv.2310.08071
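To make the two ingredients named in the summary more concrete, below is a minimal Python/PyTorch sketch of (a) prototype-based class scoring and (b) a pseudo-label filter that fuses classifier confidence, predictions, and prototype information. This is not the authors' TCPL implementation: the names `PrototypeScorer` and `select_pseudo_labels`, the single-level prototype structure (the paper's hierarchical module is collapsed to one prototype per class), and the threshold value are all illustrative assumptions.

```python
# Hypothetical sketch, not the TCPL code: prototype scoring + consistency-based
# pseudo-label selection, as loosely described in the paper's abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeScorer(nn.Module):
    """Score each class by negative squared distance to a learnable class prototype."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # (B, C): larger score means the feature is closer to that class prototype.
        return -torch.cdist(feats, self.prototypes) ** 2


def select_pseudo_labels(cls_logits: torch.Tensor,
                         proto_scores: torch.Tensor,
                         conf_threshold: float = 0.9):
    """Keep target samples whose classifier prediction agrees with the nearest
    prototype and whose confidence exceeds a threshold.
    Returns (kept indices, pseudo-labels)."""
    probs = F.softmax(cls_logits, dim=1)
    conf, pred = probs.max(dim=1)
    agree = pred == proto_scores.argmax(dim=1)
    keep = agree & (conf > conf_threshold)
    return keep.nonzero(as_tuple=True)[0], pred[keep]


if __name__ == "__main__":
    torch.manual_seed(0)
    feats = torch.randn(16, 64)          # unlabeled target-domain features
    classifier = nn.Linear(64, 10)       # ordinary classification head
    proto = PrototypeScorer(64, 10)      # one prototype per class in this sketch
    idx, labels = select_pseudo_labels(classifier(feats), proto(feats), 0.3)
    print(idx.tolist(), labels.tolist())
```

Samples passing the filter would then be treated as pseudo-labeled target data in subsequent training rounds, which is how the abstract describes the domain gap being narrowed gradually; the interpretability comes from reading off which prototypes a sample is closest to.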