PaRT: Parallel Learning Towards Robust and Transparent AI
Format | Journal Article |
---|---|
Language | English |
Published | 24.01.2022 |
Summary: This paper takes a parallel learning approach for robust and transparent AI. A deep neural network is trained in parallel on multiple tasks, where each task is trained only on a subset of the network's resources. Each subset consists of network segments that can be combined and shared across specific tasks. A task can thus share resources with other tasks while also retaining independent, task-related network resources, so the trained network learns representations that are shared across tasks alongside independent task-related representations. This design has several crucial outcomes. (1) The parallel nature of the approach negates the issue of catastrophic forgetting. (2) The sharing of segments uses network resources more efficiently. (3) The network demonstrably reuses knowledge learned for some tasks in other tasks through the shared representations. (4) By exposing the individual task-related and shared representations for examination, the model offers transparency into the network and into the relationships across tasks in a multi-task setting. Evaluation against complex competing approaches such as continual learning, neural architecture search, and multi-task learning shows that the proposed approach is capable of learning robust representations. This is the first effort to train a deep learning model on multiple tasks in parallel. The code is available at https://github.com/MahsaPaknezhad/PaRT
DOI: 10.48550/arxiv.2201.09534
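The summary describes a network built from segments that are shared across tasks alongside segments reserved for a single task, with all tasks trained in parallel. As a rough illustration of that idea only, and not of the actual PaRT implementation in the linked repository, here is a minimal PyTorch sketch; the class name `SegmentedMultiTaskNet`, the two-task setup, the layer sizes, and the synthetic batches are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class SegmentedMultiTaskNet(nn.Module):
    """Toy network built from reusable segments: one segment is shared
    by every task, and each task also owns a private segment and head."""

    def __init__(self, in_dim=32, hidden=64, task_classes=(10, 10)):
        super().__init__()
        # Segment shared across all tasks (common representations).
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # One private segment per task (independent task-related
        # representations), plus a per-task output head.
        self.private = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
            for _ in task_classes
        )
        self.heads = nn.ModuleList(
            nn.Linear(hidden, c) for c in task_classes
        )

    def forward(self, x, task_id):
        # Route the input through the shared segment, then through
        # the subset of segments assigned to this task.
        h = self.shared(x)
        h = self.private[task_id](h)
        return self.heads[task_id](h)

# Train all tasks in parallel: every optimization step combines a batch
# from each task, so no task is learned "after" another.
model = SegmentedMultiTaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    opt.zero_grad()
    total_loss = 0.0
    for task_id in range(2):
        # Placeholder synthetic batches; real data loaders would go here.
        x = torch.randn(16, 32)
        y = torch.randint(0, 10, (16,))
        total_loss = total_loss + loss_fn(model(x, task_id), y)
    total_loss.backward()
    opt.step()
```

Because each optimization step updates all tasks together rather than sequentially, no later task overwrites an earlier task's weights, which is the intuition behind the catastrophic-forgetting claim in the summary.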