Efficient Multimodal Fusion via Interactive Prompting
Format: Journal Article
Language: English
Published: 13.04.2023
Summary: Large-scale pre-training has brought unimodal fields such as computer vision and natural language processing into a new era. Following this trend, the size of multimodal learning models constantly increases, leading to an urgent need to reduce the massive computational cost of fine-tuning these models for downstream tasks. In this paper, we propose PMF, an efficient and flexible multimodal fusion method tailored for fusing unimodally pre-trained transformers. Specifically, we first present a modular multimodal fusion framework that exhibits high flexibility and facilitates mutual interactions among different modalities. In addition, we disentangle vanilla prompts into three types in order to learn different optimizing objectives for multimodal learning. Notably, we propose to add prompt vectors only on the deep layers of the unimodal transformers, thus significantly reducing training memory usage. Experimental results show that our proposed method achieves comparable performance to several other multimodal fine-tuning methods with less than 3% trainable parameters and up to 66% saving of training memory usage.
DOI: 10.48550/arxiv.2304.06306