OSOA: One-Shot Online Adaptation of Deep Generative Models for Lossless Compression
Format | Journal Article |
Language | English |
Published | 02.11.2021 |
Summary: | Explicit deep generative models (DGMs), e.g., VAEs and Normalizing Flows,
have been shown to offer an effective data-modelling alternative for lossless
compression. However, DGMs themselves normally require large storage space,
which undermines the advantage brought by accurate data density estimation. To
eliminate the need to save separate models for different target datasets, we
propose a novel setting that starts from a pretrained deep generative model and
compresses the data batches while adapting the model with a dynamical system
for only one epoch. We formalise this setting as that of One-Shot Online
Adaptation (OSOA) of DGMs for lossless compression and propose a vanilla
algorithm under this setting. Experimental results show that vanilla OSOA can
save significant time compared with training bespoke models, and significant
space compared with using one model for all targets. With the same number of
adaptation steps or the same adaptation time, vanilla OSOA is shown to exhibit
better space efficiency, e.g., $47\%$ less space, than fine-tuning the
pretrained model and saving the fine-tuned model. Moreover, we showcase the
potential of OSOA and motivate more sophisticated OSOA algorithms by showing
further space or time efficiency with multiple updates per batch and early
stopping. |
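The one-pass encode-then-adapt loop described in the summary can be sketched in miniature. The toy adaptive frequency model and count-based update below are illustrative stand-ins for the paper's deep generative models and gradient-based dynamical system; the key property they share is that the decoder, starting from the same pretrained model, can replay the identical updates, so no adapted model ever needs to be stored:

```python
import math


def code_length_bits(model, batch):
    """Ideal code length of a batch under the current model, in bits."""
    total = sum(model.values())
    return sum(-math.log2(model[s] / total) for s in batch)


def osoa_compress(pretrained_counts, batches):
    """Vanilla OSOA sketch: encode each batch with the *current* model,
    then adapt the model on that batch, in a single pass over the data.

    The count update below is a hypothetical stand-in for one gradient
    step of the paper's adaptation dynamics; what matters is that the
    decoder can reproduce it exactly from the shared pretrained model.
    """
    model = dict(pretrained_counts)  # start from the pretrained model
    total_bits = 0.0
    for batch in batches:
        # Encode first, with the model as it stands before this batch...
        total_bits += code_length_bits(model, batch)
        # ...then adapt on the batch just encoded.
        for s in batch:
            model[s] = model.get(s, 1) + 1
    return total_bits
```

Because each batch is coded before the model sees it, decoding proceeds symmetrically: decode a batch with the current model, then apply the same deterministic update, so encoder and decoder stay in lockstep for the whole epoch.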
DOI: | 10.48550/arxiv.2111.01662 |