Tight Mutual Information Estimation With Contrastive Fenchel-Legendre Optimization
Format | Journal Article |
---|---|
Language | English |
Published | 02.07.2021 |
Summary: | Successful applications of InfoNCE and its variants have popularized the use of contrastive variational mutual information (MI) estimators in machine learning. While featuring superior stability, these estimators crucially depend on costly large-batch training, and they sacrifice bound tightness for variance reduction. To overcome these limitations, we revisit the mathematics of popular variational MI bounds through the lens of unnormalized statistical modeling and convex optimization. Our investigation not only yields a new unified theoretical framework encompassing popular variational MI bounds but also leads to a novel, simple, and powerful contrastive MI estimator named FLO. Theoretically, we show that the FLO estimator is tight and provably converges under stochastic gradient descent. Empirically, our FLO estimator overcomes the limitations of its predecessors and learns more efficiently. The utility of FLO is verified using an extensive set of benchmarks, which also reveals the trade-offs in practical MI estimation. |
---|---|
DOI: | 10.48550/arxiv.2107.01131 |
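As background for the abstract above, the following is a minimal sketch of the standard InfoNCE lower bound that the paper builds on and improves; it is not the paper's FLO estimator. The inner-product critic and the Gaussian toy data are illustrative choices, not taken from the paper.

```python
import numpy as np

def infonce_bound(scores):
    """InfoNCE lower bound on MI from a K x K critic score matrix.

    scores[i, j] = f(x_i, y_j); diagonal entries score the positive
    (jointly drawn) pairs, off-diagonal entries serve as negatives.
    The bound can never exceed log(K), which is why InfoNCE-style
    estimators require large batches to estimate large MI values.
    """
    K = scores.shape[0]
    # Numerically stable row-wise log-sum-exp.
    m = scores.max(axis=1, keepdims=True)
    lse = m[:, 0] + np.log(np.exp(scores - m).sum(axis=1))
    # Average of f(x_i, y_i) - log((1/K) * sum_j exp(f(x_i, y_j))).
    return np.mean(np.diag(scores) - lse + np.log(K))

# Illustrative data: y is a noisy copy of x, with an inner-product critic.
rng = np.random.default_rng(0)
K, d = 128, 4
x = rng.standard_normal((K, d))
y = x + 0.1 * rng.standard_normal((K, d))
scores = x @ y.T  # scores[i, j] = <x_i, y_j>
est = infonce_bound(scores)
```

Because each diagonal score is at most its row's log-sum-exp, the estimate is capped at log(K) regardless of how strongly x and y depend on each other; this saturation is the bound-tightness limitation the abstract refers to.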