The Effect of Data Dimensionality on Neural Network Prunability
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 01.12.2022 |
Summary: | Practitioners prune neural networks for efficiency gains and generalization improvements, but few scrutinize the factors determining the prunability of a neural network: the maximum fraction of weights that pruning can remove without compromising the model's test accuracy. In this work, we study the properties of input data that may contribute to the prunability of a neural network. For high-dimensional input data such as images, text, and audio, the manifold hypothesis suggests that these inputs approximately lie on or near a significantly lower-dimensional manifold. Prior work demonstrates that the underlying low-dimensional structure of the input data may affect the sample efficiency of learning. In this paper, we investigate whether the low-dimensional structure of the input data affects the prunability of a neural network. |
DOI: | 10.48550/arxiv.2212.00291 |
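The summary defines prunability as the maximum fraction of weights that pruning can remove without compromising test accuracy. As a minimal, hypothetical sketch of how a pruning fraction is applied (global magnitude pruning in NumPy; the function name and setup are illustrative assumptions, not the paper's method):

```python
import numpy as np

def global_magnitude_prune(weights, fraction):
    """Zero out the smallest-magnitude `fraction` of weights across all layers.

    `weights` is a list of NumPy arrays, one per layer. This is a hypothetical
    helper illustrating the notion of a pruning fraction, not the paper's
    actual procedure.
    """
    flat = np.concatenate([w.ravel() for w in weights])
    k = int(len(flat) * fraction)  # number of weights to remove globally
    if k == 0:
        return [w.copy() for w in weights]
    # Magnitude of the k-th smallest |weight| serves as the global threshold.
    threshold = np.partition(np.abs(flat), k - 1)[k - 1]
    return [np.where(np.abs(w) <= threshold, 0.0, w) for w in weights]

# Toy example: two random weight matrices, prune half of all entries.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 4)), rng.normal(size=(4, 2))]
pruned = global_magnitude_prune(layers, 0.5)
total = sum(w.size for w in pruned)
zeros = sum(int((w == 0).sum()) for w in pruned)
```

Measuring prunability would then amount to sweeping `fraction` upward and recording the largest value at which test accuracy is preserved.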