Visualizing Compression of Deep Learning Models for Classification



Bibliographic Details
Published in: 2018 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), pp. 1 - 8
Main Authors: Dotter, Marissa; Ward, Chris M.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.10.2018

Summary: Deep learning models have made great strides in tasks like classification and object detection. However, these models are often computationally intensive, require vast amounts of in-domain data, and typically contain millions or even billions of parameters. They are also relatively opaque "black boxes" when it comes to interpreting and analyzing their behavior on data, or evaluating the suitability of the network for the data that is available. To address these issues, we investigate off-the-shelf compression techniques that reduce the dimensionality of the parameter space within a Convolutional Neural Network. Compression allows us to interpret and evaluate the network more efficiently, as only important features are propagated through it.
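One common off-the-shelf compression technique of the kind the summary describes is magnitude-based weight pruning, which zeroes the smallest-magnitude parameters so that only the most important connections remain. The sketch below is illustrative only and is not taken from the paper; the `magnitude_prune` helper and the kernel shape are assumptions for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude entries, keeping a (1 - sparsity)
    fraction of the weights. Illustrative sketch of magnitude pruning."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)  # number of weights to zero out
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune a hypothetical 3x3x4x8 convolutional kernel to 50% sparsity
rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3, 4, 8))
pruned = magnitude_prune(kernel, sparsity=0.5)
print(np.mean(pruned == 0))  # roughly half the weights are now zero
```

Pruned weight tensors like this compress well (the zeros can be stored sparsely), and inspecting which filters survive pruning is one way a compressed network becomes easier to interpret.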
ISSN: 2332-5615
DOI: 10.1109/AIPR.2018.8707381