Compressed MobileNet V3: A Light Weight Variant for Resource-Constrained Platforms

Bibliographic Details
Published in: 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC), pp. 0104-0107
Main Authors: Kavyashree, Prasad S P; El-Sharkawy, Mohamed
Format: Conference Proceeding
Language: English
Published: IEEE, 27.01.2021
DOI: 10.1109/CCWC51732.2021.9376113

Summary: Convolutional Neural Networks (CNNs) are ubiquitous in computer vision applications. This is attributed to their excellent performance in image classification, which forms the foundation for many complex tasks such as object localization and object tracking. Despite their huge success, their intensive computation, memory bandwidth, and energy requirements have made it difficult to deploy them on low-power and resource-constrained platforms. To overcome this, many researchers have designed compact models that trade off model size against accuracy. MobileNet V3, the latest variant of MobileNets, is one of the CNN models following this trend [1]. It has a model size of 15.3 MB with a validation accuracy of 88.93% on the CIFAR-10 dataset [2]. In this paper, we modify the baseline architecture to further reduce its size to 2.3 MB while achieving an accuracy of 89.13%.
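The summary quotes model sizes (15.3 MB for the baseline, 2.3 MB after compression) without describing how such figures are obtained. As a minimal sketch, assuming the stock torchvision MobileNet V3 implementation rather than the authors' modified variant (which is not reproduced here), one way to measure parameter count and serialized size is:

```python
# Hedged sketch (not the authors' code): estimate the parameter count and
# on-disk size of a stock MobileNet V3 from torchvision, as a proxy for the
# "model size" figures quoted in the abstract.
import os

import torch
from torchvision.models import mobilenet_v3_large

# Baseline architecture with randomly initialized weights; the paper's
# compressed variant and CIFAR-10 classifier head are assumptions not shown here.
model = mobilenet_v3_large(weights=None)

# Total number of trainable parameters.
n_params = sum(p.numel() for p in model.parameters())

# Serialize the weights and read back the file size in MB.
path = "mobilenet_v3_large.pt"
torch.save(model.state_dict(), path)
size_mb = os.path.getsize(path) / (1024 ** 2)

print(f"parameters: {n_params / 1e6:.2f} M, serialized size: {size_mb:.1f} MB")
```

Comparing this measurement for the baseline and for a slimmed-down architecture gives size figures of the kind reported above; the exact architectural modifications are described in the full text.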