Characterizing the Deep Neural Networks Inference Performance of Mobile Applications
Main Authors:
Format: Journal Article
Language: English
Published: 10.09.2019
Summary: Today's mobile applications are increasingly leveraging deep neural networks to provide novel features, such as image and speech recognition. To use a pre-trained deep neural network, mobile developers can either host it in a cloud server, referred to as cloud-based inference, or ship it with their mobile application, referred to as on-device inference. In this work, we investigate the inference performance of these two common approaches on both mobile devices and public clouds, using popular convolutional neural networks (CNNs). Our measurement study suggests the need for both on-device and cloud-based inference to support mobile applications. In particular, newer mobile devices are able to run mobile-optimized CNN models in reasonable time. However, for older mobile devices or more complex CNN models, mobile applications should opt for cloud-based inference. We further demonstrate that variable network conditions can lead to poor end-to-end times for cloud-based inference. To support efficient cloud-based inference, we propose a CNN model selection algorithm, CNNSelect, that dynamically selects the most appropriate CNN model for each inference request and adapts its selection to the different SLAs and execution time budgets that arise from variable mobile environments. The key idea of CNNSelect is to make inference speed and accuracy trade-offs at runtime using a set of CNN models. We demonstrate that CNNSelect smoothly improves inference accuracy while maintaining SLA attainment in 88.5% more cases than a greedy baseline.
DOI: 10.48550/arxiv.1909.04783
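
The abstract only summarizes CNNSelect; the paper's actual selection algorithm is not reproduced here. As a rough, hypothetical sketch of the underlying idea (choosing at runtime, from a set of pre-profiled CNN models, one that trades accuracy against inference speed to fit a per-request time budget), the following Python fragment may help. The model names, accuracy and latency figures, and the pick-most-accurate-model-that-fits rule are illustrative assumptions, not the algorithm evaluated in the paper.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CandidateModel:
    name: str          # hypothetical CNN variant available on the server
    accuracy: float    # offline-profiled accuracy (assumed numbers below)
    latency_ms: float  # offline-profiled mean inference time on the server

def select_model(candidates: List[CandidateModel],
                 budget_ms: float) -> Optional[CandidateModel]:
    """Return the most accurate model whose profiled latency fits the
    remaining time budget; fall back to the fastest model otherwise."""
    feasible = [m for m in candidates if m.latency_ms <= budget_ms]
    if feasible:
        return max(feasible, key=lambda m: m.accuracy)
    # No model fits the budget: degrade gracefully to the fastest model.
    return min(candidates, key=lambda m: m.latency_ms) if candidates else None

# Illustrative candidate set and a request-specific budget (e.g. the SLA
# minus the network transfer time measured for this request).
candidates = [
    CandidateModel("mobilenet_v1", accuracy=0.70, latency_ms=15.0),
    CandidateModel("inception_v3", accuracy=0.78, latency_ms=60.0),
    CandidateModel("resnet_152",   accuracy=0.79, latency_ms=120.0),
]
print(select_model(candidates, budget_ms=80.0).name)  # -> inception_v3
```

In the setting described above, the per-request budget would shrink as the measured network transfer time grows, which is what lets such a selection scheme adapt to variable mobile network conditions while still aiming for the highest attainable accuracy.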