ModiPick: SLA-aware Accuracy Optimization For Mobile Deep Inference

Bibliographic Details
Published in: arXiv.org
Main Authors: Ogden, Samuel S.; Guo, Tian
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 04.09.2019

Summary: Mobile applications are increasingly leveraging complex deep learning models to deliver features, e.g., image recognition, that require high prediction accuracy. Such models can be both computation- and memory-intensive, even for newer mobile devices, and are therefore commonly hosted in powerful remote servers. However, current cloud-based inference services employ a static model selection approach that can be suboptimal for satisfying application SLAs (service level agreements), as it fails to account for the inherently dynamic mobile environment. We introduce a cloud-based technique called ModiPick that dynamically selects the most appropriate model for each inference request and adapts its selection to the different SLAs and execution time budgets imposed by variable mobile environments. The key idea of ModiPick is to trade off inference speed and accuracy at runtime using a pool of managed deep learning models. As such, ModiPick masks unpredictable inference time budgets and therefore meets SLA targets, while improving accuracy within mobile network constraints. We evaluate ModiPick through experiments on prototype systems and through simulations, and show that it achieves inference accuracy comparable to a greedy approach while improving SLA adherence by up to 88.5%.
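The abstract's key idea, selecting at runtime from a pool of profiled models under a per-request time budget, can be illustrated with a minimal sketch. Everything below is a hypothetical reading of that description: the model names, latency and accuracy numbers, and the pick-the-most-accurate-feasible-model rule are illustrative assumptions, not ModiPick's actual algorithm or code.

```python
# Sketch of SLA-aware model selection as described in the abstract.
# Given a pool of profiled models and the budget left after estimated
# network transfer time, pick the most accurate model expected to
# finish within that budget. All profiles here are made-up examples.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelProfile:
    name: str
    est_latency_ms: float  # profiled server-side inference time
    accuracy: float        # accuracy measured on a validation set


# Hypothetical model pool; a real deployment would measure these profiles.
MODEL_POOL = [
    ModelProfile("mobilenet_v1", est_latency_ms=15.0, accuracy=0.70),
    ModelProfile("inception_v3", est_latency_ms=60.0, accuracy=0.78),
    ModelProfile("resnet152", est_latency_ms=120.0, accuracy=0.80),
]


def pick_model(sla_ms: float, est_network_ms: float) -> Optional[ModelProfile]:
    """Return the most accurate model whose estimated inference time fits
    in the budget remaining after network transfer; None if none fits."""
    budget_ms = sla_ms - est_network_ms
    feasible = [m for m in MODEL_POOL if m.est_latency_ms <= budget_ms]
    return max(feasible, key=lambda m: m.accuracy) if feasible else None


# Example: a 150 ms SLA with 80 ms of estimated network delay leaves a
# 70 ms inference budget, so the selector skips the most accurate model
# and falls back to a faster one (inception_v3 in this made-up pool).
print(pick_model(sla_ms=150.0, est_network_ms=80.0))
```

This illustrates how a varying network delay changes which model is "most appropriate" per request, even under a fixed SLA, which is why a static selection can be suboptimal.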
ISSN: 2331-8422