Online Policy Learning for Opportunistic Mobile Computation Offloading

Bibliographic Details
Published in: IEEE Global Communications Conference (GLOBECOM), pp. 1 - 6
Main Authors: Mu, Siqi; Zhong, Zhangdui; Zhao, Dongmei
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2020
ISSN: 2576-6813
DOI: 10.1109/GLOBECOM42002.2020.9322467

Summary: This work considers opportunistic mobile computation offloading between a requestor and a helper. The requestor device may offload some of its computation-intensive tasks to the helper device; the availability of the helper, however, is random. The objective is to find the optimal offloading decisions for the requestor that minimize its energy consumption, subject to a mean-delay constraint on the tasks. The problem is formulated as a constrained Markov decision process that accounts for the random task arrivals, the availability of the helper, and time-varying channel conditions. An optimal offline solution is first obtained through linear programming. An online algorithm is then designed to learn the optimal offloading policy by introducing post-decision states into the problem. Simulation results demonstrate that the proposed online algorithm achieves close-to-optimal performance with much lower complexity.
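The offline step the summary mentions — solving a constrained MDP by linear programming — is commonly done over occupancy measures. The toy model below is purely illustrative (the helper-availability chain, energy/delay numbers, and delay budget `D_max` are all hypothetical, not taken from the paper): it minimizes mean energy subject to a mean-delay constraint for a two-state, two-action offloading MDP.

```python
import numpy as np
from scipy.optimize import linprog

# Toy constrained MDP: 2 helper states (0 = away, 1 = present) x
# 2 actions (0 = compute locally, 1 = offload).
# All numbers are hypothetical, not taken from the paper.
nS, nA = 2, 2

# P[s, a, s']: helper availability evolves as a Markov chain,
# independent of the offloading action in this toy model.
P = np.array([[[0.7, 0.3], [0.7, 0.3]],
              [[0.4, 0.6], [0.4, 0.6]]])

# Per-(state, action) energy cost and task delay.
energy = np.array([[3.0, 3.5],   # helper away: an offload attempt wastes energy
                   [3.0, 1.0]])  # helper present: offloading is cheap
delay = np.array([[1.0, 2.0],
                  [1.0, 1.5]])
D_max = 1.4                      # mean-delay budget

# Occupancy-measure LP over x[s, a] >= 0:
#   min  sum_{s,a} x[s,a] * energy[s,a]
#   s.t. sum_a x[s',a] = sum_{s,a} x[s,a] * P[s,a,s']   (stationarity)
#        sum_{s,a} x[s,a] = 1                           (normalization)
#        sum_{s,a} x[s,a] * delay[s,a] <= D_max         (mean-delay constraint)
c = energy.ravel()
A_eq = np.zeros((nS + 1, nS * nA))
for sp in range(nS):
    A_eq[sp, sp * nA:(sp + 1) * nA] += 1.0
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] -= P[s, a, sp]
A_eq[nS, :] = 1.0
b_eq = np.zeros(nS + 1)
b_eq[nS] = 1.0

res = linprog(c, A_ub=delay.ravel()[None, :], b_ub=[D_max],
              A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(nS, nA)
policy = x / x.sum(axis=1, keepdims=True)  # (possibly randomized) policy
print("mean energy:", res.fun)
print("mean delay:", float(delay.ravel() @ res.x))
print("policy (rows = helper state, cols = action):\n", policy)
```

The recovered policy is randomized in general — a standard property of constrained-MDP solutions — whereas the paper's online algorithm learns the policy from experience via post-decision states instead of requiring the transition model up front.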