Online model-based reinforcement learning for decision-making in long distance routes

Bibliographic Details
Published in: Transportation Research Part E: Logistics and Transportation Review, Vol. 164, p. 102790
Main Authors: Alcaraz, Juan J.; Losilla, Fernando; Caballero-Arnaldos, Luis
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.08.2022

Summary: In road transportation, long-distance routes require scheduled driving times, breaks, and rest periods, in compliance with the regulations on working conditions for truck drivers, while ensuring goods are delivered within the time windows of each customer. However, routes are subject to uncertain travel and service times, and incidents may cause additional delays, making predefined schedules ineffective in many real-life situations. This paper presents a reinforcement learning (RL) algorithm capable of making en-route decisions regarding driving times, breaks, and rest periods under uncertain conditions. Our proposal aims at maximizing the likelihood of on-time delivery while complying with drivers’ work regulations. We use an online model-based RL strategy that needs no prior training and is more flexible than model-free RL approaches, in which the agent must be trained offline before making online decisions. Our proposal combines model predictive control with a rollout strategy and Monte Carlo tree search. At each decision stage, our algorithm anticipates the consequences of all possible decisions over a number of future stages (the lookahead horizon), and then uses a base policy to generate a sequence of decisions beyond the lookahead horizon. This base policy could be, for example, a set of decision rules based on the experience and expertise of the transportation company covering the routes. Our numerical results show that the policy obtained using our algorithm outperforms not only the base policy (by up to 83%), but also a policy obtained offline using deep Q networks (DQN), a state-of-the-art model-free RL algorithm.

Highlights:
• We use RL for making en-route decisions regarding the length of driving and rest periods.
• The goal is on-time delivery while complying with the regulations on drivers’ working conditions.
• We propose a novel model-based approach that does not need pre-training.
• Our algorithm makes efficient decisions in real time under uncertainty.
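The abstract describes the general shape of the method: at each decision stage the agent evaluates candidate decisions by simulating their consequences, completing each simulated trajectory with a base policy. As a rough illustration only (the paper's actual algorithm combines model predictive control, a multi-stage lookahead, and Monte Carlo tree search, none of which is specified in this record), the following Python sketch shows the core rollout idea with a one-step lookahead. The callables `actions`, `simulate`, and `base_policy` are hypothetical interfaces assumed for this sketch, not the authors' API.

```python
def rollout_decision(state, actions, simulate, base_policy,
                     max_rollout_steps=20, n_samples=50):
    """Choose an action by one-step lookahead with rollout (sketch).

    Assumed (hypothetical) interfaces:
      actions(state)     -> iterable of feasible decisions
                            (e.g., keep driving, take a break, rest)
      simulate(state, a) -> (next_state, reward, done); a stochastic
                            model of travel and service times
      base_policy(state) -> the decision a rule-based policy would make
    """
    def rollout_value(s):
        # Follow the base policy beyond the lookahead step,
        # accumulating simulated reward until the route ends
        # (or a step cap is reached).
        total, done, steps = 0.0, False, 0
        while not done and steps < max_rollout_steps:
            s, r, done = simulate(s, base_policy(s))
            total += r
            steps += 1
        return total

    best_action, best_value = None, float("-inf")
    for a in actions(state):
        # Monte Carlo estimate of the expected return of taking
        # action `a` now and following the base policy afterwards.
        value = 0.0
        for _ in range(n_samples):
            s1, r, done = simulate(state, a)
            value += r + (0.0 if done else rollout_value(s1))
        value /= n_samples
        if value > best_value:
            best_action, best_value = a, value
    return best_action
```

Because the lookahead is recomputed online at every stage from the current state, a scheme of this kind needs no offline training, which matches the flexibility argument made in the abstract.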
ISSN: 1366-5545
EISSN: 1878-5794
DOI: 10.1016/j.tre.2022.102790