Learning to Sail Dynamic Networks: The MARLIN Reinforcement Learning Framework for Congestion Control in Tactical Environments
Format: Journal Article
Language: English
Published: 27.06.2023
Summary: Conventional Congestion Control (CC) algorithms, such as Transmission Control Protocol (TCP) Cubic, struggle in tactical environments because they misinterpret packet loss and fluctuating network performance as symptoms of congestion. Recent efforts, including our own MARLIN, have explored the use of Reinforcement Learning (RL) for CC, but they often fail to generalize, particularly in competitive, unstable, and unforeseen scenarios. To address these challenges, this paper proposes an RL framework that leverages an accurate and parallelizable emulation environment to reenact the conditions of a tactical network. We also introduce a refined RL formulation and performance evaluation methods tailored for agents operating in such intricate scenarios. We evaluate our framework by training a MARLIN agent under conditions replicating a bottleneck link transition between a Satellite Communication (SATCOM) link and a UHF Wide Band (UHF) radio link. Finally, we compare its performance on file transfer tasks against TCP Cubic and the default strategy implemented in the Mockets tactical communication middleware. The results show that the MARLIN RL agent outperforms both TCP Cubic and Mockets across multiple performance metrics, highlighting the effectiveness of specialized RL solutions in optimizing CC for tactical network environments.
DOI: 10.48550/arxiv.2306.15591
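To make the summary's RL formulation more concrete, below is a minimal, self-contained sketch of how congestion control can be framed as an episodic RL problem with a mid-episode link handover, in the spirit of the emulated SATCOM-to-UHF transition the abstract describes. Everything in it (class names, the `LinkProfile` fields, the action set, the reward weights, and the link parameters) is an illustrative assumption, not the MARLIN implementation.

```python
# Illustrative sketch of congestion control as an RL environment.
# All names, state features, action semantics, reward weights, and link
# numbers are assumptions for illustration, NOT taken from MARLIN.

import random
from dataclasses import dataclass


@dataclass
class LinkProfile:
    """Crude stand-in for an emulated bottleneck link (e.g., SATCOM or UHF)."""
    bandwidth_mbps: float
    base_rtt_ms: float
    loss_rate: float


class CongestionControlEnv:
    """Toy episodic environment: the agent rescales its sending rate each step."""

    def __init__(self, profiles, steps_per_episode=200):
        self.profiles = profiles              # links the episode can switch between
        self.steps_per_episode = steps_per_episode

    def reset(self):
        self.t = 0
        self.link = random.choice(self.profiles)
        self.rate_mbps = 1.0                  # current sending rate
        return self._observe()

    def step(self, action):
        # Action: multiplicative rate adjustment, e.g. one of {0.5, 1.0, 1.5}.
        self.rate_mbps = max(0.1, self.rate_mbps * action)

        # Mid-episode link handover, mimicking a SATCOM -> UHF transition.
        if self.t == self.steps_per_episode // 2:
            self.link = random.choice(self.profiles)

        goodput = min(self.rate_mbps, self.link.bandwidth_mbps)
        overload = max(0.0, self.rate_mbps - self.link.bandwidth_mbps)
        rtt = self.link.base_rtt_ms * (1.0 + overload / self.link.bandwidth_mbps)
        loss = self.link.loss_rate + 0.05 * (overload / self.link.bandwidth_mbps)

        # Reward trades throughput against latency and loss (assumed weights).
        reward = goodput - 0.01 * rtt - 50.0 * loss

        self.t += 1
        done = self.t >= self.steps_per_episode
        return self._observe(goodput, rtt, loss), reward, done

    def _observe(self, goodput=0.0, rtt_ms=None, loss=0.0):
        rtt_ms = self.link.base_rtt_ms if rtt_ms is None else rtt_ms
        # State: recent network statistics visible to the sender.
        return (self.rate_mbps, goodput, rtt_ms, loss)


if __name__ == "__main__":
    env = CongestionControlEnv([
        LinkProfile(bandwidth_mbps=10.0, base_rtt_ms=550.0, loss_rate=0.01),  # SATCOM-like
        LinkProfile(bandwidth_mbps=2.0, base_rtt_ms=120.0, loss_rate=0.03),   # UHF-like
    ])
    obs = env.reset()
    total = 0.0
    done = False
    while not done:
        action = random.choice([0.5, 1.0, 1.5])   # random policy placeholder
        obs, reward, done = env.step(action)
        total += reward
    print(f"episode return: {total:.1f}")
```

In the actual framework, the environment would be backed by the parallelizable network emulator the paper describes, and the random-policy placeholder would be replaced by a trained MARLIN agent.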