Accelerating Data Delivery of Latency-Sensitive Applications in Container Overlay Network
Published in | IEEE Transactions on Parallel and Distributed Systems, Vol. 34, No. 12, pp. 1-13 |
---|---|
Main Authors | |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.12.2023 |
Summary | The container overlay network, though widely adopted to enable communication between containers on different hosts, imposes a significant latency penalty on latency-sensitive applications. The state-of-the-art solution seeks to shorten the packet-processing data path by replacing overlay connection file descriptors with host-namespace ones. While promising, it must block each overlay connection until the corresponding host connection is set up, which substantially inflates request latency. In this paper, we present ShuntFlow, a systematic data delivery framework that seamlessly integrates the host and overlay networks to reduce an application's request-response latency. ShuntFlow first lets all connections flow directly in the overlay network. It then adopts a simple yet effective syscall-threshold-based mechanism to pick appropriate connections and switch their data delivery to the host network in a blocking-free way using a multi-threading technique. As such, unnecessary connection switches are prevented, and the blocking pre-setup phase is eliminated. We have implemented a ShuntFlow prototype based on Linux and Docker and evaluated it extensively on a 40 Gbps testbed. The results show that ShuntFlow achieves 13%/72% and 19%/69% reductions in the average/tail request-response latency of a web server and an in-memory key-value store, respectively, while incurring less CPU overhead compared to Slim. |
ISSN | 1045-9219, 1558-2183 |
DOI | 10.1109/TPDS.2023.3300745 |
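
The syscall-threshold-based, blocking-free switching mechanism summarized above can be pictured with a minimal sketch. The code below is an illustrative assumption, not the paper's actual implementation: it assumes a hypothetical per-connection counter of read/write syscalls (`connState`) and a hypothetical `setUpHostConnection` helper, and once the counter crosses an assumed threshold, a background goroutine establishes the host-namespace connection while the overlay connection keeps serving traffic.

```go
// Illustrative sketch only. Names (connState, setUpHostConnection) and the
// threshold value are hypothetical; this is not ShuntFlow's real code.
package main

import (
	"sync"
	"sync/atomic"
)

const syscallThreshold = 64 // assumed number of read/write syscalls before switching

type connState struct {
	syscalls  atomic.Int64 // per-connection syscall counter
	switching sync.Once    // ensures the switch is triggered at most once
	useHost   atomic.Bool  // true once data delivery moves to the host network
}

// onSyscall is invoked each time the connection issues a read/write syscall.
// The overlay connection is never blocked: the host connection is prepared
// in a background goroutine and the delivery path is swapped when ready.
func (c *connState) onSyscall() {
	if c.syscalls.Add(1) >= syscallThreshold && !c.useHost.Load() {
		c.switching.Do(func() {
			go func() {
				setUpHostConnection() // hypothetical: create the host-namespace socket
				c.useHost.Store(true) // subsequent data flows over the host path
			}()
		})
	}
}

func setUpHostConnection() {
	// Placeholder for establishing the host-network connection.
}

func main() {
	c := &connState{}
	for i := 0; i < 100; i++ {
		c.onSyscall() // connections below the threshold stay on the overlay path
	}
}
```

In this reading, short-lived connections never accumulate enough syscalls to cross the threshold, so they avoid an unnecessary switch, while busy connections migrate to the host network without ever blocking on the host-connection setup.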