Towards Power Efficient High Performance Packet I/O

Bibliographic Details
Published in: IEEE Transactions on Parallel and Distributed Systems, Vol. 31, No. 4, pp. 981-996
Main Authors: Li, Xuesong; Cheng, Wenxue; Zhang, Tong; Ren, Fengyuan; Yang, Bailong
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.04.2020

Summary: Recently, high performance packet I/O frameworks have continued to flourish for their ability to process packets from high-speed links. To achieve high throughput and low latency, these frameworks usually employ busy polling. Because busy polling burns all CPU cycles even when there is no packet to process, the frameworks are quite power inefficient. However, exploiting power management techniques such as DVFS and LPI in these frameworks is challenging, because neither the OS nor the frameworks can provide the information (e.g., actual CPU utilization, available idle period, or the target frequency) required by these techniques. In this article, we establish a model that formulates the packet processing flow of high performance packet I/O to help address the above challenges. From the model, we can deduce the information needed by power management techniques and gain insights into balancing power and latency. After suggesting the use of the pause instruction to reduce CPU power within short idle periods, we propose two approaches to power conservation for high performance packet I/O: one with the aid of traffic information and one without. Experiments with Intel DPDK show that both approaches achieve significant power reduction with little latency increase.
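
To make the busy-polling problem described in the abstract concrete, the sketch below shows a minimal DPDK-style receive loop that issues the pause instruction when no packets arrive. rte_eth_rx_burst(), rte_pause(), and rte_pktmbuf_free() are real DPDK calls, but the loop structure, the PAUSE_THRESHOLD value, and the idle counter are illustrative assumptions for this note, not the approach or parameters proposed in the paper.

#include <stdint.h>
#include <rte_ethdev.h>   /* rte_eth_rx_burst() */
#include <rte_mbuf.h>     /* struct rte_mbuf, rte_pktmbuf_free() */
#include <rte_pause.h>    /* rte_pause() wraps the x86 PAUSE instruction */

#define BURST_SIZE 32
/* Hypothetical threshold: after this many consecutive empty polls,
 * start issuing PAUSE to lower CPU power during short idle periods. */
#define PAUSE_THRESHOLD 16

static void
rx_loop(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    uint32_t empty_polls = 0;

    for (;;) {
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);

        if (nb_rx == 0) {
            /* No packets: plain busy polling would spin at full speed here
             * and burn all CPU cycles.  Issuing PAUSE instead reduces
             * dynamic power while the link stays idle. */
            if (++empty_polls > PAUSE_THRESHOLD)
                rte_pause();
            continue;
        }

        empty_polls = 0;
        for (uint16_t i = 0; i < nb_rx; i++) {
            /* Application-specific packet processing would go here. */
            rte_pktmbuf_free(bufs[i]);
        }
    }
}
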
ISSN: 1045-9219, 1558-2183
DOI: 10.1109/TPDS.2019.2957746