Adrias: Interference-Aware Memory Orchestration for Disaggregated Cloud Infrastructures

Bibliographic Details
Published in: 2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pp. 855-869
Main Authors: Masouros, Dimosthenis; Pinto, Christian; Gazzetti, Michele; Xydis, Sotirios; Soudris, Dimitrios
Format: Conference Proceeding
Language: English
Published: IEEE, 01.02.2023

Summary: Workload co-location has become the de facto approach for hosting applications in Cloud environments, leading, however, to interference and fragmentation in the system's shared resources. To this end, hardware disaggregation has been introduced as a novel paradigm that allows fine-grained tailoring of cloud resources to the characteristics of the deployed applications. Towards the realization of hardware-disaggregated clouds, novel orchestration frameworks must provide additional knobs to manage the increased scheduling complexity. We present Adrias, a memory orchestration framework for disaggregated cloud systems. Adrias exploits information from low-level performance events and applies deep learning techniques to effectively predict the system state and the performance of arriving workloads on memory-disaggregated systems, thus driving cognitive scheduling between local and remote memory allocation modes. We evaluate Adrias on a state-of-the-art disaggregated testbed and show that it achieves 0.99 and 0.942 R^2 scores on average for system-state and application-performance prediction, respectively. Moreover, Adrias effectively utilizes disaggregated memory, offloading almost 1/3 of deployed applications with less than 15% performance overhead compared to conventional local-memory scheduling, while clearly outperforming naive scheduling approaches (random and round-robin) by providing up to 2x better performance.
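The scheduling idea sketched in the abstract (offload a workload to remote, disaggregated memory only when its predicted slowdown stays within a tolerance, 15% in the paper's evaluation) can be illustrated with a minimal toy decision rule. The function name, predictor interface, and threshold handling below are assumptions for illustration, not the paper's actual API:

```python
# Hypothetical sketch: admit a workload to disaggregated (remote) memory
# only if the model-predicted slowdown versus local memory stays under a
# tolerance. Adrias' real predictor is a deep learning model over low-level
# performance events; here the prediction is simply passed in as a float.

SLOWDOWN_TOLERANCE = 0.15  # max acceptable overhead vs. local memory (15%)

def decide_placement(predicted_remote_slowdown: float,
                     tolerance: float = SLOWDOWN_TOLERANCE) -> str:
    """Return 'remote' if the predicted overhead of running on
    disaggregated memory is within tolerance, otherwise 'local'."""
    return "remote" if predicted_remote_slowdown <= tolerance else "local"

# A workload predicted to slow down by 8% can be offloaded,
# while one predicted to slow down by 40% stays on local memory.
print(decide_placement(0.08))  # remote
print(decide_placement(0.40))  # local
```

Compared with random or round-robin placement, which ignore per-workload sensitivity to remote memory, such a prediction-gated rule only offloads workloads expected to tolerate the extra latency, which is the intuition behind the reported 2x advantage over the naive baselines.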
ISSN:2378-203X
DOI:10.1109/HPCA56546.2023.10070939