ModelCI-e: Enabling Continual Learning in Deep Learning Serving Systems
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 06.06.2021 |
Summary: | MLOps is about taking experimental ML models to production, i.e., serving the
models to actual users. Unfortunately, existing ML serving systems do not
adequately handle the dynamic environments in which online data diverges from
offline training data, resulting in tedious model updating and deployment
work. This paper implements a lightweight MLOps plugin, termed ModelCI-e
(continuous integration and evolution), to address the issue. Specifically, it
embraces continual learning (CL) and ML deployment techniques, providing
end-to-end support for model updating and validation without serving engine
customization. ModelCI-e includes 1) a model factory that allows CL researchers
to prototype and benchmark CL models with ease, 2) a CL backend to automate and
orchestrate model updating efficiently, and 3) a web interface for an ML
team to manage the CL service collaboratively. Our preliminary results demonstrate
the usability of ModelCI-e and indicate that eliminating the interference
between model updating and inference workloads is crucial for higher system
efficiency. |
---|---|
DOI: | 10.48550/arxiv.2106.03122 |
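A minimal sketch of the workflow the abstract describes: inference keeps serving while a background worker retrains on drifted data and swaps in the updated model only if it validates. This is an illustrative assumption, not ModelCI-e's actual API; `TinyModel`, `cl_backend`, and the drift check are hypothetical stand-ins.

```python
# Hypothetical illustration (not ModelCI-e's real interfaces): continual-learning
# updates run on a separate worker so they do not interfere with inference.
import copy
import queue
import threading

class TinyModel:
    """Toy stand-in for a deployed model: predicts a running mean of the data."""
    def __init__(self, mean=0.0, count=1):
        self.mean, self.count = mean, count

    def predict(self, x):
        return self.mean

    def update(self, batch):
        # Incremental (continual) update from newly observed online data.
        for x in batch:
            self.count += 1
            self.mean += (x - self.mean) / self.count

serving_model = TinyModel()
drift_batches = queue.Queue()

def cl_backend():
    """Background updater: trains a candidate model, validates it, then hot-swaps."""
    global serving_model
    while True:
        batch = drift_batches.get()
        if batch is None:                      # shutdown signal
            return
        candidate = copy.deepcopy(serving_model)
        candidate.update(batch)
        # Placeholder validation: keep the candidate only if it fits the new data better.
        old_err = sum(abs(x - serving_model.predict(x)) for x in batch)
        new_err = sum(abs(x - candidate.predict(x)) for x in batch)
        if new_err <= old_err:
            serving_model = candidate          # reference swap = no serving downtime

updater = threading.Thread(target=cl_backend, daemon=True)
updater.start()

# Inference path: continues answering requests while updates happen in the background.
online_data = [0.1, 0.2, 5.0, 5.2, 4.9]        # later values have drifted
for x in online_data:
    print(serving_model.predict(x))
    if abs(x - serving_model.predict(x)) > 1.0:  # crude drift signal
        drift_batches.put([x])

drift_batches.put(None)
updater.join()
```

The point of the sketch is the separation of concerns the paper's preliminary results motivate: the inference loop never blocks on training, and the updated model replaces the served one only after a validation step.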