An in-depth analysis of data reduction methods for sustainable deep learning
| Published in | Open Research Europe, Vol. 4, p. 101 |
|---|---|
| Main Authors | , , , , , |
| Format | Journal Article |
| Language | English |
| Published | Belgium: F1000 Research Ltd, 2024 |
Summary: In recent years, deep learning has gained popularity for its ability to solve complex classification tasks. It provides increasingly better results thanks to the development of more accurate models, the availability of huge volumes of data and the improved computational capabilities of modern computers. However, these improvements in performance also bring efficiency problems, related to the storage of datasets and models, and to the waste of energy and time involved in both the training and inference processes. In this context, data reduction can help reduce energy consumption when training a deep learning model. In this paper, we present up to eight different methods to reduce the size of a tabular training dataset, and we develop a Python package to apply them. We also introduce a representativeness metric based on topology to measure the similarity between the reduced datasets and the full training dataset. Additionally, we develop a methodology to apply these data reduction methods to image datasets for object detection tasks. Finally, we experimentally compare how these data reduction methods affect the representativeness of the reduced dataset, the energy consumption and the predictive performance of the model.
ISSN: 2732-5121
DOI: 10.12688/openreseurope.17554.1
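The abstract describes reducing the size of a tabular training dataset but does not name the eight methods in this record. As a hedged illustration of one common family of such methods, the sketch below implements plain class-stratified random subsampling; the function name and interface are hypothetical and are not taken from the paper's Python package.

```python
import random
from collections import defaultdict

def stratified_reduce(X, y, fraction, seed=0):
    """Return a class-stratified random subset of (X, y).

    A simple baseline data reducer (hypothetical, for illustration):
    keeps roughly `fraction` of the rows while preserving the class
    proportions of the full training set.
    """
    rng = random.Random(seed)
    # Group row indices by class label.
    by_class = defaultdict(list)
    for i, label in enumerate(y):
        by_class[label].append(i)
    # Sample the same fraction within each class (at least one row).
    keep = []
    for label, idxs in by_class.items():
        k = max(1, round(len(idxs) * fraction))
        keep.extend(rng.sample(idxs, k))
    keep.sort()
    return [X[i] for i in keep], [y[i] for i in keep]
```

For a balanced two-class dataset, reducing to 20% yields a subset that is still balanced, which is the property such baselines are usually compared on before measuring representativeness or energy savings.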