A Cross-Layer Review of Deep Learning Frameworks to Ease Their Optimization and Reuse
Published in | 2020 IEEE 23rd International Symposium on Real-Time Distributed Computing (ISORC), pp. 144-145 |
---|---|
Main Authors | |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.05.2020 |
Summary | Machine learning and especially Deep Learning (DL) approaches are at the heart of many domains, from computer vision and speech processing to trajectory prediction in autonomous driving and data science. These approaches mainly build upon Neural Networks (NNs), which are compute-intensive in nature. A plethora of frameworks, libraries and platforms have been deployed for implementing those NNs, but end users often lack guidance on which frameworks, platforms and libraries to use to obtain the best implementation for their particular needs. This paper analyzes the DL ecosystem, providing a structured view of some of the main frameworks, platforms and libraries for DL implementation. We show how those DL applications ultimately build on some form of linear algebra operation, such as matrix multiplication, vector addition, or dot product. This analysis shows how optimizations of specific linear algebra functions for specific platforms can be effectively leveraged to maximize specific targets (e.g. performance or power-efficiency) at the application level, reusing components across frameworks and domains. |
---|---|
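The abstract's central observation is that DL workloads bottom out in a handful of linear algebra primitives. As a minimal sketch (not taken from the paper; names and shapes are illustrative), a fully connected NN layer is exactly a matrix multiplication followed by a vector addition, which is why optimizing those primitives for a given platform benefits the whole application:

```python
import numpy as np

def dense_layer(x, W, b):
    """One dense NN layer: y = relu(W @ x + b).

    W @ x is a matrix-vector multiplication and + b is a vector
    addition -- the linear algebra primitives the paper refers to.
    """
    z = W @ x + b
    return np.maximum(z, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # input vector (4 features)
W = rng.standard_normal((3, 4))   # weight matrix (3 outputs x 4 inputs)
b = rng.standard_normal(3)        # bias vector
y = dense_layer(x, W, b)
print(y.shape)  # (3,)
```

In practice a framework dispatches this `W @ x` to a tuned BLAS-style kernel (e.g. a GEMV/GEMM routine) for the target platform, which is the reuse opportunity the paper analyzes.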
ISSN | 2375-5261 |
---|---|
DOI | 10.1109/ISORC49007.2020.00030 |