Interworking technology of neural network and data among deep learning frameworks

Bibliographic Details
Published in: ETRI Journal, Vol. 41, No. 6, pp. 760-770
Main Authors: Park, Jaebok; Yoo, Seungmok; Yoon, Seokjin; Lee, Kyunghee; Cho, Changsik
Format: Journal Article
Language: English
Published: Electronics and Telecommunications Research Institute (ETRI; 한국전자통신연구원), 01.12.2019
Summary: Based on the growing demand for neural network technologies, various neural network inference engines are being developed. However, each inference engine has its own neural network storage format, and there is a growing demand for standardization to solve this problem. This study presents interworking techniques for ensuring the compatibility of neural networks and data among various deep learning frameworks. The proposed technique standardizes the graph expression grammar and learning-data storage format using the Neural Network Exchange Format (NNEF) of Khronos. The proposed converter includes a lexical analyzer, a syntax analyzer, and a parser. The NNEF parser converts neural network information into a parsing tree and quantizes the data. To validate the proposed system, we verified that MNIST runs immediately after importing AlexNet's neural network and learned data. This study therefore contributes an efficient design technique for a converter that can execute a neural network and learned data in various frameworks, regardless of each framework's storage format.
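The summary describes a converter that first performs lexical analysis on a graph description and then parses the token stream into a tree. The sketch below illustrates that two-stage flow on a toy, NNEF-like subset of syntax; the token grammar, operation names, and function names here are illustrative assumptions, not the paper's actual converter or the full NNEF specification.

```python
import re

# Toy subset of an NNEF-style graph body: each statement has the form
# "name = op(arg, ...);". This is an illustrative sketch only.
TOKEN_RE = re.compile(r"[A-Za-z_]\w*|[=(),;]")

def tokenize(text):
    """Lexical analysis: split the graph body into identifiers and punctuation."""
    return TOKEN_RE.findall(text)

def parse(tokens):
    """Syntax analysis: build a flat parse tree as (result, op, args) tuples."""
    tree, i = [], 0
    while i < len(tokens):
        name = tokens[i]                 # result tensor name
        assert tokens[i + 1] == "="      # assignment
        op = tokens[i + 2]               # operation name
        assert tokens[i + 3] == "("
        args, i = [], i + 4
        while tokens[i] != ")":          # collect comma-separated arguments
            if tokens[i] != ",":
                args.append(tokens[i])
            i += 1
        assert tokens[i + 1] == ";"      # statement terminator
        tree.append((name, op, args))
        i += 2                           # skip ")" and ";"
    return tree

body = "conv1 = conv(data, weights); out = relu(conv1);"
print(parse(tokenize(body)))
# → [('conv1', 'conv', ['data', 'weights']), ('out', 'relu', ['conv1'])]
```

A real converter would additionally resolve tensor shapes and quantize the imported weight data into the target engine's format, as the summary notes.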
Bibliography: https://doi.org/10.4218/etrij.2018-0135
ISSN: 1225-6463; 2233-7326
DOI: 10.4218/etrij.2018-0135