GRU: optimization of NPI performance
Published in: The Journal of Supercomputing, Vol. 76, No. 5, pp. 3542-3554
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.05.2020
Summary: Currently, artificial intelligence is applied to automatic programming by generating snippets of code. The neural programmer-interpreter (NPI) is the most widely used machine-learning approach to automatic programming. This paper aims to improve the performance of the traditional NPI and to speed up NPI training without loss of precision. To achieve this goal, we changed the core structure of NPI by replacing the LSTM (long short-term memory) with a GRU (gated recurrent unit). The GRU has gating units that regulate the flow of information within the hidden state but, unlike the LSTM, maintains no separate memory cell. Numerical results demonstrate the performance of the proposed methodology: the GRU-based NPI trains nearly 33% faster than the original LSTM-based NPI while maintaining equal accuracy.
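The summary's key architectural point is that the GRU folds the LSTM's separate memory cell into the hidden state itself, using only an update gate and a reset gate. A minimal sketch of one GRU step is shown below; the parameter names and dimensions are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: the gates regulate information flow within the
    hidden state directly; there is no separate memory cell as in LSTM.
    Parameter names are illustrative."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # interpolated new state

# Example with a 4-dim input and 3-dim hidden state (arbitrary sizes)
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
params = (
    rng.standard_normal((n_hid, n_in)), rng.standard_normal((n_hid, n_hid)), np.zeros(n_hid),
    rng.standard_normal((n_hid, n_in)), rng.standard_normal((n_hid, n_hid)), np.zeros(n_hid),
    rng.standard_normal((n_hid, n_in)), rng.standard_normal((n_hid, n_hid)), np.zeros(n_hid),
)
h_new = gru_cell(rng.standard_normal(n_in), np.zeros(n_hid), params)
```

Because the GRU has three weighted transformations per step versus the LSTM's four (it drops the output gate and memory-cell path), each training step does less work, which is consistent with the speedup the paper reports.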
ISSN: 0920-8542, 1573-0484
DOI: 10.1007/s11227-018-2634-9