Activation function computation for neural networks

Bibliographic Details
Main Authors: Chen, Chia-Yu; Kim, Kyu-Hyoun; Kim, Seyoung; Kang, Mingu
Format: Patent
Language: English
Published: 24.01.2023

Summary: A computer-implemented method for improving the efficiency of computing an activation function in a neural network system includes initializing, by a controller, weights in a weight vector associated with the neural network system. Further, the method includes receiving, by the controller, an input vector of input values for computing a dot product with the weight vector for the activation function, which determines an output value of a node in the neural network system. The method further includes predicting, by a rectified linear unit (ReLU), which computes the activation function, that the output value of the node will be negative, based on computing an intermediate value for the dot product and on the magnitude of that intermediate value exceeding a precomputed threshold value. Further, the method includes, in response to the prediction, terminating, by the ReLU, the computation of the dot product, and outputting 0 as the output value.
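The idea in the summary can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: it assumes the partial sums of the dot product are checked against a single precomputed threshold, and that a sufficiently negative partial sum justifies predicting a negative final output (so ReLU can emit 0 early). The function name and threshold scheme are hypothetical.

```python
def relu_dot_early_exit(weights, inputs, threshold):
    """ReLU(w . x) with early termination (illustrative sketch).

    `threshold` is assumed to be precomputed so that once a partial
    sum is more negative than -threshold, the remaining terms cannot
    bring the dot product back above zero.
    """
    partial = 0.0
    for w, x in zip(weights, inputs):
        partial += w * x
        # Predict a negative output: the intermediate value's magnitude
        # exceeds the precomputed threshold, so stop early and output 0.
        if partial < 0 and abs(partial) > threshold:
            return 0.0
    # Otherwise finish the dot product and apply ReLU normally.
    return max(partial, 0.0)
```

For example, with a large negative contribution early in the vector, the loop exits before consuming the remaining inputs, which is where the claimed efficiency gain comes from; a conservative threshold (e.g., a bound on the sum of the remaining terms) keeps the early exit consistent with the exact ReLU result.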
Bibliography: Application Number: US202016797587