OPTIMIZING CAPACITY AND LEARNING OF WEIGHTED REAL-VALUED LOGIC

Bibliographic Details
Main Authors: Riegel, Ryan Nelson; Luus, Francois Pierre; Khan, Naweed Aghmad; Vos, Etienne Eben; Makondo, Ndivhuwo; Akhalwaya, Ismail Yunus
Format: Patent
Language: English
Published: 18.11.2021
Summary: Maximum expressivity can be received, representing a ratio between the maximum and minimum input weights to a neuron of a neural network implementing a weighted real-valued logic gate. Operator arity associated with the neuron can be received. Logical constraints associated with the weighted real-valued logic gate can be determined in terms of the weights associated with inputs to the neuron, a threshold-of-truth, and a neuron threshold for activation. The threshold-of-truth can be determined as a parameter used in an activation function of the neuron by solving an activation optimization formulated from the logical constraints; the activation optimization maximizes the product of expressivity, representing a distribution width of input weights to the neuron, and gradient quality for the neuron, given the operator arity and the maximum expressivity. The neural network of logical neurons can then be trained using the activation function at the neuron, with the activation function using the determined threshold-of-truth.
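
As an illustration of the workflow the summary describes, the sketch below selects a threshold-of-truth by grid search over the product of an expressivity term and a gradient-quality term, then uses the selected value to parameterize a piecewise-linear activation for a weighted real-valued AND neuron. The constraint-derived expressivity bound alpha/(1 - alpha) - (arity - 1), the gradient-quality proxy 1 - alpha, the bias beta = 1, and all function names are illustrative assumptions, not the formulation claimed in the patent.

# Hypothetical sketch of the described workflow, not the patent's formulation:
# choose a threshold-of-truth alpha by maximizing an assumed
# expressivity * gradient-quality product, then use alpha to parameterize a
# piecewise-linear activation for a weighted real-valued AND neuron.

import numpy as np


def expressivity(alpha: float, arity: int, max_expressivity: float) -> float:
    # Assumed constraint-derived form: the permitted max/min weight ratio grows
    # as alpha/(1 - alpha) - (arity - 1), capped at the received maximum
    # expressivity.
    return min(max_expressivity, alpha / (1.0 - alpha) - (arity - 1))


def gradient_quality(alpha: float) -> float:
    # Crude proxy: a wider uncertainty band (1 - alpha, alpha) means the
    # activation saturates less, leaving more informative gradients.
    return 1.0 - alpha


def threshold_of_truth(arity: int, max_expressivity: float) -> float:
    # Grid search over alpha in (0.5, 1) for the activation optimization:
    # maximize expressivity * gradient quality given arity and the cap.
    alphas = np.linspace(0.501, 0.999, 499)
    scores = [expressivity(a, arity, max_expressivity) * gradient_quality(a)
              for a in alphas]
    return float(alphas[int(np.argmax(scores))])


def weighted_and(x: np.ndarray, w: np.ndarray, alpha: float, beta: float = 1.0) -> float:
    # Lukasiewicz-style weighted conjunction: pre-activation beta - w.(1 - x),
    # passed through an assumed piecewise-linear activation anchored at
    # (z_min, 0), (1 - alpha, 1 - alpha), (alpha, alpha), (beta, 1).
    z = beta - float(np.dot(w, 1.0 - x))
    return float(np.interp(z,
                           [beta - float(np.sum(w)), 1.0 - alpha, alpha, beta],
                           [0.0, 1.0 - alpha, alpha, 1.0]))


if __name__ == "__main__":
    arity, max_expr = 4, 10.0
    alpha = threshold_of_truth(arity, max_expr)   # ~0.93 for these inputs
    w = np.array([2.0, 1.0, 1.0, 1.0])            # example input weights
    print("threshold-of-truth alpha =", round(alpha, 3))
    print("AND(all true) =", weighted_and(np.ones(arity), w, alpha))
    print("AND(one false) =", weighted_and(np.array([0.0, 1.0, 1.0, 1.0]), w, alpha))

For arity 4 and a maximum expressivity of 10, the search settles near the crossover point at which the permitted weight spread first reaches the cap, roughly alpha = 0.93; larger arities or larger caps push the selected threshold-of-truth higher.
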
Bibliography: Application Number: US202015931223