Dynamic Spiking Framework for Graph Neural Networks
Main Authors | , , , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 15.12.2023 |
Summary: | The integration of Spiking Neural Networks (SNNs) and Graph Neural Networks
(GNNs) is gradually attracting attention due to their low power consumption and
high efficiency in processing the non-Euclidean data represented by graphs.
However, as a common problem, dynamic graph representation learning faces
challenges such as high complexity and large memory overheads. Current work
often uses SNNs in place of Recurrent Neural Networks (RNNs), adopting binary
features instead of continuous ones for efficient training, which
overlooks graph structure information and leads to the loss of details during
propagation. Additionally, optimizing dynamic spiking models typically requires
propagation of information across time steps, which increases memory
requirements. To address these challenges, we present a framework named
\underline{Dy}namic \underline{S}p\underline{i}king \underline{G}raph
\underline{N}eural Networks (\method{}). To mitigate the information loss
problem, \method{} propagates early-layer information directly to the last
layer for information compensation. To accommodate the memory requirements, we
apply implicit differentiation at the equilibrium state, which does not
rely on the exact reverse of the forward computation. While traditional
implicit differentiation methods are usually applied in static settings,
\method{} extends them to dynamic graphs. Extensive experiments on
three large-scale real-world dynamic graph datasets validate the effectiveness
of \method{} on dynamic node classification tasks with lower computational
costs. |
DOI: | 10.48550/arxiv.2401.05373 |
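The memory-saving idea the abstract describes can be illustrated in miniature. The sketch below is my own illustration (not the paper's code; `f`, `forward_equilibrium`, and `implicit_grad` are hypothetical names): it solves an equilibrium z* = f(z*, x) by fixed-point iteration, then recovers the Jacobian dz*/dx from the implicit function theorem, (I - ∂f/∂z) dz*/dx = ∂f/∂x, so the backward pass never retraces the forward iterations.

```python
import numpy as np

def f(z, x, W):
    # A simple contractive layer; W is scaled small so a fixed point exists.
    return np.tanh(W @ z + x)

def forward_equilibrium(x, W, tol=1e-12, max_iter=500):
    # Fixed-point iteration z <- f(z, x) until convergence.
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_next = f(z, x, W)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

def implicit_grad(z_star, x, W):
    # Jacobians of f at the equilibrium: with s = tanh'(Wz* + x),
    # J_z = diag(s) @ W and J_x = diag(s).
    s = 1.0 - np.tanh(W @ z_star + x) ** 2
    J_z = s[:, None] * W
    J_x = np.diag(s)
    # Implicit function theorem: (I - J_z) dz*/dx = J_x.
    return np.linalg.solve(np.eye(len(x)) - J_z, J_x)

rng = np.random.default_rng(0)
W = 0.3 * rng.standard_normal((4, 4)) / 4  # scaled for contraction
x = rng.standard_normal(4)

z_star = forward_equilibrium(x, W)
J = implicit_grad(z_star, x, W)

# Sanity check against a finite-difference estimate of dz*/dx.
eps = 1e-5
fd = np.zeros((4, 4))
for i in range(4):
    xp = x.copy()
    xp[i] += eps
    fd[:, i] = (forward_equilibrium(xp, W) - z_star) / eps
print(np.allclose(J, fd, atol=1e-4))
```

The practical payoff, and presumably what the paper exploits across time steps, is that memory stays constant in the number of forward iterations: only the equilibrium point is needed to form the gradient.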