Graph Convolutional Networks With Autoencoder-Based Compression And Multi-Layer Graph Learning


Bibliographic Details
Published in: ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3593 - 3597
Main Authors: Giusti, Lorenzo; Battiloro, Claudio; Di Lorenzo, Paolo; Barbarossa, Sergio
Format: Conference Proceeding
Language: English
Published: IEEE, 23.05.2022

Summary: This work proposes a novel architecture and training strategy for graph convolutional networks (GCN). The proposed architecture, named Autoencoder-Aided GCN (AA-GCN), compresses the convolutional features into an information-rich embedding at multiple hidden layers, exploiting the presence of autoencoders before the point-wise nonlinearities. Then, we propose a novel end-to-end training procedure that learns different graph representations per layer, jointly with the GCN weights and autoencoder parameters. As a result, the proposed strategy improves the computational scalability of the GCN, learning the best graph representation at each layer in a data-driven fashion. Several numerical results on synthetic and real data illustrate how our architecture and training procedure compare favorably with other state-of-the-art solutions, both in terms of robustness and learning performance.
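The abstract describes the key idea of an AA-GCN layer: a standard graph convolution whose output is compressed by an autoencoder bottleneck before the point-wise nonlinearity, with the decoder providing a reconstruction objective. The following is a minimal numpy sketch of that idea, not the authors' implementation; all variable names, the linear encoder/decoder, and the mean-squared reconstruction loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # point-wise nonlinearity applied after the compression step
    return np.maximum(x, 0.0)

# Dimensions (illustrative): nodes, input features, conv features, embedding size
N, F_in, F_out, F_emb = 6, 8, 8, 3

S = rng.random((N, N))        # graph shift operator (e.g. a learned adjacency)
X = rng.random((N, F_in))     # node feature matrix
W = rng.random((F_in, F_out)) # GCN layer weights
E = rng.random((F_out, F_emb))  # encoder weights: compress conv features
D = rng.random((F_emb, F_out))  # decoder weights: reconstruct conv features

Z = S @ X @ W        # graph convolution (one-hop propagation + linear map)
emb = Z @ E          # autoencoder bottleneck BEFORE the nonlinearity
H = relu(emb)        # compressed layer output passed to the next layer

# Reconstruction term the end-to-end training could add to the task loss
recon_loss = np.mean((emb @ D - Z) ** 2)

print(H.shape)  # compressed output: (6, 3) instead of (6, 8)
```

In this sketch the next layer operates on the smaller `F_emb`-dimensional features, which is where the computational-scalability benefit mentioned in the abstract would come from; jointly learning `S` per layer alongside `W`, `E`, and `D` corresponds to the paper's multi-layer graph learning.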
ISSN: 2379-190X
DOI: 10.1109/ICASSP43922.2022.9746161