Attending to Topological Spaces: The Cellular Transformer

Bibliographic Details
Published in: arXiv.org
Main Authors: Ballester, Rubén; Hernández-García, Pablo; Papillon, Mathilde; Battiloro, Claudio; Miolane, Nina; Birdal, Tolga; Casacuberta, Carles; Escalera, Sergio; Hajij, Mustafa
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 23.05.2024
Summary: Topological Deep Learning seeks to enhance the predictive performance of neural network models by harnessing topological structures in input data. Topological neural networks operate on spaces such as cell complexes and hypergraphs, which can be seen as generalizations of graphs. In this work, we introduce the Cellular Transformer (CT), a novel architecture that generalizes graph-based transformers to cell complexes. First, we propose a new formulation of the usual self- and cross-attention mechanisms, tailored to leverage incidence relations in cell complexes, e.g., edge-face and node-edge relations. Additionally, we propose a set of topological positional encodings specifically designed for cell complexes. By transforming three graph datasets into cell complex datasets, our experiments reveal that CT not only achieves state-of-the-art performance, but does so without the need for more complex enhancements such as virtual nodes, in-domain structural encodings, or graph rewiring.
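The abstract's central idea, attention restricted to incident pairs of cells (e.g., node-edge relations), can be illustrated with a short sketch. The following is not the authors' implementation; it is a minimal single-head reading of incidence-masked cross-attention, where the node-edge incidence matrix `B1`, the feature dimension `d`, and the unbatched layout are all assumptions made for the example.

```python
# Minimal sketch of incidence-masked cross-attention between nodes and
# edges of a cell complex (illustrative; not the paper's code).
import torch
import torch.nn.functional as F

def incidence_cross_attention(x_nodes, x_edges, B1, Wq, Wk, Wv):
    """Cross-attention with node queries and edge keys/values,
    restricted to incident node-edge pairs via the incidence matrix B1.

    x_nodes: (n_nodes, d)        node features
    x_edges: (n_edges, d)        edge features
    B1:      (n_nodes, n_edges)  0/1 node-edge incidence matrix
    """
    q = x_nodes @ Wq                           # (n_nodes, d)
    k = x_edges @ Wk                           # (n_edges, d)
    v = x_edges @ Wv                           # (n_edges, d)
    scores = (q @ k.T) / q.shape[-1] ** 0.5    # (n_nodes, n_edges)
    # Mask non-incident pairs so each node attends only to its own edges.
    scores = scores.masked_fill(B1 == 0, float("-inf"))
    attn = F.softmax(scores, dim=-1)
    return attn @ v                            # (n_nodes, d)

# Toy usage: a triangle (3 nodes, 3 edges) with random features.
d = 8
x_nodes, x_edges = torch.randn(3, d), torch.randn(3, d)
B1 = torch.tensor([[1, 0, 1],
                   [1, 1, 0],
                   [0, 1, 1]])
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
out = incidence_cross_attention(x_nodes, x_edges, B1, Wq, Wk, Wv)
print(out.shape)  # torch.Size([3, 8])
```

The same masking pattern extends to other incidence relations mentioned in the abstract (e.g., edge-face), by swapping in the corresponding incidence matrix and feature tensors.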
ISSN: 2331-8422