SSAM: a span spatial attention model for recognizing named entities

Bibliographic Details
Published in: Scientific Reports, Vol. 15, No. 1, Article 10313 (13 pp.)
Main Authors: Wang, Kai; Wen, Kunjian; Chen, Yanping; Qin, Yongbin
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 25.03.2025

Summary: Mapping a sentence into a two-dimensional (2D) representation can flatten nested semantic structures and build multi-granular span dependencies in named entity recognition. Existing approaches to recognizing named entities often classify each entity span independently, which ignores the spatial structures between neighboring spans. To address this issue, we propose a Span Spatial Attention Model (SSAM) that consists of a token encoder, a span generation module, and a 2D spatial attention network. The SSAM employs a two-channel span generation strategy to capture multi-granular features. Unlike traditional attention implemented on a sequential sentence representation, spatial attention is applied to a 2D sentence representation, enabling the model to learn the spatial structures of the sentence. This allows the SSAM to adaptively encode important features and suppress non-essential information in the 2D sentence representation. Experimental results on the GENIA, ACE2005, and ACE2004 datasets demonstrate that our proposed model achieves state-of-the-art performance, with F1-scores of 81.82%, 89.04%, and 89.24%, respectively. The code is available at https://github.com/Gzuwkj/SpatialAttentionForNer.
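
The summary describes spatial attention applied to a 2D sentence representation (a table of candidate spans) rather than to a token sequence. Below is a minimal, illustrative PyTorch sketch of that idea; the module names (SpanGrid, SpatialAttention2D), the CBAM-style average/max pooling, and all hyperparameters are assumptions made for clarity and are not taken from the paper or its repository.

# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn


class SpanGrid(nn.Module):
    """Maps token embeddings (n, d) to a 2D span table (n, n, d)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        n, d = h.shape
        head = h.unsqueeze(1).expand(n, n, d)   # representation of span start i
        tail = h.unsqueeze(0).expand(n, n, d)   # representation of span end j
        return torch.tanh(self.proj(torch.cat([head, tail], dim=-1)))


class SpatialAttention2D(nn.Module):
    """Re-weights cells of the span table so salient spans are emphasized."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        # grid: (n, n, d) -> (1, d, n, n) so a 2D convolution can see neighbors
        x = grid.permute(2, 0, 1).unsqueeze(0)
        avg_map = x.mean(dim=1, keepdim=True)            # (1, 1, n, n)
        max_map = x.amax(dim=1, keepdim=True)            # (1, 1, n, n)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return (x * attn).squeeze(0).permute(1, 2, 0)    # back to (n, n, d)


# Usage: score every candidate span (i, j) with a linear classifier.
n, d, num_labels = 12, 64, 5
tokens = torch.randn(n, d)                 # stand-in for encoder output
grid = SpanGrid(d)(tokens)                 # 2D sentence representation
grid = SpatialAttention2D()(grid)          # spatial attention over neighboring spans
logits = nn.Linear(d, num_labels)(grid)    # (n, n, num_labels) span label scores

The key design point the sketch tries to convey is that attention weights are computed over the (i, j) grid, so a span's score is influenced by its spatial neighbors (overlapping and nested spans) instead of being classified in isolation.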
ISSN: 2045-2322
DOI: 10.1038/s41598-025-87722-0