AILAB-Udine@SMM4H 22: Limits of Transformers and BERT Ensembles

Bibliographic Details
Published in: arXiv.org
Main Authors: Portelli, Beatrice; Scaboro, Simone; Chersoni, Emmanuele; Santus, Enrico; Serra, Giuseppe
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 07.09.2022

Summary: This paper describes the models developed by the AILAB-Udine team for the SMM4H 22 Shared Task. We explored the limits of Transformer-based models on text classification, entity extraction, and entity normalization, tackling Tasks 1, 2, 5, 6, and 10. The main takeaways from participating in the different tasks are: the overwhelmingly positive effect of combining different architectures through ensemble learning, and the great potential of generative models for term normalization.
ISSN: 2331-8422