Visualizing and Explaining Language Models

Bibliographic Details
Published in: arXiv.org
Main Authors: Braşoveanu, Adrian M. P.; Andonie, Răzvan
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 30.04.2022
Summary: During the last decade, Natural Language Processing has become, after Computer Vision, the second field of Artificial Intelligence that was massively changed by the advent of Deep Learning. Regardless of the architecture, the language models of the day need to be able to process or generate text, as well as predict missing words, sentences, or relations depending on the task. Due to their black-box nature, such models are difficult to interpret and explain to third parties. Visualization is often the bridge that language model designers use to explain their work, as the coloring of salient words and phrases, clustering, or neuron activations can be used to quickly understand the underlying models. This paper showcases the techniques used in some of the most popular Deep Learning for NLP visualizations, with a special focus on interpretability and explainability.
ISSN: 2331-8422
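
Illustrative sketch: the summary mentions coloring salient words as one visualization technique for explaining language models. The Python snippet below is a minimal sketch of that idea, assuming a gradient-times-embedding saliency heuristic, a HuggingFace sentiment model chosen only as an example, and a simple HTML rendering; none of this is the paper's own method or tooling.

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Example model; any classification or language model with accessible
    # input embeddings would work the same way (assumption, not from the paper).
    model_name = "distilbert-base-uncased-finetuned-sst-2-english"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    model.eval()

    text = "The visualization made the model's decision easy to understand."
    inputs = tokenizer(text, return_tensors="pt")

    # Embed the tokens explicitly so gradients can be taken w.r.t. the embeddings.
    embeddings = model.get_input_embeddings()(inputs["input_ids"]).detach()
    embeddings.requires_grad_(True)

    outputs = model(inputs_embeds=embeddings, attention_mask=inputs["attention_mask"])
    predicted_class = outputs.logits.argmax(dim=-1).item()
    outputs.logits[0, predicted_class].backward()

    # Per-token saliency: norm of (gradient x embedding), a common heuristic.
    saliency = (embeddings.grad * embeddings).detach().norm(dim=-1).squeeze(0)
    saliency = saliency / saliency.max()

    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    # Color each token with a red background whose opacity reflects its saliency.
    html = "".join(
        f'<span style="background-color: rgba(255,0,0,{score:.2f})">{tok} </span>'
        for tok, score in zip(tokens, saliency.tolist())
    )
    print(html)  # open in a browser or notebook cell to see the colored tokens

The same pattern (compute a per-token score, map it to a color scale) underlies many of the saliency-style visualizations surveyed in the paper; only the scoring function changes between methods.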