Neuron-Level Knowledge Attribution in Large Language Models
Main Authors | Zeping Yu, Sophia Ananiadou |
---|---|
Format | Journal Article |
Language | English |
Published | 19.12.2023 |
Summary: | Identifying important neurons for final predictions is essential for understanding the mechanisms of large language models. Due to computational constraints, current attribution techniques struggle to operate at the neuron level. In this paper, we propose a static method for pinpointing significant neurons. Compared to seven other methods, our approach demonstrates superior performance across three metrics. Additionally, since most static methods typically identify only "value neurons" that contribute directly to the final prediction, we propose a method for identifying the "query neurons" that activate these "value neurons". Finally, we apply our methods to analyze six types of knowledge across both attention and feed-forward network (FFN) layers. Our method and analysis are helpful for understanding the mechanisms of knowledge storage and set the stage for future research in knowledge editing. The code is available at https://github.com/zepingyu0512/neuron-attribution. |
DOI: | 10.48550/arxiv.2312.12141 |
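
As a rough illustration of what static, neuron-level attribution can look like in practice, the sketch below scores GPT-2 FFN neurons by projecting each neuron's weighted value vector onto the unembedding of the predicted token, a common logit-lens-style heuristic in the spirit of the "value neuron" attribution the summary describes. This is not the paper's exact method; the model choice, prompt, and variable names are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's exact method): rank GPT-2 FFN neurons by how
# much their weighted value vectors push up the logit of the predicted token.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids

acts = {}  # layer index -> post-GELU FFN activations at the last position

def make_hook(layer):
    def hook(_module, _inputs, output):
        acts[layer] = output[0, -1].detach()  # shape: (4 * d_model,)
    return hook

handles = [blk.mlp.act.register_forward_hook(make_hook(i))
           for i, blk in enumerate(model.transformer.h)]
with torch.no_grad():
    logits = model(ids).logits
for h in handles:
    h.remove()

target = logits[0, -1].argmax()    # the model's predicted next token
u = model.lm_head.weight[target]   # unembedding row, shape: (d_model,)
print("predicted token:", tok.decode([int(target)]))

# Score each neuron: activation * (value vector . unembedding of target).
# c_proj.weight has shape (4 * d_model, d_model), so each row is one neuron's
# value vector. The projection ignores the final LayerNorm, as logit-lens
# style analyses usually do, so the scores are approximate.
scores = torch.cat([acts[i] * (blk.mlp.c_proj.weight @ u)
                    for i, blk in enumerate(model.transformer.h)])

d_ff = acts[0].numel()
for idx in scores.topk(5).indices.tolist():
    layer, neuron = divmod(idx, d_ff)
    print(f"layer {layer:2d}  neuron {neuron:4d}  score {scores[idx].item():+.3f}")
```

Because the heuristic is static (a single forward pass, no gradients), it scales to scoring every FFN neuron at once; the paper's own implementation at the GitHub link above is the authoritative reference for the proposed method.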