Jumping Ahead: Improving Reconstruction Fidelity with JumpReLU Sparse Autoencoders

Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 19.07.2024 |
Subjects | |
Online Access | Get full text |

Summary: Sparse autoencoders (SAEs) are a promising unsupervised approach for identifying causally relevant and interpretable linear features in a language model's (LM) activations. To be useful for downstream tasks, SAEs need to decompose LM activations faithfully; yet to be interpretable, the decomposition must be sparse -- two objectives that are in tension. In this paper, we introduce JumpReLU SAEs, which achieve state-of-the-art reconstruction fidelity at a given sparsity level on Gemma 2 9B activations, compared to other recent advances such as Gated and TopK SAEs. Through manual and automated interpretability studies, we also show that this improvement does not come at the cost of interpretability. JumpReLU SAEs are a simple modification of vanilla (ReLU) SAEs -- we replace the ReLU with a discontinuous JumpReLU activation function -- and are similarly efficient to train and run. By utilising straight-through estimators (STEs) in a principled manner, we show how to train JumpReLU SAEs effectively despite the discontinuous JumpReLU function in the SAE's forward pass. Similarly, we use STEs to train directly on L0, instead of on proxies such as L1, avoiding problems like shrinkage.

DOI: 10.48550/arxiv.2407.14435
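
As a rough illustration of the mechanism the abstract describes, below is a minimal PyTorch sketch of a JumpReLU SAE trained with a straight-through estimator. It is not the authors' implementation: the rectangular STE kernel, the bandwidth and sparsity coefficient values, and all names (`RectangleSTE`, `JumpReLUSAE`, `log_threshold`, etc.) are illustrative assumptions. Each feature's JumpReLU zeroes pre-activations below a learned threshold theta; the STE supplies a pseudo-gradient for theta so that both the reconstruction loss and a direct L0 penalty can be optimised by gradient descent.

```python
import torch
import torch.nn as nn


class RectangleSTE(torch.autograd.Function):
    """Heaviside step H(z - theta) whose backward pass supplies a
    straight-through pseudo-gradient for theta, using a rectangular kernel
    of width `bandwidth`. Kernel choice and bandwidth are assumptions of
    this sketch, not necessarily the paper's exact choices."""

    @staticmethod
    def forward(ctx, z, theta, bandwidth):
        ctx.save_for_backward(z, theta)
        ctx.bandwidth = bandwidth
        return (z > theta).to(z.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        z, theta = ctx.saved_tensors
        eps = ctx.bandwidth
        # Pseudo-derivative wrt theta: -(1/eps) inside a window of width eps
        # centred on the threshold, zero elsewhere.
        window = ((z - theta).abs() < eps / 2).to(z.dtype)
        grad_theta = -(1.0 / eps) * window * grad_output
        # The step function is flat almost everywhere, so no gradient flows to z here.
        return torch.zeros_like(z), grad_theta, None


class JumpReLUSAE(nn.Module):
    """Minimal JumpReLU SAE: linear encoder, learned per-feature thresholds,
    linear decoder, and a loss combining reconstruction error with an L0
    penalty optimised directly through the STE."""

    def __init__(self, d_model, d_sae, bandwidth=1e-3, sparsity_coeff=1e-2):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * d_model ** -0.5)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * d_sae ** -0.5)
        self.b_dec = nn.Parameter(torch.zeros(d_model))
        self.log_threshold = nn.Parameter(torch.zeros(d_sae))  # theta = exp(.) > 0
        self.bandwidth = bandwidth
        self.sparsity_coeff = sparsity_coeff

    def forward(self, x):
        pre_acts = x @ self.W_enc + self.b_enc
        theta = self.log_threshold.exp().expand_as(pre_acts)
        gate = RectangleSTE.apply(pre_acts, theta, self.bandwidth)
        acts = pre_acts * gate                  # JumpReLU: z * H(z - theta)
        recon = acts @ self.W_dec + self.b_dec
        recon_loss = (recon - x).pow(2).sum(-1).mean()
        l0 = gate.sum(-1).mean()                # mean number of active features
        loss = recon_loss + self.sparsity_coeff * l0
        return recon, loss
```

A hypothetical usage, with arbitrary dimensions:

```python
sae = JumpReLUSAE(d_model=1024, d_sae=8192)   # illustrative sizes
x = torch.randn(8, 1024)                      # stand-in for a batch of LM activations
recon, loss = sae(x)
loss.backward()                               # thresholds receive STE pseudo-gradients
```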