Dataset Difficulty and the Role of Inductive Bias
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 03.01.2024 |
Subjects | |
Summary | Motivated by the goals of dataset pruning and defect identification, a growing body of methods has been developed to score individual examples within a dataset. These methods, which we call "example difficulty scores", are typically used to rank or categorize examples, but the consistency of rankings between different training runs, scoring methods, and model architectures is generally unknown. To determine how example rankings vary due to these random and controlled effects, we systematically compare different formulations of scores over a range of runs and model architectures. We find that scores largely share the following traits: they are noisy over individual runs of a model, strongly correlated with a single notion of difficulty, and reveal examples that range from highly sensitive to insensitive to the inductive biases of certain model architectures. Drawing from statistical genetics, we develop a simple method for fingerprinting model architectures using a few sensitive examples. These findings guide practitioners in maximizing the consistency of their scores (e.g. by choosing appropriate scoring methods, number of runs, and subsets of examples), and establish comprehensive baselines for evaluating scores in the future. |
DOI | 10.48550/arxiv.2401.01867 |
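
The summary above describes comparing example-difficulty rankings across repeated training runs and architectures. As a rough, hedged sketch of that kind of analysis (not the paper's actual method or code; the score definition, function names, and synthetic data below are invented for illustration), one common formulation scores an example by how often it is misclassified across runs and then checks how stable the resulting ranking is with a split-half Spearman correlation:

```python
# Illustrative sketch only -- not the paper's code. Assumes the difficulty
# score is "fraction of runs in which an example is misclassified" and that
# ranking consistency is measured by split-half Spearman correlation.
import numpy as np
from scipy.stats import spearmanr


def difficulty_scores(correct: np.ndarray) -> np.ndarray:
    """correct: (n_runs, n_examples) boolean array, True where a run
    classified the example correctly. Higher score = harder example."""
    return 1.0 - correct.mean(axis=0)


def rank_consistency(correct: np.ndarray, n_splits: int = 10, seed: int = 0) -> float:
    """Average Spearman correlation between scores computed from two disjoint
    halves of the runs -- a crude proxy for how noisy per-run scores are."""
    rng = np.random.default_rng(seed)
    n_runs = correct.shape[0]
    corrs = []
    for _ in range(n_splits):
        perm = rng.permutation(n_runs)
        half = n_runs // 2
        rho, _ = spearmanr(difficulty_scores(correct[perm[:half]]),
                           difficulty_scores(correct[perm[half:]]))
        corrs.append(rho)
    return float(np.mean(corrs))


if __name__ == "__main__":
    # Synthetic demo: 20 runs x 1000 examples with a latent per-example error rate.
    rng = np.random.default_rng(1)
    error_rate = rng.beta(0.5, 2.0, size=1000)
    correct = rng.random((20, 1000)) > error_rate
    print("split-half Spearman correlation:", rank_consistency(correct))
```

With more runs per half, the split-half correlation rises toward 1, which is one way to gauge how many runs are needed before a ranking can be trusted.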