Does the perceived quality of interdisciplinary research vary between fields?

Bibliographic Details
Published in: Journal of Documentation, Vol. 79, No. 6, pp. 1514–1531
Main Authors: Thelwall, Mike; Kousha, Kayvan; Stuart, Emma; Makita, Meiko; Abdoli, Mahshid; Wilson, Paul; Levitt, Jonathan M.
Format: Journal Article
Language: English
Published: Bradford: Emerald Publishing Limited, 24.10.2023

Summary:
Purpose: To assess whether interdisciplinary research evaluation scores vary between fields.
Design/methodology/approach: The authors investigate whether published refereed journal articles were scored differently by expert assessors (two per output, agreeing a score and norm referencing) from multiple subject-based Units of Assessment (UoAs) in the REF2021 UK national research assessment exercise. The primary raw data was 8,015 journal articles published 2014–2020 and evaluated by multiple UoAs; the agreement rates between UoAs were compared to the estimated agreement rates for articles multiply evaluated within a single UoA.
Findings: The authors estimated a 53% agreement rate on a four-point quality scale between UoAs for the same article, against a within-UoA agreement rate of 70%. This suggests that quality scores vary more between fields than within fields for interdisciplinary research. There were also some hierarchies between fields, in the sense that some UoAs tended to give higher scores for the same article than others.
Research limitations/implications: The results apply to one country and one type of research evaluation. Both agreement rate estimates rest on untested assumptions about the extent of cross-checking of scores for the same articles in the REF, so inferences about the agreement rates are tenuous.
Practical implications: The results underline the importance of choosing relevant fields for any type of research evaluation.
Originality/value: This is the first evaluation of the extent to which a careful peer-review exercise generates different scores for the same articles between disciplines.
ISSN: 0022-0418
EISSN: 1758-7379
DOI: 10.1108/JD-01-2023-0012