How Reliable Are Impact Factors for Ranking Water Resources Journals? An Analysis of the 70,878 Citation Records of the 2002 and 2021 Top 10 Journals for the Last Two Decades

Bibliographic Details
Published in: Water Resources Research, Vol. 59, No. 3
Main Author: Pauwels, Valentijn R. N.
Format: Journal Article
Language: English
Published: 01.03.2023
Summary: Despite regularly being criticized, the Impact Factor (IF) remains a widely used journal analysis metric. In the Water Resources category of the Web of Science, a thorough investigation of the reliability of this metric is lacking. This study analyzed the citation records of the top 10 journals from 2002 through 2021 for the last two decades. Importantly, there are inconsistencies in the Web of Science database. In contrast to many fields, few papers are uncited. However, Simpson's paradox applies. More specifically, on the one hand, the correlation between citations to papers 7 years post publication and 1 or 2 years post publication (which determine the IF) has increased, which supports this metric. On the other hand, many of the most highly cited papers at the time of the IF calculation are lowly cited 5 years afterward. Additionally, most currently very highly cited papers were relatively uncited when they were used in the IF calculation. This raises doubt about the usefulness of this metric. For a limited number of journals, papers from individual nations strongly influence the IF. Also, for a few journals, the IFs are strongly influenced by review papers, but leaving these out of the IF does change the top 3 and top 10 of the rankings. The general conclusion is that IFs do contain some valuable information about the journals' citation impact, but that using them to rank journals is a questionable practice.

Plain Language Summary: Impact Factors (IFs) continue to be very frequently used to evaluate researchers' publication records. They are calculated in a very simplistic manner, and are essentially an attempt to summarize a very complicated issue (publication quality) in one single metric. This paper shows that care must be taken when applying this practice. Not only does the data set contain errors, but there are also a number of other reasons why these IFs are not reliable indicators of a journal's quality.
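The two-year IF discussed in the summary is a simple ratio: citations received in a given year to a journal's papers from the two preceding years, divided by the number of citable items published in those two years. A minimal sketch of that calculation, using hypothetical citation counts (not figures from the study):

```python
# Sketch of the two-year Impact Factor calculation examined in the paper.
# All counts below are hypothetical illustrations, not data from the study.

def impact_factor(citations_to, citable_items, if_year):
    """IF for `if_year`: citations received in `if_year` to papers
    published in the two preceding years, divided by the number of
    citable items published in those two years."""
    prev_years = (if_year - 1, if_year - 2)
    cites = sum(citations_to[y] for y in prev_years)
    items = sum(citable_items[y] for y in prev_years)
    return cites / items

# Hypothetical journal: citations received during 2022 to each cohort
# year, and citable items published per year.
citations_to = {2020: 300, 2021: 200}
citable_items = {2020: 100, 2021: 150}

print(impact_factor(citations_to, citable_items, 2022))  # 500 / 250 = 2.0
```

Because the numerator counts only the first one to two years of each paper's citation life, the metric is sensitive to exactly the early-versus-late citation discrepancies the study documents.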
Key Points:
- In many cases the calculation of the Impact Factor (IF) by the Web of Science contradicts its own data
- A number of other issues have been identified that raise doubt about the IF as a measure of journal quality
- A number of recommendations for better use of the literature databases have been made
ISSN: 0043-1397, 1944-7973
DOI: 10.1029/2022WR033352