
Consequences of using the journal impact factor

An interesting paper that should be mandatory literature for everybody making decisions on grant or job applications, especially for those impressed by high-profile journals on publication lists:
    Deep Impact: Unintended consequences of journal rank
    Björn Brembs, Marcus Munafò
    arXiv:1301.3748 [cs.DL]
It's a literature review that sends a clear message about the journal impact factor. The authors argue the impact factor is useless in the best case and harmful to science in the worst case.

The annually updated Thomson Reuters journal impact factor (IF) is, in principle, the number of citations in a given year to the articles a journal published in the two preceding years, divided by the number of articles it published in those years. In practice, there is some ambiguity about what counts as an "article", and this is subject to negotiation with Thomson Reuters. For example, journals that publish editorials will not want them to count among the articles because they are rarely cited in the scientific literature. Unfortunately, this freedom in negotiation results in a lack of transparency that casts doubt on the objectivity of the IF. While I knew that, the problem seems to be worse than I thought. Brembs and Munafò quote some findings:
"For instance, the numerator and denominator values for Current Biology in 2002 and 2003 indicate that while the number of citations remained relatively constant, the number of published articles dropped...
In an attempt to test the accuracy of the ranking of some of their journals by IF, Rockefeller University Press purchased access to the citation data of their journals and some competitors. They found numerous discrepancies between the data they received and the published rankings, sometimes leading to differences of up to 19% [86]. When asked to explain this discrepancy, Thomson Reuters replied that they routinely use several different databases and had accidentally sent Rockefeller University Press the wrong one. Despite this, a second database sent also did not match the published records. This is only one of a number of reported errors and inconsistencies [87,88]."
(For references in this and the following quotes, please see Brembs and Munafò's paper.)
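To make concrete how much the negotiable denominator matters, here is a minimal sketch; the numbers are made up for illustration and the function is mine, not Thomson Reuters' actual procedure:

    # Toy illustration with hypothetical numbers: the IF for year Y is the
    # number of citations in Y to items published in Y-1 and Y-2, divided
    # by the number of "citable items" from those two years.
    def impact_factor(citations_to_recent_items, citable_items):
        return citations_to_recent_items / citable_items

    citations = 3000          # citations received (hypothetical)
    research_articles = 500   # research articles published (hypothetical)
    front_matter = 250        # editorials, news items, etc. (hypothetical)

    # Front matter counted in the denominator: 3000 / 750 = 4.0
    print(impact_factor(citations, research_articles + front_matter))
    # Front matter negotiated out of the denominator: 3000 / 500 = 6.0
    print(impact_factor(citations, research_articles))

The citations in the numerator do not change at all; merely reclassifying the front matter moves the IF from 4.0 to 6.0.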

That is already a bad starting point. But more interesting is that, even though there are surveys confirming that the IF captures researchers' perception of high impact quite well, if one looks at the numbers it actually doesn't tell you much about the promise of the articles in these journals:

"[J]ournal rank is a measurable, but unexpectedly weak predictor of future citations [26,55–59]... The data presented in a recent analysis of the development of [the] correlations between journal rank and future citations over the period from 1902-2009 reveal[s that]... the coefficient of determination between journal rank and citations was always in the range of ~0.1 to 0.3 (i.e., very low)."
And that is despite there being reasons to expect a correlation, because high-profile journals put some effort into publicizing articles, and one can expect people to cite high-IF journals just to polish their reference list. However,
"The only measure of citation count that does correlate strongly with journal rank (negatively) is the number of articles without any citations at all [63], supporting the argument that fewer articles in high-ranking journals go unread...

Even the assumption that selectivity might confer a citation advantage is challenged by evidence that, in the citation analysis by Google Scholar, only the most highly selective journals such as Nature and Science come out ahead over unselective preprint repositories such as ArXiv and RePEc (Research Papers in Economics) [64]."
So IFs of journals in publication lists don't tell you much. That scores as useless, but what's the harm? Well, there are some indications that studies published in high-IF journals are less reliable, i.e. more likely to contain exaggerated claims or findings that cannot later be reproduced.
"There are several converging lines of evidence which indicate that publications in high ranking journals are not only more likely to be fraudulent than articles in lower ranking journals, but also more likely to present discoveries which are less reliable (i.e., are inflated, or cannot subsequently be replicated).

Some of the sociological mechanisms behind these correlations have been documented, such as pressure to publish (preferably positive results in high-ranking journals), leading to the potential for decreased ethical standards [51] and increased publication bias in highly competitive fields [16]. The general increase in competitiveness, and the precariousness of scientific careers [52], may also lead to an increased publication bias across the sciences [53]. This evidence supports earlier propositions about social pressure being a major factor driving misconduct and publication bias [54], eventually culminating in retractions in the most extreme cases."
The "decline effect" (effects getting less pronounced in replications) and the problems with reproducability of published research findings have recently gotten quite some attention. The consequences for science that Brembs and Munafò warn of are
"It is conceivable that, for the last few decades, research institutions world-wide may have been hiring and promoting scientists who excel at marketing their work to top journals, but who are not necessarily equally good at conducting their research. Conversely, these institutions may have purged excellent scientists from their ranks, whose marketing skills did not meet institutional requirements. If this interpretation of the data is correct, we now have a generation of excellent marketers (possibly, but not necessarily also excellent scientists) as the leading figures of the scientific enterprise, constituting another potentially major contributing factor to the rise in retractions. This generation is now in charge of training the next generation of scientists, with all the foreseeable consequences for the reliability of scientific publications in the future."
Or, as I like to put it, you really have to be careful what secondary criteria (publications in journals with high impact factor) you use as a substitute for the primary goal (good science). If you use the wrong criteria, you will not only fail to reach the optimal configuration, but also make it increasingly harder to ever get there, because you are changing the background on which you are optimizing: you are selecting for people with non-optimal strategies.
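A toy simulation makes this selection effect concrete. It is my own illustration with made-up numbers and weights, not a model from Brembs and Munafò; it only shows that selecting on a proxy that is weakly coupled to the actual goal mostly enriches for the proxy:

    # Toy simulation, all numbers hypothetical: hire the top 5% of candidates
    # by a proxy score (visibility in high-IF journals) that is only weakly
    # coupled to the quality we actually care about.
    from statistics import mean
    import random

    random.seed(1)

    def candidate():
        quality = random.gauss(0, 1)             # "good science" (unobserved)
        marketing = random.gauss(0, 1)           # skill at placing work in high-IF journals
        proxy = 0.3 * quality + 0.7 * marketing  # what the hiring committee sees
        return quality, marketing, proxy

    pool = [candidate() for _ in range(10000)]
    hired = sorted(pool, key=lambda c: c[2], reverse=True)[:500]   # top 5% by proxy

    print("quality:   pool %+.2f  hired %+.2f" % (mean(q for q, m, p in pool),
                                                  mean(q for q, m, p in hired)))
    print("marketing: pool %+.2f  hired %+.2f" % (mean(m for q, m, p in pool),
                                                  mean(m for q, m, p in hired)))

The hired group comes out shifted far more in marketing skill than in the quality one actually wanted to select for, and it is this group that then sets the selection criteria for the next generation.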

It should give us something to think about that even Gordon Macomber, the new head of Thomson Reuters, warns of depending on publication and citation statistics.