Scientific Journals and the Misleading h-Index


I’m about to re-submit my first research paper. My professors and I are aiming high. In their words, “we’ll try a top, top journal”. The rejection rates of these top journals are frighteningly high. Up to 95%. But, as one of my professors puts it, “one article in a top journal is worth tens in a B+ journal”. This convinced me to submit the paper to Operations Research rather than the European Journal of Operational Research (EJOR).

Weirdly enough though, when I told my officemate that, she immediately checked the h-indices of these journals. What a surprise that was! The so-called B+ journal EJOR was easily topping the ranking of journals in my field! And the A+ journal we were aiming at sat at a mere 6th position, still ahead of Mathematics of Operations Research in 20th place, which my professor deemed too theoretical for us to publish in.

Sure enough, within the operations research community, it is well known that publishing in Operations Research is much more prestigious than publishing in EJOR. In fact, whenever I check a professor’s CV, I pay great attention to the journals they have published in, and I’m sure professors do the same for candidates for their labs. Thus, in research communities, the quality of the journals one has published in is the main indicator of one’s potential.

Yet, most journal rankings are not consistent with the community’s judgement of journals’ actual quality. One cause is the frequent use of the h-index to judge a journal’s quality. This measure is defined as “the largest number h such that h articles published in [the last 5 years] have at least h citations each”. I see two problems with such a definition.
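To make that definition concrete, here is a minimal sketch in Python of how such an index can be computed from a list of citation counts; the citation numbers in the example are made up purely for illustration.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank   # the top `rank` papers all have at least `rank` citations
        else:
            break      # counts only decrease from here on, so stop
    return h

# Hypothetical citation counts for a journal's papers over 5 years:
print(h_index([10, 8, 5, 4, 3]))   # 4 papers have at least 4 citations -> 4
print(h_index([100, 2, 1]))        # one huge hit, but h is still only 2
```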

First, the impact of a paper A is not necessarily related to the number of citations it gets. Rather, the citation count shows how many other papers are related to paper A. After all, nowadays most citations are not there because the cited work was actually useful in constructing the citing paper. Rather, most citations are used to indicate the interest a particular topic has raised within a community, and to justify paper A’s focus on that topic. As proof, just look at the never-ending bibliographies articles have nowadays.

Second, and more importantly, what the ranking really shows is how many papers a journal publishes! While Operations Research publishes around 90 articles a year, EJOR publishes about 550! The champion of the h-index ranking, Nature, publishes over 800 articles a year!
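To see how strongly sheer volume drives this measure, here is a rough simulation sketch; the Pareto citation distribution and its parameter are my own assumption, chosen only to mimic the heavy-tailed nature of citation counts. Both hypothetical journals draw per-paper citations from exactly the same distribution, so no paper in one is any more influential than a paper in the other.

```python
import random

def h_index(citations):
    # Same computation as in the earlier sketch, written more compactly.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, count in enumerate(ranked, start=1) if count >= rank)

def simulate_journal(papers_per_year, years=5, seed=0):
    # Every paper's citation count is drawn from the SAME heavy-tailed
    # distribution, so per-paper impact is identical by construction.
    rng = random.Random(seed)
    n = papers_per_year * years
    return [int(rng.paretovariate(1.5)) for _ in range(n)]

small = simulate_journal(90, seed=1)    # roughly Operations Research's yearly volume
large = simulate_journal(550, seed=2)   # roughly EJOR's yearly volume

print("h-index with  90 papers/year:", h_index(small))
print("h-index with 550 papers/year:", h_index(large))
```

Under this model the larger journal will, with overwhelming probability, end up with the higher h-index, simply because it gets far more chances to accumulate h papers with at least h citations.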

These facts scare me, as I fear that many decisions (like hiring or funding) are based religiously on numerical indicators, with little attention to their actual meaning. Sort of like how a country’s wealth (and a President’s record) should not be judged on GDP figures alone. That’s why I find it so important to insist on the irrelevance of many rankings, and on the importance of improving our statistical indicators.