Last week we concluded that citation, despite its deficiencies, is a widely used system for assessing the value of a scientific work. We also noted that maximizing citations usually becomes a scientist's main goal (mainly because their career depends on it). We therefore need reliable methods for calculating the pragmatic value of scientific information.
The essential goal of using citation-based measures to evaluate the quality of a scientist's work is to show how often, and where, that scientist has been cited. Probably the most widely used measures are the impact factor and the h-index. The impact factor of a journal for a given year x is the number of citations received in year x by articles published in the two preceding years, divided by the number of articles published in those same two years. In other words, a journal has a 2013 impact factor of, say, 5.426 if on average each of its 2011 and 2012 articles was cited 5.426 times in 2013.
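The two-year formula above fits in a few lines of Python. This is only an illustration: the function name and the sample figures are our own, not taken from any official source.

```python
def impact_factor(citations_in_year_x, articles_in_prior_two_years):
    """Impact factor for year x: citations received in year x to
    articles published in years x-1 and x-2, divided by the number
    of articles published in those same two years."""
    return citations_in_year_x / articles_in_prior_two_years

# Hypothetical figures: 5,426 citations received in 2013 to 1,000
# articles published in 2011-2012 give a 2013 impact factor of 5.426.
print(impact_factor(5426, 1000))  # → 5.426
```

Note that the numerator counts only citations made *in* year x, which is exactly why the measure says nothing about a journal's long-term impact.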
High-impact journals usually attract high-quality contributions from top scientists and receive greater attention. Publishing in them is therefore one of the top goals for researchers seeking to increase their prestige and influence among their colleagues. It also, of course, raises their chances of obtaining research grants and attractive job offers.
Among the strong points of the impact factor are its accessibility and ready-to-use nature. However, the method also has several drawbacks. First, the two-year citation window clearly fails to capture the long-term value, and the real impact, of many journals. A second criticism is that scientists and journals that regularly publish review articles tend to have inflated citation counts, because these articles are generally highly cited. A third is that the impact factor ignores the large percentage of papers in a journal that receive no citations at all (even though they may have been read and used). Despite all this, it is worth pointing out that the impact factor has strongly influenced the publication strategy of many scientists.
To get around some of the limitations of the impact factor, Jorge Hirsch, a US physicist, developed the h-index in 2005. The idea is simple: a researcher with an h-index of, for example, 20 has published 20 articles that have each attracted at least 20 citations. It is estimated that after 20 years a "successful" scientist will have an h-index of around 20, an "exceptional" scientist an h-index of 40, and a "magnificent" one an h-index of 60. In 2011, organic chemist George M. Whitesides of Harvard University topped the h-index ranking of living chemists with an h-index of 169.
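Hirsch's definition translates directly into code: sort the citation counts and find the largest rank h at which the h-th paper still has at least h citations. A minimal sketch (function name and sample counts are illustrative):

```python
def h_index(citation_counts):
    """Largest h such that the researcher has h papers with
    at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # paper at this rank still has enough citations
        else:
            break
    return h

# Six papers cited 25, 18, 12, 7, 3 and 1 times: four papers have
# at least 4 citations each, but only three have at least 5, so h = 4.
print(h_index([25, 18, 12, 7, 3, 1]))  # → 4
```

The example also makes the "one-hit wonder" point concrete: a single paper with 1,000 citations still yields an h-index of only 1.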
One of the strongest points of this measure is that it helps to distinguish a "one-hit wonder" from a consistent investigator with numerous high-impact papers and, therefore, a high h-index. Like the impact factor, however, the h-index must be used with caution. It disfavours young scientists with short careers, because it is bounded by the total number of publications (regardless of their importance). It is also insensitive to differences between fields and journal types, which have very different citation practices. These drawbacks have led to the development of alternative indexes, but those deserve a post of their own.
In summary: if scientists want their peers to be aware of their work (and ultimately to use and cite it), they must spread it across as many journals as possible or post it on personal homepages and institutional repositories. In the research evaluation process one thing is clear: whatever the multitude of methods and tools for evaluating one's work, high-quality outputs always stand out, and publishing is the first step to scientific success.