The chase for attention is an indispensable part of scientific progress. When paying attention to the research done by others, those demanding scientific information are, in a manner of speaking, collaborating with those providing it. This team-oriented, altruistic aspect of science (which aims to save everyone a little time and effort) is one of the main reasons why scientific congresses and journals make sense. However, which of these two venues is the more appropriate place to publish your work?

Why choose journals: Journals are quite convenient for scientists who are just starting their careers, as they have higher acceptance rates. Another reason to prefer journals is that they give you the opportunity to include as many experimental results as you wish, information that you might not otherwise be able to fit into a conference publication. In a journal you can also add extra proofs that are too long for a conference talk.

Reviews are another key point to take into account. While conference reviewers usually take the researchers' assertions on trust, journal reviewers are expected to verify them. That is why journal reviewers may spend a lot of time on a paper, whereas conference reviewers simply cannot afford to do so. Considering this, your preference should be for journal publication, because a detailed review can help you improve your work (or understand its weaknesses) before you resubmit it.

Why choose congresses: The most obvious reason to publish your work at a scientific congress is that conferences provide higher visibility. Researchers in the same discipline will attend the talk, ask questions and, most importantly, become aware of the innovative research being generated in your particular subfield. Receiving feedback from peers may also help you eventually write up your study. Moreover, you can establish contacts for future employment, and you can learn of available positions earlier than others.

Journals (even the best) are less selective than congresses. The rush of some researchers to submit even marginal results tends to lower the overall quality of journals, whereas congresses usually maintain higher quality.

In short, regardless of which venue you choose to publish your work, the prime motive of journals and conferences is again the same: to make you and your work visible. And, of course, to practice your writing and your presentation skills!


Posted in Uncategorized | Leave a comment


Last week we came to the conclusion that citation, despite its deficiencies, represents a widely used system for assessing the value of a scientific work. We also stated that maximizing citations usually becomes the main goal of scientists (mainly because their careers depend on it). Therefore, we need reliable methods for calculating the pragmatic value of scientific information.

The essential goal of using citation-based measures to evaluate the quality of a scientist's work is to show how often and where that scientist was cited. Probably the most widely used measures are the impact factor and the h-index. The impact factor of a journal in a given year is the number of citations received that year by articles published in the two preceding years, divided by the number of articles published in those same two years. In other words, a journal has a 2013 impact factor of, let's say, 5.426 if on average each of its 2011 and 2012 articles was cited 5.426 times in 2013.
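The two-year formula described above can be sketched as a short calculation (a minimal illustration; the function name and the sample numbers are invented, not taken from any real journal):

```python
def impact_factor(citations_to_recent_articles, articles_published):
    """Impact factor of a journal for a given year: citations received
    that year by articles from the two preceding years, divided by the
    number of articles published in those same two years."""
    return citations_to_recent_articles / articles_published

# Hypothetical journal: 5426 citations in 2013 to the 1000 articles it
# published in 2011 and 2012 gives a 2013 impact factor of 5.426.
print(round(impact_factor(5426, 1000), 3))  # prints 5.426
```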

High-impact journals usually attract high-quality contributions from top scientists and receive more attention. Thus, publishing in these journals is one of the top goals for researchers who seek to increase their prestige and influence among their colleagues. And, of course, it also raises their chances of getting research grants and attractive job offers.

Among the strong points of the impact factor we can highlight its accessibility and ready-to-use nature. However, the method also has several drawbacks. First of all, it is easy to see that the two-year citation formula fails to capture the long-term value or the real impact of many journals. Another criticism is that scientists and journals that usually publish review articles tend to have their citation counts inflated, because these types of articles are generally highly cited. A third observation is that the impact factor does not take into account the large percentage of papers in a journal that receive no citations at all (even though they may still be read and used). Despite all this, it is worth pointing out that the impact factor has strongly influenced the publication strategy of many scientists.

In order to get around some of the limitations of the impact factor, Jorge Hirsch, a US physicist, developed the h-index in 2005. The idea is simple: a researcher with an h-index of, for example, 20 has published 20 articles that have each received at least 20 citations. It is estimated that after 20 years a "successful" scientist will have an h-index of around 20; an "exceptional" scientist, an h-index of 40; and a "magnificent" one, an h-index of 60. In 2011, organic chemist George M. Whitesides, from Harvard University, ranked first in an h-index ranking of living chemists with an h-index of 169.
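Hirsch's definition translates directly into a few lines of code (a minimal sketch; the function name and the sample citation counts are made up for illustration):

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cited in enumerate(counts, start=1):
        if cited >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Five papers with 10, 8, 5, 4 and 3 citations: the top four papers each
# have at least 4 citations, but not five papers with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```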

One of the strongest points of this measure is that it helps to distinguish a "one-hit wonder" from a consistent researcher with numerous high-impact papers and, therefore, a high h-index. However, like the impact factor, the h-index must be used with caution. It disfavours young scientists with short careers because it is bounded by the total number of publications (regardless of their importance). It is also insensitive to differences between fields and types of journals, in which typical citation counts vary widely. These drawbacks have led to the development of alternative indexes, but that is a topic for another post.

In summary: if scientists want their peers to be aware of their work (and ultimately make use of it and cite it), they must disseminate it in as many journals as possible or post it on personal homepages or institutional repositories. In the research evaluation process one thing is clear: beyond the multitude of methods and tools for evaluating one's work, high-quality outputs always emerge, and publishing is the first step towards scientific success.



[Image: PhD Comics cartoon]


It is a well-known fact that success in science is measured mainly in attention. Attention, besides being an input to scientific advancement, is a form of payment. Researchers achieve full recognition in the scientific community only by earning the attention of other scientists and of society. And the way scientific papers are accessed and cited by others stands as almost the only tool for identifying influential scientists and relevant works. However, is it a useful tool for understanding how the scientific community works? Is citation a compulsory part of scientific progress?

From a simple point of view, science is a world-wide enterprise in which the work of some investigators serves as input for other lines of research. Unlike other fields, the outputs of scientific investigation (in other words, the "discoveries") are not sold on markets: they are published. Publication offers scientific progress to the general public under one condition: that the use and spread of this progress be credited by citation. Gaining this attention is a prime motive for practising science, and this leads inevitably to a system where researchers, competing for citations, get distracted from what they are devoted to doing (science).

Many governments and funding agencies use citation data to evaluate the quality of a researcher's work. However, not everybody thinks citation analysis is the best way to judge the value of a scientific publication. This "citing system" has important weaknesses: for instance, colleagues who cite each other reciprocally to build strong citation counts, people who constantly cite themselves, or authors who cite an authority in the field without ever having examined that authority's work. On the other hand, other criteria, such as consistency, productiveness or correlation with facts, could be considered better measures of scientific value.

My point of view is that there are ways of accumulating citations that have very little to do with scientific quality. The scientists with the largest citation counts will not necessarily be the best scientists. The success of scientific investigation is poorly understood without a correct understanding of the mechanisms that drive it. But even though quality and citation will never match precisely, citation nowadays represents a generally accepted measure of scientific value and, like it or not, scientists must work with it.
