Saturday, March 21, 2009

Are we training pit bulls to review our manuscripts?



Virginia Walbot has recently published an article in the Journal of Biology discussing the idea that grad students and postdocs can turn out to be excessively (and in some cases unjustifiably) critical, even ‘savage’, when asked to referee manuscripts, and that this problem is a result of their training. She argues that it may be addressed by some exercises during their tutelage: a way to prevent them from turning into “future generations of manuscript-savaging reviewers” by teaching them what a ‘publishable unit’ is, and that a single paper is not supposed to provide complete proof of a major concept.

She opens her article by commenting on a rather common scenario: getting a bad review from one referee, while the others may have favorable comments about the manuscript.

(…the bad review, while) detailed in its critique, it relentlessly measures the work against a 'gold standard of excellence' using the latest and best techniques, before dismissing the years of labor (...)
This doesn’t normally happen; reviewers like that are (ideally) quickly dismissed by editorial offices. If you submit a manuscript based on a cDNA microarray, nobody will reject it on the grounds that an Affymetrix chip should have been used or that high-throughput sequencing may have been the way to go. If you have the proper controls, the manuscript is sound in its conception and the results are well discussed, you are good to go (I know, this may sound like “too much”, but it’s what is expected from an article).

It is common (and even understandable), however, that if you claim, for example, that a single transcriptional regulator is of generalized importance based on only a few supposed target genes, this “particular” referee may want to see the whole transcriptional profile of a mutant defective in that regulator, or some other experiment, before being convinced of its widespread role and general importance (wouldn’t we all?). As a referee you are free (and even encouraged) to ask for whatever you feel is needed to support the claims being made. Of course, this doesn’t mean the authors must perform the experiments. A good editor will know when a requested experiment is simply unrealistic, or not crucial to support the claims being made, and the manuscript will not be rejected just because the authors can’t fulfill a reviewer’s particular request. Authors are also given the chance to communicate their position on certain revisions directly to the editor.



The reviews from the other referees are also extremely important for the editor’s decision. But what if the editor is not that good, experienced, or knowledgeable? How can he or she simply ignore a devastating review from someone who is supposed to be an expert on the matter?

This is where the importance of Walbot’s article lies. She argues that, with proper training, our grad students and postdocs can learn to assess the importance of each paper and to make reasonable requests to improve an article, rather than demanding that a single manuscript provide complete proof of an idea.

She introduces the concept of a “timely publishable unit”, which I find very useful: an article that, given the available knowledge and tools, constitutes a contribution of new ideas and partial proofs towards the understanding of a particular biological process.

So what does Walbot suggest we do? How can we better train our students to appreciate and properly judge the contribution of a single paper, understand what a publishable unit is, and become better reviewers?

She first suggests that we have our students
Read a short review and all of the constituent papers to understand how solid, but as yet incomplete, papers add up to a new paradigm.
This is a great idea. It will be of great help in illustrating the concept of a publishable unit, and students will come to appreciate that it is not a single paper that solves a particular problem, but rather a series of papers, each making a partial contribution. She suggests several points to discuss with the students. For example:

What were the claims and evidence in the papers cited in the review? What constituted a publishable unit in this field, at that time? Is there a substantial difference in quality between papers in the most prestigious journals, in specialty journals in the field, and in obscure journals? In retrospect, given the emphasis in the review article, are the key conclusions primarily from the papers in the best journals? That is, did reviewing at the time identify the papers that best established new points or clarified existing concepts?
This will also teach them (if you haven’t told your students already) two things: 1) not everything you find in CNS journals (Cell, Nature, Science and other one-word-title journals) is true, and 2) just because a journal has a low impact factor does not mean that the articles published there are weak or should not be considered in your research.
While I’m on the matter, make sure your students learn what the impact factor measures and what it definitely does not: don’t let them think that “the higher a journal’s impact factor, the better (and truer) an article published there is”.
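Since this confusion is so common, here is a minimal sketch of how the standard two-year impact factor is computed (the Journal Citation Reports definition); note that it is a journal-level average and says nothing about any individual article:

\[
\mathrm{IF}_{2009} \;=\; \frac{C_{2009}}{N_{2007} + N_{2008}}
\]

where \(C_{2009}\) is the number of citations received in 2009 by items the journal published in 2007 and 2008, and \(N_{2007} + N_{2008}\) is the number of citable items (articles and reviews) it published in those two years. Because citation distributions are highly skewed, a handful of heavily cited papers can inflate this average while the median article in the same journal is cited only a few times, which is exactly why the impact factor tells you little about the quality, let alone the truth, of any single paper.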

Walbot also suggests students should
Listen to short presentations from several graduate students and postdoctoral fellows from one lab (equivalent to a faculty research seminar in depth and breadth) and then discuss what's ready for publication
After attending these seminars, a few questions could be discussed: “Should this 'story' be just one publication or can the work be broken into distinct publications? Should it be broken up?”
She argues this may help students understand
what constitutes a timely publishable unit of information in a particular field and how the ongoing contribution of new ideas and partial proofs stimulates work in the field.
For this, of course, students need to be able to follow the “story” over a considerable span of time (1–2 years, for example). This is why I’ve always argued that graduate student seminars are extremely important, and I completely agree with Walbot on this point.

While I agree that we should teach students the importance of each paper, and that each one should be understood as a contribution towards understanding a particular phenomenon, I don’t agree that reviewers turn out to be “savages” (as Walbot describes) because of a particular training method in grad school (one most schools use, that is, analyzing classic great papers, as well as flawed ones, to make a point).

Every reviewer was first an author. As authors, they have faced the inherent problems of research, how slow it moves and how much “sweat and blood” each submission represents (particularly early in their careers). Many reviewers are sympathetic and make suggestions with the idea of improving the manuscript and helping the authors.
It is also true, however, that many reviewers are not like this. They may ask for experiments that are beyond the authors’ reach, whether in terms of resources or simply time, and that in many cases would contribute little to the point being made in the manuscript.

This is why Walbot’s article deserves to be read and discussed: it makes us consider the fact that we can train our students not to be like that. This does not mean that our students (and future reviewers) should look the other way when a control is missing, or that they shouldn’t judge the quality of the evidence for each claim to the best of their abilities, or look for even the tiniest faults in the manuscripts they review. In fact, I’m a strong supporter of such a reviewing attitude. What I do agree with, and it is the main point of the article, is that besides their own experience as authors, students can be taught, through some small exercises during their training, to evaluate the importance of each paper and what it represents in the effort towards understanding a particular process. This makes them better reviewers (critical, yet realistic and helpful) and better scientists. I share the idea that such exercises can be of great help, and this will ultimately be in science’s best interest.

And who knows? Your former grad student may be chosen as a reviewer for your next submission!

--
Walbot, V. (2009). Are we training pit bulls to review our manuscripts? Journal of Biology, 8(3). DOI: 10.1186/jbiol125

