As a professor of computer science I get to write a lot of reviews: for Bachelor's and Master's theses, for dissertations, for grant proposals, and for conference and journal paper submissions. I'd like to explain the logic of the reviews I write, using conference and journal submissions as the example. It is pretty simple:
The purpose of a review is to make a recommendation to a committee (or an editor) on how to handle a particular paper submission.
In my mind, a good review starts with the actual recommendation to the committee or the editor. All that follows is a substantiation of this recommendation.
Authors and editors alike need to see the recommendation explicitly; it should not be hidden in some scoring system. The Identify the Champion reviewing scale goes a long way toward that end (for program committees), but I still prefer to spell out the recommendation explicitly so that there is no room for confusion.
I typically write the substantiation top-down. I first focus on strengths and weaknesses of core issues like research design and execution; these make or break a paper. After that, I follow up with minor issues that can be fixed easily. I usually only write up minor issues if I recommend accepting the paper; anything else would be pointless.
Ideally, such an evaluation follows a clear framework and is understandable to those at whom the review is aimed (both the committee or editor and the authors, who ultimately get to see the review). For theses written with my professorship, I have made our grading framework public. Similarly, for OpenSym, a conference I am ultimately responsible for, we have also published some of our review criteria (but more remains to be done).
Some people believe that the goal of a review is to help the author improve the paper. I disagree. Helping the author is a (positive) side-effect of a substantiated recommendation; it is not a goal in itself. This has the following consequence:
Authors have a right to complain if they don’t understand a decision; they do not have a right to complain if reviewers don’t help them fix up their paper.
The reason for my stance is that authors should tap into their peer network and gather feedback on their paper before they submit it. There is far too much reviewing work to be done and far too many incremental and ultimately inconsequential papers to read. Shopping a paper around from venue to venue just to harvest reviewer feedback is a really bad practice (and will ultimately ruin the authors' reputation).
Still, good papers warrant elaborate reviews. This was framed well by James Noble, then program chair for ECOOP 2012, who provided his own advice as well as advice from prior chairs. He attributes the following to Phil Wadler:
Here is advice on reviewing I wrote for the Journal of Functional Programming: “A wise man once gave the following advice: spend the most time refereeing the best papers. If a paper is awful, please don’t spend a great deal of time on it. If a paper is good, please do spend a little time to make it better.” Another way to view this is that the author earns the time of the reviewer: a good paper earns a detailed review, a poor paper earns a brief review.
The logic, as I interpret it, is that good papers reward a reviewer's time and are also likely to be accepted, so the extra effort spent helping improve them is time well spent. By raising the quality of accepted papers, a reviewer also aids the program chair or journal editor, who is ultimately responsible for the final quality of the papers at a conference or in a journal.
Update 2014-02-04: Fixed the attribution of the JFP quote based on comments on this article.