Tuesday, July 26, 2011

Reviewer armchair psychology

Did I mention July is the month for reviews this summer? I must have reviewed 25 papers this month (one for every hot, humid day!).

After I review papers, if I have time I enjoy doing armchair psychology on my fellow reviewers. Some conferences / journals let you see the reviews others have submitted, and some even allow you to change your score based on what you read. I'm not sure if this is a good thing or a bad thing, but it's interesting.

When there are 3-4 reviewers for a paper, the scores tend to regress to the mean. So on a 1-5 scale, the average score will be around 3. There are also often repeats - so if I give a paper a '4', it's likely someone else will give it a 4 too. Really bad papers tend to have scores that cluster around 2, and really good papers cluster around 4.

So I'm always intrigued when I see the following:
Reviewer 1:  4
Reviewer 2:  5
Reviewer 3:  3
Reviewer 4:  1
As a nascent author, when you get back a set of reviews like the one above, you tend to think, "Reviewer 4 is a jerk who Didn't Get It."

As a more seasoned author, you tend to think, "Oh no, what is my Fatal Flaw? (Reviewer 4 is a jerk who Didn't Get It.)"

And as a seasoned reviewer, you tend to think, "Who is Reviewer 4 and what is their beef?"

Occasionally Reviewer 4 has a valid point, and the other three reviewers really did miss something major. But more often than not Reviewer 4 is angry at the authors for taking too many liberties in their paper. Or for not citing Their Brilliant Work. Or it's the "Someone is WRONG on the internet" phenomenon.

In any case, when I'm an editor or paper chair I can ignore the outlier and life goes on. But when I'm a fellow reviewer I feel more vested in the outcome, particularly when I'm Reviewer 2. I hate to see the possibility of good science getting squished because some reviewer was being thick, especially when it's someone else's science.

So if a conference or journal offers a discussion period for reviewers, I occasionally have to confront Reviewer 4 head on, lest they somehow manage to convince Reviewers 1 and 3 to change their scores.

Anyway, this is some of what goes on behind the scenes at your favorite publication venue. As an author, try not to let the outliers get under your skin. If your other reviews are good, be persistent and try again somewhere else. There's an awful lot of randomness in this process.


  1. Reviewers are only human, for better or worse. They might be just having a terrible day.

    A friend told me that her recent reviewers' comments criticized the established procedure in her field... indicating that the reviewers were from some other field and not very well informed, even though she had requested in-field reviewers.

    (This is one of the more "interdisciplinary" IEEE journals)

  2. Thanks for writing this down. I've seen a few senior profs become rather cynical and make assumptions about the intentions of the reviewers involved. I really hope that I avoid this disease when I grow older. :-)