Friday, April 22, 2011

Strengths and Weaknesses

When you review a paper for a conference or journal, many of the reviewing forms request that the reviewer outline the paper's strengths and weaknesses.

Recently, I read a review that went something like this:
"Weaknesses:  Several of the main findings presented in this paper are merely confirmation of previous work."
I wish this was one of those publication venues where you can review the reviewers, because I would have written back:
"Weaknesses: Reviewer 2 has gotten so thristy for novelty they have forgotten one of the hallmarks of science: replicability."
It's not just Computer Science that is plagued by this problem; it has certainly cropped up elsewhere. But our discipline does have a tradition of getting so obsessed with the novelty of an idea that we forget the value of reproducing previous findings.

6 comments:

  1. While trying to evaluate a new project I am working on, I concluded that a major criticism might be that it "followed too logically from prior work". It wasn't surprising enough. It sounds silly when you say it out loud, but I can totally imagine a reviewer saying that.

  2. Boy, can I ever relate to this. For years reviewers have been saying "yeah, that's great, but you really need to do experiments over X protocol to confirm your findings." Any guesses as to the reviewer comments when we finally did submit a paper with experiments over X? Jeez....

  3. @Bashir: It does sound silly! It's a shame we've started thinking that way. I think follow-on work makes so much sense. That's usually how I do things - try looking at thing A, find thing B by accident, then do follow-on work on thing B.

    @acdalal: Grr, shame on those reviewers. Hopefully you have a chance to give a rebuttal. Seems like more and more conferences are allowing it lately.

  4. Agreed. This obsession with novelty also makes it tempting for some groups to publish incomplete (and eventually inaccurate) results in order to be first. When I review, I mention lack of novelty ONLY when the manuscript reads like the researchers are claiming novelty, or when the results are very well established in the field, since the 10th me-too paper will only clutter up the literature.

  5. In this age of database search engines, prodigal, what exactly does it mean to "clutter up the literature"? What amount of replication is needed, and what amount is "clutter"?

    For the rest of you, is it most important that the result was surprising or that it *could have been surprising*, save that it just didn't turn out that way empirically?

  6. I'm going to play the devil's advocate and suggest that a lack of novelty is fair criticism, especially in the computer systems area.

    The goal of computer systems research is to understand existing computer systems and build new systems that are in some sense "better" than systems that we already know how to build. If you accept this premise, a new system by itself is useless unless it is also _better_ than existing systems in some objective way.

    Therefore, a submission that mainly confirms known ideas about building computer systems might not be worthy of publication simply because it doesn't give us enough new insight into how to build systems.
