
Reviewing Papers

Trainees will often co-review manuscripts with their PI. It helps to have a general framework before diving into the task. I typically begin by reading the whole manuscript from beginning to end, often taking notes as I go. Usually we each read the paper independently, chat about it, each write up a summary, and then synthesize these summaries into a final review. Sometimes the journal will ask for rankings along specific criteria, in which case we'll discuss those specifically. Credit is given to the trainee when I fill in the editorial system.

The first pass through the paper

After reading the paper (particularly the introduction), I try to articulate:

• What is the point? What is the big-picture impact of this work?
• How does it corroborate, refute, or advance our prior understanding of the topic?
• Who will care about this? (People in a subfield? People in a broader field? The whole world? Two people studying this niche topic?)

What is the hypothesis?

I then try to identify an overarching hypothesis. Many good papers will make the hypothesis (and the specific predictions of that hypothesis) easy to find, but often the hypothesis is more implicit. I then ask whether the authors proved or disproved that hypothesis:

• Does the evidence presented adequately support their hypothesis? If not, do they discuss this or offer alternative interpretations?
• Do the authors discuss technical limitations? Sometimes the tools to do the 'perfect' experiment simply don't exist.
• If 'no' to either of the above, what data would the authors need (using techniques that exist, and optimally techniques the authors have in hand) to firmly prove or disprove their conclusions?

Do the results prove or disprove the hypothesis?

Often, papers will show a lot of circumstantial evidence in support of their hypothesis, or complementary approaches that are all consistent with it. Is there an experiment that could unequivocally refute their hypothesis? Do they do this critical experiment? For each figure, is the evidence high quality, and is the conclusion supported? Look for:

• Clear visualizations
• Proper control groups and manipulations
• Proper technical methods
• Proper analysis/statistics (check for independence of samples, and whether the data meet the assumptions of the statistical tests the authors use)

Proper experimental design

Is sufficient detail provided in the methods to understand what the authors did? Are there any omissions in the discussion, key work in the field that isn’t cited, or interpretations that don’t comport with prior literature or the data presented?

I REALLY try to avoid prescribing additional experiments unless they are needed to interpret the conclusions made. "Hey, they could have also looked at X, Y, Z" is not appropriate. But if the authors can analyze their existing data in a different way to bolster their claims, or if they need additional control experiments or conditions to interpret their results, this would be appropriate to recommend. Finally, point out minor issues related to presentation (unfriendly visualizations, misnumbered figures, grammar, or typos); use line numbers here whenever possible.

Review format

Generally, a review follows this structure:

  1. An introductory paragraph with a high-level summary of the gap the paper tries to fill, its key strengths and why it will be impactful, its main weaknesses, and your overall assessment of the paper on balance.
  2. An itemized list of 'major' points for the authors to respond to. These can be conceptual or technical.
  3. A list of any minor issues the authors should be aware of and remedy.