The idea of impact-neutral reviewing was pioneered by PLOS ONE, which turns ten years old this year.
The idea is that ... PLOS ONE only verifies whether experiments and data analysis were conducted rigorously, and leaves it to the scientific community to ascertain importance, post publication, through debate and comment. [source]

I have now been writing impact-neutral reviews for almost three years and sum up my experience so far here.
1. I review all papers with the same, impact-neutral, criteria regardless of journal. I don't rank the paper in any way, and if there is a required field that I don't like I just enter a "-".
2. I have not recommended rejection of a single paper in that time.* I have seen several cases where the conclusions were not supported by the data, but in those cases I recommend changing the conclusions instead. Sometimes the conclusion was, in my opinion, that nothing could be concluded (without further calculations) or that the approach didn't work. But negative results are fine within impact neutrality, as long as they are clearly stated as such.
3. All my reviews now start with "In my opinion the following issues should be addressed before the paper is suitable for publication" (see also the next point). I haven't come across any paper where I didn't feel something (sometimes minor) needed to be fixed. "Impact neutral" also means that I don't praise the paper even if I think it is important.
4. In spite of point 2, my reviews for "high impact" journals such as JACS still turn out more "critical", because in my opinion the conclusions more often need to be "toned down" given the data.
5. As part of sticking to the facts, I quote every sentence I have a problem with. I don't question the authors' motives in writing what they did, nor do I express any annoyance I may feel. I phrase a lot of critique as questions and ask the authors to clarify.
6. I end all my reviews with "Jan Jensen (I choose to review this paper non-anonymously)". Non-anonymity (onymity) is not really part of being "impact neutral", but I can't see any reason not to sign my reviews in light of points 1-5.
7. For the same reason I also share all my reviews on Publons, although not all journals allow Publons to make them visible. In fact, my greatest motivator (imagined fear) when reviewing is some future reader spotting an obvious factual error in the paper that I missed, and then finding my review online.
*Footnote to point 2: The closest I got to a rejection was a paper that used some math that was completely foreign to me and very poorly described. I actually went as far as Googling the authors to see whether they were really affiliated with a university, as they claimed. Ultimately I wrote to the editor saying that I could not judge whether this work was legit or not.
This work is licensed under a Creative Commons Attribution 4.0 License.