Back to the paper
The text the way I got it

Here is the full text of the editor's letter and review that I got from Brain Research, with my responses indented and in italics.

Editor Letter:

Dear Author(s):

Your manuscript had been reviewed by the editorial board of BRAIN RESEARCH REVIEWS.

In view of the large number of papers reviewed, it has been necessary to accept for publication only those reports considered to be of high priority. I regret that the referee editors recommended rejection of your report on the basis of priority.

It is an interesting point that the editor rejected it based on 'priority', while the review agreed that the subject is important and objected to the paper on other grounds. The reason is probably that the editor wanted to avoid having to argue about what the review actually says.

I am returning your manuscript along with comments by the referee editors.


Dominik P. Purpura

The review:

MS #53951
Author: Harpaz

Papers are judged for suitability and priority for publication on the basis of their relevance for understanding brain mechanisms, originality, soundness pf methods, clariy of presentation, adaquacy of illustration and/or tables, and general format. The referee's comments on these matters of specific criticism and suggesstions is noted below.

MS # 53951 by Dr. Yehouda Harpaz
Reproducibility of cognitive imaging of the cerebral cortex: a survey of the recent literature.

The paper addresses the question of reproducibility in brain imaging through published results. It relies on the work published during most of the year 1997. The review (section 2) is divided in the comparisons of results across studies ("reproducibility across studies"), within studies ("comparison between individuals in the same study") and the relevant reviews.

I believe the topic of this paper to important and intesresting. However, it lacks a scientific methodology

An insult with no content. The question is whether the methodology used is sound, but the reviewer is apparently not bold enough to say that it isn't.
and doesn't discuss some essential statistical points related to imaging experiment reproducibility, especially the use of random versus fixed effect models in the papers reviwed by the author.
Since the paper establishes that there were virtually no replications at all in 1997, obviously none of these points make any difference.
I realize that at the time of the writing, the notion of fixed versus random effects models (leading to the generalisability or not of the experimental results to the parent population in which the subjecte were drawn) was not common knowledge but has become so during the past two years.
Now a new maneuver: implying that the situation has changed. The reviewer does not explicitly claim that there are now replicated studies, presumably because he wants to avoid lying explicitly. Instead, he just implies it.
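For readers unfamiliar with the fixed-versus-random-effects distinction that the reviewer invokes, here is a small illustrative sketch. It is my own construction with made-up numbers (it appears in neither the paper nor the review), showing why a fixed-effects analysis can look far more 'significant' than a random-effects analysis of the same data, which is what the point about generalising to the parent population is about:

```python
import statistics

# A deterministic toy dataset: 10 subjects, 20 scans each. Each subject has
# an individual activation level; scan-to-scan noise is a fixed zero-sum
# pattern so the numbers are exactly reproducible.
subject_levels = [0.9, -0.4, 1.3, 0.2, -0.8, 0.6, 1.1, -0.1, 0.5, 0.7]
noise = [-0.5, 0.5, -0.25, 0.25] * 5          # 20 values, sums to zero
scans = [[m + d for d in noise] for m in subject_levels]

def t_stat(values):
    """One-sample t statistic against zero."""
    n = len(values)
    return statistics.mean(values) / (statistics.stdev(values) / n ** 0.5)

# Random-effects analysis: one summary value per subject, then a t-test
# across subjects. Subject-to-subject variability enters the error term,
# so the inference generalises to the population the subjects came from.
t_random = t_stat([statistics.mean(s) for s in scans])

# Fixed-effects analysis: pool all 200 scans as if they were independent
# observations. Subject variability is ignored, the error term shrinks,
# and the t value is inflated; the inference applies only to these subjects.
t_fixed = t_stat([x for s in scans for x in s])

print(f"random-effects t = {t_random:.2f}")   # ~1.9, not significant on 9 df
print(f"fixed-effects  t = {t_fixed:.2f}")    # ~7.5, looks highly significant
```

On the same data, the fixed-effects t value comes out several times larger than the random-effects one. Note that none of this changes my point below: whichever model is used, the 1997 literature contains virtually no replications to analyse.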
One may simply conclude from the paper that fixed effect models do not show too good reproducibility across study,
The paper does not show that "fixed effect models do not show too good reproducibility". It shows that no model shows any reproducibility at all.

It is worth pointing out that since the reviewer is telling a straightforward lie here, he first qualifies it with "One may simply conclude". Clearly, you would have to be a serious idiot to draw this conclusion from the paper.

and this is now widely admited and does not address the reproducibility of functional imaging itself but only of the methods used in the field to analyse the data.
This would be a lie if it were referring to the contents of the paper. It isn't, because it refers to the reviewer's "interpretation" in the previous part of the sentence, and hence it is simply irrelevant.

Even so, it is an odd statement, because it suggests that the reviewer believes that the reproducibility of the methods is not important.

Two other points are worth mentioning : the relation between reproducibility and sensitivity (scanner, number of subjects in the study, statistical method employed) and the question of the statistical threshold related to the multiple comparison problem,
As I have already stated above, the virtually complete lack of replication shows that all these factors are irrelevant, and I actually discuss this in the paper itself.
a point that may not have been fully understood by the author (cf the introduction).
And adds a baseless personal insult. Since the reviewer did not write what I 'misunderstood', there is no way to show that he is wrong.
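The "multiple comparison problem" the reviewer alludes to is, in essence, simple arithmetic. A toy illustration (my own, not from the paper; the voxel count is just a typical order of magnitude for a whole-brain analysis):

```python
# Testing tens of thousands of voxels independently at an uncorrected
# p < 0.05 guarantees false positives even when there is no signal at all.
n_voxels = 50_000
alpha = 0.05

# Expected number of voxels crossing the threshold by chance alone.
expected_false_positives = n_voxels * alpha
print(f"expected false positives: {expected_false_positives:.0f}")

# The Bonferroni correction divides the threshold by the number of tests,
# keeping the chance of *any* false positive at about alpha.
bonferroni_threshold = alpha / n_voxels
print(f"corrected per-voxel threshold: {bonferroni_threshold:.0e}")
```

None of which changes the argument above: choosing the right threshold matters only once there are replications to compare.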
Presentation of the data (eg the maximum intensisty projection) may also have led the author to some misunderstanding.

The paper as it stands may therefore mislead the general reader in forming an opinion on the brain functional imaging reproducibility.

Without specifying in what way the paper would mislead, i.e. what wrong impressions the general reader would get, this is simply meaningless. The paper gives the impression that there are no replication studies, which is definitely true.

Again, I do think that the point of reproducibility is a very important one, and that more information is needed on this topic in the cognitive functional imaging, but it should be treated with more mathematical rigor than what is done in the present manuscript.

An example of blatant nonsense. Clearly, it makes no sense at all to apply 'mathematical rigor' to irreproducible results.
It seems that so far, the manuscript presents the (not always fully informed) opinion of the author rather than actual results.
A plain lie, because the paper presents actual facts about reproducibility in the cognitive brain imaging literature. To make it worse, it is also decorated with a personal insult. As before, the reviewer 'softens' the lie with a qualifier ("it seems that").

Nevertheless, the opinion presented here is not widespread and as such deserved special attention.

This looks like a more positive attitude, but its purpose is to strengthen the lie in the previous sentence: that the paper presents an opinion rather than facts.
I suggest that it could be be communicated through a letter to the editor or a short communication, pointing out the statistical problems involved.
That is a smokescreen. The reviewer knows well that there is no chance of publishing any such letter or short communication.