Response to the reviews from Cerebral Cortex of the replicability paper.

The text of the reviews is given in full. My comments are indented.

REVIEWER#1

General: This is the most coherent and dense demagoguery that I have received so far. The reviewer 'drives' into the reader the idea that the paper compares studies and performs analyses of the data, even though my paper does neither and makes this point explicitly. In addition, the review presents questions that my paper answers explicitly as if they were not answered.

14798

1. It is unclear exactly how the comparisons have been made.

I don't do any comparison.
What criteria does he have for lack of reproducibility and how far do differences in test paradigms account for the inconsistencies?
The first part of the question implies that I do some analysis, which requires some criteria. The second part also implies that I do an analysis whose conclusion depends on whether the 'inconsistencies' can be accounted for by 'differences in test paradigms'. Obviously, both implications are false.
How many studies have actually been carried out as intended replications of other results using different PET scanners?
I have answered this question in the survey (bottom of page 7: two reported that they did, but did not give data).
It might be easy to arrive at the conclusion of variability in results merely because there is variability in methods.
This implies that my main claims are wrong if the variability in results is due to the variability in methods, which is false. The lack of replication is a serious problem, no matter what the reason is, and I state this explicitly in the implications.

In addition, the idea that the variability is due to variability in methods seems unlikely, for the reason explained in the first paragraph of section 3.

This is a common problem in neuroscience, not restricted to functional imaging. We would need to see a much stronger case that there was something 'special' about these procedures.
I don't claim that it is the procedures that are wrong. I claim that the lack of replicability is the problem, and it needs to be addressed.

2. It is probably a bad idea to have included all possible (125) studies in the survey without imposing some degree of quality control. Tighter criteria should have been used for studies to have been included.

That would make sense only if I were doing an analysis of the data (and hence it implies that I do). Since what I do is a survey of all the literature, it is obviously not sensible to exclude part of it on any criterion.

3. The writing is poor, in some places too informal or colloquial for a journal of the quality of Cerebral Cortex. For example, on p5, top, does the author mean 'giry and sulcy', actually gyri and sulci?!

4. By comparing within-subject variation, between-study variation and between-imaging-modality variation, the author has spread himself a little thin on the details of the analyses, which appear superficial.

Repeats the implication that I do comparison and analysis.
Also the structure of the rationale of the analysis is not always quite clear, as a result.
And again repeats the implication that I do comparison and analysis.
It should also be much more specific about what it adds to the recent review by Poline et al.
Clearly, what it adds is a complete survey of the literature, and I make this point explicitly at the end of the introduction (bottom of page 5).
Overall, while I have much in sympathy with the author's objectives,
A plain lie. This reviewer clearly would do what he can to prevent the publication of this paper, or of any other paper that makes claims about the replicability of cognitive imaging in general.
he should probably submit it to another journal for publication such as TICs or Critical Reviews in Neurobiology, or even a specialist neuroimaging journal, where the technical aspects of his argument can be debated by a more expert audience.
The reviewer probably knows that there isn't any chance that any of these will publish it.

========================================================================

REVIEWER#2

General: Odd review. The first and the last paragraphs are pretty sensible; the middle paragraph is demagoguery like the first review, but much less coherent.

Referee's report on "Reproducibility of cognitive imaging of the cerebral cortex PET and fMRI: A survey of recent literature", by Yehouda Harpaz (C14798).

The author addresses a very important issue in functional brain imaging: the issue of reproducibility of cortical activations in particular, and of cerebral and cerebellar activations. This is an issue which has been (intentionally?) overlooked by scientists doing functional brain imaging. Unfortunately it is extremely rare that scientists doing functional brain imaging try to replicate the findings of others. Also the publication system may discourage this most necessary practice.

The author has gone through over 100 published functional imaging articles and reports an evaluation of some of these from two perspectives: reproducibility across studies and within studies.

False on two counts: 1) I don't evaluate any of the papers; I just report what they write. 2) I don't report on 'some of these'; I report on all of them.
It is not clear how these studies are selected.
It is not obvious what 'these studies' refers to: all the studies that I survey, or the alleged subset that the reviewer claims I selected.
The author uses the term 'cognitive imaging' without actually defining it.
I define 'cognitive investigation' in the abstract, which should be good enough for any reader.
Usually cognition is taken to be different from somatosensory and motor activities, but occasionally the author discusses activation in premotor cortex. The domain of investigation should be clearly defined, as there are several examples of high reproducibility of motor paradigms.
This assertion is simply false.
For example, more than 50 studies and the same amount of abstracts have verified reproducible activations of the SMA and M1 when people perform motor sequences.
And this one is a plain lie. Some regions in the SMA and M1 are quite consistently activated when humans perform some specific movements, but at a much grosser resolution than the resolution of imaging. In the survey, I included imaging of all the cortex, including motor and sensory cortex, and none of the studies showed replication of anything.
Further, it is unclear how the author defines reproducibility.
I explain explicitly what I mean at the bottom of page 3 and on page 4.
The author should present a rigorous definition of what is a match of activations.
That is wrong. The definition has to be useful for deciding whether something is replicable or not, and the reviewer does not give any reason why he thinks my definition is not useful.
In case centers of gravity it is, the author merely gives a judgement of reproducubility on unclear grounds.
It is not clear to me what this sentence actually says. The word 'gravity' does not appear in my paper. Maybe it refers to my comments on the multi-center study of Poline et al. (pp. 4-5), but in that case my criticism is on very clear grounds.
A third major deficit is lack of statistical tools to judge the reproducibility.
Since I don't analyze the data, I don't need statistical tools. This is the same implication that reviewer #1 makes.
One would hope that the author would sharpen the weapons to attack their apparent lack of reproducible findings in cognitive neuroimaging. Cognitive here is used in its usual meaning of perceptual, attentional and purely cognitive tasks.

The manuscript has several spelling mistakes and omissions.

I have checked some of the examples given, and the description in the text is usually fair, but lacks precision, which is a pity since this is a most worthwhile.

The 'lacks precision' claim is simply false. In most of the cases there are no numbers to report. Where there are numbers, I report them properly.