related texts

This is the exact text of the reviews that I got from the Journal of Cognitive Neuroscience, together with my response. I e-mailed the latter to the editor, but have not received any reply.
Editor letter:
Dear Yehouda Harpaz:

Many thanks for letting us consider your manuscript. Upon consideration of the enclosed reviews, we have several reservations and have decided not to accept the paper for publication in JOCN. I hope the enclosed remarks are helpful and will guide you in your future efforts on this work.

With the increasing popularity of JOCN, these difficult editorial decisions are becoming increasingly frequent. Our best wishes as you submit your paper elsewhere.

===================================================================
Reviewer 1:
The intuition that forms the basis of this paper is quite sensible: to what extent do PET and fMRI studies that purport to investigate the same cognitive function replicate? The author has chosen research articles published in 1997 and assesses (a) whether the results of a given study are explicitly compared to previous data bearing on the same issue and (b) whether the issue of replicability is discussed. The author finds that there is an alarming degree of variability among results, and that the issue of replicability has not been addressed properly.

While I share the author's concern that the issue deserves considerably more attention, I do not find that the paper is sufficiently rich, provocative, and/or analytical to warrant publication in its present form. Neither technical (acquisition-oriented) nor data-analytic or conceptual/design reasons are put forth to account for the observation. Insofar as the manuscript is merely an elaboration of the (already informally stated) observation (but not an analysis of the reasons leading to the alleged non-replicability), it will not fare well as a research article. Importantly, there are already papers in the literature which work through some of the problems associated with cognitive imaging in detail, so the burden is on this author to develop further, specific arguments that illuminate possible pitfalls for the field.

I have two concrete suggestions. First, rewrite this manuscript as a *letter* summarizing the "replicability-is-rare" observation and pointing to the importance of the problem. There are a number of journals that publish technical notes or commentaries, and this manuscript is appropriate for such a forum. The observation could certainly be stated effectively in a page or so. Second, the manuscript needs extensive language editing - it would help to let several readers look at the paper before submission.

===================================================================
Reviewer 2:
It is a reasonable goal to perform a meta-analysis of the imaging data to assess the replicability of cognitive activations across different studies. However, the present paper fails to accomplish this for two reasons. First, it does not actually analyze and quantify variability, but instead merely makes anecdotal references to other authors' published comments regarding replicability. A truly careful analysis would be welcome. Second, the author does not consider (much) the true variability between methods, and especially, between tasks and statistical comparisons. Before one can evaluate the effects of task A on activations B, C and D across studies, one must precisely define task A. The present paper fails in this regard and should not be published.

===================================================================
Reviewer 3:
Review of "Replicability of cognitive imaging of the cerebral cortex by PET and fMRI: a survey of recent literature" by Yehouda Harpaz

This is a critical review about a very important problem in functional neuroimaging: the replicability of the data. Unfortunately, the present paper contributes very little to the clarification or solution of this problem. As stated in the abstract, the review provides two findings: (a) that data about replicability in cognitive imaging are extremely sparse; and (b) the data available are mostly negative, i.e., they suggest a lack of replicability. The first finding could be reported in one paragraph, and does not justify the publication of a journal paper. The second finding is more interesting, but it has a serious flaw: it is not based on a systematic and comprehensive meta-analysis of the data, but on what other people said about replicability during 10 months in 1997. The conclusion of lack of replicability is based on the following premises:
1. Most of the studies reviewed (116 out of 125) do not discuss the issue of replicability;
2. The arguments for replicability made by the remaining papers are not valid;
3. The few reviews that investigated replicability either have problems (e.g. Poline et al., 1996) or show only minimal replicability (e.g. Poeppel, 1996).
The same argument is applied to individual differences in activation. Thus the conclusion that there is a lack of replicability is not based on original scientific research, but on the opinion of the author about the opinions of other researchers. If the paper is summarized into a couple of paragraphs, it could perhaps be published in another journal as a letter to the editor.