
==================================================================
The full text of the reviews from JOCN with my responses.
==================================================================

General points:

None of the reviewers challenges any of:

1. The facts that I present:
   mainly, that there is no evidence of replication in the papers I surveyed.
2. The conclusion that I reach:
   that there is a problem that needs to be addressed.
3. The importance of this conclusion.

In other words, they all implicitly agree that the paper is sound and important. Their rejection is based on two main claims:

  1. That I need to do various kinds of analysis. These, however, cannot affect the main points of this paper.

  2. That the findings of this article can be summarized in a few sentences and published as a letter. However, such a letter is sure to be rejected on the grounds that it contains a serious claim (lack of replicability) but does not establish it. The point of this article is not only to say that there is no replication, but also to establish that this is not a mere impression based on only a few selected articles. For this, anything less than the survey that I did is insufficient. [9 Mar 2003] Here is what actually happened when I tried to publish it as a letter.

    Note that neither the editor nor the reviewers actually suggest publishing the paper as a letter in JOCN. If they seriously thought that publishing it as a letter was a good idea, there would be no reason not to do it in JOCN. Obviously, this is just a tactical maneuver rather than a serious suggestion.

==================================================================
Detailed responses. The full text of the reviews, with my comments indented and italicized.
==================================================================
Reviewer 1:
------------------------------------------------------------
The intuition that forms the basis of this paper is quite sensible: to what extent do PET and MRI studies that purport to investigate the same cognitive function replicate? The author has chosen over 100 research articles published in 1997 and assesses (a) whether the results of a given study are explicitly compared to previous data bearing on the same issue and (b) whether the issue of replicability is discussed.
Two significant inaccuracies:
  1. I didn't choose the papers; I took every paper that I could find. Saying 'the author has chosen' implies a potential bias in selecting which papers to survey.
  2. The survey looked for any evidence about replicability, not only the points that the reviewer mentions.
The author finds that there is an alarming degree of variability among results, and that the issue of replicability has not been addressed properly.

While I share the author's concern that the issue deserves considerably more attention, I do not find that the paper is sufficiently rich, provocative and/or analytical to warrant publication in its present form.

A paper does not have to be rich, provocative or analytical to be published. It has to show important results, and my paper clearly does.
Neither technical (acquisition-oriented) nor data-analytic or conceptual/design reasons are put forth to account for the observation.
True, but that is not what the paper is about. The paper is about showing the lack of replicability, not about solving the problem.
Insofar as the manuscript is merely an elaboration of the (already informally stated) observation (but not an analysis of the reasons leading to the alleged non-replicability), it will not fare well as a research article.
What does 'not fare well' mean? It can mean either that I have not established my claim, or that the claim is not important; but I do establish my claim in the paper, and it is obviously important.
Importantly, there are already papers in the literature which work through some of the problems associated with cognitive imaging in detail,
...but not with the question of replicability. Showing that other papers do not address the question of replicability is the point of my paper.
so the burden is on this author to develop further, specific arguments that illuminate possible pitfalls for the field.
The reviewer here tries to tell me what I should do, and that is not his job. All he needs to do is check that what I write is sound and important.
I have two concrete suggestions. First, rewrite this manuscript as a *letter* summarizing the "replicability-is-rare" observation and pointing to the importance of the problem. There are a number of journals that publish technical notes or commentaries, and this manuscript is appropriate for such a forum. The observation could certainly be stated effectively in a page or so.
I have answered this in the general points above.
Second, the manuscript needs extensive language editing - it would help to let several readers look at the paper before submission.
--------------------------------------------------------
Reviewer 2:

It is a reasonable goal to perform meta-analysis of the imaging data to assess the replicability of cognitive activations across different studies.

Meta-analysis is not a tool for assessing replicability. Obviously, this reviewer does not want to assess the replicability of imaging studies.
However, the present paper fails to accomplish this for two reasons.
No, it fails for one reason: I am not trying to do a meta-analysis.
First, it does not actually analyze and quantify variability, but instead merely makes anecdotal references to other authors' published comments regarding replicability.
Not 'anecdotal'. The survey is almost complete, and I bring every piece of evidence about replicability from every paper that I could find.
A truly careful analysis would be welcome.
Possibly, except that it would be a waste of time, as my paper shows. There is not much point in doing a careful analysis of irreplicable data.
Second, the author does not consider (much) the true variability between methods, and especially, between tasks and statistical comparisons. Before one can evaluate the effects of task A on activations B, C and D across studies, one must precisely define task A.
None of this is a reason not to point out lack of replicability in current papers.
The present paper fails in this regard and should not be published.
--------------------------------------------------------
Reviewer 3:

Review of "Replicability of cognitive imaging of the cerebral cortex by PET and fMRI: a survey of recent literature" by Yehouda Harpaz

This is a critical review about a very important problem in functional neuroimaging: the replicability of the data. Unfortunately, the present paper contributes very little to the clarification or solution of this problem.

It contributes a lot, because it raises the problem. As my survey shows, the problem currently does not get any attention, so raising it is a critical step.
As stated in the abstract, the review provides two findings: (a) that data about replicability in cognitive imaging are extremely sparse; and (b) that the available data are mostly negative, i.e., they suggest a lack of replicability. The first finding could be reported in one paragraph, and does not justify the publication of a journal paper.
I answered this point above in the 'general points'.
The second finding is more interesting, but it has a serious flaw: it is not based on a systematic and comprehensive meta-analysis of the data, but on what other people said about replicability during 10 months in 1997.
Lack of meta-analysis is not a flaw, because that is not what I am trying to do. The question is whether the data I present establish my claim, and they clearly do.
The conclusion of lack of replicability is based on the following premises:

1. Most of the studies reviewed (116 out of 125) do not discuss the issue of replicability.

Which also means that none of them shows or reports replication data. That is exactly the point.
2. The arguments for replicability made by the remaining papers are not valid.
I don't just say that; I quote the arguments and explain why they are invalid.
3. The few reviews that investigated replicability either have problems (e.g. Poline et al., 1996) or show only minimal replicability (e.g. Poeppel, 1996). The same argument is applied to individual differences in activation. Thus the conclusion that there is a lack of replicability is not based on original scientific research, but on the opinion of the author about the opinions of other researchers.
A plain lie. I survey the literature and check for replication. I am certainly not basing my opinion on other people's opinions. It is true that I don't bring new original scientific research, but that is not the point of this paper. As its title suggests, it is a survey, which highlights an important problem that is currently ignored.
If the paper is summarized into a couple of paragraphs, it could perhaps be published in another journal as a letter to the editor.
I have answered this above.