Dear James H. Fetzer,

I have just received the second review of 'The Neurons In The Brain Cannot Implement Symbolic Systems'. I understand you have high respect for this reviewer, but he has definitely slipped here.

The reviewer's main argument against the main line of my paper is that it fails to 'recognize the importance of levels', and it rests on the assertion that it is possible that there is 'a suitable higher level of organization at which one can implement symbolic processes'. This is a completely vacuous argument, because there is no such level of organization in the brain: up to 1 mm the connectivity is stochastic (and the reviewer agrees on this), and larger elements in the brain clearly do not implement symbol tokens (I put the explicit argument below, in case it is not intuitively obvious).

I do not see how the reviewer can recommend rejecting my paper, and you can agree with him, on the basis of a hypothetical suggestion which is clearly contradicted by the evidence. The least you should do, before rejecting my paper on the basis of this review, is to ask the reviewer to name the candidate levels of organization _in the brain_ that could be used to implement symbol tokens. This will show clearly that the reviewer (or anybody else, for that matter) has no idea what this level could be.

I append an explanation of why there are no possible higher levels that are candidates for implementing symbol tokens, and responses to the specific comments the reviewer made.

Yehouda Harpaz

--------------------------------------------------
Possible levels to implement symbol tokens:
-------------------------------------------

Up to a scale of about 1 mm the connectivity is stochastic (as the reviewer agrees). As argued in my paper, this eliminates the possibility of implementation by elements that are smaller than ~1 mm, whether spatially localized or not (the argument in the paper does not depend on spatial localization).

This leaves the possibility of implementation by larger elements, i.e. groups of neurons which are larger than 1 mm. Since inside these elements the connectivity is stochastic, the implementation cannot depend on the specific details of the activity of the element, and must depend on its total activity alone. In other words, the elements are primitive.
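
To make this concrete, here is a small toy simulation of my own (it is not in the paper, and every number in it is merely illustrative). Two elements are built with identical connection statistics but different random wiring, and receive the same input; their total activity is identical, but their detailed firing patterns share no more neurons than chance:

import numpy as np

# Toy model (illustrative numbers): two elements built with the same
# connection statistics -- each synapse present with probability P --
# but with different random draws, as in stochastic low-level wiring.
N, P, K, STEPS = 1000, 0.05, 100, 10

def random_element(seed):
    rng = np.random.default_rng(seed)
    return (rng.random((N, N)) < P).astype(float)

def run(weights, stimulus):
    # Simple k-winners-take-all dynamics: at each step the K most
    # strongly driven neurons fire, so total activity is the same in
    # every instance; only *which* neurons fire can differ.
    x = stimulus.copy()
    for _ in range(STEPS):
        drive = weights @ x
        x = np.zeros(N)
        x[np.argsort(drive)[-K:]] = 1.0
    return x

stimulus = np.zeros(N)
stimulus[:K] = 1.0                      # identical input to both elements

a = run(random_element(1), stimulus)
b = run(random_element(2), stimulus)

print("total activity:", int(a.sum()), "vs", int(b.sum()))  # identical
print("shared firing neurons:", int((a * b).sum()), "of", K,
      "- about chance level (~", K * K // N, ")")

So the only property of such an element that the rest of the brain can rely on is its total activity; the details of its activity are not reproducible.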

We know that the input from the sensory systems and the output to the motor systems do not form part of the symbolic system, so I ignore them.

To implement symbol tokens, these elements have to be connected in a coherent way, i.e. such that an element can affect (activate/deactivate) other elements specifically. This is not seen in the cortex. If we look at any 1 mm square of the cortex sheet, it sends output to its neighbors in all directions. There is nothing to distinguish the output to one neighbor from the output to another neighbor, because the neurons that send output to different neighbors are mixed together (and in many (most?) cases the same neuron sends output in several directions). The same is true for longer projections. Local connectivity inside the element cannot be used to sort this out, because it is stochastic.
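
Again a toy illustration of my own (made-up numbers): if the neurons projecting to two neighbors A and B are interleaved, and the element can control only roughly how many of its neurons fire, not which (its internal wiring being stochastic), then the drives it delivers to A and to B always track each other:

import numpy as np

# Toy model (illustrative numbers): an element of N neurons projects to
# two neighboring patches, A and B, with the projecting neurons
# interleaved at random, as in the cortex sheet. Because the element's
# internal wiring is stochastic, it cannot choose *which* of its neurons
# fire, only roughly how many; the firing set is effectively random.
rng = np.random.default_rng(0)
N = 1000
proj_a = rng.random(N) < 0.5     # neurons sending output to A
proj_b = rng.random(N) < 0.5     # neurons sending output to B, mixed in

for total in (50, 200, 500):
    firing = rng.choice(N, size=total, replace=False)
    drive_a = proj_a[firing].sum()
    drive_b = proj_b[firing].sum()
    # Whatever the element does, the drives to A and B track its total
    # activity together; no reachable pattern activates A but not B.
    print(f"total={total:4d}  drive to A={drive_a:4d}  drive to B={drive_b:4d}")

Hence no reachable state of the element activates one neighbor specifically; only coherent wiring, with segregated projecting populations, would make that possible.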

(The sensory input from the LGN to the visual cortex is an example of stochastic low-level connectivity with coherent connectivity at the 1 mm level, i.e. specific elements in the LGN are connected to specific elements in the visual cortex. However, this kind of specific connectivity is not seen inside the cortex itself.)

The first level at which there is something like coherent connectivity is the level of the Brodmann areas of the cortex. At this level, however, the number of primitive elements is far too small to implement a symbolic system and all the knowledge that a human has: there are only a few dozen Brodmann areas, while even the vocabulary of a single person runs to tens of thousands of words.

-------------------------------------------
Response to specific comments:
------------------------------

The only comment that has a bearing on the main line of argument of the paper is the one on pp. 8-9, so I will start with it. My comments are indented.

pp. 8-9: This argument is not at all clear to me, and hence not compelling. Why must the analysis of dynamic properties be at the individual neuron level?

    The argument is not clear because the reviewer did not read it carefully. The text does not discuss the activity of an individual neuron, but patterns of activity, i.e. the activity of many neurons. This is valid, because anything that happens in the brain is neural activity.

I urge the author to take a look at the work of Scott Kelso (no friend of a symbolic approach himself) and other dynamicists who work generally at a much higher level.

    The work of Kelso and other dynamicists does not give any hint of a possible level for implementing symbol tokens, and it does not present any data or argument that can weaken mine. This comment is completely spurious.

As the argument moves on, the confusion of levels continues to develop: the notion of 'location' in a high-level symbolic analysis does not have to refer to a discrete location in the brain --e.g., specific neurons.

    The argument in the text is independent of the way 'location' is defined, and this point is made explicitly in the text, in the second sentence of the third paragraph on page 9. The reviewer clearly skipped this sentence.

----------------------------------------------------------
Response to other comments:
---------------------------

The rest of the comments are not related to the main line of the paper, so the discussion here is actually beside the point. I include it just for completeness. My comments are indented.

I am very worried about a paper that relies on textbook knowledge of neuroscience. But if one is to do that, why not go to the bible: Kandel and Schwartz? The same applies to the author's characterization of the cognitive literature.

    The reviewer does not explain what is wrong with textbook knowledge, and I cannot think of any reason not to rely on it. This looks to me like a slip of the pen.

    The question about Kandel and Schwartz is odd. The reviewer may have meant to recommend adding it to the reference list, but in that case he should have said so. In any case, Kandel and Schwartz do not supply any evidence that contradicts my paper.

    I couldn't understand the point of the last sentence.

P. 3, Bottom: There are some other characteristics that I think are more salient in the choice of symbolic approach: ability to account for productivity and systematicity, ability to solve binding problems, etc.

    Maybe.

P. 3, top: This seems to ignore the point of the cognitive modeling. The point is to provide a framework that can handle what the brain DOES. This does require that the framework can be implemented, but that the choice of characteristics in a cognitive architecture is hardly made because the characteristics are relevant to the brain if that means we can simply find the characteristics are, say, the single neuron level.

    I failed to parse the last sentence. I think it got mangled in some way.

P. 4, top: here a crucial ambiguity begins to emerge--is the problem that these characteristics are not implementable by neurons, or by anything built out of neurons?

    There is no ambiguity at all. The text clearly talks about the impossibility of implementation by neurons in the brain (I will have to add 'in the brain' to the text). This necessarily also means that it cannot be implemented by anything that is built out of neurons.

P. 6, top: There is growing evidence of some limited nerve regeneration in different parts of the brain; this seems to be a case of changes in connectivity.

    I did not say that there are no changes at all. As I clearly stated in the text, the main point is that any change that does happen happens far too slowly to be part of the mechanism of thinking.

p. 6, middle: I am not sure what is meant by "a well-defined structure". Results from neuroimaging and lesion studies do suggest patterns of structure-function relations. Is that what is meant?

By "a well-defined structure" I mean a structure that is defined well enough that it can be matched between (almost) all individuals of the same species (Note that it does not _have_ to be genetically defined, though). In as much as the results of the studies are reproducible between individuals, they imply well-defined structures.

P. 7, bottom: "The stochastic nature of the low-level connectivity is almost never mentioned explicitly in neurobiological textbooks, probably because it is regarded as a non-fact." This is an extremely dangerous inference. I doubt anyone in mainstream neuroscience would deny a high degree of stochastic organization, and would certainly bring it up in development context. However, what is of more interest is the higher level regularities that emerge. This would be like charging someone working at the biochemical level of denying quantum effects because they don't talk about them.

    This is a misunderstanding, caused by sloppy writing on my part. The reviewer took 'non-fact' to mean 'false', while I meant 'a fact without any interesting consequences'. I will clarify the point. I am sure neurobiologists understand the stochasticity of low-level connectivity, and I make this clear in the first paragraph of section 6 (p. 11, top).

pp. 8-9: discussed above.

p. 11, middle: as far as I know, common-sense usually makes no predictions about human behavior at the level at which it is usually modeled in cognitive simulations. The fact that simulations have made many predictions which have been tested by complex behavioral experiments (and then often falsified) suggests the positive role of these simulations.

    Only if we have a reason to believe that the experiments testing these predictions were more productive than other experiments. The reviewer does not offer any argument for this. The illusion that symbolic models give useful hypotheses for testing is discussed further on p. 15.

P. 12: The claim: "Until the evaluation of models will be based solely on brain related parameters, realistic models of the brain will never get noticed" seems obviously false. First of all, the author seems to call for rejecting behavioral parameters.

    That is nonsense, because behavioral parameters (of humans) are clearly brain-related. My statement is against using the behavioral parameters of computer simulations to evaluate models of the brain. I made this clear in the last paragraph of section 8 (p. 15), which the reviewer probably skipped.

    It is interesting to note that even though the reviewer thought the statement seemed obviously false, he did not bother to consider an alternative interpretation.

Many neuroscientists would balk at that. Moreover, the modeling community is far less hegemonic than the author suggests. There are lots of different modeling frameworks in place, some clearly brain motivated (e.g., Freeman's dynamic models grounded on studies of the olfactory cortex).

    My statement is indeed too broad, and the second part should read: "realistic models of the COGNITIVE FUNCTIONS of the brain will never get noticed." In more peripheral areas, like the sensory systems, other models can do better.

P. 14, top: There seems to be a perfectly obvious reading of "not wholly constrained" which has nothing to do with the author's interpretation: my word-processor is not wholly constrained by the chip running in it: clever design at the software level and my input change what it does.

    The problem with this interpretation is that it is obviously false when applied to the brain. The word-processor can show more 'intelligent' behavior than the CPU can on its own, because it has the benefit of computer programmers who understand how it works, and who are much more clever than the CPU or the word-processor. The brain does not have this luxury, and _is_ constrained to the performance of its own physical structure.

    Note that operationally the word-processor _is_ constrained by the CPU, in the sense that any operation that the word-processor can do is necessarily expressible as a sequence of operations of the CPU. In the same way, the operation of the brain is constrained by the neurons, in the sense that any operation of the brain is necessarily expressible as neural activity.
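
    As a small illustration of this operational sense (a toy of my own, with a character buffer standing in for memory cells): a high-level 'insert a character' operation of a word-processor reduces to nothing but a sequence of primitive cell copies and writes, and can do nothing that is not such a sequence:

# Toy machine: the document is a list of one-character cells, and the
# only primitive operations are copying one cell and writing one cell.

def insert_char(buffer, pos, ch, trace):
    # High-level word-processor operation: insert ch at position pos,
    # realized purely as a sequence of primitive operations.
    buffer.append(None)                        # primitive: add one empty cell
    for i in range(len(buffer) - 1, pos, -1):
        buffer[i] = buffer[i - 1]              # primitive: copy one cell right
        trace.append(("copy", i - 1, i))
    buffer[pos] = ch                           # primitive: write one cell
    trace.append(("write", pos, ch))

doc, trace = list("word procesor"), []
insert_char(doc, 10, "s", trace)               # one high-level operation
print("".join(doc))                            # -> word processor
print(trace)                                   # the primitive sequence it is

    The high-level description ('insert a character') is convenient, but operationally nothing happens beyond the primitive sequence; in the same way, nothing happens in the brain beyond neural activity.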

P. 15: I think a more thorough review of the literature is needed to make the charge that symbolic modeling has "hardly ever" generated hypotheses worthy of empirical testing. This seems to be just blatantly wrong. A lot of fruitful cognitive experimentation has been motivated by computational models, and now the same is applying to cognitive neuroscience (for some examples, consider the work of Dom Massaro, Roger Ratcliff, Max Coltheart, and Pinker and Prince, among a huge list of others).

    This comment is based on the false assumption that finding several papers testing hypotheses from a model shows that the model is useful. As I wrote on page 15, a model is useful when it leads to significant insights about the system under investigation. A reasonable, quasi-objective initial test for the significance of an insight is whether it finds its way into the textbooks of the field (the ultimate test is whether it leads to applications, in other fields or commercially). As I continued on page 15, cognitive psychology textbooks do not mention any such insight from symbolic models, and even books specifically about symbolic models cannot find any such insight. As I am sure the reviewer is fully aware, these models have nothing to say to neurobiologists and neuroanatomists either.