My response to the 'reviews' from Psycoloquy of the text version of the brain-symbols paper. My comments are indented and in italics.

Dear Yehouda Harpaz,

Enclosed are the five referee reports on your manuscript: "The Neurons in the Brain Cannot Implement Symbolic Systems".

Unfortunately, these reports do not give me a basis for accepting your article for publication in Psycoloquy, nor for recommending revision and resubmission. Although there are arguments and evidence in the literature to the effect that the brain is not doing only symbol processing, there is no reason to suppose that symbol processing cannot be implemented by neurons.

Interesting comment. I don't 'suppose' that symbol processing cannot be implemented by neurons; I show it.
Thank you for allowing Psycoloquy to consider your manuscript.

Sincerely,

Stevan Harnad

--------------------------------------------------------------------
Stevan Harnad                     harnad_cogsci.soton.ac.uk
Professor of Psychology           harnad_princeton.edu
Director,                         phone: +44 1703 592582
Cognitive Sciences Centre         fax:   +44 1703 594597
     
Department of Psychology          http://www.cogsci.soton.ac.uk/~harnad/
University of Southampton         http://www.princeton.edu/~harnad/
Highfield, Southampton            ftp://ftp.princeton.edu/pub/harnad/
SO17 1BJ UNITED KINGDOM           ftp://cogsci.soton.ac.uk/pub/harnad/

--------------------------------------------------------------------
REFEREE #1 Anonymous

Ordinarily I would like very much to referee papers for you. In this case, the claim seems so patently self-contradictory that it would be a waste of time. What does the author think he is doing, if not symbolic processing? And what does he do it with, if not neurons? What is Broca's area for, if not symbolic processes? Etc.

A straightforward declaration of narrow-mindedness.
--------------------------------------------------------------------
REFEREE #2 Anonymous
I'd say it's nonsense -- just look at Minsky's book on Finite and Infinite Machines!

Another declaration of narrow-mindedness. And how is Minsky's book relevant to the article?
--------------------------------------------------------------------
REFEREE #3 Anonymous
I reviewed this paper for another journal and rejected it as having no substantial content.

--------------------------------------------------------------------
REFEREE #4 Anonymous

[ This reviewer completely ignores the central argument of the paper. There isn't even a single reference to sections 1-6 of the paper.]
It's of no value. Moreover, I reviewed it in March for another publication; I thought it was of no value then, too.
An amusing statement. This reviewer seems to think that the fact that he reached the same verdict the previous time he reviewed this paper proves something.
Here is a copy of the review I sent them. The author may have made some revisions since then; the section numbers I reference below don't seem to match this version. But the paper is not substantially different.

The author purports to show that brains cannot implement symbolic processing operations because they cannot copy tokens reliably. But he admits in the first paragraph of section 7 that people can in fact handle symbols. (And contrary to the claim made a few paragraphs later, such symbols are not necessarily tied directly to sensory inputs, since people are perfectly capable of manipulating sentences, formulas, diagrams, and concepts they dream up in their heads.)

People can do various things inside their heads, but not necessarily symbolic operations. Manipulating sentences, formulas, diagrams and concepts does not necessarily require symbolic operations.
So the author tries to imply that just because people can handle symbols, this does not mean that BRAINS can handle symbols. Actually, he doesn't say something quite that ridiculous; he argues (p. 14) that "the components of the brain" cannot handle symbols. But this is an entirely different claim than the one the paper starts out with and concludes with, i.e. that the brain as a whole is not a symbol processor. Since no one ever claimed that individual neurons or small groups of neurons were symbol processing systems, the author's central argument is revealed as an attack on a straw man.
The claim that the brain is a symbolic system means that components of it do symbolic operations, namely manipulating symbol tokens. This is the position of the symbolic theorists, and of the models of the brain that are based on it, and I discuss this in section 2. This reviewer has clearly not read this section.

The paper concludes with a long series of silly statements, such as (p. 18) "handling language does not require any dynamic mechanism" because the meanings of words are static. (What about the need to dynamically construct representations of novel sentences?)

That does not necessarily require a symbolic system. The argument from language is based on the fact that language itself is made of symbols (words), but because these are static, they do not require a symbolic system.
And (p. 21) "Neuroscientists do not realize the importance of the stochastic connectivity for theories of cognition".
So what is silly about that?

Obviously, this is not a scholarly paper. And the central argument is nonsense.

Impressive statement, considering the fact that this reviewer did not discuss the central argument at all, and does not even give us a hint that he has read it.
I recommend that the paper be rejected.

--------------------------------------------------------------------

[ Apparently, the numbering of paragraphs has been changed from my original numbering, though I didn't see the altered numbering. Where the reviewer refers to a paragraph by a number, I tried to figure out which paragraph it is and put it in square brackets. ]

REFEREE #5 Anonymous

This paper argues that the brain cannot represent symbolic systems.

Already a misleading statement. The paper is about the possibility of implementing symbolic systems, not representing them, whatever the latter means.

The conclusion appears to be obviously false,

A declaration of narrow-mindedness.
the literature review is shoddy,
The paper does not contain a literature review, and this is a completely spurious statement.
and the paper is not well-argued. Commentaries on this piece are unlikely to substantially advance our understanding. I therefore recommend against publication.

Symbols in conscious deliberate thought & Dualism

The reason that the argument appears to be doomed from the outset is that the human brain appears to at least be able to simulate symbolic systems in conscious, deliberate thought.

So what? Simulation isn't implementation. If simulation meant implementation, we could implement weather systems, stars and protein folding on computers, because we can simulate all of these on computers.
Thus even those radical connectionists who would deny that symbol-manipulation plays much of a role in some areas of cognition make an exception for "serial, deliberate reasoning" or "conscious rule application". For example, Touretzky & Hinton (1988) wrote that

"many phenomena which appear to require explicit rules can be handled by using connection strengths [but...] we do not believe that this removes the need for a more explicit representation of rules in tasks that more closely resemble serial, deliberate reasoning. A person can be told an explicit rule such as "i before e except after c" and can then apply this rule to the relevant cases."

With all due respect to Touretzky & Hinton, they don't give us any clue how symbolic systems can be implemented by real neurons, and their own belief about how the brain works is not evidence about it. The fact that humans can follow rules does not tell us anything about how the brain works.
If the brain can manipulate symbols there, any argument with the conclusion that the brain cannot manipulate (or represent) symbols must be mistaken.
But nobody has shown that the brain can internally manipulate symbols. The sole argument that the reviewer brings for this is the beliefs of Touretzky & Hinton.

The author addresses this point briefly in paragraphs 47-50, but the response given there seems incoherent. If the brain as a whole can represent symbols, then the brain can represent symbols.

So what? The question is whether components of the brain can do it, not whether it can do it as a whole. This is what is discussed in section 7. It seems the reviewer does not have the ability (or more likely, the open-mindedness) to distinguish between the brain as a whole and components of it. By the reviewer's logic, since we can drive a car, there should be no problem driving a gearbox (because it is a component of the car).
The author seems to be resorting to a kind of dualism in which a person can manipulate symbols, but a brain cannot. Everything we know about cognitive neuroscience and cognitive neuropsychology makes this sort of dualism untenable.
Still a failure to distinguish between the whole system and parts of it, and now, out of the blue, introducing dualism. I make it very explicit in section 7 why the whole system is qualitatively different from its components, but this reviewer does not actually comment on my explanations.

Where has the author gone wrong?

As I reconstruct his argument the premises are as follows.

1. the processing in the brain is done by neurons.

2. every mechanism in the brain must be implemented by neurons.

3. neurons are stochastic.

Inaccurate, on two counts. It is the connectivity between the neurons (i.e. which neurons have synapses with which neurons), not the neurons themselves, that is stochastic, and this applies to the cortex. The latter inaccuracy (that I discuss neurons in the cortex, not any neuron) is used later in the argument.
4. stochastic neurons cannot implement symbols.
This is a blunt lie. This statement is not a premise, it is a conclusion of the discussion in section 5. The reviewer seems to have skipped this section completely.

Therefore

5. The brain cannot implement symbols.

Very inaccurate and misleading. I am talking about symbolic systems (not 'symbols'), and I explicitly state what that means in section 2 (using symbol tokens). That is not the same as 'symbols'.
Which premise is wrong? Premise 1 is likely right, but not entirely beyond doubt; the same goes for Premise 2.

Premise 3 may be correct for some neurons and not for others.

True to some extent, but I am explicitly discussing neurons in the cortex, not any neuron. This is the significance of the inaccuracy in point 3 above.

Here, the author to some extent confuses gaps in our collective knowledge of neuroscience with limitations on what is possible.

A demonstration of ignorance in neurobiology. That the connectivity in the cortex is stochastic is established beyond doubt.
One might have formed a similar argument in Mendel's time to argue against the possibility of there being genes.
A spurious statement with no relevance to the discussion.
The main problem is with Premise 4, a premise that rests more on a poverty of imagination than on any in-principle argument.
Extending the original lie. 'Premise' 4 is the conclusion of section 5, which the reviewer skipped.
Note that there is at least some stochasticity in a standard computer, albeit of a different sort (e.g., radiation emitted from a sunspot might affect the memory register that contains the thirty-seventh character in this symbolically stored e-mail message). If we accept the fact that sunspots introduce some stochasticity into computers, the form of argument expressed in 4 would lead to the false conclusion that computers are not symbolic. (The author seems to think that these two kinds of stochasticity differ in an important way relevant to the argument, but I am unable to follow the author's argument.)
The obvious difference is that the stochastic effects in computers, like the one described above, are very rare, and the computer will not work if such an error happens even once in 10 ** 12 operations. In the cortex, all information processing is based on stochastic connectivity (by which I mean, as explained in section 4 of the paper, stochastic across individuals, not across time).

The catastrophic response of computers to errors, compared to the graceful degradation of the brain, is a well-rehearsed piece of information, and the 'failure' of the reviewer to figure this out shows how he is blinded by his own biases.

Other general comments

The manuscript is unclear. Having read through this manuscript several times, I still don't fully understand the author's argument. (Some examples are listed below.)

It is not clear because the reviewer refuses to accept the conclusion, distorts what I write, uses blunt lies, skips complete sections, and 'fails' to see obvious points (e.g. my previous comment).
The literature review is shoddy, which is especially worrisome since the author frequently takes gaps in the literature to be indicative of in-principle arguments; often such gaps only reflect the author's unfamiliarity with the literature. (See various references below.)
Second blunt lie. None of the 'in-principle arguments' in the paper reflects a gap in the literature, and the reviewer could not find any such argument, except where he shows his own ignorance of neurobiology. As for the references the reviewer gives below, all of them are spurious, and don't show anything about the subject of the paper.

Other comments.

Paragraph 7 is confusing. Isn't the storage of a symbol a kind of computation?

[ paragraph 2.2 ]
You can call it 'computation'. It does not affect the argument.
Also, the requirement of arbitrariness seems unmotivated.
Third blunt lie. The next sentence explains it.

It's also not clear that symbols *must* be stored; one could build a symbolic computer a la McCulloch & Pitts (1943) that passed along symbolic activation values without storing them.

It is not symbols that need to be stored, but symbol tokens. The reference to McCulloch & Pitts is completely spurious, as that paper is neither based on real neurons nor shows an implementation of a symbolic system. It is instructive that the reviewer could not find a more recent model.
Paragraph 13: There may be mechanisms of short-term synaptic plasticity that have not yet been identified. See Gallistel (1994) for an argument that such mechanisms must exist, even if they have not yet been identified. Also, see Gallistel (1998, in the 4th volume of the Osherson MIT Press Cognitive Science introduction).
[ paragraph 3.3.5 or 3.3.6 ]
If the reviewer means changes in the strength of connections, then I explicitly say that this happens [3.3.6]. If he means generation or deletion of synapses in the time scale of computation (less than a second), then it is simply a declaration of ignorance. That there is no generation or deletion of synapses in the cortex in the time scale of thinking (less than a second) is universally accepted by neurobiology researchers, and this is another demonstration of the ignorance of this reviewer.

The reference to Gallistel is another spurious reference. The 1994 paper does not touch at all on the subject of synaptic plasticity.

Why can there be no way in which "distal events can affect the strength of specific synapse"?
Because there is no such mechanism in the brain. Another declaration of ignorance.
Paragraph 15: Maybe the selection of these neurons is systematic, but in ways that neuroscientists have not yet come to understand.
[ paragraph 4.2 ]
As explained in the paper, the evidence for the stochastic connectivity is from differences between individuals, so if the selection is systematic, it has to be a different system for each individual. There is no place in which these different systems can be coded. Again, the reviewer misses a simple explanation, because of his biased views.
Paragraph 23. Point about cell populations is important, but should not have been ignored in Section VI.
[ paragraph 4.10 ]
Why? How does it affect the argument? That is just a spurious comment.
Paragraph 24 confuses "specialized" with "prespecified"; targets of given vertebrate neurons may not be prespecified but may still be specialized. (See, e.g., Rakic, 1995).
[ paragraph 4.11 ]
The quote makes it clear that when the authors say 'specialized' they mean 'pre-specified'. Otherwise, the usage of the word 'precise' does not make sense. The reference is another spurious one, with no relevance to the point under discussion.
Paragraph 24 Constructions like "worst example" (other examples may be found throughout) are unduly condescending.
[ paragraph 4.11 ]
Possibly.
Paragraph 43. I don't understand the argument and/or the evidence here.
I couldn't figure out what this refers to.
Paragraph 52, 56. Re: behavioral tests and insights from symbols: The work of Pinker on connectionism is very relevant here. (See (Marcus, Brinkmann, Clahsen, Wiese, & Pinker, 1995; Pinker, 1991; Pinker & Prince, 1988) Likewise, Fodor & Pylyshyn (1988) have argued that symbols give the only account of compositionality.
[ somewhere in section 8]
More spurious references. How is Pinker's work relevant? In other words, what insights does it give? Certainly no work on connectionism can tell you anything about symbolic systems. It seems that the reviewer believes that connectionism and symbolic systems are the only alternatives, and hence that any work that shows deficiencies in connectionist models is evidence for symbolic systems. That is obvious nonsense, because the brain is not necessarily either connectionist or symbolic.

Fodor & Pylyshyn 'argued for symbolic systems' simply by arguing against some flavours of connectionist models, and showing that they cannot support compositionality. This is the same faulty logic.

Paragraph 53: No argument is given to show that the symbolic approach does not work.
[ paragraph 8.3]
It is explained in the next two paragraphs, which the reviewer did read. His failure to make the connection is another example of blindness, probably induced by his refusal to accept the conclusion.
Also, what would the author take as evidence for the symbolic approach?
How is this relevant to the argument in this paper? Anyway, here is the answer:

1) Showing features in the brain that seem to support symbolic operations.
2) Demonstrating a consistent correlation (over diverse kinds of mental tasks) between symbolic models and human behaviour, without playing with the parameters of the symbolic models.

Paragraph 54: Even if symbols had not yet yielded insights into neurobiology, that would not mean that they couldn't.
True, but I base my argument in section 8 on the fact that they did not in the past (and, as shown in the previous sections, are biologically impossible).
Again, Mendel's postulate of genes took a number of years before it led to a direct understanding of molecular mechanisms.

Mendel's ideas were not used in a massive research program between the time they were formulated and the time they were found to be useful. Symbolic systems were (and still are) used in massive research programs.
References

Fodor, J. A., & Pylyshyn, Z. (1988). Connectionism and cognitive
architecture: A critical analysis. Cognition, 28, 3-71.

Gallistel, C.R. (1994). Foraging for brain stimulation: toward a
neurobiology of computation. Cognition, 50, 151-170.

Marcus, G. F., Brinkmann, U., Clahsen, H., Wiese, R., & Pinker, S.
(1995). German inflection: The exception that proves the rule.
Cognitive Psychology, 29, 186-256.

McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas
immanent in nervous activity. Bulletin of Mathematical Biophysics, 5,
115-133.

Pinker, S. (1991). Rules of language. Science, 253, 530-535.

Pinker, S., & Prince, A. (1988). On language and connectionism:
Analysis of a Parallel Distributed Processing model of language
acquisition. Cognition, 28, 73-193.

Rakic, P. (1995). Corticogenesis in human and nonhuman primates. In
M. S. Gazzaniga (Ed.), The Cognitive Neurosciences (pp. 127-145).
Cambridge, MA: MIT Press.

Touretzky, D.S., & Hinton, G.E. (1988). A distributed connectionist
production system. Cognitive Science, 12, 423-466.

--------------------------------------------------------------------