Response to reviewer # 1
Referee No: 1 ms/no: #98029 title: Can the neurons in the brain implement symbolic systems?
GENERAL COMMENTS The manuscript #AM98029 titled "Can the neurons in the brain implement symbolic systems?" deals with an important question. Unfortunately the manuscript is full of grammatical and typographical errors, and the main argument is weak. I enclose some comments that the authors might find useful but I doubt that this manuscript can be improved enough to warrant publication in an international journal.
a) The title is awkward as it implies that there is some other kind of neuron which could implement symbolic systems.
I disagree. A question implies a statement only if the question becomes nonsensical when the statement is false. The question in the title is still sensible even if there are no neurons elsewhere which can implement symbolic systems, so it does not imply that there are such neurons. On the other hand, it is important to state that the discussion is about the neurons in the brain, because I use a feature (stochasticity) which is true of the brain, but not of every other system. The importance of this point is highlighted by the fact that this reviewer himself tries to counter my argument by using evidence from neurons outside the brain (alpha motoneurons of the triceps surae).
b) The authors claim that "the set of neurons that will tend to become active as a result of the activity of some specific neuron is stochastic" and that this set is "uncorrelated to the set that will tend to become active as a result of the activity of any other neurons". To take one of their examples, the authors claim that contacts between LGN axons and their target neurons in area 17 are stochastic, that the trees of different axons cannot be matched and that therefore they are not well specified. However, the authors have failed to provide evidence supportive of these assertions. Instead, readers are invited to peruse textbooks which admittedly fail to make a case for synaptic selectivity between individual neurons. This is a very weak argument. The lack of specification of axonal trees and connections may have more to do with our ignorance of how brains are built and work, and less with how brains are actually built and work.
My argument is based on the differences between brains, not on our lack of knowledge of the specification. The reviewer does not give us any idea how the connectivity in the brain can be precisely specified and yet differ between individuals.
I have changed the text in section 4 (pp. 9-10) substantially to make this point clearer.
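The sense in which stochastic connectivity is "uncorrelated between brains" can be illustrated with a toy simulation (my own sketch for this letter, not part of the paper; the numbers of targets and contacts are arbitrary assumptions): each "brain" is a random seed, each neuron picks its contacts at random from the targets available to it, and the overlap between the corresponding sets in two brains is only at chance level, even though each set is fixed and repeatable within its own brain.

```python
import random

def grow_connections(seed, n_targets=10000, n_contacts=1000):
    """Toy model (illustrative assumption only): a neuron contacts a
    random subset of the targets available to it; the seed stands in
    for one individual brain."""
    rng = random.Random(seed)
    return set(rng.sample(range(n_targets), n_contacts))

brain_a = grow_connections(seed=1)
brain_b = grow_connections(seed=2)

# Within one brain the connectivity is fixed and repeatable:
# 'stochastic' does not mean variable over time.
assert grow_connections(seed=1) == brain_a

# Across brains, the overlap of the corresponding sets is only at
# chance level (expected about n_contacts**2 / n_targets, i.e.
# roughly 100 shared contacts out of 1000).
overlap = len(brain_a & brain_b)
```

The point of the sketch is that nothing in such a process can correlate the selections made in two different brains, which is all that the argument in section 4 requires.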
c) If the authors could provide evidence to suggest that connections between individual neurons are indeed stochastic, the point they are trying to make would be much more convincing.
As I wrote above, the reviewer simply ignores the question of variability between brains.
I must admit that I would not bet much on their chances of success. To judge from what we know about the best studied synapse in the brain (the homonymous one between Ia fibers and alpha motoneurons of the triceps surae, as described by Burke, Fleshman, Segev and their colleagues) there are about 10 synapses per fiber, no matter which motoneuron the fiber contacts.
That is a quite stunning statement, as these synapses are in the spinal cord, not in the brain. It is difficult to imagine that the reviewer does not know that, but it is also difficult to find another explanation for this statement.
In addition, even in this case the connectivity is not well defined, and the exact connectivity of each Ia fiber is variable between individuals.
This example is similar to the example I quote (the spinal cord of the bullfrog).
Accordingly, I fail to see anything stochastic about this connection.
Since it is not in the brain, this is irrelevant.
Other connections could be different but evidence to suggest that this is the case should be presented.
As I wrote above, the evidence is the variability between brains.
d) The authors have failed to prove that symbol tokens cannot be handled by larger-scale structures in the brain. The notion that the neurons which belong to a 1 mm square of the cortex but send processes to other elements "are all mixed up together" is not consistent with neurobiology. For example, the cells that send processes to the claustrum are not mixed up with the cells that send processes to the superior colliculus, even if they inhabit the same 1 mm square of area 17.
Maybe the reviewer understands the term "mixed up" differently from the way I intended, which is that they are mixed spatially. The text in my paper makes this clear by stating that they are "all mixed up together, on a very small scale (tens of microns, at most)."
I have changed the text in section 6 (pp. 14-15) to be clearer. With the correct interpretation, my statement is obviously true to any neurobiologist.
Even if they are mixed up, they need not be non-separable, and the authors have failed to provide evidence that this is indeed the case.
The question is not whether they are "separable" in general, but whether they can be controlled separately, as I wrote in the second line of p. 15. I have changed the text to make it clearer.
e) The number of grammatical and typographical errors is too large to list. I enclose the ones I found on a randomly selected page (#9): ...... [ list of corrections ]
I have fixed these and other errors.
15. P 9, p 4, 14: "It follows immediately". It doesn't follow, let alone immediately.
The assertion that this refers to is the following: from the fact that the set of neurons that tend to become active as a result of the activity of one neuron is uncorrelated with this set for any other neuron, it follows immediately that the set of neurons that will become active as a result of the activity of a set of neurons is stochastic.
To me that looks obvious, because there is no way in which the set that becomes active can be specified. The reviewer simply claims it does not follow, but does not give us any hint how the set that becomes active can be specified.
I have substantially changed this part of the text (pp. 9-10), to make the logic of the argument clearer.
Response to reviewer # 2
Review #2: "Can the neurons in the brain implement symbolic systems?"
The authors argue that the information processing in the central nervous system [CNS] of humans/animals [at least vertebrates] cannot be captured, even remotely, as symbolic manipulation as implemented in computers, i.e. rapid storing and retrieval of values attached to symbols, which are propagated to/from computed locations.
Symbolic manipulation is not only "as implemented in computers". It is the way the theorists of symbolic systems define it, as the quotes in my paper show.
The argument is based on the empirical anatomy and physiology of the CNS, primarily on the [low-level] connectivity pattern of the dendritic tree of an individual neuron, which is "stochastic". This refers to the fact that a few thousand connections are realised out of the tens of thousands "available to it", with no apparent rule, "not related in a consistent way to the selection that other neurons do - in the same or other brains".
The novelty and persuasive force of the article is in focusing on this stochastic connectivity issue, and on the restriction to the negative conclusion only. It does not support the Connectionists' position that cognitive abilities of humans/animals should be modelled by neuron-like systems - a position which is in strong controversy with the reductionist position of Fodor, Pylyshyn and other prominent cognitive scientists.
I don't think any of this controversy is relevant to the material which is discussed in my paper.
It is tenable - according to the paper - that the whole cognitive system, which includes a body with sense organs and inputs from the outside world, can implement symbolic manipulation - it mentions that the stochastic connectivity is less apparent in the peripheral NS.
Why the whole person can implement a symbolic system even if the neurons in the brain cannot is explained in section 7 (pp. 15-17). It is not dependent on the PNS being less stochastic.
It is interesting to note, in passing, that Dreyfus, in his book "What Computers Can't Do", uses the "body" as a partial explanation for the superiority of humans/animals over computers in most information-processing manipulations essential for everyday life.
On the whole, the message is that even if we learn a lot more about the functioning of the CNS, the entire cognitive process may remain a mystery for a long time.
That is not the message of the paper, which is given in the "implications" section (p.22). It can be regarded as a possible corollary.
There are references to Vera and Simon, Smolensky and the PDP group, but I feel references to Fodor and several other prominent cognitive scientists, whose work is relevant, are missing.
I refer in the original paper to any text which I could find which is relevant to the question of implementing symbolic systems using neurons in the brain. Unfortunately, most of the proponents of symbolic systems never discuss the question of implementation, as I mention in section 2.
I added a few references in section 2, which just serve to highlight this point.
Response to reviewer 3
I read the paper you gave me. I cannot give a definite view on whether it's good or not, as there are arguments I'm not familiar with. In general, the argument is this:
1. Symbol systems are dynamic and arbitrary in the sense that they store and retrieve tokens of symbols in the process of computations - think about the dynamics of a program for adding numbers. This is correct.
2. Processes in human brains are stochastic in the sense that the propagation of activation is stochastic - that is, (if I understand it right) you cannot tell in advance which neurons will get the information.
Somewhat ambiguous, because you can tell in advance by making earlier measurements in the same brain. In other words, the stochasticity is not over time. I make this point explicit in the second paragraph on page 10 (end of the first paragraph in the revised paper).
3. You cannot implement dynamic processes (in the sense of 1) in stochastic processes (in the sense of 2), because (if I understand it right) stochastic processes are not robust enough for the propagation of patterns.
Two inaccuracies: 1) You can do the implementation if there is 'somebody' who knows how to get the connectivity right. E.g. in neural networks, the 'somebody' is the programmer. In the brain there is no such entity. 2) It is not that stochastic processes "are not robust enough". They are not robust at all, in the sense of 'robustness' that symbolic systems require.
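The role of the 'somebody' can be made concrete with a minimal sketch (my illustration for this letter, not from the paper; the learning rule and the AND function are arbitrary choices): a unit that starts with stochastic weights computes the intended function only after an external corrective procedure - here the perceptron learning rule, standing in for the programmer - adjusts the connectivity.

```python
import random

# Minimal sketch (illustration only): a unit starts with stochastic
# weights, and computes the intended function only after an external
# procedure -- the 'somebody' -- corrects its connectivity.
rng = random.Random(0)
w = [rng.uniform(-1.0, 1.0) for _ in range(3)]  # two inputs + bias

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND

def out(x):
    s = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1 if s > 0 else 0

# The perceptron learning rule plays the role of the programmer:
# corrective feedback from outside the unit fixes the weights.
for _ in range(100):
    for x, t in data:
        err = t - out(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        w[2] += 0.1 * err
```

Without the training loop, the randomly wired unit has no reason to compute AND; the correction comes from outside the unit itself, which is exactly the entity that is missing in the brain.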
Therefore: symbol systems cannot be implemented by human brains. My problem is that I cannot judge whether claim 3 is correct. I'm not that familiar with the theory of neural networks.
Otherwise, the argument is clear, and I'm pretty sure it's novel. Connectionists indeed argue that we should focus on more biologically-oriented models of cognition, and they raise the worry that there is seemingly a gap between symbol systems and the working of the brain. But they do not explicitly argue that human brains cannot implement symbol systems.
Response to the annotations
Where the annotation is grammatical or typographical, I did not comment on it. In some cases, what the reviewer considers a typographical error is actually a misunderstanding, and I comment on some of these.
Title page: I explain in the answer to the comments of reviewer 1, point (a), why the title should not be changed. Abstract page: "Are there any other sort of neurons"?
There are many neurons outside the brain.
Changing the wording of the end of the abstract
But his suggestion means that the word "stochastic" appears before it is defined in any sense. The way the text is written gives the reader an initial clue to what 'stochastic' means.
Page 4 I don't see what is wrong with the sentence that is marked as 'awkward'. I fixed the specific mistakes.
Question mark around "(they do mention connectionism later)."
I put this comment in the text so the authors that I quote (Eysenck and Keane) cannot complain that I misrepresent them by implying that they ignore connectionism.
Page 5 Erase "with these characteristics"
This phrase is important, because neurons with other characteristics (ordered connectivity) can be used to implement symbolic systems.
page 5 Erase "consistent"
This is an important word, because evidence has to be consistent to be reliable.
page 9 "How do you know" Answered in the next paragraph.
"This is not good enough"
That is not a serious comment. The reviewer needs to point to what is wrong in the argument.
"How do you know that it is the right neurons that are being compared?"
By comparing their locations. Neurons in two different brains which are not in the same location (approximately) within the brain cannot have the same connections, because the gross structure of the brain is the same, so a different location in one brain would correspond to a different location in the other brain. I think that is so obvious that I didn't add it to the text.
"or nurtured maybe"
This is answered in the third paragraph on the next page (p.10, paragraph starting "It can be argued..."). The revised paper explicitly deals with the question of environmental effects (which include "nurture").
"Weak argument. It is not uncorrelated maybe "not completely specified" would be better but this would not salvage the argument"
If it is "not uncorrelated", then it is correlated in some sense. How? It cannot be through the connectivity, because this is stochastic, and there is nothing that can correlate the two sets of neurons that tend to become active. I have substantially changed this text, to make it clearer.
"It doesn't follow"
I answered this in the response to reviewer 1, point (e)
Page 10 Change "within" to "to". Erase "relations between" (twice).
This paragraph does not exist anymore in the revised paper.
Page 11 "The choice may be stochastic but the resulting connection pattern is not"
First, note that this is about neurons outside the brain. The resulting pattern is stochastic, in the sense that which neuron activates which fiber cannot be predicted by anybody. In particular, the brain itself cannot know this in advance, and must learn it.
"Most of the authors's unease with proofs or arguments in favour of synaptic selectivity is justified. But, this does not justify the notion that connections between neurons are stochastic."
The notion that the connections are stochastic is based on the variability between brains (page 9). The discussion here is not intended to show evidence for it. Rather, it is intended to show that there is no evidence against it.
"Not making a point is not the same as making the wrong point"
True, but I am not claiming that these authors make the wrong point. I claim that they don't make the point, and since this is an important point (as the reviewer himself agrees in the first paragraph of his review), this is disappointing.
"What about neural networks that start with a random connections and then figure out a connectivity scheme to do the job"
What is "the job"? To support symbolic systems, the networks need to support manipulation of arbitrary and dynamic symbol tokens, as discussed in section 2. Currently, there are no networks which can support such manipulation.
Page 15 Replace the term "element"
I am intentionally using the term "element" as a generic term, rather than area, region or column etc., because the argument is generic and covers all of these possibilities. I explicitly make this point in the annotated text.
"Mixing up does not make these non-separable nor does it make their connectivity stochastic"
As in my answer to point (d) of reviewer 1, the question is whether they can be controlled separately. The stochastic connectivity is based on the variability between brains (page 9).
I have changed the text (pp. 14-15) to make it clearer.
Page 16 "So connections between individual neurons are not stochastic after all"
They are stochastic, but almost fixed over time. I explicitly state that 'stochastic' does not mean variable over time (p. 10, second paragraph).
"We know enough to build some models. We can't build a model for a thinking brain but I am not sure it's because we don't know enough about neurons"
This annotation is a response to a 'counter argument' that some people advanced against my argument. My response is given in the following paragraph in the paper.
" "Specific" in neurobiology usually means "cell type specific" when it comes to mammals (say Ia -> a-Mn homonymous, etc.) and "cell specific" when it comes to inverterbrates. The "cell specificity" of connections is neither known to be true nor know to be false in mammals and other vertebrates"
The variability between brains (page 9) shows that there is no cell specificity in the mammalian brain. The meaning of "specific" is more variable than the reviewer states; e.g. in the brain it often means "layer-specific" or "region-specific". In any case, it does not correspond to the way it is used in computer science, which is the point I make in the text.