
[Last updated 1 Jun 2013]

The "intelligent neuron" misconception: an extended discussion of some examples

On this page I discuss specific articles and the public comments on them, and show how the experimental findings are distorted to give a wrong impression of what was found. The discussion places a strong emphasis on the language that is used, and on the way it will be interpreted by a non-expert. This is important, because the wrong impression rests on using language that a non-expert will not understand the way an expert does.

1. Sigala and Logothetis

The actual finding in the first article (NATASHA SIGALA AND NIKOS K. LOGOTHETIS, Visual categorization shapes feature selectivity in the primate temporal cortex, Nature 415, 318 - 320 (2002)) is that many neurons show a significant change in level of activity between different situations (looking at different pictures), and that many more of them show changes correlating with changes in the features that are significant (to the monkey) than with changes in the non-significant ones. This finding is far from exciting: some changes in neural activity must accompany perceiving a different picture, and the more significant features should cause larger changes. That follows from what we currently believe thinking is. Since the findings themselves are not that impressive, they are enhanced by linguistic hyping.

The larger effect on the activity is described by the authors thus (first paragraph):

We found enhanced neuronal representation of the diagnostic features relative to the non-diagnostic ones.

Where does "representation" come from? All they see is changes in activity. They can argue that "representation" means "change in level of activity", but that is not the way readers will understand it. Those of them that will read only the first paragraph, which in Nature is used as the abstract, will end up believing that research found some "representation". Assuming they are not experts in neuroscience, they are not going to realise that "representation" just means "changes in level of activity".

The next sentence (still in the first paragraph) is much worse:

These findings demonstrate that stimulus features important for categorisation are instantiated in the activity of single units (neurons) in the primate inferior temporal cortex.

That is clearly false. Like anything else, the "features" are "instantiated" (whatever that means) by a population of neurons, and there is nothing in the data of this article to suggest otherwise. Their data are compatible with single neurons "instantiating" the features, but they are also compatible with "instantiation" by a population of neurons, so they cannot distinguish between these possibilities. To take the data as demonstrating single-neuron instantiation is a simple logical error (Misanalyzing the 'Null Hypothesis').

Maybe the authors would argue that when they write "instantiated in the activity of single units" they don't actually mean that, and really mean that the "single units" are part of the population. That, however, is not something that the reader can infer from reading the first paragraph. Only a critical reading of the article can show that. Thus the text is very misleading to non-neuroscientists who will not read the article critically, i.e. most of the readers.

The last sentence of the article itself (before the "Methods" section) says:

Our finding that inferior temporal cortex neurons can selectively represent visual object features that are important for the task at hand provides a mechanism through which feature diagnosticity shapes the encoding and the perceptual interpretation of visual objects.

First, we see again "represent" used to mean a change in level of activity. However, the main interest of this sentence is that it regards this "representation" as a potential "mechanism". This is a typical "intelligent neuron" misconception: for the neurons to change their activity, the network needs to adapt (either be in a different activation state or change the strengths of connections), and this is the underlying mechanism. The way the authors put it suggests that the change in level of activity is itself the underlying mechanism.

While the authors only strongly imply the "intelligent neuron" in the article itself, it is made explicit in the associated hype. The press release by the Max Planck Society is headed:

Perception is stored in Single Neurones

This is obviously nonsense (except if you read it in the uninteresting sense in which "Single Neurones" means "neural tissue"). First, perception is not stored in any normal sense of the word. Secondly, the changes in activity are the result of changes in the strengths of connections somewhere in the network. These may be in the synapses of the neurons themselves, in their neighbours, or somewhere else altogether. The data in the study cannot tell us where the changes are.

The release continues:

Tuebingen Max Planck researchers discover that our perception of diagnostic features is controlled by single neurones.

This makes it even more explicit that the claim really is about single neurons, and it also adds that these single neurons control perception. Later in the release it says:

In other words, these specialised neurones had learned to distinguish between the two categories. Rather than simply representing the presence of a specific object or features, they convey detailed information about the features that are diagnostic for the two categories.

Again, it is made explicit that it is the "specialised neurones" that "had learned", rather than the network. The second sentence is quite ridiculous, because it says they convey "detailed information", when all the neurones do is fire at different rates (which is, of course, what all neurones do). In the last paragraph we find again:

The Tübingen group concludes that there are apparently single neurones that sharpen our perception when they are trained to respond to categories.

This again makes it explicit that it is the neurones that are trained, rather than the network.

A reader of the release, unless already an expert in neuroscience, cannot escape getting the impression that single neurons change their intrinsic properties to specifically recognise a feature, i.e. that they are "intelligent", as opposed to changing their activity as a result of changes in the strengths of connections. This is what I call the "intelligent neuron" misconception.

It is worth noting that this press release is not by a journalist. It was prepared by the Max Planck Society, which is a scientific body, in co-operation (apparently) with the principal author of the paper (Logothetis).

[ 2 Mar 2004] An interesting contrast is this article, by the same senior author. In this article, the discussion is in terms that do not imply changes in the neurons that show the changes in activity, and in fact it explicitly suggests changes elsewhere ('projections from higher cortical areas').

2. Gandolfo et al

This article (Cortical correlates of learning in monkeys adapting to a new dynamical environment, F. Gandolfo, C.-S. R. Li, B. J. Benda, C. Padoa Schioppa, and E. Bizzi, Proc. Natl. Acad. Sci. USA, Vol. 97, Issue 5, 2259-2263, February 29, 2000) is less blunt, but the statements it makes are also quite confusing.

The last sentence of the abstract says:

Our results are consistent with the findings of these studies and provide evidence for single-cell plasticity in the primary motor cortex of primates.

Of course, what they see is a result of plasticity in the network, and there is nothing in their data to show otherwise. Yet they still call it "single-cell plasticity". It is not obvious whether or not the authors realize that it is the network that changes.

The rest of the article doesn't clarify this point, and the discussion at the end doesn't help either. The authors write:

Basically, these cells took on the properties of the neurons that are involved in the control of movement.

This also suggests "single-cell plasticity", though it can be interpreted in different ways.

An amusing thing about this article is how the authors try to push the idea of an "internal model". All they show is that the monkey learns, and that there are changes in activity in the primary motor cortex, but they want to interpret this as evidence for the "internal model". This especially stands out in the results section, which starts with the sub-title "Formation of an Internal Model" and then does not mention a model at all (because there is nothing in the results relevant to a "model").

3. Pena and Konishi

[27April2001] Here is a typical hyping of the "intelligent neurons", this time with the neurons doing multiplication. This makes it seem more plausible, because multiplication is a simple operation and has general utility. The hype explicitly states that "While most neurons simply add incoming signals to come up with an answer, these neurons can multiply."

That is trivially false. In the paper (Pena and Konishi, Science, V.292, 13 Apr 2001, p. 249), they did not measure the direct inputs to the neurons at all, so they clearly could not have shown that the neurons multiply their inputs. Moreover, it is not even the stimuli (interaural time difference and interaural loudness difference (ITD and ILD)) that are being multiplied. The factors in the multiplicative model, i.e. the U's and V's in equation 1, are arbitrary values that are derived by the model, based on the measurements of the potentials inside the neuron itself. These measurements have no straightforward relation to the ITD and ILD, as can be seen in the graphs in Figure 2. The authors "validate" their derived values by comparing them to measured values (Figure 2), but these measured values are again the potentials inside the neurons themselves. Thus all they have shown is that they can fit their multiplicative model to the potentials inside the neurons, not to the relation between the inputs to neurons and the output from them.

The results are actually worse than that. In Figure 3, they compare the correlations that are generated by their multiplicative model and by an additive model. While the multiplicative model is clearly better, the additive model (which cannot be correct) still generates a quite convincing correlation (Figure 3D). Thus their methodology can easily generate spurious results. This point becomes even more significant when they try to apply their method to the actual output of the neuron, i.e. the spikes. They show a good correlation with their multiplicative model (Figure 4B), but it is significantly worse than the correlation they got with the additive model for the potential measurements (Figure 3D). Since the latter result is obviously spurious, a correlation that is weaker than it cannot be taken as evidence either, so the former result is spurious too. Thus they cannot get a good model of the relation between the potential inside the neuron and its actual output. They do give some possible explanations for this failure, but that does not change the fact that they failed.
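
To make the point about derived factors and spurious fits concrete, here is a minimal sketch in Python. It uses synthetic data: the grid sizes, factor values and noise level are my own assumptions and have nothing to do with the actual recordings or with the authors' fitting procedure. What it shows is that the "multiplicative" factors (the analogue of the U's and V's) are obtained by fitting the response surface itself, and that an additive model fitted to the same surface can also produce a respectable-looking correlation.

    # Illustrative sketch only: synthetic responses, not the data of Pena and Konishi.
    import numpy as np

    rng = np.random.default_rng(0)

    n_itd, n_ild = 7, 7                    # hypothetical grid of stimulus combinations
    u_true = rng.uniform(0.5, 2.0, n_itd)  # arbitrary factors standing in for the U's
    v_true = rng.uniform(0.5, 2.0, n_ild)  # and V's; they are not the ITD/ILD values
    R = np.outer(u_true, v_true) + rng.normal(0.0, 0.1, (n_itd, n_ild))  # "responses"

    # Multiplicative model: best rank-1 approximation of R (one way to derive U and V).
    U, s, Vt = np.linalg.svd(R)
    R_mult = s[0] * np.outer(U[:, 0], Vt[0, :])

    # Additive model: response = row effect + column effect, fitted by least squares.
    R_add = R.mean(axis=1, keepdims=True) + R.mean(axis=0, keepdims=True) - R.mean()

    def corr(a, b):
        # Pearson correlation between two flattened response surfaces
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]

    print("multiplicative fit r =", corr(R, R_mult))
    print("additive fit       r =", corr(R, R_add))

In both cases the fitted quantities come out of the response surface; nothing in the fit itself connects them to the inputs of the neuron.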

The "multiplicative model" is based implicitly on the assumption that the nature of inputs to the neuron is the same at all levels. This is unlikely from what we know about neurons, and it is more likely that the troughs in the depolarization are a result of inhibitory signals. The latter signals act, at least partially, as "clamps" on the polarization rather than being purely hyperpolarizing. This assumption will also generate a better fit to their data.

It is also worth pointing out that the assertion that "... most neurons simply add incoming signals..." above is false, too. The "additive model" is just a theoretical first approximation, and a very crude one. It is not based on actual measurements of the relation between the inputs to the neuron and its output in terms of spikes (as opposed to internal potentials). It is used because it is the simplest approximation to the base-level assumption that the output depends on all the inputs.

The paper itself contains more misleading information. In the abstract it says "The owl's auditory system computes interaural time (ITD) and loudness (ILD) differences to create a two dimensional map of auditory space." Throughout the paper it is taken for granted that ITD and ILD are the pieces of data that are used.

That is simply false. "Computing some value" means generating something with a well-defined interpretation which is linear with the value, and the owl's auditory system doesn't do that. At most, it may be possible to find some neurons with some correlation between their activity and the ITD and ILD, and nothing with a well-defined interpretation. I think a typical defence would be that "computing some value" means generating an internal representation, but this is false too: there is nothing inside the owl's brain that represents the ITD or ILD, in the normal usage of the word "represents".

An additional deficiency of this paper is that it does not explain how the neurons were selected, and it displays the raw data from only one neuron, thus hiding any variability between them.

[24Nov2002] There is also research on "multiplication" by a neuron in the locust (a new paper in Nature (Nature 420, 320 - 324 (2002); doi:10.1038/nature01190) and an older paper (The Journal of Neuroscience, February 1, 1999, 19(3):1122-1141)). It is important to note that this is different, because it is an identified neuron (i.e. a neuron that can be identified across individual animals), and it is in an invertebrate. In this case, the multiplied quantities are at least real variables, rather than mathematical fictions as in Pena and Konishi above. However, it is still based on approximating experimental results by a function, and then claiming that the neurons perform the function. Thus it gives the impression of precise performance (multiplication), based on imprecise observations.

4. Counting neurons

[15Sep2002]

A "Science update" in Nature web page is titled "Single brain cells count". The first paragraph of the piece is:

When a monkey looks at two dots, apples or other monkeys, single nerve cells recognize the groups' 'twoness', researchers have found. The discovery shows that the brain's ability to deal with abstract concepts can be traced right down to individual cells.

As usual, this is nonsense, as the activity of these neurons is determined by the network they are in, so it is the network that recognizes the 'twoness', or any other concept. That is trivially obvious to any neuroscientist, but not to readers without expertise in neuroscience, and these are the target readers of this piece. A reader without such expertise cannot figure out from this 'update' that it is the network that does the work, not single neurons. Thus they will be completely misled by this text.

The worst manoeuvre of this 'Update' is that it does not tell the reader that the monkeys were trained to respond to numerical values and to ignore other features of the visual input. It then misleads the reader using this omission. The fourth paragraph says:

Monkeys' numerical abilities could be an evolutionary shadow of our own, says Dehaene. "Even behaviour which we think of as sophisticated and based in culture ultimately has biological roots."

Obviously, every behaviour has "biological roots", so readers will interpret it to mean "genetic roots" or "innate", and it is clear that this is what Dehaene meant. This, however, is clearly nonsense, as the discrimination by the neurons (i.e. by the network) was clearly a result of training the monkeys. However, since the readers were not told that the monkeys were trained to discriminate numbers, they cannot figure this point out. They will be strongly misled to believe that the discrimination is always there.

Other demagoguery in this 'Update':

Miller's team showed groups of dots to macaques, and recorded the output from individual neurons in the monkeys' prefrontal cortex. This area receives inputs from the visual system.

This gives the impression that the prefrontal cortex stands out by the fact that it receives input from the visual system. As every neuroscientist knows, that is trivially false: the prefrontal cortex is the furthest from the entry point of visual input to the cortex, and therefore gets the least visual information of all the rest of the cortex. Non-experts will be left with the wrong impression.

5. Gemma Calvert

This (Crossmodal Processing in the Human Brain: Insights from Functional Neuroimaging Studies, Cerebral Cortex Dec 2001;11:1110-1123; 1047-3211/01/ $4.00) is quite an extreme example, because it is only the term that is used which is misleading, not the actual contents. Discussing the Superior Colliculus, the author writes (p. 1111, second paragraph):

Each bi- or trisensory neuron in this structure contains a map of sensory space, one for each sense (visual, auditory, tactile) to which it responds.

So each neuron "contains a map of sensory space". Clearly (to anybody who knows neuroscience) she doesn't mean that each neuron contains a map. She refers to the receptive fields of the neuron, and really means something like: "Each .. neuron .. responds to some parts of the sensory space of each of the senses". However, that is clearly not what "contains a map" means in any context, including the context of neuroscience, notwithstanding her claim. A reader who doesn't already know about receptive fields will get the impression that each neuron has some complex feature which can reasonably be called a "map".

In her defence, this article is in a quite specialized journal, so the readers may be expected to have some expertise in the area. On the other hand, it is a review, which is normally aimed at a larger audience.

It is interesting to ask why the author uses the phrase "contains a map" the way she does. She claims it is shorthand for "receptive field", but this is clearly false, judging from the references she gives to support it.

6. Goldman-Rakic

[05Nov2002]

In this news story, we find this:

The researchers recorded and thoroughly analyzed close to 2,500 cells. The responsiveness of the neurons was tested with many different stimuli. Their research also revealed that cells in the prefrontal cortex were able to maintain information even after the stimuli disappeared.

"This work shows that the neurons in the prefrontal cortex are activated when different faces are maintained in memory," says Professor Goldman-Rakic. "Even in primates who were not trained to remember pictures, neurons continue firing long after the stimulus is gone, showing that an intrinsic property of these neurons is to maintain that activity in the absence of stimuli."

As usual, it is not an "intrinsic property" of the neurons, but a property of the network (whether "intrinsic" or not depends on the exact meaning of "intrinsic"). This doesn't stop Goldman-Rakic from saying that it is a property of the neurons. It may be argued that she really means the network, but that cannot be understood from the news story.

The paper itself (Science, Volume 278, Number 5340, Issue of 7 Nov 1997, pp. 1135-1138.) contains a similar claim:

Thus, the capacity for face-selective persistent firing and delay-period activity does not depend on intention to make a response, but appears to reflect an intrinsic property of the neurons' responses to visual stimuli.

This seems to suggest that "intrinsic" should really be interpreted as "independent of the intention to make a response", which is quite bizarre.

It is also worth mentioning that the 2,500 neurons mentioned in the quote above are the neurons that were tested, not the neurons that show the response that is the main result of the paper. That response was shown by only 46 neurons (of the 2,500 tested).

A later paper (Cerebral Cortex, Vol. 9, No. 5, 459-475, July 1999) reporting the same research contains in its title "Evidence for Intrinsic Specialization of Neuronal Coding". But here it is not a property of the neurons, but rather of the "neuronal coding" (whatever that means). In the text itself it appears only in the introduction, in a conditional statement, and not in the results or the discussion, and there it is the PFC that is perhaps "intrinsically specialized". There is a reference to "the neurons' intrinsic firing rate" in the "materials and methods" section, but this refers to irrelevant activity. Thus this paper does not make the "intelligent neuron" error.

The actual results of these papers, which show localization of responses to faces in the prefrontal cortex (as usual, "specialization" is just an overinterpretation of "localization"), are extremely interesting. More accurately, they would have been extremely interesting if they were reproducible. However, they haven't been reproduced, and since they are both very interesting results (which means other researchers will certainly try to reproduce them) and should be easy to reproduce, the fact that they haven't been suggests that they are not reproducible.

7. Doug Munoz

[10Nov2002]

In this press release, we are told that

The research team has found that a small region in the frontal lobe of the human brain is selectively activated when an individual intends to make a particular action and not another.

However, the paper itself is only about saccades (eye movements), and the "small region" that they found is the Frontal Eye Field (FEF), which is already known to be involved in eye movements (as its name suggests). Amazingly, the press release does not mention "eye" or "saccade" at all, thus misleading the reader into thinking that they found an area that is involved in intentions in general, rather than in saccades specifically. The article itself (abstract below) also tries to imply generality, though it makes it clear that the research is about the FEF. But readers of the press release have no way to know that, and will be left with a completely false impression.

Published online: 4 November 2002, doi:10.1038/nn969

Human fMRI evidence for the neural correlates of preparatory set

Jason D. Connolly, Melvyn A. Goodale, Ravi S. Menon & Douglas P. Munoz

We used functional magnetic resonance imaging (fMRI) to study readiness and intention signals in frontal and parietal areas that have been implicated in planning saccadic eye movements - the frontal eye fields (FEF) and intraparietal sulcus (IPS). To track fMRI signal changes correlated with readiness to act, we used an event-related design with variable gap periods between disappearance of the fixation point and appearance of the target. To track changes associated with intention, subjects were instructed before the gap period to make either a pro-saccade (look at target) or an anti-saccade (look away from target). FEF activation increased during the gap period and was higher for anti- than for pro-saccade trials. No signal increases were observed during the gap period in the IPS. Our findings suggest that within the frontoparietal networks that control saccade generation, the human FEF, but not the IPS, is critically involved in preparatory set, coding both the readiness and intention to perform a particular movement.

8. Egorov: Really intelligent neurons?

[15Dec2002]

I thought that at least all neuroscientists know that the "intelligent neurons" are not really intelligent, and that their activity is determined by the network, but apparently not all of them are convinced. A new article in Nature (Nature 420, 173 - 178 (2002); doi:10.1038/nature01171, Graded persistent activity in entorhinal cortex neurons, Alexei V. Egorov, Bassam N. Hamam, Erik Fransén, Michael E. Hasselmo & Angel A. Alonso; a review that is accessible without password) reports "Graded persistent activity in entorhinal cortex neurons", and says that "Such an intrinsic neuronal ability to generate graded persistent activity constitutes an elementary mechanism for working memory." (end of first paragraph).

The problem with this article (and many others) is that the experiment is done in brain slices (rather than the whole brain), and in a drug cocktail that blocks the normal activity of the neurons (by blocking glutamatergic and GABA-mediated neurotransmission). There is no reason at all to assume that there is any relation between what they see and the behaviour of neurons in the live brain (if you want an analogy: cut out a piece of a computer's memory, throw it into some acid, do experiments on it, and see how much you can learn about computers from that). We already know that the main determinant of neural activity in the live brain is the input from other neurons, and experiments that destroy the activity of the network cannot be used to prove otherwise.

9. "Testing neurons"

[22 Sep 2003]

This is a mild example of the "intelligent neurons" error. On this research description page it says:

We use these patterns to test the ability of single neurons to extract global pattern information by integrating the responses of local orientation detectors.
(my italics)

What they really mean is

We use these patterns to check the frequency of neurons that have significantly differential response to global pattern information as a result of signals from the responses of local orientation detectors.

It is possible to argue that the difference between their anthropomorphic version and my version is just linguistic, but I think it represents the way they (and most neuroscientists) think. The problem is that it makes what they do look far more interesting than it should, and therefore skews their selection of research projects towards single-neuron studies at the expense of studying the network and its connectivity.
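
To spell out what my reworded version amounts to in practice, here is a minimal sketch in Python with synthetic spike counts; the numbers of neurons and trials, the response model and the significance threshold are all my own assumptions, not anything taken from the cited page. The output is just a frequency of neurons with a significantly differential response, which is the kind of result such experiments actually yield.

    # Illustrative sketch with synthetic spike counts; not the cited lab's procedure.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    n_neurons, n_trials = 100, 40          # hypothetical numbers

    # Trial-by-trial spike counts for two conditions (e.g. a coherent global pattern
    # vs a scrambled control); a minority of neurons get a genuine rate change.
    base = rng.uniform(5.0, 20.0, n_neurons)
    shift = np.where(rng.random(n_neurons) < 0.2, rng.uniform(2.0, 6.0, n_neurons), 0.0)
    cond_a = rng.poisson(base,         (n_trials, n_neurons))
    cond_b = rng.poisson(base + shift, (n_trials, n_neurons))

    # "Significantly differential response" = the two conditions give different mean
    # counts by a conventional test; nothing here says what a single neuron "extracts".
    _, p = ttest_ind(cond_a, cond_b, axis=0)
    print("fraction of neurons with p < 0.05:", np.mean(p < 0.05))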

9. Cells "using strategy"

[22 Sep 2003]

This article is not that bad, but all the discussion is about what single neurons "do", e.g. "single neurons could still increase or decrease their information about direction." Reading this text, the reader may get the impression that neurons don't interact with each other. The worst bit is this sentence:

These results suggest that cells increased the slope of their tuning curve near the learned direction and improve the information content in their activity. Cells can use several strategies to do so and we considered three possibilities:...

So cells not only " increase the slope of their tuning curve", they actually use "strategy" to do it. Obviously the word "strategies" is used metaphorically, without intending anything like a thinking process, but the metaphor is for behaviour or changes in the cells themselves, and that is wrong: it is the netwrok that changes.
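
For what it is worth, the relation between tuning-curve slope and information that the quoted sentence relies on is a standard textbook one, and can be illustrated without attributing anything to the cells. The sketch below (Python; the tuning-curve parameters and the "learned direction" are made up, and the Fisher-information formula for a Poisson neuron is textbook material, not the analysis of this article) shows that a tuning curve with steeper flanks carries more information about directions near a given direction.

    # Textbook relation, not the article's analysis: for a Poisson-firing neuron with
    # tuning curve f(theta), the Fisher information about theta is f'(theta)^2 / f(theta),
    # so a steeper slope near a direction means more information about that neighbourhood.
    import numpy as np

    def tuning(theta, width, pref=0.0, peak=40.0, baseline=2.0):
        # firing rate (spikes/s) as a function of direction (degrees); values are made up
        return baseline + peak * np.exp(-0.5 * ((theta - pref) / width) ** 2)

    theta = np.linspace(-90.0, 90.0, 721)
    near = np.abs(theta - 15.0) < 5.0        # "near the learned direction" (15 deg, arbitrary)

    for width in (40.0, 25.0):               # a broad curve and a sharper, steeper-flanked one
        f = tuning(theta, width)
        slope = np.gradient(f, theta)        # numerical slope of the tuning curve
        fisher = slope ** 2 / f
        print(f"width {width:4.1f} deg -> mean Fisher information near 15 deg: {fisher[near].mean():.4f}")

None of this requires any change in the cell itself: the same effect follows if the change in tuning is produced by changed inputs from the rest of the network.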

This article gives the impression that the authors themselves really believe it is changes in the cells themselves that produce the changes in activity that they see, because they don't mention the network at all. On the other hand, they don't say that explicitly.

This article immediately follows the second article by Logothetis that was referenced above, and it also provides an interesting contrast.

9. "Monkeys monitor their brains"

[13 Apr 2004]

This one is not really the "intelligent neuron"; it is more like a "homunculus". In this article they present very mildly interesting data, but when they discuss it, they repeatedly write about the monkeys "monitoring" the direction columns. Obviously monkeys cannot monitor parts of their brains, so presumably they actually mean some kind of "homunculus", i.e. an entity inside the brain with enough cognitive skills to be able to monitor cortical activity.

The problem with this idea is that neurons in the brain cannot monitor bits of the brain. Neurons just respond (fire or not fire) to the signals that they get from the neurons that connect to them through their input synapses, and that is what they always do. Thus the term "monitor" doesn't have a meaning when applied to parts of the brain. These authors, however, believe that it is useful.

In e-mail, the first author told me that "the monkey" means other parts of the brain. When I told him that "other parts of the brain" cannot monitor, his response was that the brain will not work without it. In other words, he thinks that his a priori judgement of what the brain does should override the facts (a typical 'knowing what the brain does' error).

In the paper, the authors repeat the "monkey monitoring" idea many times, but without ever considering what it actually means inside the brain. It gives the impression that while they believe "monitoring" is a useful concept when discussing the brain, they also feel uneasy about using it with reference to parts of the brain, so they use it with reference to the whole animal instead.

10. "Encoding of Movement Fragments in the Motor Cortex"

[1 Sep 2007] These authors (Nicholas G. Hatsopoulos, Qingqing Xu, and Yali Amit, The Journal of Neuroscience, May 9, 2007, 27(19):5105-5114) realize that neurons don't code for the simple features that most neuroscientists think they do. However, they still think that neurons must individually code for something. Thus they say at the end of the abstract:
These findings suggest that single motor cortical neurons encode whole movement fragments, which are temporally extensive and can be quite complex.

So they really think that a single neuron "encodes" a whole movement fragment. This is plain nonsense: it is the network that "encodes" (generates the appropriate patterns of activity), not single neurons. The fact that such stupidity appears in the Journal of Neuroscience shows how strong the belief in the "intelligent neuron" myth is.

13. Maybe actual progress

[ 1 Jun 2013 ] In this article (The importance of mixed selectivity in complex cognitive tasks, Rigotti et al., Nature 497, 585–590 (30 May 2013)) they seem to finally realize that the "intelligent neuron" approach is wrong. The last sentence of the article says:
Our findings recommend a shift of focus for future studies from neurons that have easily interpretable response tuning to the widely observed, but rarely analysed, mixed selectivity neurons.

That is definitely better than what we have had until now. The fact that it is published in Nature (and as an "article" rather than a "letter") both suggests that it is by now quite acceptable to say this and gives the message extra strength.

======================================================================
======================================================================

Yehouda Harpaz
yh@maldoo.com
20Jan2002
http://human-brain.org/