Related texts

Last modified 19 Jun 2023

Myths and misconceptions in Cognitive Science

In this text I list and discuss some myths and misconceptions that are common in cognitive science. I started to write this while writing my criticism of Crick's book ('The Astonishing Hypothesis', Simon & Schuster (1994), 0-671-71158-X), because a large fraction of the mistakes that Crick makes are actually widespread, rather than being his alone. As a result, many of the examples here are from Crick's book.

The questions of 'consciousness', free will, qualia, vivid images etc. are discussed separately in the methodological problems, sections [6-8]. There is a separate discussion of common Reasoning errors, and direct comments on MITECS.

1. The 'Massive parallel' and bottleneck myth

It is quite common to find somebody asserting that the brain is a 'massively parallel' system. This assertion is simply false. In the area of information processing, a 'parallel system' means a system that contains more than one processor working independently. It can be argued that the neurons are such processors, but that is not what cognitive scientists mean when they say the brain is a 'parallel system'. They have in mind more complex processors, with more capabilities than a single neuron has.

However, this kind of processor does not exist in the brain. The main thinking part of the brain is the cerebral cortex, and in the cortex there are no separate units at all. Every region in the cortex has direct connections (i.e. sends axons and receives axons) to all its neighbors, and in many cases to further regions. Since these connections are direct, each operation of every region in the cortex depends directly on the activity of other regions in the cortex, i.e. it is not independent.

It can be claimed that the cortex may contain distributed processors, which are independent by virtue of their connectivity. However, this requires an exact connectivity, which does not exist in the cortex (see here for a discussion).

Thus the brain does not contain many independent processors, and therefore it is not a 'massively parallel' system. The correct name is not obvious, but something like 'integrated system' is probably a good start. At least it is not grossly incorrect.

Here (Visual processing: Parallel-er and Parallel-er, Richard T. Born, Current Biology 2001, 11:R566-R568) is an example of hyping parallel processing. Even though the title of the comment already talks about parallelism, there is nothing in the comment or the article it discusses about really parallel processing (i.e. several processors independent of each other) in the cortex. The author simply uses it as a buzzword.

Cognitive Scientists sometimes argue that there is psychological (as opposed to neurobiological/neuroanatomical) evidence for parallel processing. Most of this evidence cannot distinguish between parallel processing and fast processing. Even evidence that can be claimed to show parallel processing cannot show that the system itself is parallel (i.e. has got independent processors). All it can show is that the mechanisms of the system that perform some tasks have the capability of overriding interference from other activity.

The 'sister' myth of a 'bottleneck', which is commonly associated with 'attention' or 'consciousness' or 'awareness', is used to explain the very common cases where the human cognition does not behave like a parallel system. Rather than realizing that it is not a parallel system, cognitive scientists postulate a 'bottleneck', which makes the computation serial. That opens the way for a massive discussion about the nature of the bottleneck, its implementation, roles and evolutionary history. This discussion, of course, is a total waste of time, as it is just a result of mistaking the brain for a parallel system.

2. An entity or process called attention

Many cognitive scientists theorize about attention as a separate entity or process. Sometimes they regard it as an actual physical part of the human brain, and sometimes as an abstract mechanism. Here (The neural mechanisms of top-down attentional control, J. B. Hopfinger, M. H. Buonocore and G. R. Mangun, Nature Neuroscience, vol. 3, no. 3, p. 284, March 2000) is a typical example. The article starts:

Selective visual attention involves dynamic interplay between attentional control systems and sensory brain structures.
i.e. they already take it for granted that there are "attentional control systems".

However, there is no evidence that there is such an entity or process in the brain. For example, in the article quoted above, the evidence they find is (in the "Discussion"):

A network of cortical areas including superior frontal, inferior parietal and superior temporal brain regions were implicated in top-down attentional control because they were found to be active only in response to instructive cues.

Which is plain nonsense. Their data cannot show a "network", because it does not show connections {4}, and the regions that they found (quite large chunks of the cortex) are activated by a myriad of other stimuli and tasks, sometimes together, sometimes separately. Thus they are not a "network" or a "system", and using these terms is completely spurious. (The data in this article are from fMRI, so they are also unlikely ever to be reproduced (see Here).)

The reason that people regard it as sensible to talk about 'attention' as an entity is that 'attention' is used very frequently in common speech. However, the normal usage of the term 'attention' does not attach to it any meaning other than thinking (e.g. paying attention to X is linguistically equivalent to thinking about X, except that 'paying attention' requires X to correspond to some current sensory input). Thus 'attention' does not stand for a separate entity or process in common usage of the word.

Part of the reason that cognitive scientists don't grasp this point is that for thinking about something to affect its perception (as attention does), an integrated system is required, as opposed to the modular system that most cognitive scientists like.

Here (Reynolds, Chelazzi and Desimone, 1999, The Journal of Neuroscience, 19(5):1736-1753) is quite a stunning example, where they build a model based on the assumption that attention changes the synaptic strength of the neurons they were looking at. They actually say:
Attention is assumed to increase the strength of the signal coming from the population of cells activated by the attended stimulus. The exact mechanism by which this increase could occur is unknown. It is implemented here by increasing the efficacy of synapses projecting to the measured cell from the population activated by the attended stimulus.
The implementation is simply daft, because there is no way to change the synaptic strength of the input of a specific neuron in the cortex (it is possible to affect all the neurons of a specific type in a region, but not to discriminate between them). So the authors assume a magical feature, because it allows them to build a nice model of attention, rather than admit that attention as a separate entity or process is incompatible with what is known about the cortex. Once they have this assumption in place, they can use it to model various phenomena, and claim that they successfully model attention.

A common 'role' for 'attention' is to explain the 'bottleneck' myth, which was discussed in the previous section, but it gets associated with many other things, for example 'binding' (below). Since it has no connection to any real entity, it is a 'hyper-free' variable, which can be used to plug any hole in cognitive theories.

In many cases 'attention' seems to be regarded as at least potentially not a neuronal feature. For example, Crick (The Astonishing Hypothesis, Simon & Schuster (1994), 0-671-71158-X) points to evidence that when a monkey shifts its attention, the activity of some neurons in its brain changes. Then he says (p. 226):

However, in those trials during which it attended to the green bar, the red-sensitive neuron fired less. Attention, therefore, is not just a psychological concept. Its effects can be seen at the neuronal level.

(Italics in the source).

Obviously, if there is something like attention, it must affect neuronal activity. Crick, however, needs evidence to show it, so apparently he thinks it is possible that attention affects our thinking without affecting neuronal activity. A similar idea is expressed in MITECS:

In addition, studies of attention in the animal brain have revealed that attentional shifts are correlated with changes in the sensitivity of single neurons to sensory stimuli (Moran and Desimone 1985; Bushnell, Goldberg, and Robinson 1981; see also auditory attention).

Again, implying that it is possible that attention can affect thinking and behaviour without affecting the activity of neurons, so this effect has to be 'revealed'.

3. Separate memory systems

It is accepted almost universally in cognitive science that human memory is made up of several functionally separated systems. However, this is contrary to neuroanatomical evidence, which shows conclusively that in the human brain all the major mental processes (including remembering) are done in the same system (namely, the cerebral cortex). Thinking about this kind of system, however, is too complex for cognitive scientists, so they simply ignore the neuroanatomical evidence. Instead, they postulate several memory systems.

3.1 Short term memory (STM)

This is almost universally accepted. The evidence for short-term memory (STM) is that people can recall easily things from the very recent past. However, there is no anatomical evidence for a separate system for doing that, and a much simpler explanation is that the things that a person finds easy to remember are the things that she/he is thinking of.

The main problem with the latter is that it means that to understand 'STM' effects, you have to understand how people think, and that is indeed difficult. Instead, cognitive scientists postulate STM, and since it, like attention above, is not associated with any real entity, they feel free to use it any way they like.

Another problem is that in many cases the STM can be shown to hold things that the person is not aware of, and many people, including many cognitive scientists, haven't figured out yet that a person is not aware of everything she/he is thinking about. Therefore, they cannot accept the identity between having something in the STM and thinking about it.

Some researchers have already figured out that all the phenomena that are associated with the STM can be explained by what is currently active in the cortex, but still continue to regard the STM as a separate system.

3.2 Working memory

'Working memory' is an elaboration of the STM, to account for the many psychological observations that the STM concept cannot account for. By virtue of being a more complex model, it can be made to fit more data. However, it still does not have any neuroanatomical basis.

[17 Apr 2006] Some progress: At least some people in the field now realize that working memory does not correspond to any neuroanatomical features. See Working Memory As An Emergent Property Of The Mind And Brain and references in it (author home page: Brad Postle). In this article he still doesn't get rid of the concept of "working memory", but he does propose to "jettison the concept of working memory" here.

The article above appears in an issue of Neuroscience which is all about Cognitive neuroscience of working memory. The rest of the articles don't seem to have any problem with the concept of working memory.

3.3 Procedural vs. Semantic memory

The distinction between 'procedural memory' (knowing how to...) and 'semantic memory' (knowing that...) is widely accepted. However, this distinction is completely behavioral, and there is no neuroanatomical evidence for the existence of two separate systems. Logically, there is no problem in having these two kinds of knowledge in the same system ('knowing that X' can be implemented as 'knowing how to respond to a question about X', or 'knowing how to do X' can be implemented as 'knowing that the set of operations Y is required for doing X'), and all the evidence suggests that this is what happens in the brain, where both of these knowledge types reside in the cerebral cortex. For an example of the way it may be implemented, see my theory.

The one-system option, however, makes things much too complex for most cognitive scientists. Therefore, they continue to assume that there are two separate systems, even though it is clear from neuroanatomy that this is wrong. As evidence for the two-system option they cite the existence of automatic, fast operations that are 'unconscious' (the person is not aware of thinking about them), which are regarded as manifestations of procedural memory, as opposed to slow and conscious operations, which are regarded as manifestations of semantic memory. However, this can easily be explained by assuming that for a process to be 'conscious' (i.e. for the person to become aware of it) it must take a significant amount of time, so fast processes are not 'conscious', and since deciding to perform them is also very fast, that is also 'unconscious', i.e. automatic. Becoming 'conscious' takes time because it requires some change in the brain, which takes time.

3.4 Emotional memory

This idea is 'supported' by the fact that damage to the amygdala seems to cause larger problems with feeling and expressing emotions than with other cognitive processes. However, that just shows that the amygdala is important in emotional processes (most likely, it is involved either in controlling or in sensing autonomic responses like heart rate, sweating etc.). It does not support a separate memory system.

3.5 Episodic memory

Humans seem to have a strong tendency to recall episodes, rather than individual details. This tendency is the evidence for 'Episodic Memory'. However, this tendency does not show that there is a separate episodic memory. It shows that there is some mechanism that tends to associate things that the person perceives at the same time. It is clearly important to figure out what this mechanism is, but there is no reason to assume that there is a separate memory system for it. In general, this mechanism is of a Hebbian nature. (See the ERS in my theory for a more elaborate hypothesis.)

From brain damage studies, the Hippocampus clearly has an important part in this mechanism, but it is clearly not a separate system, and it functions by affecting processes in the main system (the cortex).

3.6 Implicit vs. explicit memory

That is a quite common distinction, which, like the other distinctions, does not have a neuroanatomical basis. The distinction is based on the ability to recall details, so it is really a measurement of the capability of the recalling mechanism. The 'contents of the explicit memory' are those concepts for which the recalling mechanism can recall additional details, and the 'contents of the implicit memory' are those concepts for which the recalling mechanism fails to recall additional details (these are just the extreme points on a spectrum, of course).

3.7 'Light bulb memories'

'Light bulb memories' are memories that are said to form in a single instance, but still linger for a long time (typically for life). The most common example is 'the place you were in when you heard about Kennedy's assassination'. This is nonsense, because all these 'light bulb memories' are associated with some traumatic, or at least very important, events, which the person keeps returning to in the following period of time, both consciously and unconsciously. This repeated 'chewing' of the subject causes the traumatic episode to be overlearned.

It is difficult to explain how serious scientists fail to grasp this point, but they do. An amusing example is from Crick, who seems to grasp that the memory may be strengthened by 'chewing' it, but still believes that it becomes strong immediately. In 'The Astonishing Hypothesis', p. 67, he says of these memories: "One remembers them strongly after only a single instance. (Such memories may of course be strengthened by rehearsal--by telling the story over again, not always correctly.)".

4. The 'Binding Problem'

It is common in cognitive science to discuss the 'binding problem', and sometimes it is considered a fundamental problem. The idea is that different attributes of the perceptual input (e.g. color, movement, shape) are processed in separated modules, and then they need to be 'bound' together to give a unified representation.

However, this is dependent on the assumption that the brain is a modular system, which is clearly wrong given the structure of the cerebral cortex. (See Reasoning Errors for a discussion of the modularity error.) In an integrated system like the cortex, the binding problem simply does not exist.

Sometimes it is claimed that the attributes of the object need to be bound because otherwise we are not going to regard the object as a unitary object. This claim is blatant nonsense. It would have made sense if our thinking system were sensitive to the location of information in the brain. But our thinking system is clearly not sensitive to such locations. For example, we don't have any feeling that the visual input is in the rear of the cortex (we know that from investigation of the relations between brain damage and mental deficiencies). Thus the distribution of the information about an object in the brain doesn't make any difference to the way we perceive it.

5. The significance of oscillations

Quite commonly, cognitive scientists attach significant importance to electric oscillations in the cortex, and suggest that they perform various tasks. These oscillations have frequencies of several tens of Hz. The 'binding problem', which is discussed in [4] above, is one of the 'problems' that is commonly 'solved' by the oscillations.

This is clearly nonsense, because the amount of information processing that can be done using an oscillator of a few tens of Hz is tiny compared to the amount of information processing that is required for doing any significant mental task. The only way in which these oscillations can be doing any significant information processing role is if they are very precise, so that the exact shape of the oscillation contains information too (in other words, they have information at much higher frequency). However, it is clear that in the brain there aren't any mechanisms that can generate or interpret such accurate oscillations.
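
To make 'tiny' concrete, here is a back-of-envelope sketch (in Python; all the numbers are illustrative assumptions, not measurements), comparing the best case for a 40 Hz oscillation with a crude lower bound on the visual input alone:

    # Back-of-envelope only; the assumed numbers are generous to the
    # oscillation hypothesis.
    import math

    freq_hz = 40.0                      # an oscillation of a few tens of Hz
    cycle_ms = 1000.0 / freq_hz         # 25 ms per cycle
    phase_precision_ms = 1.0            # assume phase is settable/readable to 1 ms
    states = cycle_ms / phase_precision_ms
    oscillation_bits = freq_hz * math.log2(states)
    print(f"oscillation capacity: ~{oscillation_bits:.0f} bits/s")   # ~186 bits/s

    # A crude lower bound on the visual input alone:
    optic_nerve_fibers = 1e6            # order of magnitude
    bits_per_fiber = 10.0               # conservative assumption
    print(f"visual input: ~{optic_nerve_fibers * bits_per_fiber:.0e} bits/s")  # ~1e7

Even with these generous assumptions, the oscillation is around five orders of magnitude short of the input it is supposed to help process.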

When cognitive scientists present computer or mathematical models that purport to demonstrate the usefulness of oscillations, they always rely on very accurate oscillations. Normally, however, they do not make this point explicit. In many other cases, there is no explicit model, and it is just asserted that these oscillations may be doing some task. In these cases it is probably more a failure to consider the information processing capability of these oscillations.

[19 Jun 2023] The latest issue of Nature has this article: Geometric constraints on human brain function, which claims that oscillations in the brain can be matched to the natural oscillation modes of the brain. These modes depend solely on the geometry of the surface of the brain, and they use a standard brain rather than individual brain geometries. They claim that this gives a much better match than trying to match to neural connectivity.

They then claim that this shows that the geometry has more effect on brain function than neural connectivity, rather than the obvious conclusion that oscillations are irrelevant to cognitive processes. It looks like a satire, but it does appear as a respectable article in Nature, without any hint that it is not intended seriously (at least not that I can discern).

An interesting point is that they keep mentioning "brain function", and do not use at all words like "cognition", "mental", "think", "perception" etc., which are the words that most people use when discussing brain function. It gives the impression that they don't think that "brain function" has anything to do with any of these concepts, which may explain how they can think that the geometry of a standard brain can determine "brain function".

The home page of the last author's laboratory doesn't give any hint about how to understand this article. They seem to be "run of the mill" researchers. [21 Jun 2023] Judging by a non-scientific article by the first and last authors about the Nature article, the authors really believe that they have shown that the shape determines how thinking happens. The title actually says: "A new study shows its shape is more important than its wiring". That is bonkers. I have no idea how anybody can think like that.

From the same group, here is an article in which they claim to show that the selection of analysis pipeline has a large effect on the result of the analysis of hub connectivity in diffusion MRI. They say in the abstract:

The choice of parcellation has a major influence on hub architecture, and hub connectivity is highly correlated with regional surface area in most of the assessed pipelines (Q > 0.70 in 69% of the pipelines), particularly when using weighted networks.
Since hub connectivity is dependent on the pipeline selected, obviously it is not a real measurement, but that is not their conclusion. Instead, they say:
Overall, our results demonstrate the need for prudent decision-making when processing diffusion MRI data, and for carefully considering how different processing choices can influence connectome organization.
In other words, be careful when you massage your data.

6. The 'intelligent-neuron' misconception

In many cases, the discussion about neurons is done in terms that imply that neurons are a kind of intelligent entity. For example, researchers would say that a neuron is 'interested' in specific stimuli, or that it 'codes' a specific concept. Obviously, since all that neurons do is send a signal down their axons once they have got enough activation (weighted by the strength of their connections, and taken as negative for inhibitory neurons) through their dendrites, these statements cannot be literally true.

An example taken from the abstract of a typical article (Robert O. Duncan, Thomas D. Albright, and Gene R. Stoner, The Journal of Neuroscience, August 1, 2000, 20(15):5885-5897) on the net:

These neurons thus use contextual depth-ordering information to achieve a representation of the visual scene consistent with perceptual experience.
Obviously, the neurons cannot be "using" anything. They are activated by other neurons, in a way that is ultimately dependent on the depth-ordering information.

In most (or even all) cases, the researchers actually understand this point. If pressed about it, they would assert that they use these terms in a metaphorical way, or give a similar explanation. However, this understanding seems in many cases not to have an effect on the way they think.

For example, a neuron that is 'interested' in a stimulus (i.e. becomes active when the stimulus is presented), does this because its inputs are from neurons that, on average, tend to become active when the stimulus is presented. Thus, to understand how the stimulus is processed, it is essential to understand how the connectivity of the neuron has been specified. However, researchers feel free to ignore the connectivity, and take the response of the neuron as if it is an intrinsic property, not only in their language, but also in their lines of research.

It is fair to say that investigating the connectivity of neurons in the brain, especially in higher level animals, is extremely difficult. However, without understanding the connectivity we cannot understand how the brain works. Thus the 'intelligent-neuron' misconception allows researchers to avoid the most important (and most difficult) line of research in these animals (which include us).

The other side of the question of how a neuron knows when to be active is how this information affects the rest of the system. Again, in reality this boils down to the question of connectivity, and researchers ignore it. Instead, they use meaningless terms, like saying that the neuron 'codes' the information, and that the information 'is made available to' or 'is being accessed by' other parts of the brain. These terms have no relation to neurons, and are therefore another set of 'hyper-free' variables, which can be used to plug any hole in any theory.

In effect, all the current models of human thinking are supported by this misconception, because once the connectivity of neurons is considered, it is clear that none of these models is compatible with the stochastic connectivity in the cortex.

The idea of 'grandmother cell' (by Barlow) is a direct result of this error. When considering the connectivity of a single cell, and its effects on the system, it is clear that a single cell cannot code for a concept. However, when ignoring the connectivity of the neuron, the idea does not seem as stupid as it is.

Here (Tarr and Gauthier, 2000, Nature Neuroscience, Vol 3, No 8, p.763-769) is a typical example of ignoring connectivity. The article is quite reasonable, but it manages to spend more than 5 pages of discussion of the FFA without a single mention of connections of neurons or any related word (connection, connectivity, synapse, axon, dendrite, projection etc.).

Another example is this review (doi), which discusses networks of real cortical neurons grown ex vivo. The actual patterns of connectivity that are formed (as opposed to statistics) are not discussed at all, because the authors take it as obvious that they are random, and they don't remark on any difference in this respect between the ex vivo networks and the one in the brain, because they also take it for granted that the pattern is random in the brain as well. However, a reader that does not already know that it is random in the brain can figure it out from the review only if he is very alert, and notices that the authors refer to their networks as random, but do not think that this point is worth discussing. Note that as far as the authors are concerned, they are just ignoring an uninteresting point. They don't realize that for most of their readers this is not an obvious point.

Here is a good example of the kind of errors that the intelligent-neuron misconception causes. It says:

In addition to figuring out how quickly brains can rewire themselves and accommodate new categories, he [Earl Miller] wants to find out whether the same neurons represent the same categories in different brains.

Obviously, the stochastic connectivity in the cortex rules this out, but Earl Miller hasn't worked it out. Since he obviously knows that the connectivity varies randomly between individuals, it must be because he hasn't yet considered the implications.

The language that neuroscientists use confuses them, but it confuses other people even more. I suspect that most people outside neuroscience believe that a neuron that is "interested in some stimulus" is really specialized in some way to detect this stimulus, rather than being connected to a group of neurons that on average are more active when the stimulus is present.

[20 Jan 2002] Here are some extended discussions of examples of the error.

7. Coding by Correlation of firing

Sometimes researchers suggest that some information can be coded by correlation between the firing of neurons. For time scales of tens of milliseconds, it is obvious that neurons have to fire at the right time relative to the firing of other neurons. Hence the question is whether there is information in correlations on shorter time scales, around a millisecond or less.

The first thing to realize is that for the correlation to be useful, in the way it is normally theorized to be used, it must be propagated in the cortex. By propagation I mean that activity in one part of the cortex causes neurons in different parts of the cortex to fire in synchrony. For example, if in vision correlation is used to bind attributes of the same objects in different modules, then all the neurons in these modules which process the attributes of this object have to fire in synchrony. This must be because the cause of their firing (not necessarily directly) is the same activity in the primary visual cortex, which means that the synchronous activity has been propagated from the primary visual cortex to each of these modules.

The problem with this is that when exactly a neuron in the cortex fires is determined by input from a large number of other neurons. Even if some group of these fire in total synchrony, they cannot stop the neuron from firing between their signals, and they cannot cause the neuron to fire in the dead time, which is around a millisecond. Thus to impose a pattern of firing on a neuron, the group that fires in synchrony must also include a group of inhibitory neurons, firing in the gaps between the firing of the activating neurons. The stochasticity of the connections in the cortex rules this out.

Hence a minority of the inputs to a neuron cannot impose a pattern of firing on the millisecond time scale. To impose such a pattern, a majority of the input to the neuron must be synchronized. Because of the stochasticity of connections in the cortex, that can happen only if the majority of the neurons in the area of the neuron fire in synchrony. This means that there is only one temporal pattern of activity in the region of the neuron, so the temporal pattern does not carry any useful information (because it cannot be used to distinguish between neurons which process information from different sources). Thus a neuron in the cortex can never be forced into an exact temporal pattern of activity on the millisecond time scale which actually carries any useful information.
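
The point can be illustrated with a minimal simulation (a toy leaky integrate-and-fire unit, not a model of a real cortical neuron; all the parameters are invented for the illustration). A synchronized minority of the inputs raises the firing probability at its own phase, but cannot silence the neuron between its volleys, so the output is not confined to the millisecond pattern:

    # Toy illustration, not a cortical model: 200 excitatory inputs drive a
    # leaky integrate-and-fire unit. A minority of 20 fire in perfect
    # synchrony every 25 ms; the other 180 are independent Poisson noise.
    import numpy as np

    rng = np.random.default_rng(0)
    dt_ms, steps = 1, 10000                 # 1 ms resolution, 10 s
    tau_ms, v_thresh = 20.0, 1.0            # membrane time constant, threshold
    w = 0.03                                # weight per input spike (~30 needed to fire)
    n_sync, n_noise, rate_hz = 20, 180, 40.0

    v, spikes = 0.0, []
    for t in range(steps):
        volley = n_sync if t % 25 == 0 else 0          # the synchronized minority
        noise = rng.poisson(n_noise * rate_hz * dt_ms / 1000.0)
        v += -v * dt_ms / tau_ms + w * (volley + noise)
        if v >= v_thresh:
            spikes.append(t)
            v = 0.0

    phase = np.array(spikes) % 25           # output phase relative to the volleys
    print("fraction of spikes at the volley phase:", np.mean(phase == 0))
    print("fraction at all other phases:          ", np.mean(phase != 0))

With these (assumed) numbers, only a modest fraction of the output spikes lock to the volleys, and the rest are scattered over the whole cycle; after one or two such relays nothing is left of the millisecond pattern, which is the propagation problem discussed next.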

Since a neuron cannot be forced into an exact temporal pattern that carries information, any such information-carrying temporal pattern will get corrupted any time it is passed on from one group of neurons to another. Therefore, it cannot be propagated in the cortex. Without propagation in the cortex, the temporal pattern cannot be used for data processing in the cortex.

Note that this does not mean that neurons in the cortex cannot be synchronized. This, however, does not give a way of using temporal patterns (in the millisecond range) to transfer information inside the cortex. Neurons that fire in synchrony do have a different effect than the same neurons firing asynchronously (they produce a larger increase in the probability of firing of the neurons that receive input from them), but the synchronous activity is not propagated. It is possible that such synchronous activity is used effectively as a filter. For example, it is possible that only when the activity in some region (e.g. V1) becomes synchronous does it succeed in activating neurons in other areas (e.g. higher visual areas), thus filtering out noise. However, in this case the only information that is transferred by the synchronous activity is that the pattern of activity is real.

The massive `evidence` that currently exists for the role of synchronization is all about showing that there is synchronization in the cortex, but there is nothing in this evidence about the functional significance of this synchronization. Researchers simply assume that if they show synchronization in the cortex, they have shown that it is used in the computation in the cortex (typical example: J. Fell et al, Brain Research Reviews, 42 (2003) 265-72). A further twist is a recent article in Nature, which tries to show that binding inside the cortex is done using synchrony by showing that humans are sensitive to synchrony in the visual input. This is based on the assumption that if humans are sensitive to X (synchrony in this case), it shows that X is used inside the brain. This assumption is obviously nonsense (to see that, replace X by 'light'). That researchers take this `evidence` seriously is an example of the conclusion-validation error (see in Reasoning errors).

The major mistake of the researchers that advance this idea seems to be that they do not consider the connectivity that is required to generate correlational firing. Other researchers simply do not think enough about it. Crick, for example, discusses this idea on p. 211, and there he suggests a temporal pattern with gaps of 100 milliseconds between sets of fast spikes. Obviously, that is far too slow, as the brain must be able to progress with processing much faster than that. Some researchers postulate that the correlation detection neurons get their input through some filter that somehow compensates for the dispersion of the signal, without considering how this filter can be implemented, and how it knows how much to compensate.

Here (PNAS | June 10, 2003 | vol. 100 | no. 12 | 7348-7353) is a typical paper showing how research into correlation proceeds. It contains a reasonably sound discussion, but of the wrong question: how easy would it be for somebody recording spikes of neurons to deduce anything from them about the stimuli (using or ignoring correlations). The interesting question is what effect these spikes actually have on the brain itself (i.e. on the activity of other neurons), and on this question their idea cannot tell us anything. We need to know the connectivity for that.

Here is a book chapter (The Handbook of Brain Theory and Neural Networks, Second edition, (M.A. Arbib, Ed.), 2003, pp. 1136-1143) by one of the leading "theorists" of synchronization, where the question of propagation is simply totally ignored.

Here is another typical article (Gross et al, PNAS , August 31, 2004 , vol. 101 , no. 35 , 13050-13055). They say in the abstract:

Our results reveal that communication within the fronto-parieto-temporal attentional network proceeds via transient long-range phase synchronization in the beta band.
And in the discussion they say (p. 13054):
Having ruled out these alternative accounts, we conclude that the visual-attentional network does indeed communicate by neural phase synchronization in the beta band, in the particular experiment paradigm under consideration.
But at no point in their paper do they show any evidence for communication. All they show is correlation between synchronization and behaviour, and apparently they take it for granted that this means that the synchronization is used for communication.

This case is especially ridiculous, because of the way they write "communicate by neural phase synchronization". What does that mean, in terms of neural activity? Maybe they mean that neurons firing in phase in one region cause firing in phase in other regions, but in that case it is even clearer that their data does not show it.

This article (Brecht, Singer and Engel, Amplitude and Direction of Saccadic Eye Movements Depend on the Synchronicity of Collicular Population Activity, J Neurophysiol 92: 424-432, 2004) is another ridiculous effort to establish the importance of synchronicity. In the title they talk about "Synchronicity of Collicular Population Activity", but the synchronicity is between the artificial stimulations that they themselves drive into the superior colliculus, rather than any activity that arises naturally.

Their argument is that their result shows that something is sensitive to the synchronicity, but that isn't the point. Neural systems certainly respond differently to different temporal combinations of inputs. The important question is whether they then "code" any information in synchronicity, and this article shows us nothing about this question.

An extreme example is Edelman's models, e.g. in this article (Seth et al, Cerebral Cortex 2004 14(11):1185-1199; full text). To make their model work, they give each unit a phase. That makes sense only if it is assumed that all the units oscillate at the same frequency, which is clearly nonsense for actual brains. In addition, their computation (cosine to the power 10) means that each unit has in effect a sharp peak of 'activity' (effect on other units), and close to 0 activity for most of the cycle, which is also unrealistic (see their figure 3, top line, middle and right).

On top of it, the way that the phase of each unit is calculated from its input is very unrealistic. In a real neural system, even if the inputs oscillate at the same frequency, if they are in different phases they are not going to generate activity that oscillates at the same frequency. In each cycle, the target unit will have several peaks corresponding to the different peaks of the input units, so the result will be broad activity. In the model, they get over this by effectively ignoring activity that is far in its phase from the new phase of the target unit, which is a completely artificial mechanism. Thus they force their system to use 'correlations', i.e. matching phases, by making a totally unrealistic system that is designed to include oscillation at a fixed frequency implicitly, and to be good at propagating phases.

The authors seem not to realize how unrealistic their model of activity passing is, and don't bother to discuss the question at all. Apparently they kind of realize that their model is especially good for synchronicity, because they say at the end of page 1189:

We found that this neuronal unit model facilitates the emergence of synchronously active neuronal circuits....
But they (and the reviewers and editor) seem to fail to realize that this invalidates the usefulness of the model for learning anything about the brain.

8. The significance of layers in the cortex

In the cross section of the cortex there are layers, which have somewhat different properties. Many cognitive scientists put a large emphasis on this feature. However, the neurons in the cortex do not respect the layer borders, and a typical pyramidal neuron traverses most of the layers in the cortex with its dendrites and axon tree. Thus the layers do not have any functional significance as far as information processing is concerned. They are probably just a reflection of the way that evolution 'solved the problem' of packing a huge number of neurons, with their long dendrites and axons and high metabolic requirements, into a relatively small space.

The illusion that the layers are actually meaningful was apparently helpful in the advancement of connectionist models, because many people thought that the layers in the cortex might correspond to the layers in connectionist models. This is of course total nonsense, as explained above.{1} An amusing example can be found in Braitenberg and Schüz's book "Cortex: Statistics and Geometry of Neuronal Connectivity" (Springer, Second edition, 1998, 3-540-63816-4). On page 139 they write:

Generally the dendrite trees of cortical neurons do not seem to respect borders between layers, nor do many of the axonal ramifications. Still, the stratification that is apparent to the architectonic eye looking at the cortex at magnification must correspond to something meaningful, and is in any case at the basis of the most important areal distinction.

The authors don't give here, or in the rest of the chapter, any indication why the "stratification ... must correspond to something meaningful." These authors also realize that the connectivity of neurons is important and relevant, and that it does not fit with the idea of the importance of the layers, but they still believe that the layers are important because they "must" be. That the layers are the basis for areal distinctions just means that these distinctions are also unlikely to be significant.

Here is another example: A recent review (Kenneth D Miller, David J Pinto and Daniel J Simons, Current Opinion in Neurobiology 2001, 11:488–497) starts this way:

"The cerebral cortex has a stereotyped six-layer structure (reviewed in [1]). 'Feed-forward' inputs to layer 4 of the primary sensory cortex come from the thalamus and represent the sensory periphery. Layer 4 cells project to layers 2/3, which in turn provide feed-forward input to layer 4 of the next higher cortical area and to deep layers. Deep layers then provide feedback to layers 2-4 and the thalamus as well as output to non-thalamic subcortical structures.

The penultimate sentence ("Layer 4 cells...") is simply false (unless it is interpreted to mean that layer 4 cells project to 2/3 as well as to other layers, in which case it is contentless). With few exceptions (e.g. whisker-barrels in rodents), neurons from all layers project to neurons in all layers, both in the same cortical area and in other cortical areas. Deep layers tend to send more projections outside the cortex than the other layers, but it is only a tendency. The last sentence is not incorrect, but it tries to give an impression of more organized connectivity than the real state (which is something like "deep-layer neurons project everywhere"). It is worth noting that after drawing this picture of neat organization, the rest of the review does not refer to this organization at all, which makes it clear that the authors actually know that it is not real.

And another example: In Grossberg and Williamson (2001, Cerebral Cortex 11, p. 37-58) they write (p. 37):

Cells in cortical area V1 are arranged into columns whose local circuits link together cortical layers.
This strongly implies that the layers are separate, which is of course false. It also implies that there are identifiable local circuits, which is also false. In this article, the authors continue the discussion as if these local circuits really exist, and assume that when a neuron from one layer affects a neuron in another layer, it is always in the same column (see Table 1). It is possible that these authors actually believe that this is true, as they don't give any indication otherwise. This article is also a demonstration of the misconception about 'lateral connections'.

[1 Mar 2002] Another example is this abstract of a workshop. They say:

A network perspective: Six layers, extensively recurrently interconnected. What are the computational advantages of this layered neocortical structure? To what extent do the layers interact, and is this interaction crucial to the operation of the column? Does the column work as a unit or can layers operate independently? What is the computational advantage of the immense recurrent circuitry?
I don't think you can do justice to this paragraph without calling it 'idiotic'. The network is of course a network of neurons, most of which cross layer boundaries willy-nilly. As far as the network is concerned, the layers don't exist. Somehow whoever wrote this stuff missed it.

In the summary of the workshop it is not obvious if anybody there raised the point that neurons don't respect layer boundaries. My impression is that they realized that the layers don't have computationally functional significance, but felt obliged to say something anyway.

Mountcastle (Perceptual Neuroscience: The Cerebral Cortex, 1998, Harvard University Press, 0674661885), a strong believer in columns, debunks the layer myth on p.314.

[20 May 2006] In a recent Science, in an article about "grid cells" (Conjunctive Representation of Position, Direction, and Velocity in Entorhinal Cortex, Sargolini et al, Science 5 May 2006, Vol. 312, no. 5774, pp. 758-762), they say (pp. 760-761):
These results imply that, despite the differential hippocampal and neocortical connections of superficial and deep layers of MEC [Medial Entorhinal Cortex] (10,28), all layers together operate as an integrated unit, with considerable interactions between grid cells, present in all principal cell layers, and head-direction cells, present in layers III to VI. Principal neurons from layer II to V have apical dendrites that extend up to the pial surface (14,29). Layer V cells have extensive axonal connections to the superficial layers (14,15), and local axons of layer II and III cells may contact the dendrites of deeper cells (30). This implies that visuospatial and movement-related signals from postrhinal and retrosplenial cortices (10,28) and directional signals from dorsal presubiculum (18-20, 31-34) may activate the entire MEC circuit even when the axonal input is specific to one or a few layers.
Thus they feel that they need to argue that all layers "operate as an integrated unit". This should have been totally obvious by now, but it obviously isn't.

9. The significance of regions in the cortex

The cortex can be divided into many regions, based on histochemical properties. However, these regions are not really separate sections of the cortex, and there are no borders between them. Functionally, i.e. as far as the spread of axons and dendrites is concerned, the cortex of each hemisphere is a single continuous sheet.

There is a lot of research about the projections between regions, and this research gives the impression that regions are strictly separate entities, with well defined input and output channels. This is a very misleading picture, as the projections are not actually separate entities. They are found by injecting some tracer material into one region, and checking where else in the cortex it appears. This shows that some neurons send axons between these regions, but it doesn't tell us which neurons. Even at a resolution of 1mm, the patterns are variable between individuals, so it is clear that the actual neurons vary between individuals. In addition, all regions in the cortex are connected to all of their neighbors along their 'borders'. Thus the regions cannot be considered separate units, and it is not obvious that they have any functional significance.

The illusion that the regions are separate entities is both supporting and supported by the modularity assumption (see in Reasoning errors). It also helps to keep the illusion that the brain is a computer-like machine.

10. The 'unity' of perception/awareness/consciousness

In some cases, cognitive scientists use as a part of their argument the assertion that our perception/awareness is unified. In general, the assertion of 'unity' is meaningless, because what distinguishes 'unified' perception/awareness from 'non-unified' perception/awareness is not specified. In some cases, it means that everything that we perceive simultaneously is perceived as being unitary. This again is meaningless, because it is not obvious how it could be different.

The only related phenomenon that is real is that humans tend to recall together entities which were perceived at the same time ('episodic memory', see [3.5] above). However, in most of the cases people don't look for explanations of episodic memory when they look for explanations of 'the unity of perception/awareness/consciousness'. Instead, they use it as an excuse for introducing all kinds of nonsense theories which 'solve' this 'problem'.

11. 'Vector representation'

In some cases researchers use vectorial summation (Sigma(Ai*Vi), where Ai is the activity of neuron i, and Vi is its preferred movement/stimulus direction). The resultant vector corresponds to the direction of the current movement/stimulus. The procedure on its own is a useful way of summarizing the data.

However, some researchers go on to interpret the fact that the direction of the vectorial sum agrees with the direction of movement/stimulus as evidence for various things. This is nonsense, because it is easy to show that this is a necessary result of the procedure of vectorial summation of directional cells {2}. Hence, once we know about the existence of directional cells, the agreement between the vectorial sum and the direction cannot tell us anything. This does not stop researchers from claiming that finding such agreement proves whatever they feel like.
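
A sketch of why the agreement is guaranteed (in Python; the tuning curve, cell count and noise level are arbitrary assumptions, and nothing in the argument depends on them):

    # Vectorial summation over directionally tuned cells necessarily
    # points at the stimulus direction; illustrative numbers throughout.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    preferred = rng.uniform(0.0, 2 * np.pi, n)      # Vi: preferred directions
    theta = 1.3                                     # actual stimulus direction

    # Any broad tuning around the preferred direction will do; here,
    # cosine tuning with a baseline, plus noise:
    activity = 1.0 + np.cos(preferred - theta) + rng.normal(0.0, 0.3, n)

    # Sigma(Ai*Vi), with Vi as unit vectors:
    x = np.sum(activity * np.cos(preferred))
    y = np.sum(activity * np.sin(preferred))
    print("stimulus direction:", theta)
    print("vector sum        :", np.arctan2(y, x) % (2 * np.pi))

The baseline and the noise sum to roughly zero over uniformly spread preferred directions, and the tuned component necessarily sums to a vector pointing at theta, for any theta. So the agreement tests the arithmetic of the summation, not any hypothesis about the brain.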

12. The critical period of language acquisition.

The 'critical period' refers to the observation that children of age 2-10 (approximately) find it much easier to learn language (both their first language and a second language) than older people. This is the result of three effects:

  1. Humans are born with the ability to comprehend and generate all kinds of phonemes, but during childhood (starting from birth, and maybe before) this ability is shaped by experience such that only the phonemes of the native language are easily comprehended and generated. In adults, these abilities are much less plastic, so adult learners of a new language find it especially difficult to comprehend and generate the phonemes of the new language that are not used in their native language.

    Here (Kuhl, Nature Reviews Neuroscience 5, 831-843 (2004)) the author says that this effect, which she calls native language neural commitment (NLNC), ".. might explain the closing of the 'sensitive period' for language learning".

  2. At the time of learning to speak, the child learns to understand the world, and linguistic interaction forms most of the data in this learning. As a result, the learned neural structures that correspond to concepts tend to be associated with the neural structures that correspond to the words (by Hebbian mechanisms).

    When an older person learns a language, the concepts already have neural structures, which are quite fixed. The neural structures corresponding to the words in the new language, which are determined by the perceptual input, have no relation to the former structures, and hence the association is relatively difficult to learn.

  3. In learning a new language, the learner is not only required to perform new sequences of mental and motoric operations, but is also required not to perform the old ones. The old sequences are very thoroughly learned through practice, so it is very difficult to avoid performing them. Thus older second-language learners find it very difficult not to slip back into their old language, both in terms of motoric actions (pronunciation) and mental actions (syntax structures, phrases etc.). For a young child, this is much less of a problem, because his/her language performance is much less practiced.

Note that this is different from learning in a new domain or more advanced learning in a known domain, where the person does not have to avoid already highly practiced habits. In cases where learning does require avoiding practiced habits, she/he will also find it difficult.

It is easy to see what would cause psycholinguists to miss the second reason. Most of them think of the brain as a symbolic system, and in such a system there is no problem associating an arbitrary combination of phonemes (i.e. a new word) with an existing concept. You have to realize that the brain is not a symbolic system (see in Brain-symbols) to figure out that forming such associations in large numbers is a much more difficult problem than forming them at the same time that the concepts themselves are being formed.

Chomsky (and close followers) would find it even more difficult to see this problem, because he believes that all the concepts are already formed when the child starts to learn language (see the quote from page 32).

It is more difficult to explain why psycholinguists miss the third reason. Part of the explanation is the theories of word processing in cognitive psychology, which tend to postulate a 'lexicon' that is used to identify the words. In this case, it is easy to imagine that when a person uses a second language, all he needs is to mark the lexicon of the first one as inactive.

It is even more difficult to see why psycholinguists miss the first reason, and I think the simple explanation is that they don't want to accept it, because it undermines their theories.

13. Blindsight and similar phenomena

Blindsight, in principle, is a phenomenon where a person shows some reasonably reliable response to visual stimuli that she/he is not aware of. In practice, researchers don't include phenomena that fit the description but don't fit their own line of thinking. For example, short flashes are not 'seen' by healthy individuals, yet affect their responses, and hence fit the definition of Blindsight. Researchers normally restrict the term blindsight to people with some brain damage that causes them to be blind in part or all of their visual field. Some of these patients still show reliable response to visual stimuli in the damaged area, and that is blindsight.

The explanation for this phenomenon is straightforward: The visual system is not an ideal system, so it contains some noise. Hence the building of the 'how the world is' 'picture' (== patterns of activity in the highest level of the visual system and the 'association areas') inside the brain must be insensitive to weak and/or short-lived activity, which is likely to be noise. In blindsighted people, the activity from the damaged area is too weak to affect this process (i.e. it does not affect the global pattern of activity), but is still strong enough to activate some basic concepts (mostly visual, like movement, direction, colors, but also less visual concepts). The latter activity helps them to know things about the blind field, i.e. to have blindsight. This is also what happens when some information is flashed very briefly to healthy individuals.
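
A toy signal-detection sketch of this explanation (in Python; all the numbers are invented): a weak signal that almost never crosses the 'global picture' threshold can still support reliable forced-choice guessing:

    # Toy threshold model of blindsight-like behaviour; invented numbers.
    import numpy as np

    rng = np.random.default_rng(2)
    trials = 100000
    signal = 0.8                    # weak activity from the damaged pathway
    threshold = 3.5                 # what the 'global picture' ignores as noise

    stim = signal + rng.normal(0.0, 1.0, trials)   # activity on stimulus trials
    blank = rng.normal(0.0, 1.0, trials)           # activity on blank trials

    print("crosses threshold ('sees it'):", np.mean(stim > threshold))
    print("forced choice correct:        ", np.mean(stim > blank))

With these numbers the stimulus crosses the threshold on well under 1% of trials, yet the forced-choice guess is right on about 70% of trials: reliable response without awareness, from nothing but noise plus a threshold.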

If the activity is above the threshold but close to it, it affects the 'how the world is' picture, but the effect is distorted, because some details of the activity that is normally associated with the stimulus will be missing. In brain-damaged patients the damage also adds to the distortion. The result is that activity on the threshold is always somewhat confusing.

Even though the explanation is straightforward, most of the researchers seem to fail to figure it out. In most of the cases that seems to result from being too obsessed with their own theories to consider other explanations. Sometimes it seems to be based on the assumption that the visual system is a noiseless system, and sometimes on the assumption that healthy individuals are aware of any visual stimulus that affects their thinking. Both of these assumptions are clearly nonsense.

Whatever the reasons are, the discussions of blindsight ignore the simple explanation, and instead use blindsight to support various spurious ideas, like information that is '{in}accessible to consciousness' or separate systems for explicit and implicit knowledge.

The phenomenon of reliable response to information that the person is not aware of is also documented in other syndromes, like achromatopsia, object agnosia, alexia, prosopagnosia etc. This is the same, but results from damage to a more specific process than seeing (i.e. colour discrimination, object recognition, word recognition, face recognition etc.). The damaged process is too weak and inconsistent to affect the 'how the world is' picture consistently, but can still cause some reliable effects.

14. Neglect

Neglect, where a patient ignores part of the world (typically one side), seems to be different from the syndromes in [13] above, because in this case the patient seems to be unaware of information about the neglected side even when she/he could easily move her/his gaze such that the neglected area would be visible in the intact visual field. Moreover, in some cases, there doesn't seem to be any problem with the visual input at all. A more likely explanation is that in these patients the damage causes thinking about the neglected side to be unpleasant (whatever that means in neural terms), so the patient avoids thinking about this side. The patient himself is unaware of this avoidance, because to figure it out she/he has to think about the neglected side, which she/he avoids.

Currently, most cognitive scientists seem to be completely incapable of grasping the idea that a patient (or healthy person) can avoid thinking about something without being aware of it, so they are totally mystified by neglect.

15. Connectionist models

Current connectionist models are at least 10**8 times smaller than the real system (the human brain), using the number of synapses as the indicator of size. There is no reason to believe that any property whatsoever is preserved over this gap. Hence it is not sensible to try to deduce anything about the properties of the human brain from the properties of current neural networks.
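
For the record, the arithmetic behind the 10**8 figure (both numbers are order-of-magnitude assumptions; estimates of cortical synapse counts run to 10**14-10**15):

    # Order-of-magnitude comparison; both numbers are rough assumptions.
    brain_synapses = 1e14      # human cortex, a commonly cited lower estimate
    model_weights = 1e6        # a large connectionist model
    print(f"size gap: {brain_synapses / model_weights:.0e}")   # 1e+08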

Sometimes it is argued that neural networks don't show how the brain works, but show what is possible to do with neural networks, and hence what mechanisms can be assumed to be present in the brain. This argument is wrong for two reasons at least, and each one of them would be enough to invalidate it:

  1. The current artificial networks can do only a tiny fraction of what the real network (the brain) can do, and cannot be used to show what the real network cannot do. Thus the current artificial networks cannot give us any hint of the whereabouts of the border between what is possible for the real network and what isn't.

    How big the artificial network has to be before we can use it to learn about the real one is not obvious, but it is clearly much larger than the tiny artificial networks (a few thousand connections) of today.

  2. Artificial networks are based on artificial neurons, which have different properties from real neurons. To be able to deduce anything about the real network from an artificial network (even if they are of the same order of magnitude in size), we must know which of the properties of the artificial neurons are essential to the performance of the artificial network, and check that the real neurons have these properties as well. This could be done in principle, but in practice it is rarely done. In general, researchers in this field do not feel the need to check which properties of their artificial neurons are essential to the performance of their network, and whether these properties are realistic.

16. 'Neurons are natural coincidence detectors'

Sometimes researchers say that neurons are natural coincidence detectors. This is normally associated with assuming coding by correlation (see [7] above). The problem with this is that it is a half-truth: even if the neurons in the cortex are sensitive to coincidence as they are claimed to be, there is no way to tell from their output which of their inputs are correlated.

For example, a typical pyramidal cell has input from several hundred other neurons (maybe thousands), and several tens of these are required to make it fire. However, there is nothing in the signal that the neuron outputs to tell which of the input neurons have fired. Thus the neuron may tell us that there was some coincidence in its input, but not which coincidence (i.e. the activity of which neurons is correlated). Since the latter is the useful information, neurons are not 'coincidence detectors' in any useful sense.
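
The ambiguity is easy to demonstrate. The numbers below (500 inputs, 50 needed to make the cell fire) are illustrative assumptions in line with the figures above; the point is only that completely different coincidences produce exactly the same output:

    import numpy as np
    from math import comb

    rng = np.random.default_rng(1)
    n_inputs, k_needed = 500, 50   # assumed, in line with the figures above

    def fires(active: np.ndarray) -> bool:
        """The output spike: all that downstream neurons ever see."""
        return int(active.sum()) >= k_needed

    # Two coincidences involving completely disjoint sets of input neurons.
    chosen = rng.permutation(n_inputs)
    a = np.zeros(n_inputs, bool); a[chosen[:k_needed]] = True
    b = np.zeros(n_inputs, bool); b[chosen[k_needed:2 * k_needed]] = True

    print(fires(a), fires(b))   # prints: True True (indistinguishable downstream)
    print(f"{comb(n_inputs, k_needed):.1e} different coincidences give the same spike")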

The firing of the neuron does give some information (that some of its inputs are correlated), so a network of such neurons, with the appropriate connectivity, could in principle compute where there was a coincidence. However, in the cortex, the stochastic connectivity of neurons rules out this possibility.

Inhibitory signals make it even more difficult to use coincidence detection: some correlations will not be detected at all, because inhibition prevents the neuron from firing.

The discussion above applies to the cortex, and to any tissue with complex and stochastic connectivity. In other areas there may be neurons that are effectively coincidence detectors.

This 'perspective' article (Information Coding, Barry Richmond, Science Vol 294, 21 December 2001, p. 2493) is an interesting case. The author actually realises that for the exact timing of spikes to be important, a specific connectivity is required. He writes:

Thus, it is important to know whether the connections among nearby neurons are segregated (presumably from genetic specification or learning) to preserve the information coming from individual neurons.
Unfortunately, he skips the answer to this question, which is simply no, at least in the cerebral cortex (and he certainly knows that). Instead, he brings an example from the cricket, which seems to be intended as an example of such segregation (though he doesn't explicitly say so). It is possible that he considers the answer obvious, but he is writing for an audience (readers of Science) which is mostly non-neuroscientists, and they will certainly get the impression that it is actually an open question.

17. There are Darwinian mechanisms in the brain.

Some researchers try to explain the occurrence of some changes in the brain by Darwinian evolution inside individual brains, i.e. competition between some brain features (typically patterns of activity or connectivity). That is actually nonsense, because Darwinian evolution requires reproduction, so for Darwinian evolution in an individual brain there needs to be something that reproduces in it. There isn't, as far as mental operations are concerned. Therefore, any talk about Darwinian evolution in an individual brain is plain hot air.

Some researchers may argue that by Darwinian evolution they mean selection out of random changes. This is just misleading: the reason that Darwinian evolution is invoked is that it is known to be able to generate very complex and interesting things (e.g. life forms). Hence, when it is applicable, it is (potentially) a good explanation for complex phenomena. We have no reason to believe that selection out of random changes, but without reproduction, can generate complex phenomena, so invoking such selection to explain anything in the brain is an invalid explanation. By calling such selection 'Darwinian', the speaker misleads the listener into accepting an invalid explanation (selection without reproduction) by associating it with a valid explanation (selection with reproduction) which is not applicable.

Some people argue that the new evidence about some regeneration in the adult brain supports Darwinian mechanisms. That is of course just more nonsense, because in regeneration the new neurons descend from stem cells rather than from old neurons, so it is not reproduction.

One person who does understand this problem is William Calvin, but instead of acknowledging that the lack of reproduction in the brain eliminates Darwinian explanations, he invents "observations" about the brain to support reproduction. His latest idea is the "hexagons" in the cortex, a complete fiction.

One way that is sometimes used to support evolution in the brain is to conflate evolution inside an individual brain with evolution of the brain over generations. The latter obviously happens, but it is a different thing from competition inside an individual brain.

18. Perception systems are 'well understood'.

In some cases it is claimed that we have a good understanding of perception inside the cortex, e.g.: "Although the representations and processes underlying higher functions in the brain are still largely unknown, the organization of sensory cortices has been quite well understood for about 30 years." That is simply nonsense: we know many features of the organization of these parts of the cortex, but we don't understand the organization itself, i.e. we don't know how these features are combined into perception. In fact, for most of them we don't even know if they have any significance for the perception process itself.

19. In the cortex, inhibition, short-distance connections, 'lateral connections', 'horizontal connection' and 'back-projections' are important

They obviously are. The misconception is that neuroscientists feel free to ignore them in their models, without giving any reason (sometimes the fact that we don't understand them is used as a reason, which is obviously nonsense). There is no ground, either theoretical or experimental, for this omission, but it makes for simpler models. These models, of course, are unrelated to the cortex, because in the cortex short-distance connections, inhibition, 'lateral' connections and 'back-projections' are part of its integrated circuit.

For example, Mountcastle (Perceptual Neuroscience: The Cerebral Cortex, 1998, Harvard University Press, 0674661885) describes the "prototypical Intracortical Circuits" and their activity on pages 293-294. The 'horizontal connections' are mentioned, but not any effect they may have on the activity. Mountcastle seems to seriously believe that it is possible to gain some understanding of the working of the cortex while ignoring their effects.

The situation has been improving lately, with more models including more features of the brain. Amusingly, researchers commonly say things like: "Lately, it was found that inhibitory neurons have an important role." That should really read: "Lately, we figured out how idiotic it is to ignore inhibitory neurons." There was never any reason to doubt that inhibitory neurons have an important role, and the same is true for short-distance and 'lateral' connections and 'back-projections'.

In many cases, the misconception lives on in a different guise: rather than ignoring these features, the researchers build a model specifically to explain them (e.g. a specific model for the role of 'horizontal connections', for example Grossberg and Williamson (2001, Cerebral Cortex 11, p. 37-58)). This is of course nonsense again, as in the cortex these are not separate from the integrated circuit (this is an extreme example of the "Operational modularity" error).

This article (Christos Constantinidis, Graham V. Williams & Patricia S. Goldman-Rakic, A role for inhibition in shaping the temporal flow of information in prefrontal cortex, Nature Neuroscience, February 2002 Volume 5 Number 2 pp 175 - 180) is a cute example. According to the last sentence of the abstract, the main conclusion of the article is:

These findings suggest an important role of inhibition in the cerebral cortex - controlling the timing of neuronal activities during cognitive operations and thereby shaping the temporal flow of information.
This, of course, is an interesting conclusion only to someone who either did not know that "inhibition" (i.e. inhibitory neurons) has an important role, or did not know that all neurons in the cortex "control" the timing of neuronal activities. Obviously anybody with even a marginal interest in how the brain works would know that, but somehow this is news to these authors, the editor and the referees. The best explanation is that they have some models in which inhibitory neurons don't have a role, and hence think that it is interesting to find that they do have a role.

Note that to make it sound more interesting, they use the term "controlling" where they should really say "affecting".

[5 Oct 2004] This article (Spratling and Johnson, A Feedback Model of Visual Attention, Journal of Cognitive Neuroscience, 2004;16:219-237: full text) is another typical example, modeling "feedback" as a separate entity. These authors also invent fictional connectivity for the cortex in the second paragraph of the introduction and Figure 1. It is interesting that this rubbish went past the reviewers, as it contains simply false statements. Presumably the heap of references that they bring, which is a good example of the "bogus reference" technique, deterred the reviewers from objecting.

20. Autistic Savant

There are some real autistic savants, but some of it is hype. In many cases, trivial tasks performed by an autistic person are hailed as an 'amazing skill'. The most typical of these is the ability to tell the day of the week of a given date. This is trivial, and any eight-year-old child could learn it in a few hours, if they did not find it so boring {3}. Another 'amazing skill' is calculating prime numbers, which is again simple arithmetic.
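
To make the triviality concrete, here is each 'amazing skill' in a few lines of integer arithmetic. The algorithm choices (Zeller's congruence for the day of the week, trial division for primality) are mine, for illustration:

    def day_of_week(y: int, m: int, d: int) -> str:
        """Zeller's congruence: the day of the week from simple arithmetic."""
        if m < 3:               # January and February count as months 13 and 14
            m += 12             # of the previous year
            y -= 1
        w = (d + 13 * (m + 1) // 5 + y + y // 4 - y // 100 + y // 400) % 7
        return ["Saturday", "Sunday", "Monday", "Tuesday",
                "Wednesday", "Thursday", "Friday"][w]

    def is_prime(n: int) -> bool:
        """Trial division: again nothing beyond simple arithmetic."""
        return n > 1 and all(n % k for k in range(2, int(n ** 0.5) + 1))

    print(day_of_week(2003, 12, 17))                  # prints: Wednesday
    print([n for n in range(2, 30) if is_prime(n)])   # prints: [2, 3, 5, ...]

A calendrical savant is, in effect, executing the first function in his head; the feat is memory and practice, not deep computation.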

21. Referents and meaning of words

A confusion that is rife in philosophy of mind and linguistics concerns the 'meaning' of words. Philosophers tend to ignore the fact that comprehending language involves taking the meaning of the words and evaluating it. In many cases the evaluation is a null operation (i.e. the result is the same as the input), but not always. For example, the meaning of the word 'I' is "the speaker (more generally, signifier) of this word". The evaluation of the latter gives the actual person (or maybe some other entity).

The important point about evaluation is that it requires an evaluator (normally the listener/reader), and that it depends on the properties of the evaluator, both the long-term ones (knowledge, beliefs, biases, tendencies) and the transient ones. The latter include the way the evaluator perceives the world at the current time, his location etc. For example, the evaluation of the meaning of 'I' depends on who the listener believes is talking. Normally this is obvious, but not always.

Because the result of the evaluation depends on the properties of the evaluator, it cannot be regarded as a property of the word. Philosophers, however, talk about the 'referent' of a word (by which they refer to the result of the evaluation, by some evaluator, normally an evaluator with all the relevant knowledge) as if it were a property of the word (and hence is what any user of the word refers to when he uses it). That means that what the philosophers think the user refers to is different from what the user himself 'refers to' (the result of his evaluation of the meaning of the word), and hence leads to many 'deep' problems (which are 'deep' because they are unsolvable). Many examples can be found in MITECS (you can search my comments for 'eval' to find them).
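
The distinction between the meaning of a word and the result of its evaluation can be illustrated with a toy interpreter. Everything here (the Evaluator fields, the three-word 'lexicon') is a deliberately crude assumption, just to show that the same word evaluates to different referents for different evaluators:

    from dataclasses import dataclass

    @dataclass
    class Evaluator:
        """The listener/reader; evaluation depends on these properties."""
        believed_speaker: str   # transient: who the evaluator thinks is talking
        location: str           # transient: where the evaluator is

    # The meaning of a word is a rule, not a referent.
    MEANINGS = {
        "I":    lambda ev: ev.believed_speaker,   # 'the signifier of this word'
        "here": lambda ev: ev.location,
        "dog":  lambda ev: "dog",                 # evaluation is a null operation
    }

    def evaluate(word: str, ev: Evaluator) -> str:
        return MEANINGS[word](ev)

    alice = Evaluator(believed_speaker="Bob", location="London")
    carol = Evaluator(believed_speaker="Dan", location="Paris")

    print(evaluate("I", alice), evaluate("I", carol))      # prints: Bob Dan
    print(evaluate("dog", alice), evaluate("dog", carol))  # prints: dog dog

The result of evaluate() is a fact about the word-plus-evaluator pair, not about the word alone, which is exactly what talk of 'the referent' of a word obscures.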

22. The 'Self'

The 'Self' has a long list of myths and theories associated with it. Most of these ignore several basic facts.

  1. For every individual, it is a fact of life that he/she has a body, with a clear boundary, with two unique properties: it sends signals (pain, heat, pressure etc., and also feeling tired etc.) directly rather than through some input organ, and it can be controlled (to some extent) directly.
  2. For every individual, it is a fact of life that there are some things that affect his/her emotions, thoughts and actions directly, but are totally inaccessible to others. We now know that these things are neural activities in the brain, but this knowledge is not required to figure out the existence of these things.
  3. All of the (subjective) phenomena above almost always occur together, and it feels different when they are dissociated. Such dissociation is normally associated with some failure to understand reality, as measured by independent factors (e.g. inability to respond properly to danger).

After an individual has learned these facts (not necessarily all of them), they constitute for him/her the 'Self'. Thus we should expect any individual who is intelligent enough to figure out some of these facts to think about him/her 'Self', and to regard it as unique, directly accessible, private and of overriding importance. There is no need for any additional explanation. The latter would be required only for regularities in the concept of 'Self' which cannot be explained by the facts above.

A typical example of ignoring these facts is the discussion in MITECS.

23. 'Mirror Neurons'

This is a new fad. 'Mirror neurons' are neurons with raised activity both when the individual performs some motor activity and when the individual watches another individual perform a similar motor activity. These neurons were found in monkey brains.

The name 'mirror neurons' is very misleading, because the term 'mirror' is normally associated with exact and detailed reflection. There is nothing exact or detailed or anything to do with reflection about 'mirror neurons'. The name also serves to give the impression that these neurons have some other identifying characteristics, an impression which some researchers try to promote, apparently successfully. 'Mirror neurons', at least at the moment, can only be identified by measuring the activity of many neurons in the cortex (of a monkey), until you find some neurons that behave according to the definition. There are no other known characteristics specific to these neurons. In addition, these neurons are not the same between individual animals.

Once we look at the actual observations, there is nothing really interesting. They are compatible with any theory that postulates that thinking about some concept involves activity of some neurons (in other words, all current scientific theories), plus the assumption that the monkey is intelligent enough to recognize the similarity between its body and the bodies of other animals (including humans), and hence to have concepts like 'hand movement'. The 'mirror neurons' are simply a (probably small) part of the neurons that are active when the monkey thinks about this concept. The existence of these neurons does not tell us whether the concepts are learned or not, though the fact that they are not in the same place in different monkeys suggests that they are learned.

That did not prevent some researchers from regarding 'mirror neurons' as a fundamental feature, and using them as 'supportive evidence' for all kinds of theories. E.g. Ramachandran (see my comments) claims to believe that they are as important as DNA.

Apart from plain wishful thinking, "mirror neurons" are also an instance of the "intelligent-neuron" misconception, because people attribute capabilities to them without thinking about how they could do it.

[20 May 2005] The latest Science has a very uncritical "News focus" article about mirror neurons (Miller, Science 13 May 2005: V.308, 945-947). It stands out that the question of how these neurons know when to fire is not mentioned at all in this discussion. Obviously, the question whether what is seen is the natural result of learning in a network of real neurons is not discussed either.

24. 'Visual processing streams'

The 'streams' are supposedly two streams in the cortex through which the information from the primary visual cortex flows. This is clear nonsense, because the cortex has an extremely complex connectivity, with information flowing in almost every direction from every point. This is simply because from any point there are axons going out in almost every direction.

The 'evidence' for 'streams' just shows different tendencies in different regions of the cortex, but nothing like a well-defined flow. The sole reason for introducing streams is that they are simpler and easier to model on a computer than realistic models.

In some cases, researchers use the term 'stream' and then actually discuss areas with different tendencies. In this case, it is the usage of the word that is very misleading, because the reader will still be left with the impression that there are 'streams', i.e. a well-defined flow. See for example the discussion by Motter in MITECS.

Another nice example is in the book Neuroscience: Exploring the Brain (Bear, Connors, Paradiso, 1996, ISBN: 0683004883). In general, books about brain anatomy ignore the 'streams' concept, because it doesn't have any anatomical basis. This book is an exception. On page 267 they discuss it, and show a figure of it (figure 10.27). In the figure, the two streams are depicted by two arrows through the cortex. To any critical reader it is obvious that these arrows are drawn completely arbitrarily, and that their only relation to the anatomy of the cortex is that they start in V1. Amusingly, the same figure also shows the location of area MT (V5), which is supposed to be in one of these streams, and it is obvious that it is not in the path of either of them. The discussion in the text itself first introduces the concept of 'streams', and then discusses 'areas'.

[22 Apr 2004] Another example is this article (Eagleman et al, Perceived luminance depends on temporal context, Nature, 428, 854 - 856, 22 April 2004). The last sentence of the abstract is:

This temporal context effect indicates that two parallel streams -- one adapting and one non-adapting -- encode brightness in the visual cortex.
There is nothing in the article that indicates "two parallel streams", but since the authors believe in two parallel streams to start with, they plug the concept in anyway. In the discussion at the end of the article they talk about two populations of neurons, one adapting and one not. This is less nonsensical, but it is clearly not "two parallel streams", and it is not obvious how it is supposed to differ from what we already knew before this article (or even 50 years ago). The authors also say: "This raises the possibility that the two encodings could be multiplexed into the same population of cells in V1.", which makes what they say a tautology (because there isn't, even in principle, any observation that is incompatible with it). But for the vast majority of readers, who read only the abstract, this article gives the impression that there is additional experimental support for the "two parallel streams".

Since they like the idea of streams so much, neuroscientists stick it into other places apart from the visual system. For example, in this article (Tian and Rauschecker, Processing of Frequency-Modulated Sounds in the Lateral Auditory Belt Cortex of the Rhesus Monkey, J Neurophysiol 92: 2993-3013, 2004; doi:10.1152/jn.00472.2003) they found different sensitivities to FM sweeps, and then conclude:

Together, the results support the hypothesis of parallel streams for the processing of different aspects of sounds, including auditory objects and auditory space
Obviously, their data show nothing about streams or anything parallel, but that doesn't bother them.

25. 'Representations'

A large number of cognitive scientists (I think almost all of them) take it for granted that the brain represents the world. Typically, they would say that an important question is how the brain represents various things, thus already assuming that the brain represents things. For example, Holyoak starts by writing:

Psychology is the science that investigates the representation and processing of information by complex organism.

We already know enough about the brain to know that it doesn't represent anything. The way in which information is kept in the brain (plasticity of synapses) does not represent anything. This, however, is just a fact, so cognitive scientists feel free to ignore it.

A common way of getting around the difficulty is simply to define plasticity of synapses as representation. This, however, is just a word-game, and doesn't change the fact that plasticity of synapses does not represent anything.

By now I have decided to have a separate page about representations.

26. "Cortex-inspired silicon circuit"

That is part of the title of an article in Nature (Hahnloser, Sarpeshkar, Mahowald, Douglas and Seung, 2000, Nature, V. 405, 947-951). It is difficult to describe how stupid it is to call the circuit presented in this paper "cortex-inspired". It is like claiming that a tricycle is horse-inspired, because both have a saddle and can be of different colors. The authors have the effrontery to call their units "neurons" without any qualification, even though their units have hardly any similarity to real neurons, and the whole circuit has no similarity at all to anything that is seen in the cortex (or anywhere else in a biological system).

The "news and views" item that accompanies this article (Diorio and Rao, 2000, Nature V. 405, p.891-892) goes a step further: It is not only cortex-inspired, it is actually "neural circuits in silicon". It explicitly draws analogies between the behavior of the circuit and human performance, in a totally arbitrary fashion.

Obviously, anybody with any understanding of neurobiology can see that it is garbage, but Nature is read by many people that don't know enough to realize this. For them, this article and the "News and Views" would be extremely misleading.

27. 'Intuition'

The existence of intuition is not a myth. The myth is that intuition and 'normal' ('conscious') thinking are different things. These two are exactly the same thing, except that part of the 'normal' thinking process can be recalled. In other words, if enough of a thinking process can be recalled, it is called 'normal' ('conscious') thinking, and if not it is 'intuition'.

The evidence for this is that the thinking processes are all in the cortex, without any separation between 'intuitive' and 'conscious' thinking.

The reason that most people cannot work this out is that most of them believe that they can recall their thinking processes, so there must be something special about a process that they cannot recall at all. That is trivially false, but most people find it emotionally difficult to admit that they don't know in detail how they think.

The difference in recallability has additional effects: If you can recall at least some of the thinking process, you can inspect it, report it to other people etc. As a result, a person can improve his 'conscious' thinking process directly, and teach others to think in a similar way. Thus 'conscious' thinking is easier to learn and communicate, and in general is visible. This however is just a reflection of higher recallability, rather than any underlying differences in mechanism.

28. Non-biological Self organizing systems

There are researchers who claim that we can learn something about the way the brain works by looking at various self-organizing systems, including non-biological ones. Mostly, this assertion is based on the assumption that the brain is a self-organizing system as well.

However, the process of organization of the brain (development) is fundamentally different from the non-biological self-organizing systems. The most important difference is that the development of the brain is based on a program that is coded in the genome. This means the brain can be, and is, a far more complex system than non-biological self-organizing systems can ever be. The additional complexity includes many design principles, which non-biological self-organizing systems do not have. These differences mean that these systems cannot tell us anything about the brain.

In principle, maybe it would be possible to make self-organizing systems which organize themselves using some design code like the genome, but currently we haven't got a clue even how to start doing that, and self-organizing systems experts do not even try.

[5 Dec 2003] Claims about "the brain as a dynamical system" make the same error. They also ignore the fact that the brain has underlying principles which are fundamental to its function and are not shared with other dynamical systems, and that therefore studying dynamical systems in general is not going to help in understanding the brain.

29. Visual Perception and eye movements

It is well established that to actually perceive the world, humans need relative movement between the perceived objects and the retina. Stimuli that are stationary with respect to the eye itself become invisible. However, this knowledge has passed many researchers by. Typical examples are building models for object perception that ignore eye movements (e.g. Marr [II.3] and in MITECS) and claiming that when a person looks at a static image the retinal input does not change (e.g. here).

Another stunning example is given by Whitney and Cavanagh (2000, Nature Neuroscience, Vol. 3, No. 9, p. 954). The last sentence of their abstract says (my italics): "The results indicate that motion's influence on position is not restricted to the moving object itself, and that even the positions of stationary objects are coded by mechanisms that receive input from motion-sensitive neurons." Considering the requirement for eye movements, the italicized part is obviously true and does not need any further experimental support, but the authors somehow missed it. In their discussion, they don't mention the question of eye movements and their importance in perception at all.

The reasons for this kind of amnesia seem pretty clear: it is far more difficult to build computational models of how humans use motion input to perceive the world, and therefore researchers prefer to ignore it. After all, it is just an observation, and observations should not be taken too seriously.

The reason that humans ignore stimuli that are stationary with respect to the eye is also clear: stimuli of this kind are not real. The eyes have many inhomogeneities (notably blood vessels, but also various imperfections), which cause "features" in the visual input which are not real. These features are distinguished from real features by the fact that they are stationary with respect to the eye.

It is not obvious whether the mechanism that makes stationary objects invisible is innate or learned. The problem it solves has existed since eyes started to evolve, so the mechanism may have evolved too. On the other hand, it can easily be learned by each individual during the first months of life, as it is a very consistent effect.

30."It is a great mystery how the brain can do what it does"

This statement is simply ridiculous. The implementation of each neuron would require the computational power of something like a standard PC, and the brain has tens of billions of these. With this kind of computational power, it is not a mystery how the brain can do what it does; in fact, it is obvious that this makes it possible for the brain to perform these tasks in a huge number of different ways.

In principle, the question "how does the brain do what it does" could still be an interesting question. However, the most interesting functions of the brain are done in the cerebral cortex, in which the connectivity (and hence the actual computation) varies stochastically across individuals. Thus, for most tasks, the answer is simply "each brain does it in one of the myriad ways that it is possible to do it." If you find this answer disappointing, you are not the only one: I also found it very disappointing. That, however, doesn't change the fact that it is a direct conclusion from what we know about the brain.

The interesting question that is left open is "how does the brain know what tasks it should perform?". A heap of randomly connected neurons would not be expected to generate coherent behavior. Thus there must be some mechanism that directs the cerebral cortex to acquire proper activity patterns, and it is this mechanism which is the real mystery of the brain.

30."Language is very complex and difficult to learn, and it is amazing how children learn it."

It is quite common to find such statements, not only from Chomsky and other believers in Universal Grammar, but also from other camps (for example: "Your three-year-old child may know nothing of politics or calculus, but she's a genius at learning language.").

The problem is that this statement is simply false. Language itself is pretty simple, and the only problem is that it requires remembering many arbitrary associations. The real problem is in actually using it: to generate meaningful communication or understand it, the communicating agent needs to have an understanding of the world, and this is indeed difficult.

The argument for the complexity of language is normally based on one or more of the following comparisons:

Comparing with computers: It is very difficult to teach computers language
That is clearly a consequence of the fact that computers don't understand the world. They can generate language, but not meaningful communication.
Comparing with linguists: Linguists work for years to define the grammar that a child learns in months
That is because the child does not learn the formal grammar; it learns to communicate, using some (learned) rules which give the child an approximation of the grammar. As the child grows, he/she adds rules that improve the accuracy. The linguist, on the other hand, looks for a short formal description that gets everything right (which is actually impossible, as language changes all the time).
Comparing with adults: children are much better than adults at learning language
The reasons for that are explained in section 12 above.

32. It is either genetics or the environment ("nature or nurture")

People quite often say that some features of cognition/perception (at all levels) come either from genetics or from the environment. For example, the overview page of some laboratory in Berkeley says: "What is genetically determined and what is shaped by the visual environment?". Here is another example. On page 2 it says: "Which is more important for development of our mental capabilities? Nature (genes) or Nurture (environment)?", and then answers: "Neuroscience provides unequivocal evidence: Both are important!"

This is not completely wrong, but it misses the main point, because it ignores learning processes. It may be claimed that "the environment" also means learning, but that is just a word game. People normally don't understand "environment" as including learning mechanisms, which are internal features. Thus by presenting the problem as genetics versus environment, researchers skip learning mechanisms without considering them at all.

[2 Mar 2003] A typical example appeared in the latest Nature Reviews Neuroscience. In an "Opinion" titled Beyond Phrenology, At Last (Kenneth S. Kosik, Nature Reviews Neuroscience 4, 234-239 (2003)), it says: "The environment creates a diversity of cell functions, creating a rich fabric of information from which a perception is synthesized." That is clearly nonsense, as the environment clearly does not "create .. cell functions". The author also seems genuinely to make the "intelligent-neuron" misconception. He says: "Do cells that were destined to respond to discrete visual stimuli, such as lines or colour, now become tuned to respond to another stimulus as a result of tactile input?". Obviously (from the variability between brains), cells are not "destined" to respond to any visual stimuli; they happen to respond to some stimuli due to the pattern of connectivity they happen to have. The author, a professor of neurology and neuroscience who does research on the molecular mechanisms of neural plasticity (Home Page), seems not to know that.

[2 Dec 2003] This page is another example of "it is genes or environment", but this guy does seem really to consider learning as part of the "environment".

This encyclopedia article (Plasticity of cerebral cortex in development, Newton and Sur, in Encyclopedia of Neuroscience, Adelman and Smith (eds.), Elsevier, 2003) is an impressive example. They say in the abstract:

A fundamental issue in cortical development is the degree to which "nature" (intrinsic factors such as genes or molecular gradients) or "nurture" (extrinsic factors such as sensory experience) shape the final structure and function of the cortex.
By making explicit what they mean by "nature" and "nurture", these authors make it clear that they do not consider learning a relevant concept. The term "learning" appears in the article only in the context of "learning rules", where it doesn't actually mean learning, but rules for changing the strength of synapses (it happens to be the case that in animals learning is normally implemented by strength changes, but that doesn't make such changes identical to learning, in the same way that air vibrations are not identical to language). Ironically, "learning" also appears in the name of the institute in which the authors work. That doesn't stop them from completely ignoring the concept, which means that they do not consider the question of learning mechanisms, and therefore their view of the brain and what it does is badly distorted.

32. "In order to accurately reach for an object, one must transform a sensory signal into a complex pattern of muscle activity."

This is the first sentence from the research plan of the Sensorimotor Neuroscience Laboratory, York University, Canada, and is regarded as obviously true by most researchers.

The problem with this statement is that it is clearly false. Reaching movements are clearly learned from experience. In other words, when a person performs some movement in some situation, he tries to repeat movements that gave the desired result in similar situations in the past. The judgement of similarity between situations is based on the sensory signal. Thus the actual operation is pattern matching of previous situations and motivational states to the current situation and motivational state (see the sketch below).

I am sure some people will counter-argue that the result is equivalent to some transformation. However, the equivalence is only in the result, not in the underlying mechanism, and it is the latter which is of interest in neuroscience (and in cognitive psychology in general). Thus when researchers come up with models of the mechanisms, they most commonly come up with models that perform transformations, and these are therefore unrelated to the mechanisms in the brain itself.
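
Here is a minimal sketch of the pattern-matching account mentioned above. Everything is drastically simplified as an assumption for illustration: the 'situation' is reduced to a target position, and the 'memory' to a table of past situation/movement pairs:

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy memory of experience: situations (here just target positions) and the
    # movement parameters that gave the desired result in each of them.
    past_situations = rng.uniform(-1.0, 1.0, size=(1000, 2))
    past_movements = past_situations + 0.05 * rng.standard_normal((1000, 2))

    def reach(current_situation: np.ndarray) -> np.ndarray:
        """No sensorimotor 'transformation': just repeat the past movement whose
        situation best matches the current one."""
        distances = np.linalg.norm(past_situations - current_situation, axis=1)
        return past_movements[np.argmin(distances)]

    print(reach(np.array([0.3, -0.7])))   # a movement that worked in a similar situation

From the outside, the input-output behavior of reach() looks like a coordinate transformation; inside, there is only a similarity judgement over past experience, which is the point about equivalence of results but not of mechanisms.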

Here (Stephen R. Jackson, 'Action binding': dynamic interactions between vision and touch, TRENDS in Cognitive Sciences Vol.5 No.12 December 2001) the idea is expressed too (first paragraph):

To execute goal-directed movements accurately, it is necessary that sensory signals be transformed into appropriate motor command signals. This is commonly referred to as the 'sensorimotor transformation' problem and remains one of the most important and yet least understood issues in motor control research.
And we can be confident that it will stay "one of the .. least understood issues", as long as it is assumed that there really are transformations, rather than pattern matching. This misconception can be regarded as an instance of the "Operational modularity" reasoning error.

34. Babies are attracted to faces from birth

That is quite a common myth. The research that is supposed to support it is always based on showing the baby "schematic faces", which are so simplified that they can hardly be called face-like. The reason that such simplified patterns, rather than real faces, are used is simple: experiments show that new-born babies are not attracted at all to real faces. Thus the "attraction to faces" is really an attraction to simple patterns with mirror symmetry (a hypothesis about the neural mechanisms).

For example, in this Studying how babies learn to see brochure, it says:

Several years ago, Prof. Johnson and Prof. John Morton established that even in the first half hour of life, babies will turn their head and eyes further to keep in view a schematic face - three black spots representing two eyes and a mouth - rather than other patterns (see picture on right). Since then, researchers believe that this instinctive attraction is the bait that ensures that what a baby sees in the earliest days includes faces.

That is plain stupid, because the baby is not attracted to real faces, so this instinctive attraction cannot ensure that the baby sees faces.

There seems to be an odd process here: the researchers believe that since the baby is attracted to the simple "face-like" patterns, it is also going to be attracted to real faces, even though experiments showed that babies are not attracted to real faces.

An example of how the myth is propagated is given in this report on a conference (Science, Vol 292, 13 APRIL 2001, 196-198). It says:

Newborn babies prefer to look at pictures of faces compared to other objects, supporting the idea that people are born with some predisposition to favor faces.
Here there is no mention of the fact that it is "schematic faces" that the babies prefer.

[17 Dec 2003] Quite stunningly, even researchers who are quite strong believers in learning believe this crap: see here (Tarr & Cheng, TICS V7.1, pp 23-30, 2003), section 4.2.

A related myth is that newly-born babies imitate other people immediately (within hours from birth). That is of course false, as every parent who has tried to interact with their new-born baby knows. However, some researchers (Meltzoff and Moore) published a paper claiming early imitation, and even though it is not reproducible, it is quite common for people to believe it, sometimes even cognitive scientists. This is especially odd, because if it were true, then:

  • Parents would notice it and use it when interacting with their new-born babies.
  • The medical profession would use it routinely, e.g. as a quick check of a new-born's responsiveness.
  • Researchers would use it as a standard experimental tool for probing new-born cognition.

The fact that none of the groups above (parents, the medical profession, researchers) adopted imitation should have killed the idea of early imitation, but it didn't. The fact that some people still believe this idea is an example of what I call "theory-driven blindness", where theoretical ideas (in this case, more accurately, "wishful thinking") cause people to completely ignore contradicting evidence, even though this evidence is clear-cut and overwhelming.

[16 Feb 2003] You would have thought that at least scientists in fields related to cognitive science would be aware of the status of the "newly-born imitating" myth, but that would be wrong. In the latest Nature Neuroscience there is a review of a book about imitation edited by the main promoter of infant imitation (Meltzoff). The review explicitly says:

Andrew Meltzoff revolutionized the world of developmental psychology in the 1970s when he showed that very young infants (the youngest was only 42 minutes old) have some rudimentary imitative ability.
But there is no hint that this work is not universally accepted. It seems the reviewer (mainly a brain imager, but apparently also doing some psychophysics) really believes that Meltzoff's work is real.

[28 Aug 2003] And another one from Nature Neuroscience: in a "news and views" item (Seeing after blindness, Nature Neuroscience, September 2003 Volume 6 Number 9 pp 909 - 910) the author, an "Emeritus Professor of Neuropsychology", writes:

Only recently has it been realized that babies have considerable vision within hours of birth, even mirroring their mother's expressions.
He also seems to actually believe that this is true.

[6 Oct 2008] Just found this (McKone, E., Crookes, K., & Kanwisher, N. (in press). The Cognitive and Neural Development of Face Recognition in Humans. To appear in Gazzaniga (Ed.), The Cognitive Neurosciences.). They say (p. 7 of the pdf):

In a classic result, newborns (median age 9 minutes) track an upright 'paddle face' (Figure 2a) further than versions in which the position of the internal blobs is scrambled or inverted (Goren, Sarty et al., 1975; Johnson, Dziurawiec et al., 1991). Although it has been suggested this preference could arise from general visual biases (eg., for stimuli with more elements in the upper visual field; (Simion, Macchi Cassia et al., 2003), preference only for the normal contrast polarity of a (Caucasian) face (Farroni et al., 2005) argues for a level of specificity to face-like structure. Thus, humans are born with some type of innate preference that, at the very least, attracts infants' attention to faces.
(Italics added).

As before, the fact that babies are not attracted to faces doesn't stop them from assuming that they are anyway.

34."Cell types" in the cortex

Neuroscientists distinguish in the visual cortex between "cell types" of 'simple cell' and 'complex cell', and sometimes 'hypercomplex cell'. What is commonly missing from the discussion is that the distinction is based solely on the activity of each cell. There are no other features of the cells (biochemical, morphological etc.) that are correlated with the distinction (in V1 there is some correlation with the layer that the neurons are in, because the input from the LGN arrives mainly in one layer). Since the activity of a cell is determined by its connectivity, the distinction is really between connectivities of cells, rather than between "cell types", which implies some intrinsic differences between the cells.

Sometimes other terms are used, like "cell varieties", "cell categories", "cell classes". These are somewhat better terms, because the implication of intrinsic differences is weaker, but they are still confusing. Most non-neuroscientists who hear about complex and simple cells will get the impression that there are intrinsic differences between them. I suspect that even among neuroscientists there is a large fraction who believe that there are such intrinsic differences (there is a relation between this misconception and the intelligent-neuron misconception).

Another point that is normally hidden is that the distinction is not as sharp as is implied by talking about different types (categories, classes, varieties). The responses of different cells show all the gradations from simple cell to complex cell, and the properties that are assigned to the different "types" are the properties of the most common cells, rather than of the only cells, found in the visual cortex.

[17 Nov 2004] There is now what seems like a serious suggestion that at the level of connectivity the distribution of neurons' connection "types" is actually smooth, and that the bimodality in activity patterns is a result of a non-linearity (or "threshold") between the input to the neurons and their activity pattern. See this article (Mechler F & Ringach DL, On the classification of simple and complex cells, Vision Res 2002 Apr;42(8):1017-33) and this article (Priebe et al, The contribution of spike threshold to the dichotomy of cortical simple and complex cells, Nature Neuroscience 7, 1113 - 1122 (2004); full text). Note that the non-linearity itself is the "same" (probably variable actually, but not in a way that corresponds to the complex/simple axis) in all the cells.
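
The suggestion is easy to explore numerically. The sketch below draws a smooth (unimodal) distribution of membrane-potential parameters, passes every cell through the same threshold non-linearity, and computes the standard F1/F0 modulation ratio of the resulting firing rate; all parameter values are illustrative assumptions, not fits to the cited data:

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 1.0, 1000, endpoint=False)   # one stimulus cycle

    ratios = []
    for _ in range(2000):
        v_mean = rng.uniform(0.6, 1.4)   # smooth variation in mean depolarization
        v_mod = rng.uniform(0.1, 0.7)    # smooth variation in stimulus modulation
        vm = v_mean + v_mod * np.cos(2 * np.pi * t)

        # The same threshold ('iceberg') non-linearity for every cell.
        rate = np.maximum(vm - 1.0, 0.0) ** 2
        f0 = rate.mean()
        if f0 > 0:
            f1 = 2 * abs(np.mean(rate * np.exp(-2j * np.pi * t)))
            ratios.append(f1 / f0)

    hist, _ = np.histogram(ratios, bins=20, range=(0.0, 2.0))
    print(hist)   # inspect how the smooth parameter distribution maps onto F1/F0

This kind of toy computation shows how a dichotomy in measured activity can arise without any dichotomy in the underlying cells: the threshold stretches a smooth distribution of inputs along the simple/complex axis.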

It is also worth noting that the adjectives used ('complex' and 'simple') are quite misleading. The 'simple cells' are 'simple' because there is a strong correlation between their level of activity and a simple feature of the visual input, and the 'complex cells' are 'complex' because for them no such correlation is found. The existence or otherwise of this correlation is unrelated to the complexity of the cell: there is no difference in complexity between 'complex cells' and 'simple cells', including in their connectivity.

36. Assuming that activity is a simple reflection of connectivity

While this is not made explicit, it seems that many researchers implicitly assume that the activity of neurons reflects connectivity in a simple way.

For example, they claim to show "reorganization" in some part of the brain, when the data they base the claim on shows only changes in activity. In principle, it is possible that when the researchers claim "reorganization", they mean it in a different way from the natural interpretation. There are two reasons to believe that this is not the case:

On the other hand, it seems difficult to believe that brain researchers really believe that there are changes in connectivity in all the cases that they report as "reorganization". However, I cannot think of a better explanation, and I suspect that we have a case where researchers believe both contradicting positions (that the change in activity that they see reflects a change in connectivity, and that it doesn't) at the same time.

An interesting example is in this article (Carmena et al, PLoS Biology, V1.2, 2003). The term "reorganization" appears many times, though it refers only to changes in activity; from the way they use it, however, the authors clearly don't mean just changes in activity. For example, they write:

These considerations should be taken into account to decide how much of the plasticity reported by Taylor et al. (2002) reflects real cortical reorganization instead of resulting from the improvement in the animal's behavioral performance during the task used to measure directional tuning.
The "plasticity reported by Taylor et al" is changes in activity too, but according to these authors it does not necessarily reflect "real cortical reorganization." So some changes in activity are "real reorganization" and some are not. It is also interesting that these authors apparently think that "improvement in the animal's behavioral performance" isn't reorganization. That suggests that they think that the improvement is caused in a way that is substanially different from the way that the changes that they say see are caused (which is clearly nonsense, because both are reflection of changes in strength of connections in the cortex).

36. "Functional connectivity"

[18 Nov 2002]

"Functional Connectivity" is a relatively new buzzword which is designed to mislead non-experts in neuroimaging. The term actually refers to correlated activity between brain regions. So why not call it "correlated activity"? Becuase "Functional connectivity" sounds much more interesting. But the reason that it is more interesting is that it implies false idea, i.e. that correlated activity identifies connectivity. Neuroimaging experts don't have problem with that, because they know that "Functional Connectivity" means somehing else, but others are mislead by it.

This paper (Cordes et al, AJNR 22, p.1326, 2001) is not that bad, because it says upfront that "functional connectivity" means having "high cross-correlation coefficients". It then uses "functional connectivity" as a synonym for "correlated activity", so you get this bizarre statement in the CONCLUSION of the abstract: "Functional connectivity ... is characterized predominantly by frequencies ..." (how can connectivity be characterized by frequencies?), which makes perfect sense once you replace "functional connectivity" by "correlated activity". The main problem will arise when people refer to such articles, saying something like "Functional connectivity has been shown ..." (as this article does in the first paragraph of the body), because unless the readers know what "functional connectivity" means, they will get the impression that the reference actually showed connectivity which is functional.
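
For readers outside neuroimaging, the whole content of the term can be stated in a few lines. The toy time series below stand in for two regional activity signals (an assumption for illustration); 'functional connectivity' is nothing more than the correlation coefficient at the end:

    import numpy as np

    rng = np.random.default_rng(4)

    # Two toy regional activity time series (e.g. fMRI signals, arbitrary units)
    # that share a common component; no anatomical connection is implied.
    shared = rng.standard_normal(200)
    region_a = shared + 0.5 * rng.standard_normal(200)
    region_b = shared + 0.5 * rng.standard_normal(200)

    # This single number is all that 'functional connectivity' actually denotes.
    r = np.corrcoef(region_a, region_b)[0, 1]
    print(f"'functional connectivity' = correlated activity = {r:.2f}")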

By having a significant number of papers discussing "functional connectivity", neuroscientists effectively hide the fact that they don't really research the actual connectivity. A somewhat extreme example is this abstract, where they even drop the "functional" from the title.

38. The significance of directional selectivity and similar phenomena

[10 Dec 2002]

Directional selectivity of neurons, i.e. a reasonably consistent difference in level of activity when the animal is presented with stimuli of different directions (either direction of movement or direction of an elongated shape), is a common feature of neurons in the visual areas of the mammalian cortex. There are also many examples of neurons selective for other features, e.g. neurons that show different activity for different spatial frequencies. Researchers regard these kinds of selectivities as very important findings.

How significant are these observations really? In general, an observation is significant when it significantly reduces the number of possible descriptions of the world. In this case, the observations may reduce the number of possible descriptions of the visual system. By how much? At the moment we don't have a good understanding of the visual system, but we can say with confidence that the visual input must have a substantial effect on the activity of neurons in the visual areas. So the observation of directional selectivity can rule out only systems that are largely affected by the visual input yet don't have many neurons with differential responses to the input.

The problem is that such systems don't seem likely even without the observations. Any system whose activity is substantially affected by the input will have neurons with differential activity to features in the input. It is possible that the number of neurons with differential activity that are observed is larger than we should expect from a "typical" system, but without a far deeper understanding of the visual system we cannot be sure even of that claim. Thus the observation of directional selectivity is at most of small significance, and maybe of no significance at all.
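
The claim that differential activity is what one should expect anyway can be checked with a toy computation. The sketch below generates purely random space-time filters (no design, no learning; the sizes and frequencies are arbitrary assumptions) and measures how selective each one is between the two directions of a drifting grating:

    import numpy as np

    rng = np.random.default_rng(5)
    T = X = 16   # size of each random space-time receptive field

    dsi = []
    for _ in range(5000):
        w = rng.standard_normal((T, X))   # a completely random filter
        W = np.fft.fft2(w)
        r_right = abs(W[1, 1])    # response amplitude (over phases) to a grating
        r_left = abs(W[-1, 1])    # drifting one way, and to the same grating
                                  # drifting the other way
        dsi.append(abs(r_right - r_left) / (r_right + r_left))

    print(f"median direction selectivity index: {np.median(dsi):.2f}")

Even with no structure at all, the typical random filter responds measurably more strongly to one direction than to the other, so finding direction-selective neurons rules out very little.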

I suspect that many researchers would say at this point that the observations are significant because many models are based on them. That, however, just shows that the modellers and the people that evaluate the models think that the observations are important. It would have been a significant observation if models that are based on directional selectivity were significantly better than models that are not. Currently, however, the models are so far in their performance from the real system (i.e. the mammalian visual system) that there is no way to say whether a model is really good or not. Instead, many researchers evaluate models using directional selectivity as a criterion.

So why do researchers regard these observations as so significant? The most important reason, for neuroscientists, is that these are the observations that they have succeeded in making, and scientists (like any other humans) always assign high significance to their own observations. Normally in science, progress in understanding a system makes it clearer which observations are significant and which are not, but there hasn't been any real progress in understanding the visual system (like the rest of the cognitive system). While we have much more data about the visual system now than we had when such selectivities were first observed (the 60s), we don't understand it better. Therefore there is no clear evidence that the observations of directional selectivity are not significant, only a logical argument like the one above, which never convinced any scientist.

The fact that we don't understand the system better is actually also (weak) evidence against the significance of the observations, because if they were significant they would have helped us to understand the system better. However, neuroscientists will not admit that they are not progressing, and for most of them the logic of this argument is incomprehensible anyway.

An additional explanation is the conjunction of this myth with the intelligent-neuron misconception above. For somebody who believes that a neuron with directional selectivity actually calculates the direction (in any sense), these observations are clearly significant. This explains why outsiders to neuroscience, who read the misleading neuroscientific literature and take it as written, would regard directional selectivity as significant. But I suspect that even neuroscientists, who do know that directional selectivity is the result of the network, get confused by their own rhetoric and effectively make the "intelligent neuron" mistake, even if they don't admit it.

Here is a typical example of a way to make directional selectivity look significant. Apparently they grew mice in a special visual environment ("illuminated by a strobe flash at a frequency of 8 Hz"). This reduces the directional selectivity of neurons in their cortex, and causes deficits in various tasks. They concluded that this shows that directional selectivity is critical for the other tasks, which is obviously a non sequitur, because the special visual environment may cause various kinds of deficits. It makes sense only if you make one of two assumptions:

  • Only directional selectivity can be damaged by the special visual environment (obviously nonsense).
  • Directional selectivity underlies the other tasks. That makes the evidence simply circular. I suspect that this is the way these people think.

In this article (The Information Content of Receptive Fields, Adelman et al, Neuron, Vol 40, 823-833, 13 November 2003) they actually try to evaluate the significance of the receptive field. Unfortunately, they do it in a dragonfly, so it doesn't tell us anything about the human brain. They, however, give the impression that they think that it can be generalized to all "receptive fields".

In this article (Krug et al, Comparing Perceptual Signals of Single V5/MT Neurons in Two Binocular Depth Tasks, J Neurophysiol 92: 1586-1596, 2004) they look at neurons that "show perceptually relevant signals in binocular depth tasks". They show that there is no pool of neurons that can account for the binocular depth percept. That doesn't shake their belief that perception must be based on such pools, and therefore they conclude:

The cortical circuitry must be able to make dynamic changes in the pools of neurons that underlie perceptual judgments according to the demands of the task.
The possibility that the concept of a "pool of neurons that underlie perceptual judgments" is simply bogus, which is the obvious conclusion from their data, seems to be beyond their range.

    38. "Infant habituation studies show that size and shape are perceived correctly on the first day of life."

This sentence appeared in a review of a book in Nature Neuroscience (June 2003, Volume 6, Number 6, p. 550). It is, of course, a plain lie. There are some studies that try to show that, but the only variables they can look at are looking times and sucking times, because babies don't do anything else. Thus it is always just an interpretation of longer or shorter looking times as "perceiving". In addition, the experiments with infants use only very simple objects, so the most that can be said is that with very simple visual input, babies show some differential looking time.

Apparently, in this case there isn't even such bogus evidence for infants on the first day, and it is "a slight exaggeration" by the reviewer (his words).

    39. "High and low spatial frequency information in visual images is processed by distinct neural channels."

This is the first sentence of an article in Nature Neuroscience (June 2003, Volume 6, Number 6, pp. 624-631), and again a straightforward lie. There is evidence that in the LGN there is a differential response to spatial frequency in the P and M cells, but there isn't any kind of evidence for distinct channels for different spatial frequencies in the visual cortex.

    I e-mailed the first author asking about references for this statement, and his response is a good example of the "bogus reference" technique, i.e. pretending to support a claim by references that do not support it (relying on readers not to actually read them).

    40. "Cortical neurons display two fundamental nonlinear response characteristics: contrast-set gain control (also termed contrast normalization) and response expansion (also termed half-squaring)."

This is the first sentence of an article in The Journal of Neurophysiology (Albrecht et al, The Journal of Neurophysiology, Vol. 88, No. 2, August 2002, pp. 888-913), and is a nice demonstration of a few misconceptions.

The more obvious misconception, though a minor one, is the term "cortical", which suggests that it applies to the whole cortex. Obviously (to an expert in the field), the concept is applicable only to neurons in the visual cortex. Not making this explicit may look like mere sloppy language, but I suspect it actually reflects sloppy thinking, in which conclusions from sensory areas are assumed to be automatically applicable elsewhere.

The major misconception is the word "fundamental". We don't have any kind of evidence to suggest that these characteristics are fundamental, in any sense of the word "fundamental". The only "support" for this proposition is that these characteristics are what neurophysiologists succeed in measuring, but this does not tell us that they are fundamental, or even that they have any significance.

A possible objection is that the word "fundamental" is not supposed to mean fundamental in this sentence, and should be interpreted as something like "very interesting" or "significant". However, the text doesn't give us any reason to believe that this is the intended interpretation. A more likely explanation is that the authors regard the fact that these characteristics are observed as evidence of their being fundamental, maybe with "support" from computational models that assume that they are fundamental.

    41. Rate code or temporal code?

It is quite common in neuroscience to find claims that neurons code information in rate-code or in temporal-code, and to find people arguing for or against either of these possibilities. For example, here it starts: "A critical issue in systems neuroscience is whether neurons communicate using a rate code (activity averaged over > 100 ms), or a temporal code (spike timing accurate within ~10 ms)." Note that the "criticality" of the issue is taken for granted.

The problem with this is that both concepts are applicable only to isolated neurons. For neurons inside a network (certainly the cortex, but actually most of the rest of the mammalian brain as well), there isn't any reason why a single neuron would 'code' for anything. It is the patterns of activity that 'code' for things, and it is not sensible to try to interpret the activity of a single neuron.

That obviously doesn't stop neuroscientists, who spend a lot of time and effort on elucidating whether the code of neurons in the cortex is rate-code or temporal-code. Since it is neither, they find evidence against both positions, and therefore the debate goes on without resolution.

In single-unit measurements, what is measured is the rate of some neurons, and sometimes also the correlations between some neurons. Typically, the researchers go on to interpret the rates, or the correlations, as codes for some features (typically some stimulus) that correlate with them. The large number of such interpretations gives the impression that the coding is well established, but it isn't: the researchers always have to assume that the rate or correlation codes the something with which it correlates. At the moment, there is no independent way to measure whether neurons code for something. Therefore, when researchers claim to show that some neurons code something, what they really say is that the rate of activity of these neurons correlates with the 'something'. The latter statement has some interest, but it is not the same as 'coding'.

Most neuroscientists, however, seem not to realize that. My impression is that the majority of them really believe that when they find a correlation between the activity of some neurons and some stimulus, they have found a complete description of what the neurons do. For example, the abstract of this review (Jennifer M. Groh, Reading Neural Representations, Neuron, Vol. 21, 661-664, October, 1999) says:

    Single unit recording studies in awake animals have painted an increasingly thorough portrait of the types of signals present in different brain areas.
That makes sense only if it is assumed that the correlations between single-unit activities and stimuli tell us about all, or close to all, the signals that are 'present' in the area. This assumption is clearly nonsense, as it is the pattern of activity of the network in the area that defines the 'signals' in it, and single-unit activities give us very little information about these patterns. The author of this review, apparently along with most neuroscientists, missed this point.

The ubiquity of 'coding' (i.e. of interpretations of correlations as coding) seems to be an amalgamation of the simplicity assumption, the intelligent neuron misconception, and the wish to find something that can be modelled computationally.

    42. "edge detector", "line detector" etc.

Neuroscientists call a neuron an "X detector" (where X is some feature of the input, e.g. edge, line, spatial frequency) when the neuron is much more active when the feature is present. The problem with this terminology is that it makes sense only if the word "detector" is given a very limited meaning. The normal usage of the word "detector" is for a device that specifically detects something, does it on its own, and normally doesn't do other things. With this meaning, calling the neurons in the cortex "detectors" is wrong, because the neurons are not specifically built to detect what they "detect", they don't do it on their own, and they take part in processing other information.

Neuroscientists probably know this, though by now I am not sure: I suspect at least some of them do get confused by the terminology. Outsiders are in general misled by the terminology, because you have to be a very alert reader to realize that "detector" in neuroscience doesn't mean detector in its normal sense.

This misconception is a special case of the 'intelligent-neuron' misconception, and is also related to the misconception of coding above.

[7 Dec 2004] This review (Kayser, Konrad and Konig, Processing of complex stimuli and natural scenes in the visual cortex, Current Opinion in Neurobiology 2004, 14:1-6, full text) argues against the assumption that mapping simple features explains how the visual system works (more publications from the same laboratory here). But they still think that some mathematical model will explain the visual system, apparently not realizing that the features they try to explain are learned on top of stochastic connectivity.

Almost the same can be said about this chapter (Olshausen BA, Field DJ 2004. What is the other 85% of V1 doing? Problems in Systems Neuroscience. T.J. Sejnowski, L. van Hemmen, Eds. Oxford University Press.). Section 2 contains a reasonable discussion of the methodological problems with the current main models, but the authors don't lose their belief in computational ideas, and advance their own ideas in section 3.

    43. "Orientation maps"

    [30 Oct 2003]

    The latest Nature contains this article: Spontaneously emerging cortical representations of visual attributes, Kenet et al, Nature 425, 954 - 956 (30 October 2003). The data in the article is interesting to some extent, but more interesting (in a negative sense) is the gloss that the authors and the "News and Views" item put on it.

Throughout the article, the authors refer to "Orientation maps", which are just (gross) patterns of activity that are evoked by gratings in some orientation. These patterns don't have any "mapness", and since the authors show that they also arise spontaneously, they are clearly not "orientational" either. They are patterns of activity which happen to be activated by some gratings.

That possibility seems to be beyond the range of the authors. For them, these patterns are necessarily "Orientation maps", or "cortical representations of orientations". Hence they regard the fact that the patterns appear spontaneously as very significant, and offer some speculations about them, which are ridiculous once you realize that the patterns of activity are not "orientation maps".

    The "News and Views" item Neuroscience: States of mind, Dario L. Ringach, Nature 425, 912 - 913 (30 October 2003), is worse. According to it "Until now, it was thought that these spontaneous patterns of activity were random....". That is obviously stupid, because the patterns of activity will always be dependent on the pattern of connectivity. The pattern of connectivity is almost fixed in time, and therefore the patterns of activity will span a limited range of patterns, and therefore will tend to be similar with or without input (except for inputs that skew the activity very very strongly).

It is an interesting question whether anybody really believes (or ever believed) that the spontaneous patterns are really random with respect to the patterns which are evoked by visual input. It seems hard to believe, because everybody knows that activity depends on connectivity, and that the connectivity is (almost) fixed. But it is possible that some people don't follow the logic. In this "News and Views" item, for example, the connectivity of the neurons and its effect on patterns of activity is not mentioned at all in any way. It is not mentioned even when the author explicitly lists the factors that determine the patterns of activity in the cortex (second paragraph). Thus it seems that at least this author doesn't really understand that activity depends on connectivity (even though he certainly knows it), in the sense that he doesn't consider it when it is relevant.

    The related Nature Science Update is as bad, probably because the main source is the author of the "News and Views" item.

[1 Oct 2004] Apparently, other people also believe that the spontaneous activity is random. In this article (Nature 431, 573 - 578 (30 September 2004); doi:10.1038/nature02907) they explicitly state (in the abstract) "... this variability has been considered to be noise owing to random spontaneous activity within the cortex..", so they at least believe that other people believe in the randomness of the activity. I sent an e-mail to the senior author to ask if he really thinks that people believe it is random.

44. Confusion about learning

In this commentary, the author describes an experiment where researchers measure the activity of neurons in the cortex, and use the measurements to move a robotic arm, so the monkey can move it without actually performing any movement. Then the author says:

    Amazingly, when the researchers removed the pole, the monkeys were able to make the robotic arm reach and grasp without moving their own arms, though they did have visual feedback on the robotic arm's movements. Even more surprising, the monkeys' ability to manipulate the arm through "brain control" gradually improved over time.
    Why is it "amazingly", and more importantly, why is it "surprising" that the monkeys' ability improved?

Learning to perform a movement means learning to activate the right neurons in the cortex (i.e. the ones that activate the right muscles). Learning to move the robotic arm means learning to activate the right neurons in the cortex (i.e. the ones that the researchers measure). The difference between the cases is in the way that the right neurons cause the movement, but it is not clear why that should affect the learning process.

A significant difference between the cases is that in real movement the sensors in our body give us information about the position and movement of body parts. That makes it easier to learn movements of body parts, particularly when they require fast corrections or have to be done without visual guidance. But learning clearly doesn't require movements as such, because we can acquire knowledge from seeing and hearing things.

That the author thinks it is surprising that the monkeys' ability improved suggests that he thinks that learning is not learning to activate the right neurons, but something else. With the way we understand the working of animal bodies, that is clearly nonsense. But I suspect the author, together with many other people (including neuroscientists), implicitly has an idea of learning which is different from learning-to-activate-the-right-neurons.

In the paper itself (Carmena et al, PLoS Biology, Vol. 1, No. 2, 2003) the authors don't express surprise that the monkey learns, but they also seem to have some odd ideas about learning, because they say:

    Thus, we hypothesize that, as monkeys learn to formulate a much more abstract strategy to achieve the goal of moving the cursor to a target, without moving their own arms, the dynamics of the robot arm (reflected by the cursor movements) become incorporated into multiple cortical representations. In other words, we propose that the gradual increase in behavioral performance during brain control of the BMI emerged as a consequence of a plastic reorganization whose main outcome was the assimilation of the dynamics of an artificial actuator into the physiological properties of frontoparietal neurons.

The first sentence shows that the authors take it for granted that the monkeys "learn to formulate a much more abstract strategy", which is clearly nonsense, as the monkeys clearly don't formulate anything, let alone "formulate a strategy", and there is nothing abstract about their actions (unless you regard neuronal activity as "abstract"). Their hypothesis is not much better. What does "incorporation of the dynamics of the robot arm into multiple cortical representations", or "assimilation of the dynamics of an artificial actuator into the physiological properties of frontoparietal neurons", actually mean? If it means learning to activate the right neurons, then it is just a heap of buzzwords. More likely, the authors do intend it to mean something else, which doesn't have any sensible interpretation.

It should be noted that the article itself is quite interesting, but the main achievement it presents is the ability to record from many locations in the cortex for a long time, with apparently no damage to the cortex and no degradation of the signal.

45. Significance of differences between the hemispheres of the brain

Some researchers, especially neurolinguists, attach a lot of significance to the differences between the hemispheres of the brain. In some cases, they build models with different mechanisms in the two hemispheres.

However, cases where young children suffered damage that made one of the hemispheres completely non-functional make it clear that both hemispheres have the mechanisms that are required for full human cognition. Therefore, the differences that are seen between the hemispheres in normal adults are not differences in underlying mechanisms, but in the "knowledge" and "skill" that each hemisphere has acquired during development and growth (the same way that different people acquire different skills and knowledge).

The fact that there is a strong tendency towards a specific distribution of "knowledge" and "skill" between the hemispheres (e.g. language processing tends to be in the left hemisphere) suggests that the two hemispheres differ in their tendencies. This is probably the result of some parameter(s) of the mechanisms being set differently in the two hemispheres.

I suspect some researchers would still want to argue that these differences in parameter settings lead to a different developmental program and hence to different mechanisms, but there is no evidence at all for a different developmental program or for different mechanisms in adults. The only motivation for this idea is that it makes models look nicer.

    45. "Primitives in vision"

    It is quite common for researchers to assume that there are some identifiable primitives that are used by the visual system. A typical example is in Vision by Marr (section I.6). Another example is this "Research Focus" (full text), which starts:

    A key issue for theories of perception is specifying the primitives used by the visual system to isolate and identify the objects in an image.
Thus the author takes it for granted that there are specifiable primitives that the visual system uses. This author interprets "primitives" somewhat differently from the normal usage (which is something like "the simplest elements from which larger elements are built"), to mean "features that are processed first". But even with this definition, it is not obvious that there must be specifiable features that are reliably processed first. In principle there could be none; in practice some have already been identified (colour, for example); but there is no reason to assume that vision in general can be characterized as using a specifiable set of primitives (in either meaning), and the stochastic connectivity of the cortex makes it unlikely.

    The "focus" makes it clear that the search for the primitives is in general unsuccessful beyond the few ones like colour that were identified early on, but also makes it clear that for this author that is not a reason to have any doubt about the basic assumption. I suspect he doesn't even realize that it is an assumption, and regards it as trivially obvious fact, and this is based on "computational" thinking.

    46. "Substraction mechanism"

    [23 Oct 2004]

This article (A general mechanism for perceptual decision-making in the human brain, Heekeren et al, Nature 431, 859 - 862 (14 October 2004); doi:10.1038/nature02966) promises a lot: a general mechanism. The "general mechanism" is subtraction. That may sound obviously stupid, but not to the authors, the reviewers, and the editorial person who wrote the comment about it (p. xiii, Nature 431, 14 Oct 2004):

These findings suggest that the human brain might use a simple subtraction mechanism to make decisions about high level objects.

The logic behind this idea is that showing that some feature of the behaviour of an individual correlates with the difference between some measurements in his brain shows that the feature is implemented by subtracting the measurements. That is clearly a confusion of correlation with causation, coupled with the simplicity assumption, with both errors taken to the extreme.
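To make plain how little the claimed "general mechanism" amounts to, here is a minimal sketch in Python (the function and the labels are mine, purely hypothetical stand-ins for the two pooled signals being compared):

    # A hedged sketch of a "subtraction mechanism" for perceptual
    # decisions: the decision is just the sign of the difference
    # between two pooled population signals.
    def decide(pooled_signal_a: float, pooled_signal_b: float) -> str:
        return "category A" if pooled_signal_a - pooled_signal_b > 0 else "category B"

    print(decide(0.8, 0.3))  # "category A"

That a mechanism this thin is presented as "a general mechanism for perceptual decision-making" is exactly the problem.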

    The article itself contains this pearl:

    These results provide strong evidence that perceptual decisions are made by integrating evidence from sensory processing areas.
This is pretty dumb, considering that the assertion they claim to provide evidence for is regarded as obviously true by every neuroscientist.

The data in this article is from fMRI, which means that it will never be reproduced. The worst thing, though, is the data itself: in their figure 4b they present their data, which is clearly a random spread of dots that is not completely homogeneous, and fit a line to it. From figure 4b it is obvious that this fit is completely spurious, but both the authors and the reviewers seem to have failed to notice it. Maybe they were swayed by the high "confidence" associated with the fit (p=0.004), but that just shows that the method used to compute p is a bad method, because it is absolutely clear from the figure that the fit is spurious.
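The statistical point is general, and easy to demonstrate. The p-value of a linear regression tests whether the slope is zero under the assumption that a linear model is appropriate; it says nothing about whether fitting a line is appropriate in the first place. A minimal sketch in Python (the numbers are made up purely for illustration): an inhomogeneous random scatter, with no meaningful linear relation, gives a very small p.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Two blobs of random points, offset from each other: there is no
    # linear relation within either blob, just an inhomogeneous scatter.
    x = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
    y = np.concatenate([rng.normal(0, 1, 50), rng.normal(1, 1, 50)])

    fit = stats.linregress(x, y)
    print(fit.pvalue)  # far below 0.01, although a straight line is
                       # clearly the wrong description of this data

The small p says only that the best-fitting line has a non-zero slope; it does not validate the decision to fit a line at all.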

    It is quite stunning that this kind of rubbish finds its way into Nature.

    46. "Decoding"

    [23 Oct 2004]

This article (A Population Decoding Framework for Motion Aftereffects on Smooth Pursuit Eye Movements, Gardner et al, The Journal of Neuroscience, October 13, 2004, 24(41):9035-9048; doi:10.1523/JNEUROSCI.0337-04.2004; full text) is about "decoding". What does that mean inside the brain (the cortex in this case)? In general, processing in the brain, especially in the cortex, is done by a large population of neurons activating another large population of neurons (the two populations may overlap), which then activates another large population, etc. When does any of this become "decoding"? That is not obvious at all.

    That doesn't bother the authors. The first sentence of the abstract is:

    Both perceptual and motor systems must decode visual information from the distributed activity of large populations of cortical neurons.
    So they think that whatever "decoding" means, it must happen.

In the paper, they propose their "decoding framework", which is vector averaging with some addition for adaptation. This suggests that they think that decoding means putting the information from the large population into a single variable (or a small number of variables). This is a reasonable definition of the word "decoding", but it clearly doesn't happen in the brain. Yet they think it "must" happen.

    They "verify" their model by matching it to actual movement (smooth eye pursuit). That gives two possible interpretations:
    1) They regard the eye movements as the small number of variables (i.e. the left hand side of their equations (5) and (6)), and regard all the processing up to and including the eye movements as the "decoding".
    2) They take it for granted that the eye movements are expressed somehere in the brains as small number of variables.

    Interpretation(1) doesn't really make sense, and I think that (2) is the case, and they really believe that the movement is expressed somewhere in the brain as small number of variables, even though it is clearly false.

Even if we take interpretation (1), their model is still clearly false, because the instructions to the eyes are transferred from population activity in the cortex (where they took measurements) to population activity in sub-cortical structures and then to population activity in the cranial nerves, which activate/deactivate the eye muscles. None of these steps is anything like their vector summation, so their "framework" does not correspond to anything that actually happens in the brain or out of it.

What they did show is that they can fit their "framework" to the overall result of the process that happens between the cortex and the eye movements, but that is a mathematical necessity of the way the model is set up (see comment {2} for a demonstration of this in the input case). Thus their "framework" contributes nothing to our understanding of how the brain works.

    This article (Latham and Nirenberg: Synergy, Redundancy, and Independence in Population Codes, Revisited, The Journal of Neuroscience, May 25, 2005, 25(21):5195-5206; Full text) also starts in a similar way:

    Decoding the activity of a population of neurons is a fundamental problem in neuroscience.
The problem is that "decoding", the way they interpret it, does not happen in the brain. In particular, all of their equations treat all the neurons as equal, such that switching the activity of two neurons consistently through all the operations leaves the result unchanged. In the real system, of course, which neuron is active and which is not is the most important feature (I normally refer to this as "the pattern of activity"). Thus their analysis tells us nothing about the brain and about what is significant or not in it.
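A minimal sketch in Python (with arbitrary numbers) of what is lost: any permutation-symmetric summary of the activity cannot distinguish two patterns that are shuffles of each other, even though, as patterns of activity, they are completely different and would drive completely different downstream neurons.

    import numpy as np

    # Two activity patterns that are permutations of each other:
    # the same rates, carried by different neurons.
    pattern_a = np.array([5.0, 1.0, 0.0, 2.0])
    pattern_b = np.array([0.0, 2.0, 5.0, 1.0])  # pattern_a, shuffled

    # Any permutation-symmetric summary treats them as identical ...
    print(pattern_a.sum() == pattern_b.sum())      # True
    print(sorted(pattern_a) == sorted(pattern_b))  # True

    # ... but as patterns of activity they are different, and which
    # neuron is active determines which downstream neurons are driven.
    print(np.array_equal(pattern_a, pattern_b))    # False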

It is an interesting question whether the authors realize this. They certainly know enough to understand the issue, but it somehow escapes them.

49. What people report as dreams

    [29 Oct 2005]

As far as I can tell, everybody in the field, from psychology to neuroscience, believes that what people report as dreams is a good reflection of what happened in their brain before they woke up. I haven't seen anybody saying otherwise.

The problem with this idea is that it is obviously nonsense. The mechanism that records memories (i.e. causes the changes that allow recall later) does not work when we sleep the way it works when we are awake. Thus the relation between what we 'remember' after waking up and what happened inside our brains before waking up is not obvious.

The fact that in many cases it "looks" like we remember a significant and long sequence of "actions" is probably because the 'memory' is not good enough for us to recognize it as not real. The 'memory' that is created is not real memory. It is a "record" that is "broken" in many ways, and sometimes what is broken is the markers that normally tell us how to distinguish between reality and noise (whatever these markers are).

Another reason why dreams "look" significant is that many people don't have any inhibition about telling their dreams, while most people would avoid reporting the results of their imagination unless they seem to have some value.

It seems that most people do not realize that the fact that they "remember" something doesn't mean it actually happened, even when this something is inside their own brain. We can rely on our memory while we are awake, but not when we are asleep or semi-asleep. While it is quite understandable when a lay person fails to realize the unreliable nature of memory, scientists should be able to figure this out, but it seems they don't.

    46. "The primary visual cortex (V1) is pre-wired to facilitate the extraction of behaviorally important visual features."

[1 Sep 2007] That is apparently what these researchers think (Schäfer R, Vasilaki E, Senn W (2007) Perceptual Learning via Modification of Cortical Top-Down Signals. PLoS Comput Biol 3(8): e165). They are apparently from a department of physiology, rather than neuroscience, but it is stunning that anybody can believe that. More stunning is that it passed the review process. It shows how well the stochastic connectivity of the cortex is hidden.

    =====================================================================
    =====================================================================
    Notes
    ---------------------------------------------------------------------

{1} If you don't believe anybody can be that daft, see the counter-argument in Douglas and Martin, Neocortex (1990), in Shepherd (Ed.), The Synaptic Organization of the Brain, p. 436. These authors clearly felt that the point needed explanation, so at least they believed that some people seriously believe in the layers in the cortex as layers in a connectionist network.

{2} The activity of a 'directional' cell i (Ai) can be approximated by cos(Ti), where Ti is the angle between the current movement/stimulus and the cell's preferred direction (Vi). Taking the direction of the movement/stimulus as the X-axis, the component along this direction is cos(Ti), and the component normal to it is sin(Ti). Thus the X component of the vectorial sum is
1. sigma(AiVi)x = sigma(cos(Ti) * cos(Ti)) = a positive value, proportional to the number of cells.
The Y component is
2. sigma(AiVi)y = sigma(cos(Ti) * sin(Ti)) ~= 0
[for a homogeneous distribution of Ti between -PI and PI].
Thus the vectorial sum must have a large component in the right direction and a ~0 component (very small for a large but finite number of cells) in the normal direction. Analyzing the noise, assuming the noise of each cell is equal in size to that cell's signal, and that there are 100 cells, gives an expected error of around 4 degrees. Because the researchers can select which cells to include in the summation, the expected error should be even smaller.
Note that the result does not depend on the exact nature of the approximation for Ai. Any function which fits the description of a directional cell (mathematically: symmetric around 0, anti-symmetric around pi/2, and monotonically decreasing from 0 to pi) would yield a similar result, because the value of the Y component would be ~0, while the value of the X component would be large and positive.
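The estimate above is easy to check numerically. Here is a minimal simulation in Python of exactly the setup described: 100 cells, preferred directions distributed homogeneously, and noise in each cell of the same size as that cell's signal.

    import numpy as np

    rng = np.random.default_rng(0)
    n_cells, n_trials = 100, 10000
    errors = []
    for _ in range(n_trials):
        # Preferred directions Vi, homogeneous between -pi and pi;
        # the stimulus direction is taken as the X-axis, so Ti = Vi.
        t = rng.uniform(-np.pi, np.pi, n_cells)
        signal = np.cos(t)                # noise-free Ai
        # noise of each cell equal in size to that cell's signal
        a = signal + rng.normal(0, 1, n_cells) * np.abs(signal)
        x = np.sum(a * np.cos(t))         # sigma(AiVi)x
        y = np.sum(a * np.sin(t))         # sigma(AiVi)y
        errors.append(np.degrees(np.arctan2(y, x)))
    print(np.std(errors))                 # around 4 degrees

The standard deviation of the estimated direction comes out at around 4 degrees, as calculated above.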

{3} The simplest way of doing this is to memorize the day of the week of the first day of each month. For 100 years, this gives 1200 'data-points' to memorize, which is a very easy (though time-consuming) task for somebody who finds it interesting. A slightly more sophisticated method is to remember the day of the week of March 1st of each year and the offset in days of each month, and then just add, modulo 7, the day of the week of the previous March 1st, the offset of the month, and the day of the month (March is used to avoid the problem of February with its 29 days). Even more complex is to remember one date, and then calculate the offset of each year by adding 1 day per normal year and 2 days per leap year. None of these methods is beyond the arithmetic ability or memory of an 8-year-old child, and the last method may be interesting enough that some normal children will actually learn it.
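The second method translates directly into code. A minimal sketch in Python (with 0 = Monday, and a memorized table covering only a few years, purely for illustration):

    # Cumulative day offset of each month from March 1st, modulo 7
    # (January and February are counted with the preceding March,
    # which avoids the problem of February with its 29 days).
    MONTH_OFFSET = {3: 0, 4: 3, 5: 5, 6: 1, 7: 3, 8: 6,
                    9: 2, 10: 4, 11: 0, 12: 2, 1: 5, 2: 1}

    # Memorized weekday of March 1st of each year, 0=Monday ... 6=Sunday.
    MARCH_FIRST = {1996: 4, 1997: 5, 1998: 6, 1999: 0, 2000: 2}

    def weekday(year, month, day):
        # January and February belong to the previous March-based year.
        if month in (1, 2):
            year -= 1
        return (MARCH_FIRST[year] + MONTH_OFFSET[month] + day - 1) % 7

    print(weekday(2000, 1, 1))  # 5, i.e. Saturday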

{4} Claiming to show "networks" even when the technique used cannot detect connections seems to be quite a common error. I suspect the authors would claim either that they don't mean networks, in which case they shouldn't use the term, or that it seems plausible that there are connections between the regions they find. However, this plausibility does not come from their data; it is simply a reflection of the fact that most regions in the cortex are connected to most of the other regions.
----------------------------------------------------------------------
    Yehouda Harpaz
    yh@maldoo.com
    2Dec96 - 20Dec2002
    http://human-brain.org/