Last updated 12 Aug 2004
The term 'very partial models' is used here for models that capture only a very small subset of the features of the system being modelled (which, here, is always the brain). Typical examples are models of the generation of oscillations, models of the generation of directional selectivity, etc.
Obviously, most cognitive scientists believe that these models are useful, because these models are very common. The advantage of these models, compared to fuller models, is obvious: they are easier to build. At the moment, we simply don't know how to build better models.
However, that does not necessarily make these models useful. For these models to be useful, they need to point in the right direction of research. So do they?
It requires very little effort to realize that, a priori at least, they are unlikely to point in the right direction. The brain is an integrated system, which is 'designed' (coded by the genes) to perform, and normally succeeds in performing, all of what we call mental operations. It is not 'designed' to generate oscillations or directional selectivity. These are just aspects of its 'design'.
When a model is built to generate any of these aspects, its parameters are set and optimized to get the 'best result', i.e. to match the model's behaviour with what is seen in the brain with respect to that aspect. The question is whether the resulting parameters, and other conclusions from the model, are likely to be close to what is happening in the brain.
The answer to this question is quite trivially 'no'. The reason is that the brain is a very complex system, so any relatively simple aspect of its activity (e.g. oscillations, directional selectivity) can be the result of a huge number of possible settings of the parameters (of neural structures and activity) in the brain. The probability of the parameters in the model matching the ones in the brain is therefore minute, unless something constrains the setting and optimization of the parameters in the model to the region of the parameters in the brain.
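This underdetermination is easy to demonstrate with a toy simulation. The sketch below (in Python; the model, the parameter ranges and the oscillation test are all my own illustration, not taken from any published model) samples random parameter sets for a minimal excitatory-inhibitory rate model and checks which of them oscillate. Typically the sets that pass the test are scattered over wide ranges of every parameter, so the bare fact that a model oscillates pins down almost nothing about its parameters:

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(w_ee, w_ei, w_ie, w_ii, tau_i, steps=2000, dt=0.1):
        # Toy excitatory-inhibitory rate model (hypothetical, for illustration).
        E, I = 0.1, 0.1
        trace = []
        for _ in range(steps):
            dE = (-E + np.tanh(w_ee * E - w_ei * I + 0.5)) * dt
            dI = (-I + np.tanh(w_ie * E - w_ii * I)) * dt / tau_i
            E, I = E + dE, I + dI
            trace.append(E)
        return np.array(trace)

    def oscillates(trace, tail=1000):
        # Crude test: does the trace keep fluctuating after the transient?
        return np.ptp(trace[-tail:]) > 0.05

    hits = []
    for _ in range(500):
        # sample widely different parameter sets
        params = rng.uniform([1, 1, 1, 0, 0.5], [10, 10, 10, 5, 5])
        if oscillates(simulate(*params)):
            hits.append(params)

    hits = np.array(hits)
    print(f"{len(hits)} of 500 random parameter sets oscillate")
    if len(hits):
        print("spread of the oscillating sets (min .. max per parameter):")
        for name, lo, hi in zip(["w_ee", "w_ei", "w_ie", "w_ii", "tau_i"],
                                hits.min(axis=0), hits.max(axis=0)):
            print(f"  {name}: {lo:.2f} .. {hi:.2f}")

Picking any single one of these sets because it gives the 'best result' tells us nothing about which setting, if any, the brain actually uses.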
Very partial models simply don't have these constraints. In principle, the researchers claim that they use realistic components and settings in their models. However, at the moment what we have on the brain is a huge amount of data with very little understanding of it. Thus the researchers can always pick some pieces of data that fit their model, and ignore the rest of the data. Therefore, at the moment, 'realistic components and settings' cover a much, much wider range than the actual parameters in the brain, and are far from giving enough constraints to the models.
On the other hand, the models are constrained by the resources on which they are implemented, typically computer speed, memory size and program complexity. All of these are much, much smaller than the corresponding values in the brain. Therefore, to get similar behaviour in the model, the settings of the parameters need to be different from the settings in the brain. To get around this problem, one would need to think about it specifically and find ways to compensate for the differences, but currently we don't understand the brain well enough to do that, so researchers simply ignore it. Thus the parameter settings in partial models are not only unlikely to be similar to the settings in the brain, they are guaranteed to be different. As a result, any additional conclusion from the model itself is also unlikely to be related to the brain itself.
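A concrete instance of this resource problem, using numbers of my own choosing: in 'balanced network' theory (van Vreeswijk & Sompolinsky 1996), keeping the statistics of a neuron's input roughly constant while shrinking the number of inputs per neuron requires scaling the synaptic weights up like one over the square root of the number of inputs. A model neuron with 100 inputs, standing in for a cortical neuron with on the order of 10,000 synapses, therefore needs weights around ten times stronger than any 'realistic' measured value:

    import math

    # Illustrative numbers, not measurements from any particular study.
    K_brain = 10_000   # order of magnitude of synapses onto a cortical neuron
    K_model = 100      # inputs per neuron in a small simulated network

    # Balanced-network scaling: weights ~ 1/sqrt(K), so the ratio of
    # model weights to 'brain' weights is sqrt(K_brain / K_model).
    ratio = math.sqrt(K_brain / K_model)
    print(f"model weights must be ~{ratio:.0f}x stronger")   # ~10x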
An important point to note is that what the researchers 'conclude from the model' does not necessarily come from the model itself. In many cases, the researchers use other information, hints and gut feeling to reach their conclusion, but without saying so, and in many cases probably without even being aware of it. Thus the conclusions that researchers draw from their models are not necessarily false. But they are much more likely to be false than if the models weren't used.
Some examples:
This article (Aviel et al., Neural Computation 15, 1321-1340 (2003)) analyzes the embedding of synfire chains in a balanced network. In doing this, the authors assume that there are pools of neurons that are connected so as to carry the synfire wave, while the rest of the neurons are connected randomly with respect to the pools.
The problem with these assumptions is that they are obviously false of the brain itself. While there are groups of neurons that are more connected to each other than to neurons outside the group, the connections to 'outsiders' are not going to be random with respect to the activity of the neurons inside the group. Therefore they are not going to be unbiased noise: in some cases they will hinder the "synfire wave", and in some cases they will help it. The hindering situations are not interesting as far as synfire is concerned (there simply will not be a "synfire wave"), but the "helping" cases are, because these are the ones that are going to happen.
The problem with such "helping" cases is that they are not actually different from general neural activity, and, like general activity, we have no idea how to analyze them. Assuming that the rest of the neurons contribute unbiased noise makes the analysis simple, but it means that the results bear no relation to what is happening in the brain.
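The point can be made concrete with a caricature (entirely my own construction, not the model of Aviel et al.): a chain of pools in which each neuron of the next pool fires if its feed-forward drive plus background input crosses a threshold. Shifting the background from unbiased noise to input correlated for or against the wave changes whether the wave propagates at all, which is exactly the regime the unbiased-noise assumption removes from the analysis:

    import numpy as np

    rng = np.random.default_rng(1)

    def run_chain(pools=10, width=100, threshold=30, bias=0.0, trials=50):
        # bias = 0 : background is unbiased noise (the style of assumption above)
        # bias > 0 : background correlated with the wave ("helping")
        # bias < 0 : background anti-correlated with the wave ("hindering")
        survived = 0
        for _ in range(trials):
            active = width  # first pool fully ignited
            for _ in range(pools - 1):
                # each downstream neuron gets ~0.4 spikes per active upstream neuron
                drive = rng.binomial(active, 0.4, size=width)
                noise = rng.normal(bias * active / width, 5.0, size=width)
                active = int(np.sum(drive + noise > threshold))
                if active == 0:
                    break
            survived += active > 0
        return survived / trials

    for bias in (-10.0, 0.0, 10.0):
        print(f"bias={bias:+.0f}: wave reaches the last pool "
              f"in {run_chain(bias=bias):.0%} of trials")

Even in this trivial setting, whether the background is biased for or against the wave dominates the outcome, and nothing in the unbiased-noise analysis says anything about those cases.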
======================================================================
Yehouda Harpaz