Last updated 29 Jun 2004

Why not representations in the brain

Yehouda Harpaz - 24 Jul 2003

1. Representation in the brain

[1.1] The idea that we have in our brains representations of various things is quite common, and it seems that most people take it for granted. However, from what we know about the brain, it cannot contain representations (of anything).

[1.2] First, we need to be clear about what "representation" means. In its normal use in the context of information-processing entities, a "representation" is a separable set of attributes/properties in a system, whose behaviour is linked in a reasonably simple way to some attributes/properties of the represented entity (the representee). By separable I mean that the behaviour of the representation has only a weak dependence on the behaviour of anything other than the representee. Thus a watch represents the time because the hands progress linearly with the time, and in normal circumstances nothing else affects the behaviour of the hands. Similarly, a map represents an area because directions and distances between marks on the map correspond to the directions and distances (divided by some factor) between the entities that the marks correspond to. Inside a computer, a piece of memory may represent a person by virtue of the fact that when the software performs the "get_name" operation on this piece of memory it gets the person's name, when it performs "get_address" it gets the appropriate address, etc. Note that in this case the representation is a representation with respect to some specific software, as opposed to the map and the watch, where the representation is "absolute", in the sense that any intelligent system can interpret it in the same way.
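The computer case can be sketched in a few lines of Python. The record layout and the "person" example are just an illustration; the point is that the dict represents a person only with respect to software that interprets it via operations like "get_name" and "get_address":

```python
# A "piece of memory" (here a dict) that represents a person -- but only
# with respect to software that knows how to interpret it.

def make_person(name, address):
    return {"name": name, "address": address}

def get_name(person_record):
    return person_record["name"]

def get_address(person_record):
    return person_record["address"]

record = make_person("Alice", "12 High Street")
print(get_name(record))     # Alice
print(get_address(record))  # 12 High Street
```

To any software that lacks get_name/get_address, the same bytes represent nothing; contrast this with the map and the watch, which any intelligent system can read.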

[1.3] The separability of the representation (i.e. the weakness of the dependence of its behaviour on the behaviour of the rest of the world) is what makes representations useful in information-processing systems, because it means they can be used to keep information about the real world. For example, changing the address of a person means changing the address in the piece of memory that represents the person. But that is useful only if the address changes only when the person moves, i.e. if it has zero dependence on anything apart from the real person's address.

[1.4] To be really useful inside information-processing systems, representations also need to be manipulatable: it must be possible to perform operations (like copy, compare, erase, combine) on them. Since computers and their software are designed to be useful, they are designed such that typical representations in computers are manipulatable, and most people (if not all) in effect include manipulatability in the definition of representations inside information-processing systems, in the sense that they assume that any such representation will be manipulatable. Thus the really useful definition of representation (the one that corresponds to what people think of when they use the term) should include manipulatability.
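What manipulatability means in the computer case can be made concrete with a short sketch (the person record and its field names are invented for illustration; the operations are the point):

```python
import copy

# A representation in a computer, and the operations that make it
# manipulatable: copy, compare, update, erase.
alice = {"name": "Alice", "address": "12 High Street"}

duplicate = copy.deepcopy(alice)   # copy: duplicate it as a unit
assert duplicate == alice          # compare: test two records for equality

alice["address"] = "7 Low Road"    # update: change only the represented fact
assert duplicate != alice          # the copy is unaffected (separability)

records = {"alice": alice}
del records["alice"]               # erase: remove the representation as a unit
assert "alice" not in records
```

Each operation acts on the representation individually, without disturbing anything else in the system; it is exactly this that the sets of synapse-strength changes discussed below lack.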

[1.5] The reason for saying that there are no representations in the brain is that we know that information is retained in the brain by changes in the strengths of synapses. At the biochemical level, that means an increase or decrease in the concentration of some molecules (most likely, kinds of proteins) at the synapse. When a person learns a new thing (anything), some synapses in his/her brain change their strength. Thus if we compare the brain of the person after learning X to how this brain would be if the person had not learned X, all we can see is some changes in strengths (note, by the way, that these changes will be different across individuals).

[1.6] In principle, this set of changes could be called the "representation of X", but that is clearly nonsensical. First, the behaviour of this set has a strong dependency on things other than the representee. Second, this set cannot be measured in the brain as it is: it can be measured only by comparing the brain to its state before learning. Thus it is not actually a property of the brain. Third, this set is not manipulatable: it cannot be individually copied, compared, erased, associated, combined, activated, 'chunked', transformed or manipulated in any way. In fact, this set doesn't have any interesting property at all, and it doesn't constitute any kind of (useful) entity, which is evidenced by the fact that even though neuroscientists agree that this is the way information is retained in the brain, they have not come up with a name for this set, and rarely discuss it at all.
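Two of these points, that the set of changes exists only as a comparison between two states, and that it differs across individuals, can be illustrated with a toy sketch. This is not a model of the brain; the "learning" rule, the numbers and the seeds are all made up:

```python
import random

def learn(strengths, seed):
    # Toy "learning": nudge a few connection strengths. Which synapses
    # change, and by how much, depends on this particular system (the seed).
    rng = random.Random(seed)
    after = dict(strengths)
    for synapse in rng.sample(sorted(after), 3):
        after[synapse] += rng.uniform(0.1, 0.5)
    return after

def changed_synapses(before, after):
    # The "set of changes": recoverable only by comparing two states.
    return {s for s in before if before[s] != after[s]}

rng = random.Random(0)
individual_a = {f"s{i}": rng.uniform(0.0, 1.0) for i in range(8)}
individual_b = {f"s{i}": rng.uniform(0.0, 1.0) for i in range(8)}

changes_a = changed_synapses(individual_a, learn(individual_a, seed=1))
changes_b = changed_synapses(individual_b, learn(individual_b, seed=2))
# Each set of changes exists only as a diff between two states of the same
# system, and the two individuals' sets need not coincide.
```

Given only the "after" state of either individual, nothing marks out which strengths were changed by the learning, which is the sense in which the set is not a property of the brain.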

[1.7] Hence, long-term knowledge is retained in the brain in a form that doesn't have anything that behaves like representations.

[1.8] In the short term (seconds), information is also retained by the pattern of activity in the brain (i.e. which neurons fire), and it is common to claim that the active neurons represent the transient information. First, we should note that it cannot be the neurons themselves that are the representation in this case, because they existed before the information was acquired. Thus the "representation" in this case must be the pattern of activity.

[1.9] Patterns of activity have more "presence" than the sets of strength changes, because they can be observed by neuroscientists. In some cases they have quite a weak dependence on anything other than the representee, and in these cases they may be regarded as representations in the less useful sense of the word. However, inside the brain, patterns of activity cannot be individually copied, compared, associated, combined, 'chunked', erased, observed by the brain itself or transformed by some other agent. The only kind of manipulation that happens to patterns of activity is self-caused transformation into other patterns of activity (when active neurons activate other neurons). Thus we don't have anything like representation in the useful sense in the short term either.
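The one operation that does happen, active neurons activating the neurons they connect to, can be sketched as follows (the wiring is invented for illustration):

```python
# A pattern of activity is just the set of currently firing neurons.
# The only thing that "happens" to it is that the active neurons
# activate their targets, yielding the next pattern.
wiring = {1: [2, 3], 2: [4], 3: [4, 5], 4: [], 5: []}  # hypothetical

def next_pattern(active):
    following = set()
    for neuron in active:
        following.update(wiring[neuron])
    return following

p0 = {1}
p1 = next_pattern(p0)  # {2, 3}
p2 = next_pattern(p1)  # {4, 5}
# Note what is absent: nothing here copies, compares or stores a pattern
# for the system's own use -- a pattern only turns into the next one.
```

The contrast with the computer record above is the point: the record supports copy, compare and erase as separate operations, while a pattern of activity supports only this one self-caused transformation.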

[1.10] The most common counter-argument is that representations are abstract entities. But abstract entities are useful only if their properties correspond to important properties of the corresponding real entities. Since the properties of representations do not correspond to properties of the real entities (things in the brain), representations are not useful abstractions. They are better described as fictional entities.

2. But how do we think ?!?

[2.1] That is a common "counter-argument", which many people seem to think is conclusive. The question of how we think is an interesting one (BTW, my answer is here), but on its own it is obviously not an argument. So we first need to figure out what the full argument is supposed to be. I can think of several possibilities:

[2.2] "It is easier to see how thinking works when we assume that there are representations in the brain".
The problem with this argument is that it is trivially false: with or without representations, we don't understand how thinking works. I believe the latter statement is actually agreed on by everyone, and therefore this form of the argument is clearly false. However, as long as the argument is not made explicit it is possible to miss the point, and therefore it is possible that some people intend this form when they say "But how do we think?".

[2.3] "It will be easier to see how thinking works if there are representations in the brain."
This prediction seems very likely, but it is not an argument about the likelihood of representations in the brain: the mechanism that created the thinking system (more accurately, the mechanism that created the system that creates the thinking system), i.e. evolution, doesn't give a toss about whether it is easy for us to understand it or not. Therefore whether something makes it easier or harder for us to understand thinking has no bearing at all on the probability that it is true.

That this kind of argument is not valid seems obvious, but not to everybody. I have heard people make it explicitly in conference talks.

[2.4] "It will be easier to see how thinking works if we assume that there are representations in the brain."
This prediction seems unlikely in light of what we know about the brain, and does not have any real supporting evidence. It seems to be supported mainly by intuition, but intuition clearly cannot be useful in understanding how thinking works: if intuition were useful here, then with the amount of effort that has gone into it we would by now have progressed much further in understanding how thinking works.

[2.5] It seems to me that this prediction is supported in many people's minds by the observation that there are many models of thinking based on representations and very few that are not. However, that situation is the result of the (relative) ease of constructing and understanding representational models, and, as a corollary, the ease of publishing them. Thus concluding anything from the frequencies of models is the same as concluding it from the ease of constructing models, which is clearly an invalid argument. As above, the problem is that many people think this way unconsciously, and therefore find it difficult to see that it is wrong. The "only game in town" argument by Fodor, at least the way most people interpret it, makes this error.

[2.6] There may be additional interpretations of how the question "how do we think?" is supposed to be an argument for representations, but I think a closer look at any of them will boil it down to one of the interpretations above.

3. Why the idea of representations doesn't go away

[3.1] The idea of representations continues to survive because of several factors:

  • [3.2] It is based on a long tradition in philosophy of mind and cognitive psychology, which still has an extremely strong effect. From discussions with philosophers of mind and cognitive psychologists, it seems that in many cases it is worse than just believing in representations and being unable to comprehend a thinking system without representations: many of them seem to be unable to understand the concept of "thinking system without representations." They interpret a statement like "the brain does not contain representations" as something like "the brain does not contain representations of kind X" (where X is some kind), and then evaluate the arguments and present counter-arguments as if the latter claim was made. It is possible to interpret this as an intentional demagogical manoeuvre, but my impression is that it is a result of a genuine failure to understand the original claim.

  • [3.3] Hand-in-hand with the traditional belief in representations goes contempt for neuroscience. It looks to me like most philosophers of mind and cognitive psychologists regard it as absolutely impossible that any neuroscientific evidence will show that anything they believe in is wrong. Concerning this point, it should be noted that advances in neuroscience did not play any role in the advance of connectionist explanations: it was computer-science advances in training "neural" networks (using a biologically impossible technique, backward error propagation) that made connectionism respectable in cognitive science.

  • [3.4] Computer software is almost always based on representations, so anybody who thinks that computers are a useful analogy for brains is led to believe that brains use representations. The connectionists (at least some of them) are, in an interesting sense, non-representational, and so may help to repel the idea of representations, but the effect is still weak, and even connectionists in many cases talk about how their networks "represent" things.

  • [3.5] Neuroscientists themselves are also quite strongly affected by the tradition of representations, and hence fail to make the inferences required to reach the conclusion that representations don't exist in the brain. For example, in the many neuroscience textbooks that I checked, I haven't found anybody making it explicit that (a) learning something corresponds, inside the brain, to a set of strength changes, (b) hence these sets are the only candidates for being long-term representations, and (c) therefore it is important to understand the properties of these sets. You can find cases where it is explained that the brain represents information by changes in the strengths of synapses, but the discussion stops at that point. In general, to avoid the contradictions, neuroscientists tend to talk only of short-term "representations", referring to patterns of activity, which, even though they cannot reasonably be called representations (see [1.9] above), at least have some "presence". As a result, at the moment the question of how long-term knowledge is kept in the brain is effectively ignored by neuroscientists.

[3.6] The persistence of the idea of representations is very damaging in neuroscience, because it causes great confusion. Neuroscientists keep finding "representations" of things, by calling activity that correlates with some stimulus (or something else) a "representation" of the stimulus (or something else). That gives people who don't actually do the research the impression that neuroscientists actually find representations (in the useful sense of the word), and hence a wrong impression of what is found in the brain.

4. Examples

[18 Dec 2003]

[4.1.1] Here is a book chapter (Dienes, Z. & Perner, J. (2002). A theory of the implicit nature of implicit learning. In A. Cleeremans & R. French (Eds.), Implicit Learning and Consciousness: An empirical, philosophical, and computational consensus in the making (pp. 68-92). Hove, East Sussex: Psychology Press) which tries to clarify the concepts of implicit and explicit representation. The authors start from the normal usage of the word "representation", and pretty quickly reach this conclusion (p. 2 of the pdf):

    Thus, most of modern psychology (which largely deals with representations not consciously intended to represent anything) would be undermined.
    ("psychology" here should be interpreted as "cognitive psychology". Other fields in psychology don't regard representations as central concepts.)

[4.1.2] That conclusion is obviously unacceptable to them, so they try to get around it. They settle on this definition (p. 3, top):

    According to one dominant (and we think persuasive) approach in philosophy, representations represent something precisely because of the functional role they play. Thus, on Dretske's (1988) approach, A represents B just in case A has the function of indicating B.
[4.1.3] This definition is not far from the normal usage of the word (with which, according to them, "most of modern psychology ... would be undermined"), and it does indeed undermine most of modern cognitive psychology. That is because with this definition we first need a clear physiological concept of what "function of indicating" means. Once we have this concept, the claim "A represents B" in the brain will have an experimental meaning, and only cases that meet this meaning could justifiably be called "representation".

[4.1.4] That is, of course, far from what modern cognitive psychologists do. They use the term "representation" and its derivatives completely freely, rarely with any effort to connect it to function. For example, in perception, correlation of activity with a stimulus is regarded as enough to call the activity a "representation", without any effort to justify it.

[4.1.5] The authors get over this problem by simply ignoring it, i.e. they don't try to relate their definition to the way the term is used in neuroscience, even though the following text shows that they do think neuroscience is relevant. They also don't bother to create their own clear concept of "function of indicating". The example they give immediately after the quote above is:

    For example, if a pattern of neuronal activity has the function of indicating cats, then it represents "cat". If it fires because of a skunk on a dark night, then it has misrepresented the skunk as a cat.
[4.1.6] So how do we know that the pattern of activity "has the function of indicating cats"? The authors exclude the possibility that it is because it fires in response to cats, which is what most cognitive psychologists would say, but don't give any other criterion, or even a hint of what it could be.

[4.1.7] From reading the whole chapter, it seems that the authors interpret "representation" in its useful sense like everybody else, and don't say so either because they don't manage to express it or because they realize that it doesn't fit the observations about the brain (they obviously take it for granted that there are representations in the brain).

[4.2.1] [12 Jan 2004] In the abstract of this article (Boyden and Raymond, Neuron, Vol 39, 1031-1042, 11 September 2003, full article), we find this statement:

    At the mechanistic level, new learning may either reverse the cellular events mediating the storage of old memories or mask the old memories with additional cellular changes that preserve the old cellular events in a latent form.
[4.2.2] That is, of course, trivially nonsense. The new learning is done by changes to the strengths of some synapses in the system, which cause the system to change its behaviour in the appropriate way, but there is nothing that constrains these changes to have any relation to any previous changes, including the old learning. In a very simple system, the number of possible strength changes that affect the system's behaviour would be small, so we could predict some relation between the old and new learning. But for a complex system like the brain, there are many ways of achieving the same effect, so the new learning is not going to be related to the old one in any consistent way.

[4.2.3] The question is, therefore, how come neuroscientists write such stupidity, and I think it is a result of thinking in terms of "representations", rather than changes in connection strengths. As long as you think about relations between "representations", rather than between sets of changes to the strengths of connections, it doesn't look that stupid.

[4.3.1] These lecture notes give a cute example. In the section titled "How do learning and memory take place?" it says:

    There are six features of neurons which are essential to their function in memory formation and storage:

    Input Each neuron receives chemical and electrical signals from other neurons, from sense receptors or from other chemical signals such as hormones.

    Integration The neuron integrates these signals, for example during temporal and spatial summation of excitatory and inhibitory postsynaptic potentials.

    Conduction Neurons conduct the integrated information over distances.

    Output The neuron itself sends information to other neurons using chemical messages.

    Computation One type of information is "mapped" onto another both by single neurons or by neurons in a network.

    Representation The features described above form the basis of representation of information in the nervous system.

So this pretty senior "reader" (director of the Human Cognitive Neuroscience Unit, among other things) lectures to his students that representation is an essential feature of neurons, at the same level as input of chemical and electrical signals. He does say it is based on the other features, but still presents it in the same list. Obviously he doesn't regard it as a proposition that can be questioned.

Here (Giesbrecht et al., Cerebral Cortex, May 2004, 14:521-529) is a good example of how the idea of representation is promoted by stealth. Using fMRI data (which, by the way, is probably irreproducible anyway), the authors claim that their data is "suggesting that semantic knowledge consists of multiple representational codes". That is, however, because they consider only two options: multiple representational (semantic) codes vs. a unitary semantic code. The possibility that there are no semantic codes at all is not considered at all. Using this approach, the idea of representations in the brain is "supported experimentally".

There is a little bit of progress in understanding that the brain does not represent anything, though not much.


    Yehouda Harpaz
    latest update 7 Oct 2007