THE NEURONS IN THE BRAIN CANNOT IMPLEMENT SYMBOLIC SYSTEMS 

Yehouda Harpaz 
Harlequin Ltd.
Barrington Hall,
Barrington
Cambridge CB2 5RG
UK
yh@maldoo.com
http://human-brain.org


ABSTRACT: It is widely accepted that symbolic systems are useful in
understanding the working of the brain, and there are many symbolic
models of brain functions. This is based on the assumption, commonly
implicit, that the brain itself contains a symbolic system. In this
article I challenge this belief, by showing that symbolic systems
cannot be implemented by the neurons in the brain. In particular, I
show that there is no way to implement symbol tokens in a neuronal
substrate in which the individual connections of individual neurons
(as opposed to cell populations) are not well defined (i.e. are
stochastic).

1. INTRODUCTION

1.1  It is common for cognitive scientists and researchers in related
areas to assume that human cognition can be described as a symbolic
system. For example, Eysenck and Keane (1990), in a `Student's
Handbook', list the basic characteristics of the
information-processing framework, which they say is agreed to be the
appropriate way to study human cognition (p. 9). The second and the
third items on this list are:

· The mind through which they [people] interact with the world is a
general-purpose, symbol-processing system.

· Symbols are acted on by various processes which manipulate and
transform them into other symbols which ultimately relate to things in
the external world.

(They do mention connectionism later.)

1.2  Other cognitive psychology textbooks are less blunt, but they
also tend to regard the symbolic view as the appropriate way of
looking at the brain, with connectionism as an alternative. For
example, Stillings et al. (1995) spend more than two thirds of the
chapter titled "The architecture of mind" (pp. 15-63) on a discussion
of the "symbolic paradigm", and the rest of the chapter (pp. 63-86) on
connectionism and a comparison between the two approaches. Note that
this too is an introductory book, rather than a speculative effort. In
line with all the other textbooks, they do not discuss the question of
implementation in the brain at all.

1.3  The general appeal of symbolic systems stems mainly from two
related reasons:

· They are in general relatively simple, i.e. easier to understand
  than other models.

· They are easy to implement on computers. This is because computers
  are themselves symbolic systems, in the sense that addresses in a
  computer satisfy the requirements of `symbol tokens' (see the next
  section, and the sketch below).
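
As a concrete illustration, here is a minimal sketch in Python (the
data and names are invented for illustration; Python references play
the role of machine addresses). A token can be stored in any structure
and still gives access to the same distal structure, anticipating the
requirements of the next section:

    # A `distal structure': some data sitting elsewhere in memory.
    distal = {"kind": "CAT", "legs": 4}

    # A token is anything that gives access to that structure; in
    # Python a reference serves, in machine terms it would be the
    # structure's address.
    token = distal

    # The token can be stored in arbitrary structures, as part of the
    # computation...
    working_memory = [token]
    slots = {"focus": token}

    # ...and every copy still points to the same distal structure.
    assert slots["focus"] is working_memory[0]
    print(slots["focus"]["kind"])    # -> CAT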

1.4  However, these two characteristics are irrelevant to the brain,
because brain systems are not necessarily simple or easy to implement
on computers. The processing in the brain is done by neurons (possibly
with some modulation by neuroglia), and every mechanism the brain uses
must be implemented by neurons. Therefore, models of the mechanisms of
the brain must be, in principle, implementable by neurons with the
characteristics of the neurons in the brain. In this article I will
show that symbolic systems cannot, even in principle, be implemented
by neurons with these characteristics, and hence that symbolic systems
are unlikely to be relevant to the brain.

1.5  The discussion is arranged as follows:
Section 2: The essential features of Symbolic Systems.
Section 3: A short description of the characteristics of neurons.
Section 4: A discussion of the stochastic connectivity of neurons and what
           it means.
Section 5: The main argument, explaining why Symbolic Systems cannot be
           implemented by neurons with stochastic connectivity.
Section 6: An explanation of why higher levels of implementation are
           not possible.
Section 7: An explanation of why the whole person can do symbolic
           operations, while the brain itself cannot.
Section 8: Discussion of the relevance of Symbolic Systems to the brain, 
           in the light of the previous sections.
Section 9: Implications.

2. THE ESSENTIAL FEATURES OF SYMBOLIC SYSTEMS: SYMBOL TOKENS 

2.1  Symbolic systems are based on symbols. Symbols are, according to
Newell (1990, p. 77), "Patterns that provide access to distal
structures", and "A symbol token is the occurrence of a pattern in a
structure". Thus the implementation of a symbolic system requires
tokens, which must have these two characteristics:

· They give the system a means to access a specific distal structure.
  (In the rest of the text I will say that the pattern `points to a
  structure' to mean that it allows access to it.)
· They can be stored in various structures.

2.2  For a symbolic system to work, the operation of storing a token
must satisfy two requirements:

· It must be dynamic, by which I mean that store operations happen as
  part of the computation. That means that the time scale of a store
  operation must be shorter than the time scale of the computation.

    This is an obvious requirement, because the system operates by
    moving symbol tokens around.

· It must be arbitrary, by which I mean that the built-in structure of
  the system does not specify which symbol tokens are stored where.

     This is necessary to allow the system to deal with any
     information that is not built-in.
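
A minimal sketch of these two requirements, again in Python (the slot
names and tokens are invented for illustration): each store is itself
a step of the computation (dynamic), and the built-in structure does
not dictate which token ends up where (arbitrary):

    # The built-in structure provides only empty slots; which token is
    # stored in which slot is not specified in advance (arbitrary).
    slots = {}

    def process(bindings):
        # Each store operation is a step of the computation itself
        # (dynamic): its time scale is the time scale of the
        # computation.
        for slot_name, token in bindings:
            slots[slot_name] = token

    # Input that was not built in; the system can bind any token to
    # any slot.
    process([("agent", "JOHN"), ("action", "LOVES"),
             ("patient", "MARY")])
    print(slots)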


2.3  These requirements are normally less explicit in discussions of
symbolic systems, because they are obvious. The first requirement
(dynamic) is universally taken for granted in the design and theory of
symbolic systems. The second is also normally taken for granted,
though sometimes it is expressed explicitly, e.g. in Newell and Simon
(1976, p. 116): "A symbol can be used to designate any expression
whatever". It is sometimes expressed in other terms (e.g. Newell
(1990) talks about completeness, p. 77). Nevertheless, both
requirements are essential, and the theory and all the implementations
of symbolic models use them, mostly without making them explicit. It
is these two requirements that, I will argue, cannot be implemented by
neurons in the brain. Systems which require only one of them, or
neither, are not discussed in this article.

2.4  As mentioned above, the question of the implementation of symbol
tokens is rarely even mentioned. For example, in Newell & reviewers
(1992), which discusses the symbolic system SOAR (Newell, 1990), none
of the participants raises the question of the implementation of SOAR
in real neurons. In Vera & Simon and reviewers (1993a), which is also
a multi-author discussion concerning symbolic systems, it is mentioned
briefly: "The way in which symbols are represented in the brain is not
known. Presumably they are patterns of neural arrangement of some
kind" (Vera & Simon 1993b, p. 9).

2.5  However, it is implicitly assumed that symbolic systems are
implemented in the brain, as Vera & Simon (1993c, p. 120) say: "The
symbolic theories implicitly assert that there are also symbol
structures (essentially changing patterns of neurons and neuronal
relations) in the human brain that bear one-to-one relations to the
symbols of category 4 [symbols in computer memory] in the
corresponding program." The same authors say later in the same article
(p. 126): "We are aware of no evidence (nor does Clancey provide any)
that research at neuropsychological level is in conflict with the
symbol system hypothesis". In the next four sections I will show that
our knowledge at the neurobiological level is in conflict with the
symbol system hypothesis.

3. CHARACTERISTICS OF NEURONS IN THE VERTEBRATE BRAIN 

3.1  It is not my intention to give a full description of what is
known about neurons in the brain. The interested reader can find more
in any textbook about the brain (for example, Brodal 1992, Dowling
1992, Gutnick & Mody 1995, Kandel et al. 1991, Nicholls, Martin &
Wallace 1992, Shepherd (Ed.) 1990, Shepherd 1994). Instead, I will
list those characteristics which are essential to my argument. It is
important to note that the characteristics listed here are `textbook'
knowledge, supported by a large body of consistent experimental
evidence accumulated over more than 100 years of research.

3.2  In the following text, I use the term `brain' to mean
`vertebrate brain', and the characteristics listed here are not
necessarily true for simpler brains. When numerical values are
mentioned, they are mainly based on the structure of the cerebral
cortex, which is the main site of thinking in the brain. Other parts
of the brain deviate from these values, but these deviations do not
introduce any new principles.

3.3  The characteristics which are relevant to the argument are:

3.3.1   The only way in which neurons perform computation is by activating
  (including inhibition, which I will regard as negative activation)
  other neurons. This happens through specific connections (synapses).
  There are also non-synaptic interactions between neurons, but these
  only modulate the computation.

3.3.2   Synapses are unidirectional. Each synapse is an output of the
   presynaptic neuron, and an input of the postsynaptic neuron.

3.3.3  Each neuron forms many synapses (typically thousands) with
   other neurons, both for input and output. This is not necessarily
   true for all the neurons in the brain, but it is true for the bulk
   of them. In particular, it is clear that the complex computations
   that the brain carries out are done by the neurons with many
   connections.

3.3.4  The activity of a neuron is determined by some kind of
   `integration' of its input. The word `integration' means here that
   each input has an effect on the probability of firing at any point
   in time. The effect may be of a complex form, and may interact with
   the effects of other inputs. (A toy model illustrating these
   characteristics is sketched after this list.)

3.3.5  The connectivity (by which I mean the existence or
   non-existence of connections, ignoring how strong they are) of the
   neurons does not change as part of the computation. In an adult,
   there is very little change in connectivity, and the changes that
   do occur happen on a much longer time scale than the operations of
   thinking. In other words, the pattern of neurons and the
   connections between them is effectively static during computation.

3.3.6  The strength of each synapse (the size of the effect of the
   activity of the presynaptic neuron on the activity of the
   postsynaptic neuron) does change, over a large range of time
   scales. The change in each synapse depends mainly on the activity
   of the post- and presynaptic neurons. There may also be
   dependencies on some diffusible factors, and on the activity in
   neighbouring synapses, but there is no way in which distal events
   can affect the strength of a specific synapse.
 
3.3.7  The connections that each individual neuron forms are
   stochastic (i.e. they are not specified accurately by any
   mechanism) at the level of individual neurons. This is a crucial
   characteristic, which is commonly overlooked, and it is elaborated
   in the next section.
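
These characteristics can be caricatured in a few lines of Python with
numpy (a sketch only: the population size, sparsity, threshold and
learning rule are arbitrary assumptions of mine, not claims about real
neurons):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000                                 # toy population size

    # 3.3.7: which connections exist is a fixed stochastic choice;
    # 3.3.5: the connectivity itself does not change during
    # computation.
    connected = rng.random((n, n)) < 0.01    # row = post, column = pre
    # 3.3.2: synapses are directed; weights[i, j] is the synapse
    # j -> i (inhibition appears as a negative weight).
    weights = rng.normal(0.0, 1.0, (n, n)) * connected

    def step(activity, threshold=1.0):
        # 3.3.1 and 3.3.4: computation is activation of other neurons,
        # by some `integration' of the inputs (here simply a threshold
        # on the summed drive).
        return (weights @ activity > threshold).astype(float)

    def learn(pre, post, rate=0.01):
        # 3.3.6: only synapse strengths change, and the change depends
        # only on pre- and postsynaptic activity (a local,
        # Hebbian-style rule; the real rules are more complex).
        global weights
        weights += rate * np.outer(post, pre) * connected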

4. THE STOCHASTIC NATURE OF THE CONNECTIONS OF INDIVIDUAL NEURONS

4.1  At the scale of organs, brains have a well defined structure.
Parts of the brain have a reasonably well defined structure at a
smaller scale, in the region of 1mm. The connectivity at lower scales
(low-level connectivity), however, is not well specified.

4.2  For example, when an axon from the Lateral Geniculate Nucleus
enters the visual cortex, it is directed to some location in the
cortex, to preserve the topographic mapping of the information. This
is commonly given as an example of a highly ordered connection (e.g.
Shepherd (1990), p. 395). However, in the cortex the axon branches
into an `axon tree' which spans more than 1mm squared, and is made of
hundreds of branches (Shepherd (1990), p. 396). Within this region the
neuron forms contacts with only some of the neurons, depending on the
type of the target neuron and the location of its dendrites (mostly
layer 4, in this case). This still leaves several tens of thousands of
neurons (or even more) to choose from, and the axon forms connections
with a few thousand of these. The selection of these few thousand is
essentially stochastic, by which I mean it is not related in a
consistent way to the selection that other neurons make, in the same
brain or in other brains.


4.3  The evidence for this comes from comparisons of the axon trees of
different neurons, within the same brain and from the brains of
different animals of the same species. It is clear that the structure
of the axon tree of an individual neuron is not well specified. When
it comes to comparisons between brains, or between the two hemispheres
of the same brain, it is not even possible to match individual neurons
between brains, because they are too different. Since the low-level
connectivity differs between individuals, it cannot be specified
during development (by the genes or otherwise), and hence must be
stochastic.

4.4  This conclusion tells us about more than just differences between
individual brains. It tells us that the set of neurons which will tend
to become active as a result of the activity of some specific neuron
is stochastic, i.e. uncorrelated with the set that will tend to become
active as a result of the activity of any other neuron, even in the
same brain. It follows immediately that the set of neurons that will
become active as a result of the activity of some specific set of
neurons is stochastic. In other words, the relation between some
pattern of activity |X| and the pattern of activity |Y| that it will
activate (the transformation |X| -> |Y|) is stochastic, i.e.
uncorrelated with the relation between any other pattern of activity
|X'| and the pattern of activity |Y'| that it will activate (the
transformation |X'| -> |Y'|).
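
A toy simulation can make this concrete (a sketch built on the
threshold-unit caricature of section 3; it is not a model of any real
brain). Within one `brain' the transformation |X| -> |Y| is perfectly
repeatable, but it tells us nothing about |X'| -> |Y'|, and a second
`brain' maps the same |X| to an unrelated |Y|:

    import numpy as np

    def make_brain(seed, n=400, p=0.05):
        # The wiring of each `brain' is a fixed but stochastic choice.
        rng = np.random.default_rng(seed)
        return (rng.random((n, n)) < p) * rng.normal(1.0, 0.3, (n, n))

    def propagate(w, x, threshold=2.0):
        return (w @ x > threshold).astype(float)

    rng = np.random.default_rng(42)
    x1 = (rng.random(400) < 0.1).astype(float)   # pattern |X|
    x2 = (rng.random(400) < 0.1).astype(float)   # pattern |X'|

    brain_a = make_brain(seed=1)
    y1 = propagate(brain_a, x1)

    # Repeatable within one brain: the same |X| always gives the
    # same |Y|.
    assert (propagate(brain_a, x1) == y1).all()

    # But |X'| is transformed in an uncorrelated way, and a different
    # brain maps even the same |X| to an unrelated pattern.
    y2 = propagate(brain_a, x2)
    y1_other = propagate(make_brain(seed=2), x1)
    print(np.corrcoef(y1, y1_other)[0, 1])       # near 0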

4.5  It should be noted that this lack of relation applies within the
same brain, and this is what is meant by the term `stochastic
connectivity' in this article. In particular, the term does not mean
variations over time, and does not mean lack of correlation between
relations between patterns of activity (|X| -> |Y|) and relations
between entities in the outside world.

4.6  It can be argued that even though the connectivity as defined by
the axon trees is not well defined, some process reduces the strength
of irrelevant synapses, so that they become insignificant. The problem
with this possibility, however, is that such a process requires the
information about the correct connectivity to be stored somewhere,
from where it can direct the modification of the synapses. This
information cannot be stored in the neurons themselves (because of
their stochastic connectivity), and there is no other place in the
brain, the body or outside the body where this information (i.e. which
synapses need to be eliminated to get the right connectivity) can be
stored.

4.7  Note what the argument above does not say:

1) The brain is completely unspecified.

   This is not true because at the level of cell populations there is
   order in the brain, and at that level it is possible to match
   different brains.

2) The low-level connectivity cannot be related to anything.

   This is not true because information about other things (in
   particular, about the outside world) can be used in the
   modification of synapse strength. Thus the low-level connectivity
   can be related in
   some way to the outside world.

4.8  In the Peripheral Nervous System (PNS) the individual connections
are less stochastic, but even there, in most cases, the low-level
connectivity is not well specified. For example,
normally each muscle fibre is innervated by a single axon. Initially
the fibre is innervated by several axons, and then there is a process
of selection, which causes all of these, except one, to retract. Which
axon stays is a stochastic choice, a conclusion which is again based
on comparison between individual animals.


4.9  The stochastic nature of the low-level connectivity is almost
never mentioned explicitly in neurobiological textbooks, probably
because their authors do not believe this fact has any consequences.
Instead, these books emphasize the order that exists at coarser
resolutions, often in a confusing way.

4.10  For example, Nicholls, Martin & Wallace (1992, p. 341) ask:
"What cellular mechanism enable one neuron to select another out of
myriad of choices, to grow toward it, and to form synapses?". They
then give examples of specific connectivity. However, in all the
examples which concern the vertebrate Central Nervous System (CNS),
the specificity is at the level of cell populations, rather than
individual connections. Thus the answer to the question is that in the
CNS a neuron does not "select another". Rather, it selects a region
and cell types, which still leaves quite a large spectrum for
individual choices.

4.11  Maybe the worst example is in Kandel et al. (1991). On page 20,
as part of the `principle of connectional specificity', which is
supposed to be a general property of neurons, appears this assertion:
"... (3) Each cell makes specific connections of precise and
specialized points of synaptic contacts - with some postsynaptic
target cells but not with others." The `specific connections' claim is
true in some invertebrate systems, but it is simply false when applied
to the vertebrate brain. In chapter 58, `Cell migration and axon
guidance', the author tries to support this assertion, but all the
examples of specific connectivity are from invertebrates. There are
some examples from vertebrates, but they all show connectivity between
cell populations, rather than individual cells. In addition, they are
all about the peripheral nervous system, except one example from the
spinal cord of the bullfrog. The vertebrate brain is not even
mentioned in this chapter. It is obvious that this is because there
are no examples of specific connectivity there, but the text does not
actually say this. The next chapter, `Neural Survival and synapse
formation', discusses only neuron-muscle junctions, and there is no
further discussion of the question of specific connectivity.
 
4.12  Disappointingly, this is true even in books that are explicitly
about the computational aspects of the brain, e.g. Churchland &
Sejnowski (1992), Baron (1987), Gutnick & Mody (Eds.) (1995). For
example, in Gutnick & Mody (Eds.) (1995), section III is about "The
Cortical Neuron as Part of a Network". However, only the chapter about
modelling this network (Bush & Sejnowski, 1995) mentions individual
connections, by saying that they are assumed to be random in the
simulations (p. 187). Even there the point is not actually discussed,
and none of the other chapters in this section, or in the rest of the
book, touches on it.

4.13  Even though the stochastic nature of the connectivity is not
explicitly stated, it is clear from the data presented in these books
that this is the case. One of the `distal' targets of this article is
to show the significance of this fact, and hence to convince
neurobiologists (and others) to pay attention to it.

5. SYMBOL TOKENS CANNOT BE IMPLEMENTED BY NEURONS 

5.1  How are symbol tokens implemented?

5.2  Since it must be possible to store symbol tokens in arbitrary
structures during computation (in other words, they are dynamic), they
cannot be implemented by static features. This means that symbol
tokens cannot be implemented by patterns of neurons and the
connections between them, because these are static in the time scale
of thinking. The dynamic features of the brain are the activity of
neurons, and to some extent the strength of the synapses. Thus symbol
tokens must be implemented by patterns of activity or strength of
synapses, or both.

5.3  First, let us assume that patterns of activity are used, and see
if they can fulfil the requirements for symbol tokens (section 2
above). I denote symbol tokens as |x|, |y|, and the corresponding
patterns of activity as |X|, |Y|.

5.4  To store a token in some arbitrary structure, the system would
have to take the token |x|, i.e. the pattern of activity |X|, and
propagate it to the appropriate `location'. Note that the need to
propagate the pattern is always there, no matter what the `location'
is. The propagation must happen by the pattern of activity |X|
activating another pattern of activity |Y|, because there is no other
way in which a pattern of activity can have any effect (in the
time-scale of thinking). For the transformation (|X| -> |Y|) to be
regarded as moving the symbol token |x|, the result of the
propagation, i.e. the symbol token |y| which corresponds to the
pattern of activity |Y|, must be a `copy' of |x|, i.e. it must point
to the same structure as |x| does.


5.5  However, as discussed in the previous section, in the brain this
propagation is stochastic. This means that even if the propagation is
successful for some symbol token |x|, i.e. the transformation (|X| ->
|Y|) causes |y| to point to the same structure as |x| does, it will
not work for any other symbol token |x'|, which will propagate in a
different way (|X'| -> |Y'|).

5.6  The stochastic propagation of patterns of activity is the most
crucial point to grasp in the whole argument. It is worth noting here
that this is in stark contrast with the situation in current
artificial devices. In these, the connectivity is defined exactly and
completely, and the relation between a pattern of activity |X| and the
pattern of activity that it will activate |Y| is well-defined for all
|X| at any location. As a result, it is possible to propagate any
pattern, to any place, without any restriction, and without changing
the pattern itself.

5.7  The other crucial point to note is that the stochastic
propagation is not a noise that is added to the signal. It is the
signal itself that is being transformed stochastically. This contrasts
with noisy channels in artificial devices, where the signal is not
transformed, but is contaminated by noise.
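
The difference can be shown in a toy computation (illustrative only):
under additive noise the original pattern survives and can be
recovered by averaging repeated transmissions; under a fixed
stochastic transformation the output is delivered perfectly reliably,
yet is unrelated to the input:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    x = rng.normal(0.0, 1.0, n)              # the original pattern

    # Noisy channel: the signal is contaminated, not transformed, so
    # averaging repeated transmissions recovers it.
    noisy = [x + rng.normal(0.0, 1.0, n) for _ in range(500)]
    recovered = np.mean(noisy, axis=0)
    print(np.corrcoef(x, recovered)[0, 1])   # close to 1

    # Stochastic transformation: a fixed random mapping, applied with
    # no noise at all -- yet the output does not resemble the input,
    # and no amount of averaging will bring the input back.
    w = rng.normal(0.0, 1.0, (n, n)) / np.sqrt(n)
    y = w @ x
    print(np.corrcoef(x, y)[0, 1])           # close to 0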

5.8  The fact that the propagation of arbitrary data in a computer is
not a problem is probably the reason that most people intuitively
assume that there is no problem in implementing symbolic systems in
the brain. The problem with this intuition is that it does not take
the stochastic nature of the low-level connectivity in the brain into
account.

5.9  It can be argued that the way to propagate symbol tokens is
learned, or acquired by some other process. This, however, would
require some part of the brain to know (in some sense) in advance the
appropriate transformation (|X| -> |Y|) for each |X| between each pair
of locations, so that it can direct the acquisition process. In a
system with stochastic low-level connectivity, there is no way to know
these transformations in advance, so this is not a possible
explanation.

5.10  Hence there is no way to propagate patterns of activity to
arbitrary locations, so they cannot be used to implement symbol
tokens.

5.11  The other possibility for moving symbol tokens, by propagating
synapse strengths, is also eliminated by the argument above, because
synapse strengths can be propagated only by patterns of activity, so
this propagation is stochastic too.

5.12  The `cop-out' solution, of regarding any pattern in the target
location as a `copy' of the source pattern, is obviously unacceptable,
because it does not fulfil the other requirement of symbol tokens,
i.e. that they point (allow access) to some structure. Two patterns
which are related to each other in a stochastic way cannot, in
general, point to the same structure.

5.13  Thus we have reached the conclusion that there is no feature in
the brain that can be used as a symbol token. It is important to note
that the argument is general, does not depend on specific
implementation details, and is applicable both to innate (genetically
programmed) and to learned mechanisms.

6. POSSIBLE IMPLEMENTATIONS AT HIGHER LEVELS 

6.1  A possible objection to the argument above is that there may be
a higher level of organization that could support symbol tokens.
However, this is clearly not the case.

6.2  Up to a scale of ~1mm, the connectivity is clearly stochastic, so
the argument of section 5 applies. This eliminates any implementation
that relies on primitive elements which are smaller than ~1mm, whether
localized or distributed. Thus implementing symbol tokens at higher
levels of organization means basing them on primitive elements with
dimensions of 1mm or larger, where the total activity of the whole
element, rather than its pattern at higher resolution, is the
significant variable.

6.3  At that level, however, we can easily tell that there is no
coherent connectivity between different elements. By `coherent
connectivity' I mean a connectivity that allows one primitive element
to affect other elements separately. For this, the outputs from an
element to other elements would have to be separable in some way, so
that they could be controlled separately. However, when we look at any
1mm square of the cortex, the neurons that send processes to other
elements are all mixed up together, on a very small scale (tens of
microns, at most). Because at that level the connectivity is
stochastic, these neurons cannot be controlled separately.

6.4  It is important to note that this is true for all the connections
inside the cortex. Hence it is independent of which actual elements
are postulated to be the basis for the implementation of the symbolic
system, provided these elements are large (1mm or larger).

6.5  The lack of coherent connectivity means that the state of the
element as a whole cannot be propagated inside the cortex. Instead, it
is distributed approximately equally to all its neighbours, and
sometimes to further elements by intracortical projections. In the
neighbours, or the further elements connected by projections, it is
mixed with local activity and activity from other neighbours, in a
stochastic fashion. As a result, an element which is further away, and
is not connected by a projection, can never `see' (be affected by) the
activity in the original element. Instead, it always `sees' a
stochastic mixture of the activity of many elements. In that sense,
the connectivity at the 1mm level is stochastic as well, and the
argument of section 5 applies to that level as well.
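
A toy version of the mixing argument (the element count and mixing
weights are arbitrary assumptions of mine): suppose only the total
activity of each ~1mm element mattered. What a distant element
receives is still a fixed stochastic mixture of many such totals, and
the state of any single element is buried in it:

    import numpy as np

    rng = np.random.default_rng(0)
    n_elements = 50

    # Total activity of each ~1mm element.
    totals = rng.random(n_elements)

    # A distant element `sees' a mixture fixed by the stochastic
    # wiring; the contributions of individual elements are not
    # separable.
    mixing = rng.random(n_elements)
    mixing /= mixing.sum()
    seen = mixing @ totals

    # Flipping the state of one element barely changes what is seen,
    # so the receiving element cannot read off any single element's
    # state.
    totals[7] = 1.0 - totals[7]
    print(abs(mixing @ totals - seen))       # a small shift only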

6.6  A more coherent connectivity is seen outside the cortex, and in
the connections into and out of the cortex, mainly sensory input and
motor output. However, these connections are clearly not coherent
enough to transfer specific activity across the cortex (in the case of
sensory and motor connections, they do not transfer activity across
the cortex at all).

6.7  Hence, in the brain, there is no higher level of organization
that can support symbol tokens.

7. IF NEURONS CANNOT PERFORM SYMBOLIC OPERATIONS, HOW CAN THE WHOLE
PERSON PERFORM SYMBOLIC OPERATIONS?

7.1  A possible counter-argument to the argument in the previous
section is as follows: if this argument is correct, it can be used to
prove that people cannot handle symbols. But people can handle
symbols, so the argument must be wrong. This counter-argument is
wrong, because the person as a whole differs qualitatively from the
components of the brain in (at least) two fundamental properties:


7.2  1) In addition to the brain, a person also has sensory systems.

  7.2.1  The sensory systems differ from neurons in the brain in that
  their direct source of input (e.g. the objects that emit the photons
  that the eyes receive) is very variable. In the case of the eyes,
  the direct source of input changes on a time scale of tens of
  milliseconds, both because the eyes move and because objects in the
  world (or images of objects on TV) are moving. In the other senses
  the changes are slower, but even the slowest changes (in taste and
  smell) happen many times each day.

  7.2.2  In contrast, the direct source of information of any neuron,
  or group of neurons, i.e. the set of neurons that form synapses on
  its/their dendrites, is almost static in the human CNS (with very few
  exceptions). Once the development of the brain stops in the first year
  of life, the set of neurons that deliver information to any group of
  neurons in the brain is almost constant for the rest of the life of
  this person. Major changes are rare, and do not happen during normal
  computation.

  7.2.3  This means that an argument that relies on the propagation of
  information is not applicable to the whole person in the way it is
  applicable to the brain itself. In particular, if we try to apply
  the argument of section 5 to the whole person, it fails on the fact
  that any visual information in the world can be propagated to the
  eyes without any transformation (and the same for auditory and
  tactile information).

7.3  2) The brain implements a learning and thinking system.

  7.3.1  We don't know much about the details of this system, but we
  know it works. This system is capable of learning new skills, by trial
  and error learning, by analysing situations and deciding on the right
  actions, and by receiving communication from other people. These new
  skills are not limited to the capabilities that are inherited in the
  genetic make-up of the person.

  7.3.2  Components in the brain do not have these learning
  capabilities, so they are limited in a way that the whole person
  isn't. The possibility of learning how to deal with symbol tokens
  inside the brain is discussed in section 5.

7.4  Since components of the brain have neither the sensory input nor
the learning capabilities of the whole person, there are many tasks
that the whole person can do that components cannot do. Thus, the fact
that the person can perform some task (e.g. written communication,
symbolic operations) does not prove that components of the brain can
do it.

8. ARE SYMBOLIC MODELS STILL RELEVANT TO THE BRAIN? 

8.1  In sections 4-6 it was shown that there is no way to implement
symbol tokens in the brain. This means that the brain is not a
symbolic system, and that the theoretical analysis of symbolic systems
is not applicable to it. However, it can still be argued that
experimenting with symbolic systems is useful for understanding the
brain. A typical argument would be: both the brain and the symbolic
models are information-processing systems, so experimenting with
symbolic systems will tell us something about the brain.

8.2  This argument is flawed, because there is no general way to know
which of the features of symbolic systems are applicable to
information-processing systems in general, and therefore to the brain.
Hence every feature that is found in symbolic systems has to be tested
on the brain first (possibly indirectly, through behaviour) before we
know whether it is applicable to the brain.

8.3  In theory, symbolic systems can still be used to direct research
on the brain by suggesting hypotheses which are worth testing, and the
argument in sections 4-6 is silent about this possibility. This,
however, is a heuristic approach, which may or may not work. The
experience with symbolic systems in the last ~40 years suggests that
this approach does not work.

8.4  In general, a model is useful when it generates useful insights
into the system under investigation. Symbolic systems clearly did not
generate any insight into the neurobiology or anatomy of the brain,
but it can be claimed that they generated useful insights into human
thinking.

8.5  It is problematic to decide what counts as a `useful insight',
but a plausible heuristic is that useful insights will be mentioned in
the basic textbooks of the relevant subject. Inspection of textbooks
in cognitive psychology (Eysenck and Keane 1990, Matlin 1994, Mayer
1992, Stillings et al. 1995), and of books more specifically about
symbolic models (e.g. Baars 1988, Johnson-Laird 1993, Newell 1990),
does not reveal any insight into human behaviour or thinking that was
generated by testing the hypotheses of symbolic models.

8.6  These books are full of models of human behaviour, but in all
the cases the behaviour was first noticed, or postulated based on the
researcher's knowledge, and then modelled. Thus the model was not
useful in finding the behaviour. It can be argued that the model was
useful for testing the mechanisms underlying the behaviour, but if the
brain does not implement a symbolic system, these tests are invalid.

8.7  The illusion that symbolic models are useful is mostly based on
the implicit assumption that the brain implements a symbolic system
too, and hence that if a model can reproduce the behaviour of humans
in some situation, or generates hypotheses that can be tested, it is
necessarily useful. When it is realized that the brain cannot be
implementing a symbolic system, and the symbolic models are evaluated
by the real criterion, i.e. the generation of insights, they seem much
less useful.


9. IMPLICATIONS 

9.1  When it comes to Artificial Intelligence, the argument of
sections 4-6 is of no great consequence. Even if the brain is not a
symbolic system, symbolic systems may still be the best way of
building artificial systems. It is also possible in principle that
there are living intelligent creatures somewhere in the universe that
have thinking systems based on symbols.

9.2  When it comes to research into the way the brain works, the
argument has crucial implications. It shows that symbolic systems are
incompatible with what we currently know about the brain.

9.3  Thus, these systems need very strong supporting evidence before
they can be regarded as real candidates for modelling brain
mechanisms. Since this evidence is lacking, symbolic systems do not
deserve the attention they get, and researchers of the brain would do
better to explore other avenues. 


REFERENCES: 


Baars, Bernard J.  (1988). A Cognitive Theory Of Consciousness.
Cambridge, UK: Cambridge University Press.

Baron, Robert J. (1987). The Cerebral Computer: An introduction to the
computational structure of the human brain. Hillsdale, NJ: Lawrence
Erlbaum Associates.

Brodal, Per (1992). The Central Nervous System: Structure and
Function. New York, NY: Oxford University Press.

Bush, Paul & Sejnowski, Terrence J. (1995). Models of Cortical
Networks. In Gutnick, Michael J. & Mody, Istvan (Eds.), The Cortical
Neuron. New York, NY: Oxford University Press, pp. 174-189.

Churchland, Patricia S. & Sejnowski, Terrence J. (1992). The
Computational Brain. Cambridge, MA:MIT Press.

Dowling, John E. (1992). Neurons and Networks: An introduction to
neuroscience. Cambridge, MA: Harvard University Press.

Eysenck, Michael W. & Keane, Mark T. (1990). Cognitive Psychology: A
Student's Handbook. Hove and London: Lawrence Erlbaum Associates.

Gutnick, Michael J. & Mody, Istvan (Eds.) (1995). The Cortical Neuron.
New York, NY: Oxford University Press.

Johnson-Laird, Philip (1993). The Computer And The Mind (second
edition). London, UK: Fontana Press.

Kandel, Eric R., Schwartz, James H. & Jessell, Thomas M. (Eds.)
(1991). Principles of Neural Science (third edition). New York:
Elsevier.

Matlin, Margaret W. (1994). Cognition (third edition). Fort Worth:
Harcourt Brace Publishers.

Mayer, Richard E. (1992). Thinking, problem solving, cognition (second
edition). New York: W.H. Freeman and Company.

Newell, A. & Simon, H.A. (1976). Computer Science as Empirical
Inquiry: Symbols and Search. Communications of the Association for
Computing Machinery, 19, pp. 113-126.

Newell, Allen (1990). Unified Theories of Cognition. Cambridge, MA:
Harvard University Press.

Newell, Allen and reviewers (1992). Precis of Unified Theories of
Cognition. Behavioral and Brain Sciences, 15, pp. 425-492.

Nicholls, John G., Martin, Robert A. & Wallace, Bruce G. (1992). From
Neuron to Brain (third edition). Sunderland, MA: Sinauer Associates
Inc.

Robinson, William S. (1995). Brain Symbols and Computationalist
explanation. Minds and Machines, 5, pp. 25-44.

Rumelhart, David E. & McClelland, James L. (1986). Parallel
Distributed Processing. Cambridge, MA: MIT Press.

Shepherd, Gordon M. (Ed.)(1990). The synaptic organization of the
brain (third edition). New York, NY: Oxford University Press.

Shepherd, Gordon M. (1994). Neurobiology (third edition). New York,
NY: Oxford University Press.

Smolensky, Paul (1988). On the proper treatment of connectionism.
Behavioral and Brain Sciences, 11, pp. 1-74.

Stillings, Neil A. et al. (1995). Cognitive Science: An Introduction.
Cambridge, MA: MIT Press.

Vera, A. & Simon, H.A., and reviewers (1993a). Special Issue: Situated
Action. Cognitive Science, 17, pp. 1-133.

Vera, A. & Simon, H.A. (1993b). Situated Action: A Symbolic
Interpretation. Cognitive Science, 17, pp. 7-48.

Vera, A. & Simon, H.A. (1993c). Situated Action: Reply to Clancey.
Cognitive Science, 17, pp. 117-133.