[ 17 Jul 2010 ] By now MITECS is no longer online. If you find here something that is interesting and you want to see the actual text in MITECS, you can try to google part of the quote that I give. With some luck you can find the text online. Note also that these comments are pretty old by now.
The text below is intended to be read side by side with the MITECS text. Indented text is always an exact quote from MITECS, and the reader can use search to find the quote in the MITECS text. Where the quote ends with several dots, the comment is about the whole paragraph that starts with this text.
Note from the MITECS Executive Editor:
Keep in mind that the current mitecs site is a developmental, unedited site. The final site will be posted this spring.
This page contains comments on the psychology domain. Other pages contain comments on the other domains.
General: Underlying almost all of the texts in this domain is the 'sameness assumption' error (Reasoning errors). When researchers try to formulate a theory of the mechanism underlying a cognitive feature, e.g. analogy, reading, problem solving, theory of mind etc., they must be relying on this assumption, because otherwise (i.e. if the underlying mechanism is variable) there cannot be a theory of the underlying mechanism. In particular, it does not make sense to compare the behaviour of a model to the average behaviour of humans, unless the sameness assumption is true.
A possible response is that the models are describing some of several possible mechanisms, but that is not what the researchers say, and even in this case it is not sensible to try to compare the behaviour of the model to the average behaviour of humans.
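To make the problem with averaging concrete, here is a minimal sketch in Python. The two 'mechanisms' and all the numbers are invented for illustration (they are not taken from any of the MITECS texts): if half of the subjects improve abruptly and half improve gradually, the averaged curve is smooth and matches neither group, so fitting a single model to the average presupposes the sameness assumption.

    import numpy as np

    rng = np.random.default_rng(0)
    trials = np.arange(1, 21)

    # Two hypothetical sub-populations performing the same task with different
    # mechanisms: group A improves abruptly around trial 10, group B improves
    # gradually across all trials.
    group_a = np.where(trials < 10, 0.2, 0.9) + rng.normal(0, 0.02, (50, trials.size))
    group_b = (0.2 + 0.7 * trials / trials.size) + rng.normal(0, 0.02, (50, trials.size))

    # The average over all 100 'subjects' is a smooth, gradual curve that does
    # not describe the mechanism of any individual subject.
    average = np.vstack([group_a, group_b]).mean(axis=0)
    print(np.round(average, 2))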
The other error that underlies almost all of the texts is ignoring knowledge about neurons (Reasoning errors).
The simplest explanation for analogy making in humans is that in most cases it is an aspect of the more general mechanism of pattern matching, and in some cases it is based on a more elaborate thinking process. The author and the research described here completely ignore this explanation.
    There is considerable evidence that similarity-based retrieval is driven more by surface similarity and less by structural similarity than is the mapping process.....
This paragraph is the only experimental evidence given in the text, and it clearly does not agree with the emphasis on structural mapping in the models developed before.
The way the author puts it suggests that the difference is between the mapping process and the 'retrieval' process. However, the author does not give us any evidence that in humans the mapping is driven more by structural similarity, and this implied assertion is based on the way the models do it. Either the author cannot distinguish between the way the models do it and the way humans do it, or he wants to confuse the reader. It is not obvious to me which is the case.
    The proposal of a specific neurologically based problem in understanding minds was a significant step in this endeavor. The hypothesis that autistic children lack the intuitive understanding that people have mental states was originally tested with the Sally-Ann false belief paradigm (Baron-Cohen, Leslie and Frith 1985).
Other people's minds are the most complex phenomena with which the child has continuous contact. For normal children, the continuous contact gives enough information to learn to understand minds despite their complexity. Thus, any problem with learning should mostly manifest itself as a problem in understanding other people's minds. In autistic children the problem is probably lack of motivation ("Restricted repertoire of interests" in the author's words).
This explanation for the problems that autistic people have with understanding minds is pretty straightforward, and is compatible with the evidence. When I asked Baron-Cohen whether his data can be explained this way, he said he never thought about it. I suspect that the underlying reason why he and the rest of the researchers in the field don't think about this explanation is that it is much less 'glamorous' than the 'theory of mind' theory.
    Binding is the problem of representing conjunctions of properties. It is a very general problem that applies to all types of KNOWLEDGE REPRESENTATION,
This is false. Only modular systems suffer from the binding problem. This author seems unable to perceive the possibility of non-modular systems.
    For example, to visually detect a vertical red line among vertical blue lines and diagonal red lines, one must visually bind each line's color to its orientation (see Treisman and Gelade 1980).
This need arises only if the color and orientation are separated first. There is no evidence that this happens in the brain. The usual 'evidence' for it is the distinction of 'streams' (discussed in the comments about the neuroscience domain), which really stands for tendencies in the cortex. Since representations are distributed, that does not mean that the representations of different attributes are separated. It just says that the distribution of different attributes is different.
    Similarly, to understand the statement, "John believes that Mary's anger toward Bill stems from Bill's failure to keep their appointment," one must bind John to the agent role of believes, and the structure Bill's failure to keep their appointment to the patient role of stems from (see THEMATIC ROLES).
This confuses the issue by giving a false example. The binding problem is about binding together information from the same stimulus. In the case of understanding language, what is needed is to 'bind' information from separate stimuli (the different words).
    Binding lies at the heart of the capacity for symbolic representation (cf. Fodor and Pylyshyn 1988; Hummel and Holyoak 1997).
Since the brain does not have symbolic representation (see in Brain-symbols), this is irrelevant to the brain, and anyway it is false. Only modular systems have the binding problem, whether they use symbolic representation or not.
    For example, a neuron that responds to vertical red lines at location x, y in the visual field represents a static binding of vertical, red, and location x, y.
That is nonsense. This neuron 'represents' a vertical red line at location x,y. Calling this static binding is just linguistic gymnastics, with the intention of forcing all the discussion to be in terms of binding, thus reinforcing the impression that binding is necessary.
I put 'represents' in quotes both because the neuron at best is part of the 'representation' rather than all of it, and because the word representation is used here in its wider definition (it does not mean formal representation. See in Reasoning errors).
    Variants of this approach have been proposed in which bindings are coded as patterns of activation distributed over sets of units (rather than the activity of a single unit; e.g., Smolensky 1990). Although this approach to binding appears very different from the localist (one-unit-one-binding) approach, the two are equivalent in all important respects.
That is nonsense. There are many differences between the two approaches. For example, in the distributed approach the number of possible different 'representations' is larger by many orders of magnitude (see the sketch below), and 'representations' can overlap in the distributed approach but not in the single-unit approach. This sentence is the first step in an intentionally misleading maneuver.
    In the extreme case, the units coding, say, red diagonal lines may not overlap at all with those representing red vertical lines.
In the extreme case, that is true, but not in general. The statement in the previous quote is intended to encourage the reader to accept that this is a general problem, because it is a problem for single-unit representation, and hence (because of the equivalence 'in all important respects') also a problem for distributed representation. That is the second step in the misleading maneuver.
    Dynamic binding permits a given unit (here, R) to participate in multiple bindings, and as a result (unlike static binding), it permits a representation to be isomorphic with the structure it represents (see Holyoak and Hummel in press).
This completes the maneuver in the previous two quotes. The implication that in 'static binding' (i.e. distributed representation) a unit cannot participate in multiple representations is based on the assumption that in 'static binding' the codings of different representations do not overlap. This is simply false, but the author hopes that the reader, confused by the previous quote, will not notice this.
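A back-of-the-envelope calculation in Python shows how far the two coding schemes are from being 'equivalent in all important respects'. The numbers below (a pool of 1000 units, 50 active units per pattern) are arbitrary assumptions chosen only for illustration:

    from math import comb

    n_units = 1000            # hypothetical pool of units
    active_per_pattern = 50   # hypothetical sparseness of a distributed code

    # Localist: one unit per representation, so representations cannot overlap
    # and their number is bounded by the number of units.
    localist_codes = n_units

    # Distributed: any set of 50 active units is a possible pattern, and
    # different patterns can share units (overlap).
    distributed_codes = comb(n_units, active_per_pattern)

    print(f"localist:    {localist_codes} distinct representations")
    print(f"distributed: roughly 10^{len(str(distributed_codes)) - 1} distinct representations")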
    binding permits greater representational flexibility than static binding, but it also has a number of properties that limit its usefulness....
Surprisingly, after the misleading argument in the previous paragraphs, the discussion of the problems of 'dynamic binding' is quite reasonable.
    The most popular proposed binding tag is based on temporal synchrony: If two units are bound, then they fire in synchrony with one another;
This solution is impossible, because synchrony cannot propagate in the cortex. See in Myths and misconceptions [7] for a discussion.
    to the extent that it binds properties statically, it will be free to operate in parallel with other processes
This is false in a distributed representation system, because the other processes will use overlapping sets of units, and hence interfere with each other. In a sparse distributed representation, a few processes may work in parallel, but there is some limit. This assertion relies again on the misleading maneuver above of equating distributed representation with single-unit representation.
    For example, these (and other) considerations led Hummel and Stankiewicz (1996) to predict that attended object images will visually prime both themselves and their left-right reflections, whereas ignored images will prime themselves but not their reflections.
Clearly, an image primes itself more than its reflection, and an attended image primes both itself and its reflection more strongly than an ignored image primes itself and its reflection, respectively. Hence we can predict trivially that the reflection of an ignored image is primed less than in all the other conditions. To demonstrate this effect, all that is required is to adjust the parameters of the experiment so that the priming of the reflections of ignored images is below the threshold of detection, while the priming in the other conditions is above it. Thus this experiment proves nothing, except that the experimenters cannot evaluate their results objectively.
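A hypothetical numerical illustration of this point in Python (all magnitudes and the threshold are invented, not taken from Hummel and Stankiewicz): as long as the four conditions are ordered as described above, placing the detection threshold just above the weakest condition reproduces the 'predicted' pattern.

    # Hypothetical priming magnitudes (arbitrary units), respecting only the
    # ordering assumed above: an image primes itself more than its reflection,
    # and attention strengthens both kinds of priming.
    priming = {
        "attended image, itself":     8.0,
        "attended image, reflection": 5.0,
        "ignored image, itself":      4.0,
        "ignored image, reflection":  2.0,
    }

    # A detection threshold placed just above the weakest condition makes only
    # the reflection of the ignored image look like "no priming".
    threshold = 3.0
    for condition, magnitude in priming.items():
        verdict = "primed" if magnitude > threshold else "no detectable priming"
        print(f"{condition}: {magnitude:.1f} -> {verdict}")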
    Only von Neumann components seem capable of manipulating variables in a way that matches human competence (see BINDING PROBLEM)
Blatant nonsense, which is not based on anything. The BINDING PROBLEM section does not mention von Neumann components at all.
    It is Generally found that people are quite accurate in judging ambient distance (up to approximately 20 feet). This is typically demonstrated by having them survey the scene, close their eyes and walk to a predesignated object (Loomis et al. 1992).
That is ridiculous. What the evidence shows is that people accurately estimate the number of steps (or possibly the 'amount of motion') that is required to get to the predesignated object. It does not tell us anything about distance judgement.
    Episodic memory is a recently evolved, late developing, past-oriented memory system, probably unique to humans, that allows remembering of previous experiences as experienced.
This is the hypothesis that this section is going to argue for, but the section starts by stating it as if it were an established fact.
    Semantic memory is the closest relative of episodic memory in the family of memory systems.
This already takes for granted the existence of two systems (episodic memory and semantic memory), which are part of a 'family of memory systems'. This is repeated throughout the text. Since the existence of separate systems is not established yet, the author should have been talking about the operations of 'episodic recalling' and 'semantic usage' (which by definition can be distinguished behaviourally), and then discussed whether these are done by separate systems. By consistently using the 'system' terminology, the text confusingly gives the impression that the separation is already established.
    Episodic and semantic systems share a number of features that collectively define 'declarative' (or 'cognitive') memory in humans.
That is very confusing, because the listed features are clearly not the definition of 'declarative' memory. The way the features are written, they are clearly supposed to be observations about 'declarative' memory (i.e. empirical statements), and that is the way all readers are going to interpret them. In fact, it is a mixture of features that are true by the definition of 'declarative' memory (knowing that some specific statements are true and others are false), trivial observations, and false statements.
The important point about this list, however, is that throughout it discusses the features of two separate systems. It would have been more natural to write about the features of 'declarative' memory, but the author wants to instill in the reader's mind that there are two systems before trying to prove it.
    (ii) Cognitive operations involved in encoding of information are similar for both episodic and semantic memory. Frequently a single short-lived event is sufficient for a permanent 'addition' to the memory store, unlike in many other forms of learning that require repeated experiences of a given kind.
This assumes that we don't recall these 'short-lived events' in the time period following them, consciously or unconsciously, while awake or during sleep. This assumption is simply stupid.
    (viii) Information retrieved from either system can be expressed and communicated to others symbolically.
An example of blatant nonsense. Clearly large parts of our knowledge are not communicable. The author himself claims this in the last sentence of the first paragraph.
    (ix) Information in both systems is accessible to INTROSPECTION: we can consciously 'think' about things and events in the world, as we can 'think' about what we did yesterday afternoon, or in the summer camp at age 10.
More blatant nonsense, like (viii) above. There must be tons of semantic and episodic information in the brain that is not open to INTROSPECTION.
In the list of features that differ between the two systems:
    (i) The simplest way of contrasting episodic and semantic memory is in terms of their functions: episodic memory is concerned with remembering, whereas semantic memory is concerned with knowing. Episodic remembering takes the form of 'mental travel through subjective time,' it is accompanied by a special kind of awareness ('autonoetic,' or self-knowing, awareness).
This is presented as if it were an empirical observation, but it isn't: it is the definition of the difference between episodic recall and usage of semantic knowledge.
The special attributes of episodic memory are expressed in somewhat mystical terms. In reality, 'self-knowing awareness' is simply the person's awareness that the things he/she remembers were perceived or thought by him/her directly, normally also with awareness of where and when they were perceived/thought, and associated with other things that were perceived/thought at the same time.
    (ii) The relation between remembering and knowing is one of embeddedness: remembering always implies knowing, whereas knowing does not imply remembering.
Here the author gets close to a more realistic view of the difference: episodic memory is semantic memory that happens to be associated with the feeling of 'autonoetic', or self-knowing, awareness (i.e. remembering where and when you perceived/thought it before). The same is more or less repeated in (vi).
    (iii) Episodic memory is arguably a more recent arrival on the evolutionary scene than semantic memory. Many animals other than humans, especially mammals and birds, possess well developed knowledge-of-the-world (semantic memory) systems. But there is no evidence that they have the ability to autonoetically remember past events in the way that humans do.
That is dead stupid. Until an animal has enough intelligence to have self-awareness, it cannot have episodic memory by definition. Even if it does, it has to be intelligent enough to discuss it with us for us to know it. The suggestion that episodic memory is different in an evolutionary sense is useful for the author, because it implies that it is separate from semantic memory in some genetic sense.
    (iv) Episodic lags behind semantic memory in human development. Young children acquire a great deal of knowledge about their world before they become capable of adult-like episodic remembering.
That is simply because self-knowing awareness is a complex ability that takes time to learn.
    (v) Episodic memory is the only form of memory that is oriented towards the past: Retrieval in episodic memory necessarily involves thinking 'back' to an earlier time.
This is true by definition, because you cannot be self-aware of anything in the future.
    The evidential basis for the distinction between episodic and semantic memory has been growing steadily over the past ten or fifteen years.....
The only evidence for biological differences is from neuroimaging, which is irreplicable. The 'evidence' from dissociation is the typical reasoning error discussed in Reasoning errors.
    Several of these connect, through the lateral geniculate nucleus, to a single neuron in primary visual cortex, but the groups that so connect are arranged along lines: the cortical neuron thus maintains selectivity for position orthogonal to the line, but relaxes selectivity and summates along the line (Hubel and Wiesel 1962).
While this is a plausible hypothesis to explain the activity of some neurons in the visual cortex, it is not actually based on evidence. Hubel & Wiesel certainly did not trace connections from ganglion cells through the LGN to the visual cortex.
    these collect together information from neurons in cortical area V1 that come from a patch several degrees in diameter in the visual field (Newsome et al. 1990; Raiguel et al. 1995), but all the neurons converging on one MT neuron signal movements of similar direction and velocity.
The second part of the sentence is a hypothesis, and is not based on actual evidence. In addition, it is unlikely that all the neurons converging on one MT neuron signal the same thing. More likely, only some of them do, and even this is probably true for only some of the neurons in the MT area. The sentence as it is gives the impression of a much more ordered system than the real system is likely to be.
    Thus all the information about motion with a particular direction and velocity occurring in a patch of the visual field is pooled onto a single MT neuron, and such neurons have been shown to be as sensitive to weak motion cues as the intact, behaving animal (Newsome et al. 1989).
That is simply false. The information always goes to many neurons. A single neuron in the cortex does not have enough effect on the system to be effective. This is an example of doing the second part of the 'intelligent neuron' misconception (ignoring how the activity of the neuron affects the rest of the system, Myths and misconceptions [6]) without doing the first (ignoring how the activity of the neuron is determined by the rest of the system).
    In sharp contrast, modern empirical work has shown that infants as young as 42-minutes-old successfully imitate adult facial gestures (Meltzoff and Moore 1983).
These studies are not reproducible. Contrary to the impression that the text (especially Figure 1) gives, the results are not so good. In the 'best' studies of very young infants, there wasn't a robust response from the babies, only a result that was somewhat better than chance. Together these are the hallmarks of biased experiments.
    Research on communication with infants and young children proves the existence in the developing human brain of emotional and cognitive regulators for companionship in thought and purposeful action (Aitken and Trevarthen 1997).
Plain lie. There isn't anything that can even be remotely considered as evidence for 'regulators for companionship' in the brain.
    Infants demonstrate that they perceive persons as essentially different 'objects' from anything non-living and non-human (Legerstee 1992; Trevarthen 1998).
Since humans are different from anything non-living and non-human, that is not an interesting observation.
    Dynamic forms of vocal, facial and gestural emotional expression are recognized and employed in interactions with other persons from birth, before intentional use of objects is effective.
The claim that these 'dynamic forms' are recognized from birth is a simple lie, and to call what the baby is doing 'employing emotional expressions' is ridiculous.
    Scientific research into the earliest orientations, preferences and intentional actions of newborns when they encounter evidence of a person, and their capacities for IMITATION, prove that the newborn human is ready for, and needs, mutually regulated intersubjective transactions (Kugiumutzakis 1998).
The 'scientific research' in this case happened to be non-replicable, and does not lead to that conclusion anyway.
    Infants' emotional well-being depends upon a mutual regulation of consciousness with affectionate companions (Tronick and Weinberg 1997).
'Mutual regulation of consciousness'? What does that mean?
    Events in the infant-adult 'dynamic system' (Fogel and Thelen 1987) are constrained by intrinsic human psychological motives on both sides (Aitken and Trevarthen 1997). These intrinsic constraints are psychogenic adaptations for cultural learning.
This assertion is not based on any evidence.
    Before they possess any verbalizable THEORY OF MIND, children share purposes and their consequences through direct other-awareness of persons' interests and moods (Reddy et al. 1997).
Nonsense. The evidence shows that they recognize patterns in the behaviour of other persons. It does not tell us anything about 'sharing' anything.
The term "memory" implies the capacity to encode, store and retrieve information.This is one of the meaning of "memory", and the wrong one, because we have no evidence that humans have this kind of memory. The memory that humans have allows them to form memories and recall them. See in Reasoning errors for a discussion.
    The possibility that memory might not be a unitary system was proposed by William JAMES who suggested two systems which he named primary and secondary memory. Donald HEBB (1949) also proposed a dichotomy, suggesting that the brain might use two separate neural mechanisms with primary or short-term storage being based on electrical activation, while long-term memory reflected the growth of relatively permanent neuronal links between assemblies of cells.
Note that these are two different ideas: James suggests two different systems, while Hebb suggests two different mechanisms. The evidence that the author brings cannot distinguish between these possibilities.
    There is still some support for attempts to account for the data within a unitary system, but my own view (Baddeley 1997) is that this is no longer a tenable position. In particular, the neuropsychological evidence seems to argue for a distinction between an episodic long-term memory system (depending on a circuit linking the temporal lobes, the frontal lobes and parahippocampal regions), and a whole range of implicit learning systems, each tending to reflect a different brain region.
That is just a biased view of the data. Beyond the basic distinctions of specific sensory areas for each modality, specific motor areas and the Broca and Wernicke areas, the neuropsychological data does not show any consistent evidence for specialization of regions in the cortex, which is the most important part of the brain for mental activity.
    Only 100 years ago, it was widely believed that the world perceived by newborn infants was, in the words of william JAMES, a "blooming, buzzing confusion." In the decades since then, developmental research has demonstrated dramatically that James's view was erroneous.
Nothing in the evidence that the authors quote demonstrates (and certainly not 'dramatically') that James's view was wrong. All of it is based on overinterpretation of tendencies in infants (longer looking times, habituation) as if they showed adult-like perception. The tendencies that babies have show that they are sensitive to the stimuli, but do not show that they perceive an organized picture of the world. In addition, most of the evidence is from older infants (a few months old), who are not the same as newborns.
James's view is confirmed by the stochastic connectivity of the cortex, which the authors totally ignore (as usual).
    Children seem to understand important aspects of the mind from a strikingly early age, possibly from birth, but this knowledge also undergoes extensive changes with development.
This is a ridiculous statement. There isn't anything that can even remotely be considered as evidence that very young children, "possibly from birth", understand "important aspects of the mind." See below about the evidence that the author brings.
    Newborns who see another person produce a particular gesture will produce that gesture themselves (Meltzoff and Gopnik 1993).
Simple confabulation. It is based on irreplicable studies, in which even the 'best' results were of a statistical nature, rather than a clear-cut response.
    Similarly, very young infants show special preferences for human faces and voices, and engage in complex non-verbal communicative interactions with others (Trevarthen 1979).
A heap of overinterpretations. There is a literature about 'face recognition' in neonates, but it is all about preferring (looking longer at) caricatures of faces, not actual faces (see the discussion of FACE RECOGNITION in the comments about the neuroscience section). Older infants (several months) do prefer faces, but that can be learned.
Young infants probably prefer soft and rhythmic sounds. I don't actually know if they prefer human sounds to other such sounds.
The 'complex non-verbal communicative interactions' are a simple confabulation. The infant makes some gesture, and the researchers interpret it as 'communication'.
    By nine months infants begin to follow the gaze of others and to point objects out to them (Butterworth 1991). In the behavior known as "social referencing," infants who are faced with an ambiguous situation turn to check the adult's facial expression and regulate their actions in accordance with it (Campos and Sternberg 1980).
Nine-month-old infants already have nine months of learning behind them, so this can be explained by learning.
    These very early abilities suggest that there is a strong innate component to our "theory of mind."
The author completely ignores the possibility of learning. In addition, the author jumps from simple abilities to "theory of mind" without any justification. That seems to be the basis of the ridiculous statement above.
    Leslie (1994) and Baron-Cohen (1995) have suggested that the developments reflect the maturation of an innate Theory of Mind module, by analogy with similar modular theories of language and perception.
There is no neuroscientific support for any of these theories. They are all based on the assumption of modularity, and overinterpretation of behavioural data.
    Working memory is the cognitive system that allows us to keep active a limited amount of information (roughly, 7±2 items) for a brief period of time (roughly, a few seconds).
This presents the major error associated with 'working memory', i.e. the assumption that there is a separate cognitive system for doing this. In this text, the question whether there is a separate system for doing the things that 'working memory' is doing is not discussed at all, and it is taken for granted that 'working memory' is a separate system.
Even more ridiculously, the text actually presents evidence that working memory is not separate from the rest of the system. Instead of reconsidering whether it is a separate system, the author takes it as evidence for the 'role of working memory in higher-level cognition'. This author seems unable even to consider the possibility that 'working memory' is not a separate system.
    Within verbal and spatial working memory, there is evidence for a further subdivision, that between a passive storage process and an active rehearsal process.
Note that here the author discusses a division into processes, not systems. This is much more sensible.
    Other persuasive evidence for working memory's role in higher-level cognition comes from computational research, specifically the use of symbolic models to simulate higher-cognitive processes. Simulations of this sort routinely give a major role to working-memory operations, ....
Another ridiculous idea. The reason that working memory has a major role in computational models is that the modellers are convinced, based on the theories of psychologists, that it is important. In addition, it is easier to model a system with separate components. It does not tell us anything about how things are organized in the brain.