
Comments on MITECS, Computational Intelligence section

[17 Jul 2010] By now the MITECS text is not online anymore. If you find something here that is interesting and you want to see the actual text in MITECS, you can try to google part of the quote that I give. With some luck you can find the text online. Note also that these comments are pretty old by now.

The text below is intended to be read side by side with the MITECS text. Indented text is always an exact quote from MITECS, and the reader can use search to find the quote in the MITECS text. Where a quote ends with several dots, the comment is about the whole paragraph starting with this text.

Note from the MITECS Executive Editor:

Keep in mind that the current mitecs site is a developmental, unedited site. The final site will be posted this spring.

This page contains comments on the Computational Intelligence domain. Other pages contain comments on the other domains.

================================================================================
================================================================================

General: This domain is supposed to be about Computational Intelligence, but it also contains discussions which are supposedly related to the brain. All of these ignore the stochastic connectivity in the cortex, and are therefore spurious.

Cognitive modeling, Symbolic

General: As usual, the author completely ignores the question of implementation (see brain-symbols for discussion).
In particular, Fodor and Pylyshyn argue that any valid cognitive theory must have the properties of productivity and systematicity. Productivity refers to the ability to produce and entertain an unbounded set of novel propositions with finite means.
The 'productivity' requirement is blatant nonsense. There is no way in which a finite (including the time dimension) physical system can produce and entertain an unbounded set of novel propositions, and humans are finite physical systems. The actual requirement corresponding to 'productivity' is that a valid cognitive theory has the ability to produce and entertain a much larger number of novel propositions than a human can conceivably achieve in a lifetime, and to do it with a number of components comparable to the size of the brain.
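To make the finiteness point concrete, here is a rough back-of-envelope calculation in Python. All the numbers (lifespan, rate of entertaining propositions) are generous illustrative assumptions, not measurements:

seconds_per_year = 365 * 24 * 3600             # ~3.15e7 seconds
lifetime_seconds = 100 * seconds_per_year      # a generous 100-year life
propositions_per_second = 10                   # a generous rate assumption
bound = lifetime_seconds * propositions_per_second
print(f"{bound:.1e}")                          # ~3.2e10: large, but finite

Whatever generous numbers you plug in, the bound stays finite, which is all the argument above needs.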
Both productivity and systematicity point to the need to posit underlying abstract structures that can be freely composed, instantiated with novel items, and interpreted on the basis of their structure.
Another piece of blatant nonsense. Neither systematicity nor productivity (even in its bogus meaning) requires instantiation or interpretation. These requirements are simply spurious, and are based on intuition that comes from experience with symbolic systems.
Though the problem of under-constraining data is a universal issue in science, it is sometimes thought to be particularly acute in computational cognitive modeling, despite the variety of empirical constraints described above......
It is nice to see that this author does not skip the problem of under-constraining data. However, he ignores the reason that it is particularly acute in computational cognitive modeling: the experimental data is very variable. That applies to all the kinds of data that were introduced in the previous paragraph.
Theorists have responded to these problems in a variety of ways. One way is to adopt different levels of abstraction in the theoretical statements: in short, not all the details of the computer model are part of the theory.
Another piece of blatant nonsense: using different levels of abstraction does not, in general, help in coping with under-constraining data. The distinction between levels of implementation is used mainly to justify ignoring implementation details.
A complementary approach to reducing theoretical degrees of freedom is to apply the same model with minimal variation to a wide range of tasks. Each new task is not an unrelated pool of data to be arbitrarily fitted with a new model or with new parameters. For example, a computational model of short-term memory that accounts for immediate serial recall should also apply, with minimal strategy variations, to free recall tasks and recognition tasks as well (Anderson and Matessa 1997).
Correct, but the research described in the rest of the text almost totally ignores this rule.

Cognitive modeling: Connectionist

General: Even one of the main proponents of connectionism seems to be unaware of the stochastic connectivity of neurons in the brain, and of its implications.

Computation

Of these, the most influential (especially in cognitive science and artificial intelligence) has been the claim that computers are formal symbol manipulators (i.e., actively embodied FORMAL SYSTEMS). In this three-part characterization, the term 'symbol' is taken to refer to any causally-efficacious internal token of a concept, name, word, idea, representation, image, data structure, or other ingredient that represents or carries information about something else (see INTENTIONALITY).
The reference to INTENTIONALITY is extremely misleading. The notion of 'carries information about' is totally different between formal symbol manipulators and INTENTIONALITY. In the manipulators, the internal tokens allow the system to access what they refer to using its underlying implementation operations (that is what 'causally-efficacious' means). That is not part of the definition of INTENTIONALITY, and it is actually false: if you have in your brain a 'mental representation of a tree', it does not allow the brain to access the tree using its implementation operations. Confusing these two notions gives the impression that humans, who by definition have INTENTIONALITY, must be formal symbol manipulators (which they cannot be, because of the stochastic connectivity in the cortex).
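To illustrate the sense of 'causally-efficacious' meant here, a minimal Python sketch (all the names are hypothetical): in a formal symbol manipulator, the token itself lets the system reach its referent through ordinary implementation operations, here a dictionary lookup.

referents = {"tree_7": {"kind": "oak", "height_m": 12}}  # hypothetical store
token = "tree_7"                  # an internal symbol token
thing = referents[token]          # the token grants access via a lookup...
print(thing["height_m"])          # ...to what it refers to: prints 12

There is no analogous operation by which a brain can 'dereference' a mental representation of a tree to reach the actual tree.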
Turing machines have also figured in cognitive science at a more imaginative level, for example in the classic formulation of the Turing Test: a proposal that a computer can be counted as intelligent just in case it is able to mimic a person answering a series of typed questions sufficiently well to fool an observer.
Nonsense. Turing machines and the Turing Test are unrelated concepts, except that they originated from the same person. Turing machines are based on totally formal analysis, while the Turing Test is based on human intuitions.

Computational vision

General: As usual, this ignores the stochastic connectivity. The other mistakes are the underlying assumptions that computational models necessarily tell us something about the brain, and that there are distinct and separate stages and processes in the analysis of visual input (Modularity assumption and Computer models errors).

Marr

General: The Marr legend is one of the most misleading groups of concepts in cognitive science, and his tragic death at a young age made them even more potent. A critical analysis of his main ideas, as they appear in the book Vision, is given here.

Supervised learning in multilayer neural networks

However, some biologically unrealistic neural networks are both computationally interesting and technologically useful (Bishop 1995).
This implies that there are realistic neural networks that are "computationally interesting and technologically useful", which is nonsense. There are still no realistic neural networks which do anything of interest.
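For concreteness, the kind of 'biologically unrealistic' network the quote refers to is the multilayer perceptron trained by backpropagation (the standard textbook algorithm, covered e.g. in Bishop 1995). Below is a minimal Python/NumPy sketch learning XOR; the layer sizes, learning rate, and random seed are arbitrary choices, and a different seed may need more steps:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # squared-error gradient at output
    d_h = (d_out @ W2.T) * h * (1 - h)       # backpropagated to hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())                  # should approach [0, 1, 1, 0]

Nothing in this sketch is biologically realistic, which illustrates the comment above: the networks that actually work are the unrealistic ones.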