
Comments on The MIT Encyclopedia of the Cognitive Sciences (MITECS), philosophy section

[ 17 Jul 2010 ] By now the MITECS text is not online anymore. If you find something here that is interesting and you want to see the actual text in MITECS, you can try to google part of the quote that I give. With some luck you can find the text online. Note also that these comments are pretty old by now.

The text below is intended to be read side by side with the MITECS text. Indented text is always an exact quote from MITECS, and the reader can use search to find the quote in the MITECS text. Where a quote ends with several dots, the comment is about the whole paragraph starting with this text.

Note from the MITECS Executive Editor:

Keep in mind that the current mitecs site is a developmental, unedited site. The final site will be posted this spring.

This page contains comments on the philosophy domain. Other pages contain comments on the other domains.

General comments: An outstanding feature of many articles is that philosophers believe they can understand how the brain works without any reference to its constituents (i.e. neurons). Note that this does not make sense even when investigating the question of minds in general, rather than human minds only. Because the brain is the only thing that is even close to implementing a mind, learning how it does it is the most promising avenue for investigating minds.

Computation and the brain

Two very different insights motivate characterizing the brain as a computer.
The text does not explain at all why these two 'insights' motivate characterizing the brain as a computer. To do that, it would need to claim (and then support the claim) that conclusions from computer models are likely to be applicable to the brain. It does not touch this claim at all.
Broadly speaking, there are two main approaches to addressing the substantive question of how in fact brains represent and compute. One exploits the model of the familiar serial, digital computer, where representations are symbols, in somewhat the way sentences are symbols, and computations are formal rules (algorithms) that operate on symbols, rather like the way that "if-then" rules can be deployed in formal logic and circuit design.
By now, the authors take for granted that research on computers tells us something about how the brain works, without presenting any supporting evidence.

The comparison between sentences and representations inside a computer is nonsense, because these are fundamentally different. Sentences need an interpreter to interpret them before their information can be used, but representations inside a computer do not need interpretation, and can be used by the computer directly with its implementation-level instructions.

The second is rooted in neuroscience, drawing on data concerning how the cells of the brain (neurons) respond to outside signals such as light and sound, how they integrate signals to extract high-order information, and how later-stage neurons interact to yield decisions and motor commands.
That is an extremely narrow-minded view of what mental processes are. They consist of much more than extracting information and generating motor commands.

Computational theory of mind

CTM's proponents view the theory as an extension of the much older idea that thought is MENTAL REPRESENTATION -- an extension whose novel virtue is that it shows us how a commitment to mental states can be compatible with a causal account of mental processes, and a commitment to materialism and the generality of physics......
Note that this whole paragraph is not about compatibility with any evidence, but about compatibility with some philosophical ideas, which are themselves not based on any evidence.
CTM has been touted both for its connections to successful empirical research in cognitive science and for its promise in resolving philosophical problems.
The author carefully avoids explicitly claiming that CTM was actually useful in empirical research, and with good reason. While it helped in making many computer models (and publishing papers about them), it did not give us a single insight about human psychology.
Fodor (1975) has argued that the main argument in favor of the language of thought hypothesis and CTM is the only game in town argument: cognitive theories of language, learning, and other psychological phenomena are the only viable theories we possess, and these theories presuppose an inner representational system.
It is not enough for a theory to be the only game in town: it also has to be useful in research about the system it is about. Since CTM does not give us any insight into how humans think, it is useless. Since it is incompatible with neurobiology, it is not going to give us any insights in the future either. Sticking to it just hinders research on other options.
CTM gained a great deal of currency in the late 1970s and 1980s. Since that time, it has been criticized on a number of fronts.
Note that even in the critical part, the question of compatibility with neurobiology is not mentioned at all.

Connectionism, philosophical issues

General: The only thing we know for sure about the internals of human thinking is that it is based on neurons. Thus, ultimately, psychological explanations and mental representations must be based on the real neural system. This section almost totally ignores this point.

Consciousness

General: As usual, there is no clear definition of the word 'consciousness', or of any related term, so any counter-argument can be countered by claiming that the text meant something else. See the discussion in Methodological points, [6].
The question is: Why should there be something that it is like for certain processes to be occurring in our brains?
That is blatant nonsense. The simple answer is that processes in the brain are carried out by some neural activity, the 'something' is this neural activity, and our 'experience' is the total effect of this neural activity (or, if you want to restrict 'experience' to what you can recall later, the part of the total effect that can be recalled (== reactivated) later). Since the connectivity in the brain is mainly stochastic (variable across individuals), most of the effects are stochastic, and hence 'subjective'. More generally, 'experience' is the effect of the internal workings of the system on its thinking. Note that to be 'experienced' in any sense, the effects themselves must have some effects, which in the brain means they are themselves either neural activity, or some effect on the strength of connections between neurons.

By simply ignoring the simplest answer, the author (and others) make the question much more complicated than it needs to be.
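To make the point about stochastic connectivity concrete, here is a minimal toy sketch in Python (purely illustrative, not a model of real neurons; the names and numbers are made up for the example). The same stimulus drives two 'individuals' whose random connection weights differ, so the downstream activity pattern (the 'effect' of the stimulus, i.e. what is called 'experience' above) is idiosyncratic to each individual.

# Toy illustration only: identical input, different random connectivity,
# therefore different (idiosyncratic, 'subjective') activity patterns.
import numpy as np

n_in, n_out = 8, 8
weights_a = np.random.default_rng(seed=1).normal(size=(n_out, n_in))  # individual A's connectivity
weights_b = np.random.default_rng(seed=2).normal(size=(n_out, n_in))  # individual B's connectivity

stimulus = np.array([1, 0, 1, 0, 0, 1, 0, 0], dtype=float)  # the same external input to both

activity_a = np.tanh(weights_a @ stimulus)  # A's pattern of activity
activity_b = np.tanh(weights_b @ stimulus)  # B's pattern of activity

print(activity_a)
print(activity_b)  # a different pattern for the same stimulus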

So it may appear that subjective facts are not to be identified with the facts that are spelled out in those scientific theories.
Obviously true, but irrelevant, because the theories (physical and neurophysiological) do not describe the idiosyncratic (and hence 'subjective') patterns of activity of specific individuals. It is possible that the author does not realize that the connectivity of each individual is unique.
Why there are any experiential correlates of the neural codes is left as a brute unexplained fact.
This repeats the blatantly nonsensical question, and adds confusion by using confusing vocabulary. First, it is not 'neural codes', it is neural activity; secondly, there are no 'experiential correlates' of neural activity, there is 'experience', which is simply the effects of the neural activity.

Explanatory gap

It's very hard to see how the qualitative character of seeing blue can be explained by reference to neural firings,
Why? This dismisses the simplest answer without giving any reasonable argument.

The implicit reason is probably that this would make 'consciousness' too down-to-earth, and eliminate its (and hence our) magical attributes.

It always seems reasonable to ask, but why should a surface with this specific spectral reflectance look like that (as one internally points at one's experience of blue)?
Another piece of blatant nonsense. The simple answer is mostly random chance, plus some learned associations (for example, 'blue' is necessarily associated with sky and water, because these are blue(ish), and we learn it; this leads to an association with coolness). The author does not give us any reason to believe that there is anything involved except learned associations and random variation.
Contrast this example with what it's like to see blue. After an exhaustive specification of both the neurological and the computational details, we really don't know why blue should look the way it does, as opposed, say, to the way red or yellow looks.
Another piece of blatant nonsense, this one on two accounts. First, we are very far from an 'exhaustive specification' of the neurological details that are common to humans. Secondly, because of the variability between individuals, such a specification is not enough to tell us about the subjective experience of each individual, because that experience results from the parts of the neural system that vary between individuals.
That is, we can still conceive of a situation in which the very same neurological and computational facts obtain, but we have an experience that's like what seeing red or yellow is like.
Any argument that is based on our ability to conceive a situation is nonsense, because our 'ability to conceive' is not limited by natural laws, so it cannot tell us anything about natural phenomena. It is common to use 'thought experiments' in science, but to be useful these must be constrained by the relevant natural laws.

We don't have any reason to believe that the full neurological details of an individual person can be reproduced without reproducing the same experience, and, unless one is a dualist, it is clearly impossible.

Also, since we can imagine a device which processed the same information as our visual systems but wasn't conscious at all, it's clear we don't really know why our systems give rise to conscious experience of any sort, much less of this specific sort.
Again, it is nonsense to use our imagination as if it necessarily correlates with reality, but the main error of this sentence is the implication that the experience we have is of some 'specific sort', rather than randomly variable (apart from learned associations).
Frank Jackson (1982) imagines a neuroscientist, Mary, who knows all there is to know about the physical mechanisms underlying color vision. However, she has been confined all her life to a black and white environment. One day she is allowed to emerge into the world of color and sees a ripe tomato for the first time. Does she learn anything new? Jackson claims it's obvious she does; she learns what red looks like. But if all the information she had before really explained what red looked like, it seems as if she should have been able to predict what it would look like.
Again, it is nonsense to use imagination in arguments about the world, and it can just as easily be imagined that if she really knew everything about color vision she would not learn anything new.

However, the main error here is that knowing everything there is to know about the physical mechanisms underlying color vision would not tell Mary what the patterns of activity in her brain would be when she sees red, or what effects these would have. That is because these are determined by the connectivity of the brain, which varies stochastically between individuals. The only way for her to know what 'red looks like' is to learn and analyze all the connectivity in her own brain, and, if she gets it right, she is not going to learn anything new the first time she sees red.

There are various responses to the explanatory gap.....
As usual, the discussion completely ignores neurobiology.

Functional Decomposition

The organization is no longer aggregative, but sequential, with relatively independent functional units.
There is no evidence for independent functional units. The sentence is not actually a lie, only because the term 'relatively' is open-ended. The author corrects himself to some extent in the last sentence of the section.
Steven Pinker (1994), for example, argues that language is an ability that is relatively independent of cognitive abilities in general. The clear implication of such independence is that it should be possible to disrupt linguistic abilities without impairing cognition, and vice versa. The studies of aphasia exhibit such patterns.
The usual error about dissociation. See in Reasoning errors.
A commitment to some form of functional decomposition or modularity might seem inevitable when dealing with a phenomenon as complex as mental life.
Blatant nonsense. Functional decomposition or modularity are empirical propositions, i.e. they can be true or false, and have to be supported or disproved by observations.

Functional Role Semantics

One of the most important features of our CONCEPTS is that they refer -- that is that they pick out objects in the world.
Since the discussion is about mental representations, CONCEPTS here refers to mental representations of concepts. This introduces one of the typical confusions in the philosophy of mind, where many philosophers (maybe most) fall on their faces.

The point is that mental representations do not refer to anything. What they do is cause patterns of neural activity to arise in the brain, and that is it. This neural activity, in general, generates behavior (including thinking) that can be interpreted as referring to objects in the world. Thus the question is not how the representations refer ('pick out objects in the world'), but how the neural activity generates the right behavior. To ask how representations refer to the world is as stupid as asking how a falling stone computes Newton's law of gravitation, but that does not prevent many philosophers from 'investigating' this 'problem'.

For example, suppose that you and I say "I am ill." One aspect of the meaning of 'I' is common to us, another aspect is different.
Blatant nonsense. The mental representation of 'I' is not shared, while the word 'I' and its meaning ('the person/entity that says/signals this word') are shared. The author confuses the issue by conflating mental representations and words.

Mental representation

Following Peirce (Hartshorne, Weiss, and Burks 1931-1958), we can say that any representation has four essential aspects: it is realized by a representation bearer, it has content or represents one or more objects, its representation relations are "grounded" somehow, and it is interpretable by (that is, it will function as a representation for) some interpreter.
In the brain, all there is is neural activity, which causes more neural activity (and also changes in the strength of connections). There isn't anything like an interpreter. By defining representation as requiring an interpreter, and taking for granted that the brain contains such representations, the author takes for granted an interpreter in the brain, even though this is clearly false.
If we take one of the foundational assumptions of cognitive science to be that the mind/brain is a computational device (see COMPUTATIONAL THEORY OF MIND), the representation bearers of mental representations will be computational structures or states.
Note that this not only takes for granted that the brain is a computational device, which it does explicitly but without justification; it also takes for granted that the brain has representations according to the above definition, i.e. that the brain contains an interpreter.
pre-theoretically conceived, the human cognitive capacities have the following three properties: (a) each capacity is intentional; that is, it involves states that have content or are "about" something;
The usual stupidity (see above in the comments about Functional Role Semantics). Human capacities are no more "about" something than a falling stone is "about" Newtonian mechanics.
Cognitive scientists study not only the contents of mental representations, they are also concerned to discover where this content comes from, that is, what it is about the mind/brain that makes a mental representation of a tree have the content of being about a tree.
The obvious answer is that the representation of a tree is "about a tree" because it is activated by stimuli associated with trees, and activates the appropriate representations to generate the appropriate behavior (e.g. assuming that the tree has a hard trunk, though not as hard as metal, assuming it has roots, etc.). I would assume that this is Functional Role Semantics, but from the section about it (see above) it seems that the supporters of Functional Role Semantics still believe the error that representations 'pick out objects in the world' somehow, and certainly do not consider neural activity.
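The following minimal Python sketch (illustrative only; the features, threshold and expectations are invented for the example) shows the sense in which a 'tree' unit is 'about' trees: tree-like stimuli activate it, and its activation in turn makes the appropriate expectations available for behavior. Nothing in the unit itself 'refers' to anything.

# Toy sketch: 'aboutness' as an input/output role, not an intrinsic property.
TREE_FEATURES = {"tall", "trunk", "leaves", "rooted"}

TREE_EXPECTATIONS = {        # what an active 'tree' unit makes available downstream
    "trunk_is_hard": True,
    "as_hard_as_metal": False,
    "has_roots": True,
}

def tree_unit_active(stimulus_features):
    # the unit fires when enough tree-associated features are present
    return len(TREE_FEATURES & stimulus_features) >= 3

def respond(stimulus_features):
    # downstream behavior uses the expectations only when the unit is active
    return TREE_EXPECTATIONS if tree_unit_active(stimulus_features) else {}

print(respond({"tall", "trunk", "leaves", "green"}))  # tree-like input -> expectations
print(respond({"small", "furry", "moving"}))          # not tree-like -> {}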
Not all philosophers interested in cognitive science regard the positing of mental representations as being necessary or, even, unproblematic....
The critical part does not contain any reference to neurons, either.

Functionalism

Given functionalism, it may be true that every individual mind is itself a physical structure. Nevertheless, by the lights of functionalism, physical structure is utterly irrelevant to the deep nature of the mind.
Blatant nonsense. Even by the lights of functionalism, the activity of the mind is totally dependent on its physical structure. Thus the physical structure can be 'utterly irrelevant to the deep nature of the mind' only if what the mind does is 'utterly irrelevant' to its 'deep nature'.
Consequently, functionalism is foundational to those cognitive sciences that would abstract from details of physical implementation in order to discern principles common to all possible cognizers, thinkers who need not share any physical features immediately relevant to thought. Such a research strategy befriends Artificial Intelligence since it attends to algorithms, programs and computation rather than cortex, ganglia and neurotransmitters.
The first sentence is sensible. The second one is blatant nonsense, and contradicts the first. It is the 'cortex, ganglia and neurotransmitters' that implement minds, so if we want to 'abstract from details of physical implementation' we need to investigate them. The 'algorithms, programs and computation' of AI do not, as yet, implement anything like a mind, so they are not obviously useful.
The meaning of the sentence -- what it represents -- depends on the meaning of those words together with its structure. So when we learn a language we learn the words together with recipes for building sentences out of them. We thus acquire a representational system of great power and flexibility, for indefinitely many complex representations can be constructed out of its basic elements. Since mental representation exhibits these same properties, we might infer that it is organized in the same way.
Blatant nonsense, even with the qualification 'might'. Languages were developed for communicating thoughts, so obviously they will have some features in common with thoughts. This does not even hint at similarity in their organization. This is an example of 'mis-analyzing the null hypothesis' (Reasoning errors).
A minimal language of thought hypothesis is the idea that our capacities to think depend on a representational system, in which (i) complex representations are built from a stock of basic elements;
The quoted sentence is vague enough that it is not clearly false, because we may take each 'basic element' to correspond to a neuron, in which case the sentence is unproblematic. However, that makes it trivial, and it is not what the proponents of the Language of Thought have in mind. Any other interpretation, however, is false, because the stochastic connectivity in the cortex rules out the existence of units that can be called 'basic elements'. Thus the Language of Thought idea is obviously false, and seems plausible only because researchers ignore the stochastic connectivity of the cortex.
So Fodor thinks of Mentalese as semantically rich, with a large, word-like stock of basic units. For example, he expects the concepts truck, elephant and even reactor to be semantically simple. Moreover, this large stock of basic units is innate.
If Fodor really says that (the author does not give a specific reference), that is hyper-nonsense. There is no way truck or reactor could be innate, because these concepts have not existed long enough to affect evolution at all.
The Language of Thought hypothesis has been enormously controversial....
As usual, even the critical section ignores neurobiology altogether.

Modularity of mind

General: Even though the author opposes modularity in general, it seems she hasn't worked out yet that dissociation does not show modularity (see Reasoning errors).

Twin Earth

But what the twins say is different: Oscar says that water is good for plants, and that's what he believes. What T-Oscar says and believes is something we might put as "T-water is good for plants," where "T-water" is a non-technical word for XYZ.
Blatant nonsense. What the twins say is identical. It is our (the people who imagine the Twin Earth) evaluation of the meaning of their utterances that is different. The author, along with, it seems, the rest of the philosophers, seems unable to distinguish between what the twins say and our interpretation of it.

It may seem that what the twins mean is different, but that is false as well. Both twins, when they utter "water", mean something like "the ubiquitous, transparent kind of liquid that rain, rivers and oceans on my planet are made of." Only the evaluation of this meaning yields different results (XYZ on Twin Earth, H2O on Earth), and that is because it contains an indexical expression ('my planet').

Failing to differentiate between the meaning of a word and the result of evaluating this meaning is equivalent to failing to recognize the significance of the evaluation process. It is this failure that causes most of the paradoxes that are associated with the meaning of the word MEANING (see Myths and misconceptions).
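The distinction between a meaning and the result of evaluating it can be made concrete with a small Python sketch (illustrative only; the environment table is invented for the example). Both twins carry the same indexical meaning for "water"; only evaluating it against different environments yields different substances.

# Toy sketch: same meaning, different results of evaluation.
UBIQUITOUS_LIQUID = {          # facts about each environment
    "Earth": "H2O",
    "Twin Earth": "XYZ",
}

def meaning_of_water(my_planet):
    # the shared meaning: "the ubiquitous transparent liquid of my planet";
    # 'my_planet' is the indexical part, supplied by the speaker's environment
    return UBIQUITOUS_LIQUID[my_planet]

print(meaning_of_water("Earth"))       # Oscar's evaluation   -> H2O
print(meaning_of_water("Twin Earth"))  # T-Oscar's evaluation -> XYZ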

Narrow content

This view is supported by Putnam's TWIN EARTH example, according to which the referents of our terms, and hence the truth conditions and meanings of utterances and the contents of our thoughts, depend on conditions in our environment and so are not determined by (do not supervene on) our individual, internal psychology alone.
See the discussion of TWIN EARTH above. This also introduces the other confusion that philosophers make, between words and the 'contents of the mind'. It does not make sense at all to talk about the 'meaning of the contents of the mind', because the term 'meaning' is applicable only to entities which are used for communication between intelligent agents. This confusion, together with the confusion between the meaning of words and the evaluation of this meaning, makes this discussion plain waffle.

Nativism

Chomsky and the INNATENESS OF LANGUAGE
For the quality of Chomsky's arguments, see here.

Natural kinds

In particular, children do not tend to classify objects merely on the basis of their most obvious observable features, features which may be unrevealing of natural kind membership; and when children are told that two individuals are members of a single category, they tend to assume that these individuals will share many fundamental properties, even when the most obvious observable features of the objects differ. Some authors have suggested that these tendencies may be innate.
All that the evidence shows is that children can recognize features that are not obvious. To jump from this to innate tendencies is a gross overinterpretation.
Relevant here too is work in cognitive anthropology. Atran's work (1990) on folk taxonomies reveals deep similarities in the ways in which different cultures divide up the biological world. More than this, these taxonomies have much more than just a passing resemblance to the more refined taxonomic categories of the biological sciences.
Without explicitly doing so, the author invites the reader to commit the 'mis-analyzing the null hypothesis' error (Reasoning errors). In this case, the assumption is that without innate tendencies the folk taxonomies would not tend to be the same. This is obviously false, as these taxonomies are based on efforts to understand the same set of data, and hence will be similar insofar as they are successful.

Intentionality

General: Intentionality is (obviously) a 'virtual' attribute, in that it is attributed by 'us' to some entities, and is not an intrinsic property of the entity itself. The author seems to be unable to understand this point.
The term "intentional" is used by philosophers, not as applying primarily to actions, but to mean "directed upon an object." More colloquially, for a thing to be intentional is for it to be about something.
Being "directed upon an object" or about something are both virtual attributes, which are attributed from outside to some entity. In some cases, it can be argued that some intrinsic properties of the entities make them naturally regarded as about something, but in this case the discussion should be about these intrinsic properties. The author does not touch these intrinsic properties at all.
Perhaps suspiciously, the instrumentalist views are not usually extrapolated to the aboutness of perceptual states or of representations posited by cognitive scientists; they are restricted to commonsense beliefs and desires.
Blunt demagoguery. Maybe the authors it is arguing with (Dennett and Davidson, see the previous paragraph of the text) limit their discussion to commonsense beliefs and desires, but the 'virtuality' of the 'intentionality' attribute holds unproblematically for perceptual states as well.
They do shed the burden of psychosemantics, i.e., of explaining how a particular brain state can have a particular content, but they do no better than did the representationalist views in explaining how thoughts can be about abstracta or about nonexistents.
Blatant nonsense. There is no problem in attributing intentionality about abstracta or nonexistents, and the author does not give any hint of why he thinks there is one.

Propositional attitudes

General: Like the section about Intentionality (above), this author fails to realize that being a propositional attitude is a virtual attribute, attributed to the mental state from outside.

Psychological Laws

General: The author ignores the main problem with finding psychological laws, which is the psychological variability of humans.

Qualia

The first claims that one can conceive of the qualitative features of one's pains or perceptions in the absence of any specific physical or functional properties (and vice versa), and that properties that can be so conceived must be distinct (Kripke 1980).
I haven't read Kripke (1980), and the way the claim is put here, it is not obvious what it actually claims. In particular, it is not obvious whether 'absence of any specific physical or functional properties' means (a) absence of any physical changes in the brain, in which case it is a dualistic claim with no empirical basis, or (b) an absence of conception of specific physical or functional properties, in which case it is a trivial and uninteresting claim about the distinction between conceiving pains and perceptions (through sensory and nociceptive input) and conceiving internal states (directly).
The second argument claims that one cannot know, even in principle,
See the discussions in the comments about EXPLANATORY GAP and WHAT-IT'S-LIKE.
also commonly thought that (sincere) beliefs about our own qualia have special authority (that is, necessarily are for the most part true), and are also self-intimating (that is, will necessarily produce, in individuals with adequate conceptual sophistication, accurate beliefs about their nature). Insofar as they have these special epistemic features, qualia are importantly different from physical properties such as shape, temperature, and length, about which beliefs may be both fallible and uncompelled. Can they nonetheless be physical or functional properties?
Blatant nonsense. The 'beliefs about our own qualia' are 'necessarily true' because, by definition, there is no way to falsify them. They are self-intimating because they affect our thinking processes directly, not through the senses.

What-it's-like

First, we want to know what distinguishes those states that there is something it is like to be in from those there is nothing it is like to be in; those states that have a subjective character from those that don't.
Blatant nonsense. The distinction of "states that there is something it is like to be in" is simply an arbitrary distinction, used to refer to states of systems that can, at least in principle, "report" them in some way (not necessarily linguistic). Which systems are regarded as able to "report" their states, and which of their states are regarded as ones "there is something it is like to be in", varies arbitrarily between speakers, with the exception that humans always count as "reporting" systems, and all the states they can report linguistically are "reportable".

It is not really wrong to have an arbitrary distinction. The blatant nonsense is the claim that there is something in the states themselves that distinguishes them.

But there seems to be another question we want answered; namely, what's it like for the bat to perceive the world in this way? To this question, it doesn't seem as if the computational theory can provide even the beginning of an answer.
Another piece of blatant nonsense. We don't have any reason to believe that a total understanding of the circuitry of a bat would not allow us to know everything there is to know about the way the bat perceives the world. That is what our understanding of the world (as a physical system) tells us, and all that the author (and the original paper) can give as a counter-argument is an appeal to intuition.
Though she knows all there is to know about the physical mechanisms underlying color perception, it seems that she wouldn't know what it's like to see red until she actually sees it for herself. Again, it seems as if one has to undergo conscious experiences to know what they're like. But why should that be?
As discussed above (in the comments about EXPLANATORY GAP), that is because she doesn't know the connectivity in her own brain.
How seriously one treats the problem of subjectivity in the end is largely a matter of one's attitude toward philosophical intuitions.
The author admits that it is just an intuition that creates the problem, but only after the main discussion, as if this were not an important point. The author does not even try to give us any indication of why he (or anybody) thinks that in this case we should prefer the intuitions of philosophers to our scientific knowledge.

Radical interpretation

If we knew what the foreigners meant by their sentences we could discover their beliefs and desires; if we knew their beliefs and desires we could discover what they meant.
That is ridiculous, because we will never be in a situation where we don't know any of the beliefs of the aliens. We can be sure that they believe that legs are for walking and the mouth is for eating, that women give birth, that the sun comes up each day in the east, that rain comes from clouds, that cutting yourself hurts, and that animals move and plants don't. In short, we always know a large fraction of the beliefs of the aliens.
Knowing neither, 'we must somehow deliver simultaneously a theory of belief and a theory of meaning' (Davidson 1984: 144).
Nonsense. We don't need theories of belief and meaning to learn the language of the aliens. We need to form associations between concepts and the appropriate linguistic terms. For that we need, based on the assumptions we can make about the way the aliens think and on their behavior, to figure out what concept they are thinking of when they utter a specific term.

As a result of this nonsense, the following discussion is completely irrelevant to the actual problem.
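As an illustration of learning word-concept associations without a prior 'theory of meaning', here is a minimal cross-situational sketch in Python (illustrative only; the utterances and concepts are invented for the example). Each observation pairs an alien utterance with the concepts we can already infer from the situation; the meaning guessed for a word is simply its most consistent association.

# Toy sketch: word meanings guessed from co-occurrence with inferable concepts.
from collections import defaultdict

observations = [
    (["gavagai", "blik"], {"rain", "cloud"}),
    (["gavagai"],         {"rain"}),
    (["blik", "fnord"],   {"cloud", "walking"}),
]

cooccurrence = defaultdict(lambda: defaultdict(int))
for words, concepts in observations:
    for word in words:
        for concept in concepts:
            cooccurrence[word][concept] += 1

def best_guess(word):
    # the concept most often present when the word is uttered
    counts = cooccurrence[word]
    return max(counts, key=counts.get) if counts else None

print(best_guess("gavagai"))  # -> 'rain' (its most consistent association)
print(best_guess("blik"))     # -> 'cloud'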

Reference, theories of

General: This discussion makes the usual confusion between words and thoughts, but the main confusion is between the meaning of words and the result of evaluating this meaning. See the discussion above in the comments about the Twin Earth section, and in Myths and misconceptions.

Sense and reference

General: The problems that are discussed in this section arise because of the confusion between the meaning of words and the result of evaluating this meaning, as in the previous section.

Simulation vs. theory-theory

General: The basic errors underlying the discussion are the 'sameness assumption' error (Reasoning errors) and the 'simplicity assumption' (Reasoning errors). The way humans understand other humans varies between individuals, and for each individual it is a complex combination of many approaches. Thus there cannot be a single theory that explains how humans do it, and the large amount of contradictory (and variable) evidence shows this. The researchers, however, seem unable to appreciate this point.

Self

General: This ignores basic facts. See the discussion in Myths and misconceptions.