
Comments on The MIT Encyclopedia of the Cognitive Sciences (MITECS): Linguistics and Language

[ 17 Jul 2010 ] By now MITECS is no longer online. If you find something here that is interesting and you want to see the actual text in MITECS, you can try to google part of the quote that I give; with some luck you can find the text online. Note also that these comments are pretty old by now.

The text below is intended to be read side by side with the MITECS text. Indented text is always an exact quote from MITECS, and the reader can use search to find the quote in the MITECS text. Where a quote ends with several dots, the comment is about the whole paragraph that starts with this text.

Note from the MITECS Executive Editor:

Keep in mind that the current mitecs site is a developmental, unedited site. The final site will be posted this spring.

This page contains comments on the Linguistics and Language domain; other pages contain comments on the other domains.

General

Many of the texts in this domain take for granted one or more of the following baseless assumptions, which are in contradiction with the stochastic connectivity in the cortex:
  1. That some function of human cognition can be described by a reasonably compact formal description. This ignores the possibility that the function is variable between individuals, i.e. it commits the 'sameness assumption' error (see in Reasoning errors, [3.1]). This assumption is kept religiously in the face of contradictory evidence, which causes every theory built on it to fail. This error is made by almost all the sections in this domain.

  2. That there is a universal grammar. That assumption in effect includes the previous assumption.

  3. That there is in the brain a LEXICON, as a separate entity.

Introduction: linguistics and language

General: This text argues for the innateness of language competence, but in some places the language it uses is ambiguous, e.g. does "grow in us" mean innate or not? In cases of ambiguity, I assume that the meaning is supposed to be innate-oriented, because this is the main thrust of the text, and that is the way readers are likely to interpret it.

A common thread through the text is ignoring the functional significance of the "universals", i.e. how they are useful for communication. In some cases the authors do comment on this, but do not actually try to consider the question seriously.

This fact makes no sense without the assumption that our mind must be especially equipped with something, a cognitive device of some sort, that makes us so successful at the task of learning words. This cognitive device must be quite specialized for such a task, as we are not as good at learning poems or the names of basketball players (cf. WORD MEANING, ACQUISITION OF).
Blatant nonsense. Even a modest sports fan who watches basketball, say, two hours a week, can easily learn to recognize several hundred basketball players in a year. If he spent on it as much time as humans spend "practicing" language, he could easily learn several thousand names in a year, i.e. his learning rate would be comparable to the rate of learning words. Note that recognizing basketball players is a far more complex task than recognizing words.

Another way to think about it is to compare it to a computer: a computer can easily acquire the meaning of 45,000 words, but would find it difficult to learn to recognize basketball players.
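As a rough illustration (the word list and "meanings" below are synthetic placeholders, not real lexical data), a few lines of code suffice to store and query an association table of this size in a fraction of a second:

```python
import time

# Build a toy "lexicon" of 45,000 word-meaning associations.
# The entries are synthetic placeholders; the point is only the speed
# and ease with which a computer stores this many associations.
start = time.time()
lexicon = {f"word{i}": f"meaning{i}" for i in range(45_000)}
build_time = time.time() - start

print(f"Stored {len(lexicon)} associations in {build_time:.4f} seconds")
print(lexicon["word123"])   # lookup is effectively instantaneous
```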

The trick of exaggerating the difficulty of learning words is one of the most common tricks in the argument for innateness. Part of the reason that this trick works is that when people think about learning words, they think about memorizing an arbitrary list of words. This is indeed a very difficult task, but is not the way a child learns. The child learns words as part of his experience of the world, which is a far more effective way of learning.

The world of sounds that make up words is similarly complex.
The capability to distinguish between phonemes is the only capability in language comprehension for which there is real evidence that it is innate (we also have, in language production, the innate capability to coordinate the whole vocal tract). Psycholinguists use this example in many cases as proof that language processing in general is innate.
There must be a form of specialized, unconscious knowledge we have that makes us say "Yes, 'him' can refer to the subject in (1) but not in (3)." A very peculiar intuition we have grown to have.
The interesting question is whether this knowledge is learned or innate, and in this case it is clearly learned, when we learn how to use 'persuade' and 'promise'. The sentence about a 'peculiar intuition' is completely spurious, because this distinction is clearly functional.
For example, the patterns of word order are quite limited. The most common basic orders of the major sentence constituents are Subject-Verb-Object (abbreviated as SVO) and SOV.
That is clearly a functional constraint, because the subject and the verb are the most prominent parts of the sentence, and hence tend to come first.
Another language universal one might mention is that all languages have ways of using clauses to modify nouns (as in "the boy that you just met," where the relative clause "that you just met" modifies the noun "boy").
Another clearly functional universal.
It seems plausible to maintain that universal tendencies in language are grounded in the way we are; this must be so for speaking is a cognitive capacity, that capacity in virtue of which we say that we "know" our native language.
This is an example of an ambiguous statement. The "way we are" may be interpreted as "the way our genes make us" (i.e. innately) or plain "the way we are", which for each individual also includes his own personal history (including learning). In the second sense this statement is trivially true. In the first sense it is not obviously true, and is implausible if you take into account the stochastic connectivity in the cortex.
A term often used in this connection is "linguistic competence."
So is "linguistic competence" means innate or not? In the rest of the text I assume it means "innate linguistic competence," because the interpretation is in line with the thrust of the text.
Another important aspect of the dynamic character of language is the fact that a speaker can produce and understand an indefinite number of sentences, while having finite cognitive resources (memory, attention span, etc.).
As this sentence is written, it is trivial and uninteresting, unless 'indefinite' is interpreted as 'infinite'. In the latter case, it is simply nonsense, as humans are finite systems, and no such finite system can produce or comprehend an infinite number of sentences. The correct statement would be something like 'can produce and understand a huge number of sentences', and this version is not problematic. Stressing the 'infiniteness' of human linguistic capacity is a common psycholinguistic trick.
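To put a rough number on "huge but finite" (the figures here are arbitrary illustrations, and the count is only an upper bound on word strings, not on grammatical sentences):

```python
# Back-of-envelope: with a vocabulary of 80,000 words (the figure quoted
# later in the MITECS text) and sentences of up to 20 words, the number of
# word strings is bounded by a finite, if astronomically large, number.
# Most such strings are of course not grammatical sentences.
vocabulary = 80_000
max_length = 20
upper_bound = sum(vocabulary ** n for n in range(1, max_length + 1))
print(f"Upper bound on strings up to {max_length} words: about 10**{len(str(upper_bound)) - 1}")
```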
How is this possible? We must assume that this happens by analogy with the way we, say, add two numbers we have never added before.
If the human linguistic capacity were really infinite, there would be some sense in this statement. With a finite system, it is just confabulation.
But the algorithm for adding we have learned through explicit training. The one for speaking appears to grow spontaneously in the child.
Ambiguous again. It is not obvious whether 'spontaneously' is supposed to mean only 'not a result of explicit training' (as the context implies), or whether it is supposed to exclude any learning (as the normal meaning of 'spontaneously' implies). The former interpretation is true but uninteresting; the latter is not obviously true (and not obviously false).
The fact that linguistic competence doesn't develop through explicit training can be construed as an argument in favor of viewing it as a part of our genetic endowment (cf. INNATENESS OF LANGUAGE). This becomes all the more plausible if one considers how specialized the knowledge of a language is and how quickly it develops in the child.
The specialization shows nothing, because all knowledge is specialized unless some intelligent process generalizes it. Because linguistic knowledge is mostly arbitrary, it is hard to generalize. The 'speed of development' would show anything only if it could be shown that it is too fast for general human cognitive abilities. Psycholinguists 'show' this by using nonsense arguments, e.g. the one about the speed of learning words above.
In a way, the child should be in a situation analogous to that of somebody who is trying to break the mysteries of an unknown communication code. Such a code could have in principle very different features from that of a human language. It might lack a distinction between subjects and objects. Or it might lack the one between nouns and verbs.
Blatant nonsense. There is no reason why that should make any difference to the child, because he does not enumerate all the possible codes. He tries to communicate, and learns mainly by pattern matching.

This statement is even worse, because a natural language (i.e. a language that is used to communicate between intelligent agents about the world in general) must have verbs and nouns and subjects and objects, because the simplest way of describing the behaviour of the world is in terms of "agents" (nouns, subjects) acting (verbs) in relation to other "agents" (nouns, objects) ("agents" here include non-living objects as well). That is a typical example of ignoring the functional role of natural language.

Many languages of practical use (e.g. many programming languages) are designed just that way.
But programming languages are not natural languages, and cannot be used to describe the world. Either the author is unable to make this distinction (unlikely for a professional linguist), or he is intentionally misleading the reader.
The range of possible communication systems is huge and highly differentiated. This is part of the reason why cracking a secret code is very hard.
Cracking a secret code is difficult because the secret code is designed to be difficult. In contrast, natural languages are not designed to be difficult. Here the author intentionally misleads the reader.
As hard as learning an unfamiliar language as an adult.
See in myths and misconceptions for why adults find it difficult to learn a second language. Note that it is not actually that difficult, and most adults can do it quite easily if they really need to.
Yet the child does it without effort and without formal training. This seems hard to make sense of without assuming that, in some way, the child knows what to look for.
It is easy to make sense of if you note that humans are natural pattern matchers (because neural networks are natural pattern matchers, and that is what the brain is).
This is one of the planks of what has come to be known as GENERATIVE GRAMMAR, a research program started in the late 1950s by Noam Chomsky, which has proven to be quite successful and influential.
It is certainly influential, and successful in the sense that many papers were published about it. In the sense of giving us insights into the working of the brain, it is a total failure, because it did not give us any such insight.
Humans seem to be endowed with a powerful all-purpose computational device, very good at extracting regularities from the environment. Given that, one might hypothesize that language is learned the way we learn any kind of algorithm: through trial and error.
Pattern matching, not trial and error. This is another case of intentionally misleading the reader.
According to this view, the child acquires language similarly to how she learns, say, doing division, the main difference being in the nature of the input.
Division is rarely learned spontaneously, so this is a spurious example. More appropriate examples would be learning to recognize family members and other objects in the environment and their behaviour. The reason that division is much more difficult than language is that it is more difficult to see any patterns in it.
As soon as reflexives and non-reflexive pronouns make their appearance in the child's speech, they appear to be used in an adult-like manner (cf. Crain and McKee 1985; Chien and Wexler 1990; Grodzinsky and Reinhart 1993).
This is because these are simple patterns that are easy to learn.

Evolution of language

Children are remarkably proficient at obeying subtle syntactic and morphological constraints for which there is little evidence in the sentences they hear (e.g., Crain 1991), suggesting that some capacity for language has emerged through biological evolution.
Nonsense, because children are 'proficient at obeying subtle constraints' in other domains as well, so this capacity is general. Psycholinguists simply ignore the complexity of other skills that children learn.
Could this capacity have emerged as an accidental result of the large brains that humans have evolved, or as a by-product of some enhanced general intelligence? Probably not; there are people of otherwise normal intelligence and brain size who have severe problems learning language (e.g., Gopnik 1990), as well as people with reduced intelligence or small brains who have no problems with language (e.g., Lenneberg 1967).
That is a nonsense argument, because the difference in mental capacity between the most intelligent human and the least intelligent human is very small compared to the difference between humans and other species.
Furthermore, the human language capacity cannot be entirely explained in terms of the evolution of mechanisms for the production and comprehension of speech.
Of course not, you also need general intelligence.
Although the human vocal tract shows substantial signs of design for the purpose of articulation -- something observed both by DARWIN and by the theologian William Paley, though they drew quite different morals from it -- humans are equally proficient at learning and using SIGN LANGUAGES (Newport and Meier 1985).
So how does this refute the hypothesis that the capacity for language is made of mechanisms for the comprehension and generation of speech plus general intelligence? Clearly, we also have mechanisms for the comprehension and generation of SIGN LANGUAGE (but they are more cumbersome to use). That is an example of blatant nonsense, relying on the reader to get confused and hence accept the argument without understanding it.
Does language show signs of complex adaptive design to the same extent as organs like the hand and the eye?
Here the author makes the usual confusion of regarding language as a biological entity. It may be claimed that 'language' here is a shorthand for 'the capacity for using language', but the following text shows that this is wrong.
Many linguists would claim that it does, arguing that language is composed of different parts including PHONOLOGY, MORPHOLOGY, and SYNTAX,
PHONOLOGY, MORPHOLOGY and SYNTAX are parts of language, not of the capacity for using language, in the same way that wheels are part of a car, but not part of the capacity for driving a car. By considering language as a biological entity, the idea that its components are biological entities as well is sneaked in.
The conclusion that language decomposes into distinct neural and computational components is supported by independent data from studies of acquisition, processing, and pathology (Pinker 1994).
There is no evidence for distinct neural components, except that the Wernicke and Broca areas are important, mainly for comprehension and generation respectively. The distinction of computational components is based on computational models, which have no relevance to the brain.

Note that the reference to Pinker is to a popular book, not a refereed paper. Some of Pinker's ideas and methodology are discussed in Psycholinguistics blatant nonsense examples, [2.3], [2.11], [2.12], [2.13].

Based on these conclusions, some scholars have argued that language has evolved as a biological adaptation for the function of communication (Newmeyer 1991; Pinker and Bloom 1990). Others have proposed instead that the ability to learn and use language is a by-product of brain mechanisms evolved for other purposes,
Considering that we know that the Wernicke and Broca areas are important in spoken language, mainly for comprehension and generation respectively, the simplest suggestion is that we have specialized regions for the comprehension and generation of phonemes, and that the capacity for language is made of these plus general intelligence. This hypothesis is not mentioned at all.

The reason for this omission is that the author (and others) wants to find systems specialized for language, and the only candidates in the brain are the Broca and Wernicke areas. Hence he does not want to attribute any specific function to these regions, because that would mean that there is no evidence for specialized regions for the other functions of language.

Innateness of language

General: the main objection to the innateness hypothesis is the stochastic connectivity in the cortex. The author totally ignores this point.
Thus (4a,b) seem to have identical surface structures, yet in (4a) Mary is the subject of please (she will do the pleasing) and in (4b) Mary is the object of please (she will be pleased). How will a learner learn this, since it seems that the information isn't directly provided to the learner in the surface form of the sentence?

(4) a. Mary is eager to please.
b. Mary is easy to please.

A typical piece of blatant nonsense. The difference is identified by the reader by using his/her knowledge of the meanings of the words 'easy' and 'eager', and of how they are used. This meaning and usage is English-specific, so cannot be innate. See in Psycholinguistics blatant nonsense [2.2] and [2.5] for similar examples.
The basic results of the field include the formal, mathematical demonstration that without serious constraints on the nature of human grammar, no possible learning mechanism can in fact learn the class of human grammars.
This is an example of somewhat more sophisticated demagoguery, with the confusion based on the term 'human grammar'. What does this term mean in this sentence? The sentence makes sense if 'human grammar' means the grammar that is used in human language (sense 1), in which case it tells us that human languages must use a restricted set of grammars (otherwise they cannot be learned), but does not tell us anything about the human mental systems. However, 'human grammar' is more plausibly interpreted as meaning the innate grammar (sense 2), in which case the last part of the sentence does not make sense. The author hopes that the reader will get confused, and will accept constraints on human grammar (sense 2) by accepting the sentence with human grammar (sense 1).

The literature does contain texts that try to prove the claim that the innate grammar must be constrained in order for the grammar of a language to be learnable, but these are uniformly nonsensical. See examples in Psycholinguistics blatant nonsense [2.4], [2.9-2.12].

In addition to the APS, there are a number of other arguments for the Innateness Hypothesis. These include (1) the similarity of languages around the world on a wide array of abstract features, even when the languages are not in contact, and the features do not have an obvious functional motivation;
There isn't any evidence for similarities which are not either a result of contact, or functionally significant, or explicable by chance. The author does not offer any reference here, and in general this point is not discussed in psycholinguistics (see for example).
(2) the rapid and uniform acquisition of language by most children, without instruction (see PHONOLOGY, ACQUISITION OF; SYNTAX, ACQUISITION OF; SEMANTICS, ACQUISITION OF) whereas many other tasks (e.g. problem solving of various kinds) need instruction and are not uniformly attained by the entire population.
First language learning is simply an easy task, and extremely useful for the child, so it is done by almost all children. The only difficult part is learning to make the correct distinctions between phonemes, which is also learned, but based on dedicated areas (mainly the Wernicke area).

Poverty of stimulus arguments

See in Psycholinguistics blatant nonsense [2.3] and [2.10].

Syntax, acquisition of

General: This assumes the innateness of principles and setting of parameters, but towards the end actually tries to argue for it. See in Psycholinguistics blatant nonsense [2.15].

Word meaning, acquisition of

The vocabulary of a monolingual high school graduate is in the neighborhood of 80,000 words (Miller 1996). This number is impressive --
Why is this impressive? A computer can learn this number of associations in a fraction of a second.

The impressive thing is the knowledge that the graduate has, which he uses when he uses language as well as when doing other tasks, but that is not what psycholinguists build theories about.

Quine (1960) gives the example of hearing a word, "gavagai," in an unknown language under the most transparent of circumstances; let us say, only and always while viewing a rabbit. A first thought is that the listener would be warranted in supposing that this word means "rabbit." But Quine points out that gavagai has a logical infinity of possible meanings, including rabbit tails or legs, rabbit shapes, colors, or motions, temporal rabbit-slices, and even undetached rabbit parts. Since all of these conjectures for the meaning are consistent with the listener's experience, how can he or she zoom in on the single interpretation "rabbit"? In actual fact, real children are apparently faced with just this problem, yet they seem to converge just about unerringly on the adult interpretation -- that the word refers to the whole animal. But so saying leaves the problem raised by Quine unanswered.
Another piece of blatant nonsense. There are two obvious reasons why the word "gavagai" is interpreted as "rabbit":
  1. None of the other options is consistent with our experience of the way people use words. While it is logically possible that some people talk about rabbit tails or colors or slices whenever they see a rabbit, our experience tells us that nobody actually does that. This is how adults know the word refers to the rabbit.
  2. Because of the correlations between all the features of the rabbit (they tend to appear, move and disappear together), we tend to perceive it as a unit. In addition, the rabbit's mobility makes it an interesting object (compared to other objects). As a result, the 'moving unit' (the rabbit) is what we (and children) think of when we see a rabbit, and hence we tend to associate with it any word that is correlated with it (see the sketch below).
I am not sure what Quine originally intended to show in his example, but it does not demonstrate any problem for learners.
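A minimal sketch of the second point (the scenes and counts are invented purely for illustration, not a model of any real experiment): a learner that simply counts co-occurrences between heard words and attended objects ends up associating "gavagai" with the whole rabbit, because the whole rabbit is the one candidate that is reliably present:

```python
from collections import defaultdict

# Each "scene" pairs the heard word with whatever the learner is attending to.
# The whole rabbit is salient in every scene; other candidates (tail, grass)
# come and go. Data are invented purely for illustration.
scenes = [
    ("gavagai", {"rabbit", "tail", "grass"}),
    ("gavagai", {"rabbit", "grass"}),
    ("gavagai", {"rabbit", "tail"}),
    ("gavagai", {"rabbit"}),
]

counts = defaultdict(int)
for word, attended_objects in scenes:
    for obj in attended_objects:
        counts[(word, obj)] += 1

best = max(counts, key=counts.get)
print(best, counts[best])   # ('gavagai', 'rabbit') 4 -- the whole rabbit wins
```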
Matters get worse. Contrary to what is often assumed, words are typically not presented in such "transparent circumstances." Adult speech to young children even in middle-class American homes is frequently about the past or the future, when the word's referents often are not in view (Beckwith, Tinker, and L. Bloom 1989).
Here and in some of the following passages, the authors are confused about what is needed for learning the meaning of a word: obviously, what is needed is an association between thinking about the referent and hearing the word. Viewing an object is just one way of getting to think about it, and there are other ways of getting to think of an object, or any other concept. The authors seem to be completely unable to consider this point.
The solution to these puzzles must involve attributing certain powers to very young children. Some of these are conceptual. Surely children enter the language-learning situation equipped with natural ways of interpreting many things, properties, and events in the world around them.
If the word 'natural' means 'innate', then the last sentence is baseless, because no evidence was given that these 'ways of interpreting' are not learned. The sentence can be made obviously true by widening the meaning of 'natural' to something like 'not dependent on conventions'.
Quite surprisingly, part of the solution is that even infants under two years of age will not passively associate all and only new sounds with all and only newly observed objects or actions.
Of course not. As I mentioned above, they associate sounds with what they think about at the time, which is not the same as 'newly observed object'.
For example, if 18-month-olds hear a novel word while they are playing with a novel toy, they will assume that the word names that toy only if the speaker is also attending to it: If the speaker is looking at a different object, children will spontaneously follow her line-of-regard, and assume that the object she is looking at is the referent of the new word (Baldwin 1991).
A clear example where what the child thinks about is the determining factor. Following the speaker's gaze to the other object makes the child think about the other object.

Language Acquisition

it is striking that close-to-adult proficiency is attained by the age of four to five years despite large differences in children's mentalities and motivations,
With all due respect to four- to five-year-old children, it is ridiculous to claim that they attain 'close-to-adult proficiency'. Psycholinguists commonly 'prove' this by simply regarding any difference in proficiency as irrelevant.
Language acquisition begins at birth, if not earlier.
Typical confusion introduced by regarding language as a single entity. The capabilities that are acquired around birth are very simple, and even animals which are far less intelligent than humans can acquire them. By treating language as a single entity, the author (and others) tries to lead the reader to think this way: (1) only humans have language, (2) humans acquire language around birth, and hence (3) already at birth, humans acquire something that animals can't acquire. The last statement is simply false, and the argument is based on games with the word 'language'.
In the first several months of life, they discriminate among all known phonetic contrasts used in natural languages, but this ability diminishes over time such that by about 12 months, children distinguish only among the contrasts made in the language they are exposed to.
It should be noted that discriminating between phonemes is one of the few special human abilities that is clearly innate. Using 'language' as a single entity allows psycholinguists to claim innateness by referring to phenomena that are based on this ability.
In general, the acquisition sequence does not seem to differ for spoken and signed languages, suggesting that the language-learning capacity is geared to abstract linguistic structure, not simply to speech (see SIGN LANGUAGES).
Another piece of blatant nonsense. With the exception of phoneme-discrimination, the sequence is simply from simple to complex operations, which is the natural sequence for any mechanism.
Notice, as an example, that while an adjective can appear in two structural positions in certain otherwise identical English sentences, e.g., Paint the red barn and Paint the barn red, this is not always so: Woe to the learner who generalizes from See the red barn to See the barn red. It has been proposed, therefore, that "negative evidence" -- information about which sentences are ill-formed in some way -- might be crucial for deducing the true nature of the input language.
Another piece of blatant nonsense. Obviously, the sentence 'Paint the barn red' matches the quite common pattern 'Paint the [object] [colour]' and hence is acceptable, while 'See the barn red' does not match anything, and hence is unacceptable.
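A minimal sketch of this pattern-matching idea (the pattern inventory and word lists are invented for illustration, not a claim about the child's actual representations): a learner that has stored only patterns it has actually heard accepts 'Paint the barn red' but finds no stored pattern that fits 'See the barn red':

```python
# Toy pattern matcher: patterns abstract over the object and colour slots.
# The inventory below is invented; a child would extract such patterns from
# sentences actually heard ("paint the fence white", "see the red barn", ...).
known_patterns = {
    ("paint", "the", "[object]", "[colour]"),
    ("paint", "the", "[colour]", "[object]"),
    ("see", "the", "[colour]", "[object]"),
}
objects = {"barn", "fence"}
colours = {"red", "white"}

def abstract(sentence):
    """Replace known objects/colours with slot labels."""
    out = []
    for w in sentence.lower().split():
        if w in objects:
            out.append("[object]")
        elif w in colours:
            out.append("[colour]")
        else:
            out.append(w)
    return tuple(out)

for s in ["Paint the barn red", "See the barn red"]:
    print(s, "->", "acceptable" if abstract(s) in known_patterns else "no matching pattern")
```

No "negative evidence" is needed here: the unattested sentence is simply never matched by anything the learner has heard.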
As mentioned above, neonates can detect and store at least some linguistic elements and their patterning from minimal distributional information (Saffran et al. 1997).
As mentioned above, this includes only phonetic discrimination (phonemes and also rhythm). This is a typical example of how phonetic discrimination is confused with language in general.
Recent computational simulations suggest that certain lexical category, selectional, and syntactic properties of the language can be gleaned from such patterns in text (e.g., Cartwright and Brent 1997).
This is supposed to suggest to the reader that neonates do that too, which of course does not follow at all, and is not supported by evidence.

Acquisition, formal theories of

General: This section and all the theories it describes ignore the most important criterion for the child as he acquires language: the effectiveness of communication. Therefore, they are all irrelevant to children's acquisition of language.

Computational lexicon

This section takes it for granted that computational models tell us something about the way humans comprehend language, without even a hint of supporting evidence. Considering what is known from brain-damage studies, it is clear that in the brain there isn't anything like a lexicon with separate entries, but that does not disturb the author.

Connectionist approaches to language

It is worth noting that the networks that are discussed in this section are all minute compared to the brain, and that all the mathematical results that are discussed are totally useless when applied to a network of the size of the brain. Thus the discussion is not relevant to the way the brain works.

Distinctive Features

General: Humans clearly have some specializations for distinguishing between phonemes and generating them. This section, however, goes too far in its formal assumptions.
Most contemporary theories of PHONOLOGY posit a universal set of distinctive features to encode these shared properties in the representation of the speech sounds themselves. The hypothesis is that speech sounds are represented mentally by their values for binary distinctive features, and that a single set of about 20 such features suffices for all spoken languages.
The first sentence and the second part of the second sentence are reasonable. The first part of the second sentence isn't, because the sounds are processed in the cortex and are distributed over the secondary auditory areas (mainly the Wernicke area, as known from brain-damage data), and because of the stochastic connectivity in the cortex, these areas cannot support a binary representation.

None of the evidence that the author brings later tells us anything about the way the features are represented in the brain.
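For concreteness, the representational claim can be illustrated with standard textbook feature values (this is only an illustration of the hypothesis; it says nothing about how, or whether, the brain encodes such features):

```python
# Standard-textbook distinctive feature values for three consonants.
# Under the MITECS hypothesis a phoneme is nothing more than such a
# binary feature bundle; /p/ and /b/ differ only in voicing.
phonemes = {
    "p": {"voice": False, "nasal": False, "labial": True},
    "b": {"voice": True,  "nasal": False, "labial": True},
    "m": {"voice": True,  "nasal": True,  "labial": True},
}

diff = {f for f in phonemes["p"] if phonemes["p"][f] != phonemes["b"][f]}
print("Features distinguishing /p/ from /b/:", diff)   # {'voice'}
```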

Generative Grammar

The motivating idea of generative grammar is that we can gain insight into human language through the construction of explicit grammars.
This is an assumption, but it is presented as an obvious truth, without qualification. In this section, the author does not give any support for this assumption.
Generative grammatical study and the investigation of human mental capacities have been related via widely discussed claims including the following: ...
This list skips the most important claim associated with generative grammar, i.e. that humans have an innate language-specific module (the 'Language Acquisition Device'). It is this claim that is really controversial.
(v) the ability to have (and acquire) such rules must be a significant (probably species-defining) feature of human minds.
Humans clearly have this ability. The real question is whether it is language-specific or part of general intelligence (or anywhere between these extremes).

Grice, H. Paul

.. and because it requires the controversial assumption that language is essentially a vehicle for communicating thoughts rather than a medium of thought itself.
One of the most ridiculous statements I have ever read. Language is obviously a vehicle for communicating thoughts. The medium of thought itself is the brain (i.e. neurons).

Language production

The action system for language production has a COGNITIVE ARCHITECTURE along the lines shown in Figure 1.
Figure 1 is a blunt assault on the truth. There isn't even a shred of evidence for anything that is drawn in Figure 1, except that humans can convert some of their thoughts to language. The evidence that the author brings later only shows some differences between the grammatical part of the process and the phonological side, which is obvious because they deal with different aspects of the task. It does not support anything that is shown in Figure 1. That is true for the rest of the text, which presents hypothetical computer models as if they were established facts.

Lexical selection involves locating a lexical entry (technically, a lemma) that adequately conveys some portion of a message, ensuring that there exists a word in one's mental lexicon that will do the job.
That is how computer models do it. There isn't any evidence that there is anything like that in the human brain.
A rough analogy is looking for a word in a reverse dictionary, which is organized semantically rather than alphabetically
A nice analogy, but, since none of us has ever used a reverse dictionary, a useless one. Nobody knows how to build a reverse dictionary that can cope with the requirements of generating human language, and, most importantly, there isn't anything like evidence for it in the human brain.
The cognitive processing systems responsible for comprehension and production may nonetheless be distinct
Since language comprehension is completely dependent on input (normally auditory), while language generation is not, and conversely for muscular control, they are clearly distinguishable on these counts. The cognitive processes underlying these tasks are clearly different as well. However, the evidence does not show a distinction between the cognitive systems that perform the tasks.

Linguistic Universals and universal grammar

If a fact about an individual speaker's grammar turns out to be a fact about grammars of all the world's languages, if it is demonstrably not a fact acquired in imitation of input data, and if it appears to be specific to language, then we are warranted to suspect that the fact arose from a specific feature of UG.
Very inaccurate. There are at least two major factors that have to be considered first:
  1. Chance. Because the number of independent grammars is finite and quite small, there are many commonalities (universals) between them by chance. Researchers who look for universals are certain to find these, so when a universal is found it is important to check whether it is an attribute that could be there by chance (a rough illustration is given after this list).
  2. Language is a tool for communication, so all languages must have attributes that make them good for communication. Thus universals have to be checked for whether they have a functional explanation. The author does raise this point later, but does not take it seriously (see below).
By ignoring these factors, many universals can be made 'innate'.
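A rough calculation (all figures assumed purely for illustration) shows how point 1 works: with only a modest number of genuinely independent lineages, a sizeable inventory of candidate features yields many 'universals' by chance alone:

```python
# Assumed figures, for illustration only: 10 genuinely independent lineages,
# 300 candidate binary features, each present in any lineage with
# probability 0.8 independently of the others.
independent_lineages = 10
candidate_features = 300
p_feature_present = 0.8

p_shared_by_all = p_feature_present ** independent_lineages        # ~0.107
expected_chance_universals = candidate_features * p_shared_by_all   # ~32

print(f"P(one feature shared by all lineages by chance): {p_shared_by_all:.3f}")
print(f"Expected number of chance 'universals': {expected_chance_universals:.1f}")
```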
On the other hand, while the repertoire of thematic roles may be language-independent, the opposite is true of the apparently universal mapping of specific thematic roles onto specific designated syntactic positions -- for example, the fact that Agents are mapped universally onto a structurally more prominent position than Patients (e.g. subject position).
A typical piece of blatant nonsense. The agent is cognitively more prominent, so obviously it would be mapped to the more prominent part of the sentence.
For example, syntactic research has paid particular attention to a number of limitations on form-meaning pairs that have just this property of "dysfunctionality." One example is a set of restrictions specific to how and why questions. How did you think Mary solved the problem? can be a question about Mary's problem-solving methods, but the sentence How did you ask if Mary solved the problem? cannot. (It can only be a question about methods of asking.)
A typical piece of blatant nonsense, because this is clearly a functional difference. The structure of the first question makes it a question about thinking, and of the second question a question about asking. However, it does not make sense to ask "how do you think", so the first question cannot be about thinking, and 'defaults' to a question about Mary's solution.
The restriction concerns the domains from which WH-MOVEMENT may apply, which in turn correlates with sentence meaning. The restriction appears to be a genuine universal, already detected in a wide variety of languages whose grammars otherwise diverge in a number of ways. Crucially, the restriction makes no evident contribution to usability.
Another typical piece of blatant nonsense. The restriction tells the listener what the question is about, without making the question too difficult to understand, i.e. it is clearly functional.
By contrast, in no known language are verbs obligatorily placed in third position. In other words, UG allows languages to vary -- but only up to a point.
And what is the basis for the assumption that this restriction is a result of UG? The author (or anybody else, for that matter) did not give us any reason to believe that it is not functional, e.g. that placing the verb obligatorily in third position would not make it difficult to connect the subject to the verb.
There are several theories of how variation is built into UG......
Note that all the 'implementations' of UG completely ignore the question of compatibility with neural systems, which is the main objection to the existence of UG. Because these implementations are free from any biological restriction, they are taken from an infinite model space, and hence their ability to explain observations about language shows nothing (see Reasoning errors, [3.3] for discussion).

Meter and Poetry

That machinery is identical to that needed to account for the way ordinary speakers assign stress to the words of English (see STRESS, LINGUISTIC), and is part of the natural endowment of human beings that enables them to speak a language, what linguists have come to call Universal Grammar (UG).
And where is the evidence for this last statement? This is completely spurious.

Modularity and language

Studies of the brain lead to the same conclusion.
These studies are not replicable, and anyway don't tell us anything that wasn't already known at the end of the 19th century.
Though the overall picture of language representation in the brain is far from clear, the debates today mostly concern how areas specialized for language (or object recognition or spatial relations) are distributed in the brain and organized, not whether specialized areas exist.
The only known specialized areas are the Wernicke and Broca areas, which are mainly involved in the comprehension and generation of phonemes, and hence important for spoken language. These areas are not separated from the rest of the cortex, so they can be (and probably are) part of an integrated system.
In studies of sentence production, Bock (1989) presents evidence for purely syntactic priming not dependent on semantic content or the particular words in a sentence. .....

Both studies provide evidence for the existence of autonomous syntactic structures which participate in language processing.

Another typical piece of blatant nonsense. An integrated system that deals with all the aspects of language would also show sensitivity to syntactic structures (as well as sensitivity to other attributes). This can be regarded as an example of 'mis-analyzing the null hypothesis' (Reasoning errors, [3.21]).
The point about the McGurk effect is that it is not simply a guess on the part of confused perceivers about how they can resolve conflicting perceptual inputs. It is an actual perceptual illusion, as expected if SPEECH PERCEPTION is part of an input system specialized for speech inputs.
Another typical piece of blatant nonsense. An integrated system can show this confusion as well. Another example that can be interpreted as 'mis-analyzing the null hypothesis' (Reasoning errors).
However, in a sentence biased toward the less frequent meaning of the ambiguous word, both the contextually appropriate and the contextually inappropriate meaning of the ambiguous words are activated, as expected if word recognition is informationally encapsulated.
And another piece of blatant nonsense. The simple explanation is that the bias does affect the activation, but not enough to override the advantage of the dominant meaning.
If the grammar or grammatical subsystems act as modules, it also becomes less surprising that grammars have the eccentric properties that they do, e.g. relying on strict module-internal notions of prominence such as "c-command" (Reinhart 1983) rather than on generally available notions based on, say, precedence, loudness, the importance of the information conveyed or even just being higher in a constituent-structure representation.
A typical example of ignoring the functional role of language. Grammars have 'eccentric' properties because these properties allow more information to be communicated, and less ambiguously.

Optimality theory

General: This theory has an effectively infinite parameter space, not only because the constraints can be ranked in any order, but also because the application of the constraints themselves to natural language is not obvious.

Parameter-Setting Approaches to Acquisition, Creolization and Diachrony

How is knowledge of one's idiolect -- I(nternal)-language, in Chomsky's (1986) terminology -- represented in the mind/brain?
The author mentions the mind/brain here, but ignores the brain and its constituents in the rest of the text.
In the Principles and Parameters/Minimalist approach (Chomsky 1981, 1986, 1995; see SYNTAX, ACQUISITION OF and MINIMALISM), linguistic knowledge, in addition to a (language-specific) LEXICON (see WORD MEANING, ACQUISITION OF and COMPUTATIONAL LEXICONS), consists of a computational system that is subject to an innate set of formal constraints, partitioned into principles and parameters. The principles are argued to be universal; they formalize constraints obeyed by all languages (see LINGUISTIC UNIVERSALS). Alongside these principles -- and perhaps within some of these principles (e.g. Rizzi 1982) -- what allows for diversity in TYPOLOGY (possibly, in addition to the lexicon proper) are the parameters. These are an innate and finite set of 'switches,' each with a fixed range of settings.
A better description of the parameters would be 'fudge factors'. Because the theory does not place any restriction on what parameters the principles can define, it has an effectively infinite model space (see Reasoning errors for discussion). Hence its ability to match observations shows nothing.

Semantics, acquisition of

In the absence of systematically available negative semantic evidence (e.g., parental correction) it is difficult therefore to see how children could learn constraints.
That makes the typical error of assuming that children learn the constraints that linguists postulate. A more plausible assumption is that children learn to match the patterns of speech of their parents (and other people around them), and then the question of negative evidence disappears. This hypothesis easily explains the examples in the rest of the text.
Among the hallmarks of innate specification are universality and the early emergence of a linguistic principle despite the absence of decisive evidence from experience (see INNATENESS OF LANGUAGE and NATIVISM).
Universality and early emergence are clearly not 'hallmarks of innate specification', as they can be the result of other factors (e.g. usefulness and ease of learning).

Syntax

Syntax is the study of the part of the human linguistic system which determines how sentences are put together out of words.
This already makes the baseless assumptions that there is a 'human linguistic system', and that a specific part of it deals with syntax.