Related texts
[Last updated: 9 Jul 2012]

Various bits

  • Effectively lying by implication
  • Why scientists are so conservative
  • Why do complex organisms need recombination?
  • "If a mental trait is beneficial, it must (or probably) have evolved genetically"
  • Garbage 'popular science' books
  • 'Evolution happens when a beneficial mutation happens and spreads through the population'
  • Sexual selection and single gene evolution
  • 'The existence of sexual reproduction is a mystery'
  • What is meant by the word "evolution"
  • Significance of epistatic interactions
  • The 'BandWidth' of different senses
  • Genetic analysis of human evolution
  • The complexity of the immune system
  • Cultural Evolution and Memes
  • The requirement for evidence
  • The lucky-prejudice effect
  • False "understanding"
  • False chains of reasoning
  • Birdsong and human speech
  • Creativity and madness
  • "Cortical parcellation"
  • "Why should neural activity produce any kind of feeling at all?"
  • "design principles similar to those used in electronic networks"

    Effectively lying by implication

    In some cases, a writer is effectively lying by implication, i.e. deceiving the reader into believing something by implication rather than by an explicit statement. For example, these are the opening sentences of a review of one of my papers:

    It is a reasonable goal to perform meta-analysis of the imaging data to assess the replicability of cognitive activations across different studies. However, the present paper fails to accomplish this for two reasons.

    Readers of these sentences would believe that my paper tries to perform a meta-analysis of the imaging data, and that the reviewer believes it fails. However, this is simply false, as my paper is a survey, not a meta-analysis (and the reviewer wants to prevent its publication for some reason). Note that the reviewer did not actually make the false claim that my paper is a meta-analysis, but he still communicated it reliably, as all readers receive it. In fact, by implying it rather than explicitly saying it, the reviewer achieved a stronger effect: it also suggests that this is an obvious fact that does not even need to be stated explicitly. In this way, it prevents the readers from even entertaining the possibility that the paper is not a meta-analysis.

    A problem with lying by implication is that most people regard it as less dishonest than lying, or even as an acceptable technique. In the example above, the magnitude of deception that the reviewer achieved is as large as he would have achieved by stating explicitly that my paper is a meta-analysis. Yet many people would not regard this text as dishonest. Thus they allow a large amount of deception, because it is by implication rather than explicit. This puts people that don't use such deception at a disadvantage, and so encourages lying by implication. The result is a large reduction in the quality of discussions everywhere.

    Part of the problem is that implications, even when they are clear-cut, are generally too complex to express in formal terms. As a result, we don't have theories of implications that explain a wide enough range of implications to become widely acceptable. Hence we don't have widely accepted theories of implications, their importance is not generally acknowledged, and many people don't even notice how they work.

    An additional problem is that an implication that is false is not necessarily a lie, for two possible reasons: the writer may actually believe the implication to be true, or the writer may not realize that his text carries the implication.

    In many cases, however, neither of these reasons applies, and then the false implication is as dishonest as a straightforward lie. For example, in the quote above, the reviewer clearly does not believe that the paper is a meta-analysis (because the paper itself is clearly not a meta-analysis), and he obviously knows that what he says implies (very strongly) that the paper is a meta-analysis.

    Why scientists are so conservative

    It is a well-established result that the fastest way of convincing a scientist older than 35 to change his mind is to wait until he dies. Why?

    Part of the explanation is that the perception that scientists are more stubborn is an illusion. The illusion arises because we expect scientists to change their minds about their theories when the facts indicate so, while other people are not expected to do the same. But it is also true that scientists hold on to their theories much more strongly than they should.

    To understand this, we should first note that an important part of the reward of being a scientist is esteem, and in particular self-esteem. Scientists, in general, don't make that much money, but they have the knowledge that they have contributed to human knowledge.

    The second thing to note is that this contribution is, at least in principle, always open to challenge. Take, for example, a pop star who is completely forgotten 20 years after his peak: it is still true that 20 years ago he was a pop star. That is not true for a scientist. If, 20 years after a scientist made some contribution, it is found that he was wrong in some sense, it is not only that he is not a great scientist now: he wasn't a great scientist when he made his false contribution either.

    As a result, it is very important for scientists to make sure that their contributions stay valid, and they find it extremely difficult to accept changes that invalidate them. Since the theories that a scientist believes in his field are normally either a necessary background of, or based on, his contributions, he finds it very difficult to abandon them.

    [13May2002] The following is commonly attributed to Tolstoy (a search on the net gives many matches):

    I know that most men, including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they have delighted in explaining to colleagues, which they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives.
    I couldn't find the original reference.

    Why do complex organisms need recombination?

    Recombination is the random mixing of the genomes from the parents of each individual into each of its gametes, so each individual of the species has a different genome. This is important for two reasons:

    1. More different genomes mean faster genetic evolution, because more potential changes are being 'tested' in each generation. Most importantly, it allows an "immediate response" to sudden stress, because in many cases some individuals already have the appropriate gene combination to better cope with the new stress. (Note that this kind of genetic evolution does not necessarily lead to morphological evolution.)
    2. High diversity is useful on its own in the fight against pathogens. These have much faster evolution, and can tailor their biochemistry to the biochemistry of the more complex organisms. When the population of the complex organisms is more diverse, it is more difficult for the pathogens to find a good match for all the individuals of the species.

    This explains why diversity is useful, but not why recombination is used to achieve it, as there are much simpler ways of getting diversity, e.g. by reducing the fidelity of the DNA polymerase. However, the other 'methods' of increasing diversity all have the problem that they cause a high rate of harmful mutations. For example, of the single-base mutations that actually have any phenotypic effect, most would be quite harmful, and only a small number would have a positive or only a small negative effect. Thus increasing the rate of single-base mutations would increase the diversity in the species, but decrease the viability of the individuals.

    Recombination does not suffer from this problem. The two haploid genomes that are being mixed are 'known' to be viable, because they come from 'successful' individuals (individuals that succeeded in reproducing). Hence there is a high probability that the mixed haploid will also be viable. Of course, each haploid probably contains recessive defective alleles of some genes. However, in natural populations the chance that each defective allele will be paired (in fertilization) with another defective allele is small, so the resulting diploid is likely to be viable.

    In short, recombination is useful for high diversity (for fast genetic evolution and a fuzzy target to pathogens), without reducing the viability of the individuals of the species.
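
    The contrast between the two routes to diversity can be sketched as a toy model. Everything here is invented for illustration: the genome size, the mutation counts, and the 90% harmful figure are assumptions, not data. It compares the 'mutational load' of a recombinant child of two viable parents with that of a clone produced with an elevated mutation rate:

```python
import random

random.seed(0)
N_LOCI = 1000   # assumed genome size, for illustration only

def make_viable_parent(n_bad=10):
    # a 'successful' haploid: mostly good alleles (0), a few harmful ones (1)
    genome = [0] * N_LOCI
    for i in random.sample(range(N_LOCI), n_bad):
        genome[i] = 1
    return genome

def recombine(a, b):
    # each locus taken at random from one of two viable parents
    return [random.choice(pair) for pair in zip(a, b)]

def mutate_clone(a, n_new=10, p_harmful=0.9):
    # the alternative route to diversity: copy one parent with extra
    # mutations; most mutations with a phenotypic effect are harmful
    g = list(a)
    for i in random.sample(range(N_LOCI), n_new):
        g[i] = 1 if random.random() < p_harmful else 0
    return g

TRIALS = 2000
rec_load = mut_load = 0
for _ in range(TRIALS):
    p1, p2 = make_viable_parent(), make_viable_parent()
    rec_load += sum(recombine(p1, p2))
    mut_load += sum(mutate_clone(p1))

# the recombinant carries about the same load as its parents (~10);
# the mutated clone accumulates noticeably more (~19)
print(rec_load / TRIALS, mut_load / TRIALS)
```

    Both procedures produce children that differ from any given parent, i.e. diversity; but only the mutation route adds new harmful alleles, while recombination merely reshuffles alleles that have already passed selection. This is a cartoon of the argument, not a population-genetics model.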

    I wouldn't have bothered to write this, because I would expect it to be obvious to anybody that understands the basics of genetics and evolution. However, after reading quite a few texts about evolution and genetics, I haven't found anybody that puts it this way, so I decided to write this. If you know about anybody that explains recombination this way, let me know. [Nov2000] Actually, the problem is deeper. Evolutionists in general seem to fail to understand the importance of recombination. See the discussions about single gene mutation and sexual reproduction below.

    "If a mental trait is beneficial, it must (or probably) have evolved genetically"

    Evolutionary psychology is based on this error, but other people also do it. The reason that this is an error is that for evolution of a trait to happen, it is not enough that the trait is beneficial. It also has to arise by random mutations. Even for a very strongly beneficial trait, the probability of it arising is necessarily smaller than the probability of the occurrence of the mutations that cause it. If the number of genetic steps (genetic combinations that give different functionality) leading to the trait is very small, then we can assume that in a large population this probability is quite high.

    However, for mental traits, which are quite complex, the number of genetic steps would be quite large, and all of them have to arise by chance. In general, each of these steps (i.e. each genetic combination) has to spread in the population on its own (together with all the steps that already occurred in the individual(s) in which it happened) before there is a chance for another step. A genetic combination can spread in the population by luck (e.g. a bottleneck), or because it is beneficial in itself. Thus each genetic step has to be either lucky or beneficial. Hence, to know the probability that a strongly beneficial trait will arise genetically, we need to calculate the probability that some sequence of all the steps that are needed for it will happen, with all of the steps being lucky or beneficial.

    How do we calculate this probability? There is a simple answer to this question: we haven't got a clue. Therefore, when considering a trait that requires more than a very few genetic steps, we haven't got a clue how to compute the probability that it will arise. Note that this is true even for extremely beneficial traits. Therefore, the only way to know if a trait has arisen genetically is to observe that it did. For physiological traits, that is conceptually simple: If we see a trait, it must be genetic. However for mental traits that is not true, because these can arise by some form of learning. Thus for mental traits, we have to investigate the actual underlying mechanisms (in other words, study the brain) before we can tell if it is coded genetically or learned.
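
    The combinatorial nature of the problem can be shown with a deliberately naive calculation. Suppose (purely hypothetically; as the text says, nobody knows how to compute such a number) that each genetic step had some fixed chance p of both arising and spreading within a long time window. A trait needing k such steps in sequence would then have a chance of roughly p to the power k, which collapses rapidly with k:

```python
def toy_trait_probability(p_per_step, n_steps):
    # each step must arise AND be lucky-or-beneficial enough to spread;
    # under the (hypothetical) independence assumption the chances multiply
    return p_per_step ** n_steps

for k in (1, 3, 10):
    print(k, toy_trait_probability(0.1, k))
```

    The point of the text stands independently of the made-up numbers: since p itself is unknown, p to the power k for a complex trait is not merely small, it is unknowable.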

    There is obviously an exception, which is the learning mechanism(s) itself/themselves, because there must be at least one learning mechanism that is coded by genes. However, that is all we can say without actually understanding how the brain works.

    Another point that we do know is that the evolution of complex traits takes a significant amount of time (hundreds of generations at least). If the trait can be acquired by learning by most of the population, it would probably be acquired 'culturally' in a much shorter period (1-10 generations). Once it is acquired by learning, the genetic steps that lead to it become even less beneficial, and hence even less likely. Since these steps constrain the behaviour of the individual in some way, they reduce its learning flexibility, and so carry some disadvantage. Thus a mental trait that can be learned is unlikely to ever arise genetically.

    Garbage 'popular science' books

    A widespread phenomenon is that of scientists publishing books that present new speculations as popular science books, aimed at the general public. The problem is that in many cases the author uses lousy arguments, which can work only because the readers don't have enough knowledge to realize how lousy they are. The result is much worse than just accepting as plausible an idea that is not. People also get from these books totally wrong ideas about what science is, what is known to scientists and how scientists work.

    The solution is for scientists to scrutinize popular science books much more carefully, and to criticize strongly books that contain many wrong ideas about what is known (here is an example). Currently, scientists avoid such sharp criticism (an example), unless they need it to advance their own ideas. The result is that a person without the right academic qualifications finds it very difficult (or impossible) to figure out whether what is presented as known fact or as unknown (many times implicitly) is actually so.

    [22Jun99] An amusing example can be found in the latest Nature. On p. 652 we see a rare example of a scientist (Jeffrey Gray) criticizing a science writer (Rita Carter) for mixing facts, hypotheses and fiction together without giving the reader any hint of how to distinguish between them. He makes it clear that her book is much more confusing and misleading than illuminating. On the same page, we are also informed that this book was short-listed for a prize for scientific books. Apparently, being misleading and confusing is not an obstacle to getting prizes.

    [ 20Jan2001] Rita Carter contacted me by e-mail and complained that these disparaging remarks are not based on examples from the book itself. Therefore I have added some comments on Mapping The Mind.

    'Consilience' by E.O. Wilson is a less extreme example. While many reviewers criticize his 'global' ideas of 'consilience', most of them fail to tell the reader that even the biological 'facts' that he presents are not actually facts, and some are plain lies (e.g. that neonates 40 minutes old imitate their parents). Together with the fact that E.O. Wilson is indisputably an important biologist, this leaves lay readers with the impression that at least in the case of biological facts they can rely on E.O. Wilson, and hence accept as facts his prejudices and speculations.

    [9June2000] Calvin's books are more extreme examples. For example, in chapter 7 of his latest book (with Bickerton), he writes:

    The axon acts like an express train, skipping many intermediate stops, giving off synapses only when about 0.5, 1.0, and 1.5 mm away from the tall dendrite (and sometimes continuing for a few millimetres farther, maintaining the integer multiples of the basic metric, 0.5 mm).

    And he then bases his theory on this 'observation'. This 'observation', however, is simply a plain lie, and has no relation to any data that we have on the brain. Hence all of Calvin's theorizing is simply garbage. However, I haven't seen anybody pointing out this problem.

    Just for fun, here is a discussion of an atypical example where a scientist trashes a book by another scientist.

    [28 Apr 2011] Another example appears in Science of 11 Mar 2011, where a scientist trashes Penrose's book. This is nice, but it appears in a journal that only scientists read, so it doesn't help the general public.

    Single-gene evolution: 'Evolution happens when a beneficial mutation happens and spreads through the population'

    This statement is true for a virus, and to a large extent for bacteria and asexually reproducing eukaryotes. It is simply false for evolution of sexually reproducing organisms.

    The reason that it is false is that evolution progresses through relatively successful individuals. Successful individuals are not a result of new mutations, but of a new combination of existing mutations in the gene pool. The beneficial effect of any single mutation is almost always dwarfed by the effect of the combination of the alleles in the rest of the genes. Thus what is selected is a combination of alleles.

    It should be noted that the effect of a new combination is not simply an addition or multiplication of the effects of each of the mutations. With few exceptions, genes in multicellular organisms interact with a large number of other genes, some directly but in most cases indirectly. For example, even if the genes that control the detailed structure of the feet do not interact directly with the genes that control the connections of neurons in the spine, they may still interact in the sense that a change in the structure of the foot may require some changes in the wiring of the spine. A combination of a mutation in the foot and one in the spine may be beneficial even if on its own each of them does not have a significant effect on fitness. This kind of interaction binds together most of the genome in a large network of interactions, where the direction and size of the effect of any mutation is affected by a large number of other potential mutations.
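
    The foot-and-spine example can be written down as a toy fitness table (all the fitness values here are invented purely for illustration):

```python
# hypothetical fitness values: each mutation alone is slightly harmful,
# because the rest of the body is tuned to the old structure, but the
# combination of both is beneficial
fitness = {
    ('normal foot', 'normal spine'):  1.00,
    ('thick foot',  'normal spine'):  0.99,  # foot changed, wiring not adjusted
    ('normal foot', 'rewired spine'): 0.99,  # wiring changed for no reason
    ('thick foot',  'rewired spine'): 1.05,  # only the combination helps
}

baseline = fitness[('normal foot', 'normal spine')]
combo    = fitness[('thick foot', 'rewired spine')]
print(combo > baseline)
```

    No additive or multiplicative rule over the two single-mutation fitnesses predicts the fitness of the combination; that is exactly what an epistatic interaction means here.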

    Supporters of 'single-gene evolution' would normally counter that the combination of alleles is broken up each generation. However, the probability of the same combination, or of a similar one which still has at least some of the positive effects, increases in the descendants of a successful individual, and that is enough for the combination to catch on.

    'Single-gene evolutionists' tend to make the point more difficult to see by ignoring several factors:

    The strength of the selection.
    In population genetics, it is typical to assume that the advantage of a mutation is in the region of 1% (or even less), i.e. the ratio between the number of descendants per individual in the less successful sub-population and the number of descendants per individual in the more successful sub-population is 0.99. This is reasonable for a single-gene mutation (maybe even exaggerated for beneficial mutations), but not for gene combinations in individuals. For these the ratio is much lower: successful individuals will typically breed more than twice as fast as unsuccessful individuals, and in many species much more than twice as fast. The result is a much faster spread of successful combinations of alleles, at least in the local population, than most population geneticists normally consider.
    The effects of selection between the descendants of the successful individual.
    While each individual inherits half of the genes of the successful individual, some will inherit more of the alleles that are important for the combination to be successful, and some less. Those that inherit more of the important alleles are (on average) much more successful, and therefore the alleles that are important in the combination are going to increase their frequency much more than the ones that are less important. 'Single-gene evolutionists' normally do not think at all about the important alleles of a combination, and seem to assume that allele combinations must be completely broken if only part of the genome is inherited.
    The effect of in-breeding.
    This is typically ignored, but in many species it has a large effect. In particular, in species where the advantage of a successful individual is limited because it can mate with a restricted number of individuals (e.g. because the species is divided into small groups) the effect of in-breeding is very large. The importance of in-breeding is hidden by the assumption that the selection is weak (the first factor above), which, if it was true, would make it reasonable to ignore in-breeding in the first few generations.
    Replacements by other mutations
    Even when some important allele in the combination is missing, it can be compensated for by other genes. For example, assume that part of the combination is an allele of some gene that causes some bone to be thicker, and some individuals do not inherit it. The thickness of the bone will still be variable among these individuals, and those with a thicker bone will be more successful, and will increase the frequency of their genes. This increases the probability of a recurrence of the phenotype that corresponds to the original combination, though with a somewhat different genotype.

    The result of these factors is that while the whole genome of any successful individual is broken into two each generation, the alleles important for the success of the combination are kept together, maybe with replacement of some of them, until in-breeding starts to counter the effect of recombination.

    It is worth noting that the successful combination does not need ever to reach fixation (i.e. a state where all the population has the same combination as the original individual), and in nature it never does. Every individual has a different version of the combination, and some of these combinations will be an improvement on the original one. These combinations start to spread in the population, but before any of them reaches fixation they are again replaced by new, better combinations. This is very different from the spread of a single-gene mutation, because a successful gene mutation is much rarer than a successful new combination. As a result, evolution by changing combinations is much faster than evolution by single-gene mutations, which is why recombination is so essential for long-generation organisms, and useful even for short-generation organisms.

    I would expect 'single-gene evolutionists' to still argue that the overall result is that some mutations are fixed and others are not, but that is just a word-game. The important point is that, in complex organisms, analyzing the effect of a single mutation is never going to tell us anything about either past or future evolution, and is therefore a useless exercise. To actually understand anything, we need to analyze the combinations of genes.

    An argument that I have found in various places is that "we can regard all the other genes as the environment" of the single gene of interest. That is simply false, because the other genes "evolve" much faster than the gene of interest, as recombination continuously brings in new mutations. The changing interactions of the different genes affect the evolution of the gene of interest. There is simply no way around analyzing all the genes and their interactions.

    The main (and common) exception to the rule that single-gene mutation is dwarfed by the combination is when the mutation is deleterious, and that is where population genetics is actually useful. This is useful for medical reasons, but evolution doesn't happen through deleterious mutations, so analyzing them is not going to tell us anything about evolution.

    The other exception is in cases where some operation is done by a single gene. This is usually a reaction to a single, very deleterious molecule, i.e. resistance to a toxin. The typical examples are resistance of bacteria to antibiotics or of insects to insecticides. These are important phenomena, but their histories are very atypical of evolutionary processes in general, because they respond to an extremely strong selection. They do not contribute to morphological evolution, and have at most a minor contribution to the evolution of new biochemical pathways. Thus they are not actually useful as a way of learning about evolution.

    The reason that 'single-gene evolution' is still alive is that analyzing allele combinations is beyond our current abilities, and is going to stay so for a while [8 Aug 2016: by now that is a little too pessimistic. Comparing trees of different genes gives us a start in this direction.]. That doesn't make the single-gene analysis less of a waste of time, but many geneticists delude themselves that the theoretical analysis of single genes is actually useful. This way they have a feeling that they are actually achieving something, which they wouldn't get if they tried to analyze combinations.

    Currently, the only way to look at gene combinations is to look at phenotypes and try to analyze them. One of the most impressive things when reading textbooks about evolution is how completely the experimental analysis at the level of the phenotype is divorced from the theoretical analysis at the level of genes. This demonstrates how useless the theoretical analysis is.

    People with less expertise are led astray by the geneticists. For example, in 'Evolution' (Ridley, 1993, Blackwell Scientific Publications, Inc.), a "book that is intended as an introductory text" (preface, VIII), Mark Ridley writes (p.316) : "We can work through the argument in terms of the example of an improvement in lion hunting skill. (We shall express it in terms of selection on a mutation: the same arguments apply when gene frequencies are being adjusted at a polymorphic locus). When the improvement first appeared, it was a single genetic mutation." This is obviously nonsense, as discussed above.

    It is not clear if Ridley himself believes what he writes. The book contains extensive discussions of single-gene mutations, but where he discusses real data about evolution he does not force it into a single-gene mutation interpretation. Anyway, he doesn't have a problem writing this nonsense, and uses it as part of his argument for the idea of 'single-gene selection'. Non-expert readers (who are the target readers) may or may not be impressed by the argument, which is of no significance anyway, but they will certainly get a strong impression that single-gene mutations drive evolution. It would require a very alert reader to realize that this description does not actually match the real data in the rest of the book.

    The impression that single-gene mutations are significant is enhanced by the publication and hyping (and often over-hyping) of experiments with engineered animals. For example, a mutation that makes mice 'more intelligent' (actually, the mice perform better in some test, and saying they are more intelligent is over-hyping). However, these experiments are not relevant to studying evolution.

    The main problem is that the tests in these experiments are of very narrow scope. For real evolution, the 'test' that each individual has to pass is very complex, and involves 'performance' in many respects. For example, an animal has to find, eat and digest its food, and each of these tasks is a complex operation. The animal has to be good at all of them to succeed, and also at avoiding predators, fighting pathogens and mating. Thus the animal that is better on the narrow-scope test is not actually significantly fitter (in the evolutionary sense) than the rest of the animals. At best, it is very slightly better, and in most cases it is not clear that it is better at all.

    The other problem is that laboratory experiments are done on thoroughly in-bred lines of animals, which minimizes polymorphism and hence the effects of recombination. This makes the effects of single-gene mutations stand out, and that is indeed the intention of using in-bred lines, but it does mean that the results do not project to real-world populations.

    Note that it is not that the laboratory experiments are not useful, only that they are not useful for understanding evolution. In general, the researchers that publish (and hype) these studies do not promote them as evidence for the importance of single-gene mutations in evolution, but that is the impression that a lay person who reads about them gets. The next section gives an example of such an article.

    [9 Jul 2012] Little or no progress. In the May issue of PLoS Biology there is an article that tries to do experiments about the selection of sex (Becks L, Agrawal AF (2012) The Evolution of Sex Is Favoured During Adaptation to New Environments. PLoS Biol 10(5): e1001317). They use a short-generation animal (1.5 days for a-sexual reproduction, and 2 generations in 6 days for sexual reproduction). That makes it easier to do the experiments, but makes it irrelevant for longer-generation organisms.

    More interestingly, in the theoretical discussion in the introduction they say:

    "Over a century ago, Weismann [19],[20] argued that sex might be beneficial because it helps generate the variation necessary for adaptation. While intuitively appealing, the idea is not necessarily correct as sex will increase the variance in fitness only if there is a preponderance of “negative genetic associations” such that good alleles are often found in genomes with bad alleles."

    That misses the point. It is the combinations of alleles that are good or bad, not the alleles themselves. For example, in the "thicker bone" example above, an allele that makes the bone thicker is (slightly) better in the new combination, but was (slightly) worse in the original combination. The rest of their discussion is based on the need for “negative genetic associations”. There is an occasional mention of epistasis, but they clearly don't think about the quality of a combination as different from the sum of the qualities of the alleles it is made of.

    The commentary associated with this article (Roze D (2012) Disentangling the Benefits of Sex. PLoS Biol 10(5): e1001321) also claims that there is a need for “negative genetic associations”. Thus it seems that the idea that a combination of alleles is not the sum of its alleles is not common in the field. They are still thinking about each gene evolving on its own.

    'Sexual Recombination and the power of natural selection'

    This is the title of an article in Science (Rice and Chippindale, Science, Vol 294, p. 555, 19 October 2001). This article is a good example of how the single-gene evolution idea is promoted by stealth. The paper itself is not about single-gene evolution, but all the discussion is based on single-gene evolution, and hence implies it, i.e. a reader of this paper will get the impression that single-gene evolution is an uncontested truth. In addition, this paper stands out by artificially creating a single-gene evolution, and treating it as a proper analogue of a real mutation.

    The authors of the paper are clearly single-gene evolutionists, which is made obvious by the fact that they don't mention any interactions between genes, even though the article contains quite an extensive theoretical discussion. In fact, from reading the article it is not possible to tell whether the authors are aware of the possibility of interactions between genes, and maybe they are not.

    The basic experimental setting of this article has the usual problems of laboratory experiments on evolution, i.e. small populations and short duration. The population they use is more heterogeneous than usual, but they compensate for this by using as their genetic background a population with a deleterious mutation, and using the wild type, which has already been selected by natural selection, as the "new mutation".

    However, that is actually irrelevant, because of the way they implement selection. They use an environment that eliminates the deleterious effect of the mutation, and instead create an artificial selection. Each generation they take only a fraction of the new adult males to use in the next generation. They give the "new mutation" (i.e. the wild type) an advantage by adding 10% more individuals of this phenotype.

    This artificial selection has two serious problems:

    The important point to see is that this article gives the reader a very strong impression of single-gene evolution, independently of whether it convinces the reader of its main conclusions. The fact that it doesn't make this point explicitly makes it worse, because it implies to the reader that the point is not contentious.

    The perspective in the same issue ('Come fly, and leave the baggage behind', Richard Lenski, Science, Vol 294, p. 533, 19 October 2001) is no better, and also completely ignores interactions between genes.

    'The existence of sexual reproduction is a mystery'

    This is normally based on the argument that an a-sexually reproducing individual transmits all of its genome to the next generation, as opposed to half of the genome in sexually reproducing individuals, and that this is a huge advantage. This ignores the large advantages of recombination for slowly reproducing species. An a-sexually reproducing individual is indeed better off than the average sexually-reproducing individual in a static and disease-free world. In the real world, however, the better resistance and better adaptation to environmental change of some of the sexually-reproducing individuals more than balance this advantage in slowly-reproducing species.

    In more concrete terms, an a-sexually reproducing lineage can succeed for a short time, and then it will (in almost all cases) be wiped out by some epidemic or a relatively fast change in the environment (which may be quite temporary, e.g. extreme weather). Sexual lines, which have much more genetic diversity, have a much higher chance that at least some of their individuals will have the right genotype to cope, and hence are much less likely to go extinct. Obviously, this is not an exceptionless law: many sexual lines go extinct fast, some a-sexual lines survive very long periods, and there are many cases of mixtures. But it is a strong tendency in complex organisms, which becomes stronger with longer generation times.

    At the point that the a-sexual lineage goes extinct, the fitness of the whole lineage, including its founder, is 0, because none of their DNA is propagated anymore. Individuals that tend to produce offspring that are a-sexual are disadvantaged compared to individuals that produce only sexual offspring, because the a-sexual ones do not contribute anything to their long-term fitness. Therefore they will tend to disappear, leaving only individuals whose reproduction system makes it very difficult or impossible to produce an a-sexual offspring.

    Note that the argument above does not require that sexual lines do not go extinct. The important point is that the survivors of disastrous circumstances and the exploiters of new opportunities are almost only sexual individuals.
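    The argument above can be illustrated with a toy Monte Carlo sketch. All parameter values here (epidemic frequency, per-genotype susceptibility, the number of genotypes in the sexual population) are invented for illustration, not estimates of real quantities:

```python
import random

def simulate(generations=200, epidemic_prob=0.05, n_genotypes=50, seed=0):
    """Toy model: an a-sexual clone carries a single genotype, while a
    sexual population maintains `n_genotypes` distinct genotypes.  In an
    epidemic, each genotype is independently susceptible with probability
    0.5; a line dies only if *all* of its genotypes are susceptible.
    All parameter values are invented assumptions, for illustration only."""
    rng = random.Random(seed)
    asexual_alive, sexual_alive = True, True
    for _ in range(generations):
        if rng.random() < epidemic_prob:  # an epidemic strikes
            if asexual_alive and rng.random() < 0.5:
                asexual_alive = False     # the clone's single genotype failed
            if sexual_alive and all(rng.random() < 0.5
                                    for _ in range(n_genotypes)):
                sexual_alive = False      # all genotypes susceptible at once
    return asexual_alive, sexual_alive

# Run many replicates and compare survival frequencies of the two lines.
runs = [simulate(seed=s) for s in range(1000)]
asexual_rate = sum(a for a, _ in runs) / len(runs)
sexual_rate = sum(s for _, s in runs) / len(runs)
print(f"surviving after 200 generations: "
      f"a-sexual {asexual_rate:.3f}, sexual {sexual_rate:.3f}")
```

    With these assumed numbers, nearly every a-sexual line is eventually hit by an epidemic it cannot survive, while the diverse sexual lines essentially always persist, which is the point of the argument rather than a quantitative prediction.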

    It is not obvious why evolutionists find this point so difficult. For example, in 'Evolution' (Ridley, 1993, Blackwell Scientific Publications, Inc.), the question of sex is discussed over more than 13 pages (269-282), but the point of population crashes that kill a-sexual lines is not mentioned at all. The closest it gets is mentioning that a-sexual species seem to have higher extinction rates. Natural catastrophes (e.g. extreme weather) are not mentioned at all, even though the book contains examples of how such catastrophes cause population crashes.

    Around half of the section (275-281) is dedicated to a discussion of "coevolution of parasites and hosts", but catastrophic epidemics and their effects are not mentioned at all. Instead, almost all the discussion is about a gene-for-gene situation, which is neat mathematically but is relevant only in very rare cases. In most cases, fairly quickly either the host or the parasite will take a new direction. As far as I know, there are no known gene-for-gene cases in animals, and even in plants it is relatively rare, and is prominent in the literature mainly because the cases that are found are easy to analyze and model.

    Note that Ridley does not argue that population crashes are not important; he simply ignores them completely. The most likely explanation seems to stem from a reluctance to admit that the neat mathematical models don't actually encompass all the relevant factors in natural history, and that relatively rare, large-effect, random-like events play an important role too. When he does discuss population crashes, e.g. pp. 211-215, it is when he (believes that he) has a good mathematical model to analyze the situation.

    Another introductory book about evolution (Evolution: an introduction, Stearns and Hoekstra 2000, OUP) also has a complete chapter about "The evolution of sex" (pp. 135-151), in which the question of extinctions and population crashes is not mentioned at all. The closest the authors get to this point is when they write on p. 140: "Purely a-sexual species often originated relatively recently; they appear to be short-lived offshoots of sexual ancestors (Bell 1982)." The trivial implication that sexual reproduction is kept because a-sexual species go extinct seems not to occur to the authors.

    Another book called 'Evolution' (Evolution: A biological and palaeontological approach (1993) (2000 reprint), Peter Skelton (editor), Addison-Wesley), which is an "interdisciplinary introduction to evolution" (Preface, unnumbered page), actually describes on p. 196 a scenario similar to the one I gave above. It even cites the distribution of a-sexual species as supporting evidence. However, it is written in a very unconvinced tone.

    Part of the reason for this tone is that in this description the author does not mention pathogens and diseases at all. These are mentioned later (p. 201), but the text does not connect the extinction of a-sexual lines to sensitivity to pathogens. In addition, the way the text is written, it seems to assume that environmental changes are slow, and hence that the elimination of the a-sexual lines will be slow. The worst mistake, though, is that the author seems unable to realize that when the a-sexual line goes extinct, the fitness of its founder is 0 (because its DNA is not propagated anymore). It seems that the author doesn't realize that in evolution, long-term fitness is what matters. Thus at the end of the paragraph describing the process, it says (p. 196): "Note that, in this argument, the cost of a-sexual reproduction is borne not by the individual organism but by the population; it persists for fewer generations than a sexual population." This is clearly nonsense, as the "cost of a-sexual reproduction", i.e. the extinction of an a-sexual line after several generations, is borne by the a-sexual line's members, including its founder, and by the founder's parents.

    John Maynard-Smith (Evolutionary Genetics, Second edition, 1998, Oxford University Press, ISBN 0198502311) has a similar discussion on pages 225-241. Again he reaches a similar conclusion to mine, but he insists on calling it 'group selection'. It gives the impression that for Maynard-Smith, any effect that is not readily analyzable mathematically is 'dumped' in the 'group selection' category. Maynard-Smith's failure to see that when a clone dies the fitness of its founder is 0, and the fitness of its sexual parents also diminishes, causes him to wonder why we don't see more sexual and a-sexual reproduction in the same individuals.

    Two additional points worth noting:

  • If it is unchecked, the a-sexual clone will overwhelm its parent sexual species in a small number of generations. With the maximum advantage of a-sexuality (2:1), in 100 generations the clone will out-reproduce the sexual one by ~10**30. Even assuming a much lower advantage, e.g. only 1%, it takes the a-sexual clone only ~7000 generations to out-reproduce by 10**30. Thus the population crashes that kill the clone (or at least some of its branches) must happen long before these numbers of generations (depending on the advantage of the a-sexual clone) pass.

  • The process is not necessarily associated with phenotypic evolution of the sexual species, and even less with morphological evolution. When an epidemic or extreme weather occurs, some individuals survive because they happen to have the right genotype. These individuals are not necessarily different from the population in a consistent way. Even if they are, if the population is under stabilising selection in the long term, the surviving individuals are likely to evolve back to the phenotype of the original population (albeit with a somewhat different genotype). Thus the process of population crash can happen many times, with sexual lines surviving due to their diversity, without much evolution in the phenotype, and possibly no morphological evolution at all.
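    The arithmetic in the first point is easy to verify directly (a minimal sketch; the 10**30 factor is the one used in the text, and density limits are ignored, as in the text):

```python
import math

def generations_to_outreproduce(advantage, factor=1e30):
    """Generations needed for an a-sexual clone with a given
    per-generation reproductive advantage to out-reproduce its
    sexual parent population by `factor`, ignoring density limits."""
    return math.log(factor) / math.log(1 + advantage)

print(round(generations_to_outreproduce(1.0)))   # 2:1 advantage: ~100 generations
print(round(generations_to_outreproduce(0.01)))  # 1% advantage: ~7000 generations
```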

    [10 Jul 2012] The actual theory in the field is a little more advanced than what these textbooks say. See the discussion about single-gene evolution above.

    What is meant by the word "evolution"

    A confusing factor in any discussion of evolution is that the word "evolution" itself is used in a confusing way. The core meaning in the field of natural history, and the one that most people use, is the appearance of new morphological, biochemical and behavioral traits, or improvements of such traits. It is not clear how many people include in the definition of evolution modifications that are not improvements.

    However, many researchers, and some people outside the field, also include other changes in evolution. These include changes in DNA content even when they are not associated with changes in traits, extinctions, and disappearances of traits as a result of random events. The problem with extending the meaning of evolution in this way is that these processes have completely different behaviour, and therefore many statements that are correct about them are not correct about the core processes, and vice versa.

    For example, a statement like "Evolution sometimes happens through large random events" is false under the core meaning, but true about extinctions and about DNA changes (e.g. duplication of a chromosome may be regarded as such an event). For somebody who uses the core meaning, and who is not aware of the extended usage, this statement is very confusing. The same is true for "Evolution happens mainly through random drift", which may be true for DNA changes, but not under the core meaning.

    The obvious remedy is to use "evolution" for the core meaning, and "natural history" for the extended one. However, the term "evolution" has an aura that "natural history" does not, so people prefer to discuss "evolution" even when the subject is better described as "natural history".

    In the field of evolution, the confusion around the meaning of "evolution" does not seem to be associated with serious misunderstanding (as opposed to in cognitive psychology, see here). But it confuses outsiders, mainly people who are not sure of the correctness of Darwinism. When they hear something like "Evolution happens mainly through random drift" and interpret it with the core meaning, they are quite justified in questioning whether this is real science.

    The same kind of confusion is associated with the word "fit". The standard meaning of "fit" (for some task or situation) is something like "has the appropriate characteristics". In evolution, that would be something like "has the appropriate characteristics for survival and reproduction". With this definition, "survival of the fittest" is a reasonable description of Darwinism, provided it is understood that "survival" here means long-term success. However, organisms are complex entities, so except in defective cases, which are not interesting anyway, it is very difficult to evaluate the fitness of an organism. Rather than admitting this, researchers in the field started to use the term "fitness" as a synonym for long-term success. That has two problems: first, it somewhat hides the difficulty of measuring fitness; secondly, for outsiders, especially doubters of evolution, it is another source of confusion, because it makes "survival of the fittest" a tautology.

    Importance of epistatic interactions

    Proponents of single-gene mutations (above) can hold their position only by ignoring interactions between genes. The normal approach is to simply ignore such interactions completely, but sometimes explicit arguments are used.

    For example, in 'Evolution' (Ridley, 1993, Blackwell Scientific Publications, Inc.), on page 197, the author shows a plot of levels of linkage disequilibrium between pairs of genes in Drosophila, where most of the pairs have a low level of disequilibrium (linkage disequilibrium should be the result of interaction between the genes). Then he writes (p. 198): "One conclusion from this result would be that, although epistatic interactions are important in particular cases, like Papilio, they may not be of general importance in evolution."

    That is simply nonsense, as the plot shows analysis only of interactions between pairs of genes, and we know that in most cases the number of genes that interact in any mechanism or system is much larger than two. The effects of the different alleles of the other genes on the epistatic interaction of each pair of genes are random with respect to each other, and hence cause the apparent interaction of the pair to be low. That means that looking at pairs of genes is useless, not that epistatic interactions are unimportant.

    On page 198, the author mentions that not everybody is convinced by this kind of evidence, and mentions possible objections, but ignores the question of interactions between more than two genes completely.

    John Maynard-Smith (Evolutionary Genetics, Second edition, 1998, Oxford University Press, ISBN 0198502311) uses the same data to claim that in general there are no linkage disequilibria in natural populations (p. 87), but does not explicitly claim that it shows no epistatic interactions. Maynard-Smith explicitly dismisses the significance of epistatic interactions when discussing the advantages of sex (i.e. recombination) (p. 234), by claiming that recombination breaks favourable gene combinations and slows evolution. The obvious fallacy is that recombination also forms the combinations, most of which would not arise at all without recombination. Maynard-Smith also mentions only pairs of genes, rather than combinations of many genes.

    In the book above, Maynard-Smith does not bother with a real argument, or even a reference, to support his position. There is a somewhat extended argument in his book The Evolution of Sex (1978, Cambridge University Press, ISBN 0521293022) (bottom of p. 14 and top of p. 15). He mentions the case of epistatic interaction between two beneficial mutations, in which case recombination reduces the speed of evolution. This case is not interesting, because in populations of complex individuals that have been through selection, mutations that are beneficial on their own are very rare, much rarer than combinations that are beneficial.

    He then claims that there are some evolutionary changes that can happen only in asexual populations. He gives the example of a positive interaction between two deleterious mutations. This (and the previous case) is based on a paper by Eshel & Feldman (1970, Theoretical Population Biology, Vol 1, pp. 88-100). However, the results of Eshel & Feldman are not actually useful, as they dealt only with the case of deleterious mutations whose prevalence in the population is very small (they denote it by h), so that it can be ignored in the calculations. This is a reasonable assumption for significantly deleterious dominant mutations, but not for mutations with very small effects or recessive effects.

    Nevertheless, this seems enough for Maynard-Smith to discount epistatic interactions as a factor in the evolution of sex, and he simply ignores them in the rest of the book.

    [27 Nov 2004] Here (Sanjuán et al, PNAS, October 26, 2004, vol. 101, no. 43, 15376-15379) and here (Bonhoeffer et al, Science, Vol 306, Issue 5701, 1547-1550, 26 November 2004) are articles that investigate epistatic interactions and recombination in viruses. They do it in viruses because they are simple, but for the same reason their results are useless for understanding evolution in complex organisms. This comment (Michalakis and Roze, Science, Vol 306, Issue 5701, 1492-1493, 26 November 2004) makes this point in the last paragraph.

    The 'BandWidth' of different senses

    Humans have 5 senses, and it is quite common to assume that features that were found in one sensory system (e.g. taste) are applicable to other sensory systems (e.g. vision). This assumption is wrong, because there are fundamental differences between the senses.

    The most important difference is the 'BandWidth', i.e. the rate of information that the sensory system can take in. First, it is important to note that information rate is not the same as emotional significance: if somebody drops a brick on your foot, it makes you very unhappy but does not give you much information. On the other hand, just looking around gives you a lot of information, normally with no emotional effects.

    Vision obviously has the highest information rate, and hence it is our main source of information about the world around us. Hearing is some distance behind: to give the same amount of information as a single look, we would probably need hours of verbal description. It is probably possible to improve the information rate by creating a more efficient code for describing spatial relations, but it doesn't seem likely that we can ever approach the effectiveness of vision. It is possible that the saying "a single picture is worth a thousand words" is a reasonable approximation of the ratio of information rates between vision and hearing.

    However, hearing is still much closer to vision than to taste and smell: the information rates in these sensory systems are so low that we cannot describe the world around us using these senses at all. In principle you can imagine a system in which different tastes signify different letters in some alphabet, but by the time you finish communicating even the simplest message, the listener's taste buds will be saturated, and the same is true for smells. The extremely low information rate of taste and smell is quite often missed, because people confuse information rate with emotional effect. A nice smell can make you feel good, and a bad smell is even more effective in changing your mood, but neither can tell you how far away and in which directions the objects around you are, and certainly nothing about their shape.

    The huge gap in information rates between vision and hearing on one side and taste and smell on the other means that it is not reasonable to try to project features from one of these pairs of senses to the other pair. Features that are true at the high (cortical) level of the hearing system have a reasonable chance of being true in vision (and vice versa), but not in taste or smell. Similarly, finding something about taste or smell gives some indication about the other of these two senses, but not about hearing or vision.

    The high emotional significance of tastes and smells has more features that make it likely to be genetically programmed: it seems to be true at birth to some extent, and is more similar across individuals than visual and auditory effects. It also has an obvious evolutionary advantage, by making it easier to learn what to eat and what not to eat.

    I have ignored the tactile sense. This is more complex, because there are areas with a possibility of high information rates (the tips of the fingers), which may approach the rates of hearing, and other areas with much lower information rates. The emotional significance also varies widely between parts of the body. It is therefore more difficult to predict anything about this system, and it may make more sense to regard it as a combination of several systems.

    Genetic analysis of human evolution

    [7 Jul 2002]

    There are various genetic studies that try to infer various things about the evolution of humans in the last few hundreds of thousands of years. Many of these studies are much less convincing than they seem to be, because they don't mention several basic facts about genetics and evolution. The authors of the studies may be aware of these facts, but the readers are not necessarily. Here I point out some of these facts.

    First, let's look at a single individual today, and ask how many of the individuals in previous generations are this individual's ancestors. In the previous generation it was two; in the generation before, 4, etc. If we assume that a generation time is 25 years, that gives for 100,000 years ago, which is 4000 generations ago:

    2 ** 4000 ~= 10 ** 1200 ~= infinity

    The reason that the real number is not so large is in-breeding, i.e. the fact that an ancestor through one line may also be an ancestor through another line. In the extreme case, where a full brother and sister mate, both of the grandparents are ancestors of the child through both the father and the mother. In the more common case of cousins mating, one pair of the great-grandparents are ancestors through both the mother and the father, so when we look three generations backwards from the child, we get only 6 ancestors instead of 8.

    However, even with this consideration the numbers don't add up. For example, let us assume that instead of doubling each generation, the number of ancestors increases only by a factor of 1.1. That means that 10 generations ago, instead of having 2**10 = 1024 ancestors, each individual has 1.1**10 ~= 2.6 ancestors. That may look too small, but when we look 100,000 years ago, we get 1.1 ** 4000 ~= 10 ** 165, which is still ridiculously large. Assuming a longer generation time doesn't help: if we assume a generation time of 30 years, we get 3333 generations, and hence 1.1 ** 3333 ~= 10 ** 138. Even with an increase of only 1.01 per generation, we still get 1.01 ** 3333 ~= 10**14, which is still too large (there are fewer than 10**10 humans today).
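    These back-of-envelope numbers are easy to check. Working in log10 keeps the huge values representable (2.0**4000 would overflow an ordinary float):

```python
import math

def log10_ancestors(growth, generations):
    """log10 of the ancestor count after `generations`, assuming a
    constant per-generation growth factor (2.0 means no inbreeding
    at all)."""
    return generations * math.log10(growth)

print(round(log10_ancestors(2.0, 4000)))   # no inbreeding: ~10**1204
print(round(log10_ancestors(1.1, 4000)))   # factor 1.1: still ~10**165
print(round(log10_ancestors(1.1, 3333)))   # 30-year generations: ~10**138
print(round(log10_ancestors(1.01, 3333)))  # factor 1.01: ~10**14
```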

    Thus the actual geometric increase in the number of ancestors per generation must on average be smaller than 1.01, and even over only 1000 years it has to be less than 1.2 per generation. This small increase in the number of ancestors is possible only if the population is close to saturation, i.e. almost all of the population are ancestors of the individual, and hence almost any mating involves "inbreeding" (i.e. two individuals that are closely related through other ancestors of the individual). We thus reach the conclusion that each individual today is a descendant of all the population 100,000 years ago.

    There are two qualifications to this conclusion: (a) extinction of lines, (b) barriers for mating.

    Extinction: Not all of the population that lived 100,000 years ago left descendants until today. Some of them died without children, some had children but not grandchildren, etc. However, the extinction of a line is unlikely to remain a risk for long, and we can probably assume it is 0 after 1000 years. That is because the successful lines in the population expand fast. In a static population, an individual has on average 2 children. Among the individuals that do leave descendants, the average is larger. Thus after 1000 years == 40 generations, an individual will have, ignoring in-breeding, 2**40 ~= 10 ** 12 descendants. With in-breeding the number is smaller. But even if we assume an average of only 1.5 children per generation, we get 1.5 ** 40 ~= 11,000,000. The actual number must be much smaller, which will happen once the population is close to saturation, i.e. when most of the population are descendants of the individual. Thus individuals that leave descendants for 1000 years have probably already saturated the population in the region in which they live. In addition, it is likely that a significant number of their descendants have migrated and established branches elsewhere. At this stage it is extremely unlikely that both all the local population and all the further branches will go extinct.
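    A quick check of the descendant-growth numbers above (ignoring in-breeding and saturation, as in the text):

```python
def descendants(children_per_parent, generations):
    """Descendant count ignoring in-breeding; in reality growth stops
    once the local population is saturated."""
    return children_per_parent ** generations

print(f"{descendants(2.0, 40):.1e}")   # 2 children on average: ~1.1e12
print(f"{descendants(1.5, 40):.1e}")   # 1.5 children on average: ~1.1e7
```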

    How many individuals leave descendants after 1000 years? I am not going to try to reach a precise estimate, but I think we can say with confidence that it is in the range 5-80%, unless there is a population crash which reduces the population to a fraction of its original size. The reason is that, as I wrote above, lines that don't go extinct tend to increase their numbers fairly fast, and as the number of descendants increases, the probability of a line going extinct diminishes.

    It may be thought that a complete destruction of some local population, i.e. a household or a tribe, may cause a large number of extinctions. However, this rarely affects a large fraction of the population in each generation (a few percent at most). If it happens several generations after the individuals that we are interested in, many of these will already have descendants elsewhere, so the effect will become smaller and smaller over time.

    The other qualification is barriers to mating. For example, if we use 8,000 years instead of 100,000, the numbers still suggest that we are today all descendants of all the successful individuals from 8,000 years ago. However, the populations of the Americas were almost totally separated from the rest of the world for most of this time, until 500 years ago. Thus it seems that at the end of the 15th century very few, if any, native Americans were descendants of natives of Asia of 8,000 years ago. However, all native Americans are believed to be descendants of some Asiatic groups that migrated to North America 10,000-20,000 years ago. Within the old world there are no such strong barriers as the oceans, but there are large distances. However, from the way that humans have spread all over the Americas, it doesn't seem that the distances are enough to stop human migrations over periods of thousands of years. We can therefore assume that a complete saturation (i.e. all the population being descendants of all the successful individuals at some previous time) takes only a few tens of thousands of years. We can therefore reach the conclusion that all humans today are descendants of all the successful individuals (5-80% of the population) that lived 100,000 years ago.

    Given this conclusion, finding by analysis that some gene(s) seem to have originated for all humans in a single allele 200,000 years ago becomes much less exciting than it seems otherwise. What it shows is that this allele was advantageous enough (compared to other alleles) to take over from the other alleles. It doesn't tell us much about human ancestry.

    Some cases are a little more interesting. For example, mitochondria are believed to be inherited only from the mother (though there has been some doubt lately whether this is 100% true). If it is found that all mitochondria came from a single individual 200,000 years ago, that shows not only that we are all descendants of the same individual (we already knew that), but also that we are all descendants of this woman through the female line. That does make this woman unique, and suggests that her mitochondria were significantly better than the other mitochondria (which are all by now extinct), but it still cannot tell us much about human ancestry. In particular, it does not tell us that other women from her time didn't leave descendants to our time. Many of them did, through at least one male.

    An important point to realize is that if an individual living 100,000 years ago is an ancestor of all the individuals living today, it does not follow that any of this individual's genetic material survived too. Recombination and selection over time eliminate most of the copies of the genetic material, and there is no reason to believe that the result will be fairly distributed. What we should expect to find is a continuous distribution, with many individuals that left nothing, many individuals that left a small amount of genetic material (fractions of genes, or stretches of DNA that don't code for genes at all), fewer individuals that actually left more than a whole gene, and a small number of individuals that left several genes or more. Mitochondria and the Y chromosome are exceptions, because they don't recombine, so when a significant improvement arises in one individual, the whole unit (mitochondria or Y chromosome) may spread to all the population.
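    A rough way to see why many genealogical ancestors leave no DNA at all: the number of genomic segments inherited from g generations back grows only linearly with g, while the number of ancestors grows exponentially. The figures below (about 22 autosomes and roughly 33 crossovers per meiosis) are common ballpark values, used here purely for illustration:

```python
def expected_segments(g, chromosomes=22, crossovers=33):
    """Rough expected number of segments in one haploid genome copy
    traceable to ancestors g generations back (each meiosis adds
    about `crossovers` breakpoints; all figures are ballpark
    assumptions, not measurements)."""
    return chromosomes + crossovers * g

def genealogical_ancestors(g):
    """Genealogical ancestors g generations back, ignoring inbreeding."""
    return 2 ** g

for g in (5, 10, 15, 20):
    segs = expected_segments(g)
    anc = genealogical_ancestors(g)
    print(f"gen {g}: {anc} ancestors, ~{segs} segments:"
          f" at most {min(anc, segs)} ancestors can contribute DNA")
```

    Already 10 generations back there are far more ancestors than segments, so most ancestors at that depth necessarily contributed nothing, exactly the continuous distribution described above.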

    This article is an example of the confusion around these genetic studies. The author actually describes the expansion of ancestry which I discussed above, but later he says:

    It is based on such analyses that it can be calculated that all modern human DNA is derived from something like 86,000 individuals, living in Africa, of whom the hypothetical Eve and Adam were the two whose lineages made it into the present day, all other lines having gone extinct.
    The "86,000 individuals" fits well with the "5-80%" that I concluded, assuming the population at the time these 86,000 lived was 0.2-1 million, which is a reasonable assumption. However, the second half of the sentence says that only two of these actually left descendants to this day. The language that is used is not that precise, so other interpretations may be claimed, but the interpretation that only two individuals left descendants is the obvious one, and the one that lay readers will use. Thus lay readers, who are the intended target of this article, will be misled by this sentence.

    The other not completely unlikely interpretation is that only two individuals left genetic material to today, and that may be the way geneticists interpret it. However, as discussed above, this is also false.

    It is not obvious what the author of the article thinks (I sent him an e-mail asking about this [7 Jul 2002]). I didn't read the book that the article is about, but from this interview (search for "the royal we", and the next three answers) it is clear that the author of the book gets it right.

    [24 Nov 2002] They now make a similar error for dogs, but take it even further. This article (Science, Volume 298, Number 5598, Issue of 22 Nov 2002, pp. 1610-1613) analyzes mitochondrial sequences, and concludes that there were several female ancestors 15,000-40,000 years ago. In an amazing mental leap, for which I couldn't find any basis either in the article or in any of the comments on it, they identify these female ancestors with domestication events. They demagogically "support" the mental leap by blending the discussion of genetics with references to domestication. For example, they say:

    To determine whether dogs were domesticated in one or several places, and the approximate place and time of these events, we examined the structure of mtDNA sequence variation among domestic dogs worldwide.
    I.e. they already assume that their data will tell them "the approximate place and time of these [domestication] events". Later they say:
    In a domestication event with a subsequent population expansion, a starlike phylogeny, with the founder haplotype in the center and new haplotypes distributed radially, would be expected. Fu's Fs test (18) for clades A, B, and C in East Asia (-20.0, -6.6, and -0.50, respectively) showed a significant signal of population expansion for clades A and B (P < 0.01).
    As if a starlike phylogeny and Fu's Fs test can tell us anything about a domestication event, which they cannot, because they only tell us about the behaviour of an allele in a population. And again in the same paragraph:
    The approximate age of clade A, assuming a single origin from wolf and a subsequent population expansion, is calculated from the mean pairwise distance between East Asian sequences (3.39 substitutions, SD = 0.13) and the mutation rate to 41,000 ± 4,000 years.
    As if the calculation were based on a domestication event, which it is not (it is based on the assumption of a single maternal ancestor).
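    The arithmetic behind this kind of mutation-rate dating is simple, which makes the point clearer: nothing in it refers to domestication, only to the divergence of lineages. Below is a minimal sketch; the per-lineage substitution rate is a hypothetical value, chosen only so the result lands in the paper's reported range, not taken from the paper itself.

```python
# Sketch of mutation-rate dating of a clade.  The age estimate is
# (mean pairwise distance) / (2 * substitution rate per lineage): two lineages
# diverging from one ancestor accumulate differences on both sides.
# The rate below is hypothetical, chosen only to illustrate the arithmetic.

mean_pairwise_distance = 3.39   # substitutions (figure quoted from the paper)
rate_per_lineage = 4.13e-5      # substitutions per year (assumed for illustration)

age_years = mean_pairwise_distance / (2 * rate_per_lineage)
print(round(age_years))         # on the order of 41,000 years
```

    Note that the only biological assumption in the computation is a single maternal ancestor for the clade; a domestication event appears nowhere in it.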

    It is interesting that the reviewers, the editor, and the authors of all the commentaries that I read did not seem to be bothered by this kind of demagoguery.

    [ 15 Dec 2003] Steve Jones in his book "In the blood" (HarperCollins Publishers 1996 ISBN 0 00 2555115) goes through a similar argument to mine above (but with far fewer numbers), but he believes the mixing was much faster than I wrote (p.42):

    "However, all those whose family line did persist for the necessary three millenia or so, whoever they were and wherever they lived, have an unbroken link with everyone (or almost everyone) alive today."
    That is clearly false for the American population of three thousand years ago, whose family lines did persist but have no such links to most of the old-world population. For the old-world population he may be right, but I doubt it, and he doesn't seem to offer any evidence to support it, either in the book or in his e-mail response to my query. Interestingly, in the e-mail he writes "clearly this is a speculation", but in the book he introduces this as "a simple and unavoidable conclusion" (p.42, third paragraph, fifth line).

    Surprisingly, he still makes on p.94 the mistake of assuming that only females whose mitochondria survived left descendants, and similarly for males and Y chromosomes. In his e-mail response he claims that he didn't mean it and that this is a "fairly stringent reading" of his text, but I don't see how these statements on p. 94 can be interpreted otherwise:

    [talking about females whose mitochondria persist]: ".. their line survived while those of their fellows did not."
    [talking about males whose Y chromosome persists]: "The genetic legacy of the others has disappeared."
    In particular, there is no way that non-experts (for whom the book is written) can interpret it in any way other than saying that the others didn't leave descendants, or at least didn't leave any genetic material.

    In his e-mail response the author makes it clear that when he thinks about it, he gets it right. Yet when he writes a book for non-experts, he doesn't.

    [30 Sep 2004] There is now an article in Nature (Nature 431, 562 - 566 (30 September 2004); doi:10.1038/nature02842) that claims to compute that the point in time at which everybody then alive was either an ancestor of all of us or left no descendants is about 5,400 years ago. They assume continuous movement over the Bering straits, so the Americas are not isolated. I don't know how realistic their assumptions are, but it does show that when you actually compute you get much shorter periods of time than what I estimated above, and hence that finding common genetic material is even less interesting than I stated. In the commentary the distinction between being an ancestor of all the population and actually leaving genetic material in current genomes is made explicit.
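    Why the computed times come out so short can be seen from a toy version of the standard two-parent ancestry model (every individual draws two random parents from the previous generation): a common ancestor of the whole population appears after roughly log2(N) generations, not after thousands of generations. This is only an idealized random-mating sketch, not the model the Nature article actually uses:

```python
import random

def generations_to_common_ancestor(n, seed=0):
    """Toy two-parent ancestry model: each individual in every new generation
    picks two parents at random from the previous one.  Returns the number of
    generations until some founder is an ancestor of the whole population."""
    rng = random.Random(seed)
    ancestors = [{i} for i in range(n)]   # founder sets per individual
    gens = 0
    while True:
        gens += 1
        ancestors = [ancestors[rng.randrange(n)] | ancestors[rng.randrange(n)]
                     for _ in range(n)]
        if set.intersection(*ancestors):  # some founder reaches everybody
            return gens

gens = generations_to_common_ancestor(256)
print(gens)   # close to log2(256) = 8 generations
```

    The logarithmic growth is the point: doubling the population adds only about one generation, so even realistic population sizes give strikingly recent common ancestors.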

    [ 20 May 2005] In the latest Science there is an article (Macaulay et al, Science 13 May 2005: V. 308, 1034-1036) and a "perspective" (Forster and Matsumura, Science 13 May 2005: V. 308, 965-966), both of which discuss the issue as if the spread of mitochondria were equivalent to the spread of a population. They seem to be completely oblivious to the fact that these are not the same thing. (There is also in the same issue a short "brevia" (Thangaraj et al, Science 13 May 2005: V. 308, 996), which does not contain much discussion.)

    The complexity of the immune system

    [7 Jul 2002]

    It is quite common to hear people, even experienced researchers, expressing some kind of surprise at the complexity of the immune system. However, a little bit of thinking shows that the immune system must be complex.

    The main problem that the immune system needs to cope with is the speed at which pathogens can evolve. If there is some way of deflecting the immune system, pathogens will find it fairly fast. They will produce some protein that exploits the weakness, and increase its production until they completely derail the system.

    That, however, will work only if the system is simple. If the system is complex, and any kind of interaction may affect several processes, this kind of strategy is less likely to work. When a small amount of a protein causes some effect which is beneficial to the pathogen, a larger amount of it may trigger other processes that will destroy the pathogen.

    Thus the complexity of the immune system is an essential feature of it. When "pressing a lever" in the immune system generates some effect, "pressing it more" must generate a different effect. If it generated the same effect, pathogens would find it, "press the lever" to the end and destroy the system.
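    The "lever" argument can be made concrete with a toy model (all numbers entirely hypothetical): in a "simple" system the benefit to the pathogen grows monotonically with the amount of deflecting protein, so the best strategy is to produce as much as possible; in a "complex" system the same protein triggers a destructive response past some threshold, so blind escalation fails.

```python
# Toy model of the "lever" argument.  Numbers are entirely hypothetical.

def simple_system(protein):
    # Monotone response: more deflecting protein always benefits the pathogen.
    return protein

def complex_system(protein, threshold=5.0):
    # Nonmonotone response: beyond a threshold, the extra protein triggers
    # other processes that destroy the pathogen.
    return protein if protein <= threshold else protein - 10 * (protein - threshold)

levels = [i * 0.5 for i in range(0, 41)]        # protein levels 0.0 .. 20.0
best_simple = max(levels, key=simple_system)     # pathogen's optimal output
best_complex = max(levels, key=complex_system)

print(best_simple)    # 20.0 -- escalate without limit
print(best_complex)   # 5.0  -- escalation past the threshold backfires
```

    The complex system caps how hard the lever can profitably be pressed, which is the sense in which complexity is a defence in itself.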

    Cultural Evolution and Memes

    [28 Jul 2002]

    "Memes" is one of the stupid ideas that are at the moment catching up. The concept was introduced by Dawkins, with "meme" being analog to "gene" and hence "memetics" in analogy to "genetics".

    First, a distinction must be made between cultural evolution and memetics. Cultural evolution is an old concept, and the basic idea is pretty obvious: any mental trait that can be transferred among individuals by any mechanism may spread in the population if individuals that have it are more successful in passing on mental traits. The complexity arises from the possible ways in which mental traits are passed on, and from what makes individuals more successful in passing on traits.

    The simplest case is the idea that you should have many children. In this case, people that think that you should have many children have (on average) more children than people that don't. Since these children tend to learn from their parents, the number of people that think that you should have many children increases in the next generation, i.e. the mental trait of "you should have many children" is spreading.

    However, mental traits can also spread in other ways. For example, if in a village of farmers somebody (the "model farmer") uses a better rotation of crops, their crop will be better and they will become more successful. Other farmers will then try to emulate the success, by copying what the model farmer does. Those who copy the crop rotation will become more successful, and hence a model for copying too. Thus the better crop rotation spreads in the population.

    Several points need to be noticed:

    1. The spread does not depend on the model farmer or the copiers knowing that it is the crop rotation that makes the difference. The copiers may copy many traits from the model, and those that happen to copy the right one will become more successful and spread the trait. In this specific case, it is likely that the farmers will work out what makes the difference fairly fast. However, in more complex cases people may not be able to work out the important factor(s) for a long time. Hence a useful trait may spread and reach fixation (i.e. everybody accepts it) even if nobody understands its usefulness. Thus cultural evolution can be the reason for mental traits even when nobody understands the causal relations between the trait and its effects.

    2. The "copying" of traits is not done only by plain imitation. In the case of crop rotation, it may be regarded as imitation, though I think that most of people will not include this process in their definition of "imitation". In more complex cases, like acquiring moral values, it is further from being plain imitation. The child, as he grows up, accept judgements by model figures (typically parents, but can be others) as true, and then abstract rules that "summarize" these judgements, and these are his moral values. There is nothing that could be called "imitation" in this process, but it is a propagation of a trait.

    3. The person acquiring a trait need not be aware that he is acquiring it. Certainly young children are not aware of the process, and older children and adults are sometimes aware of the process of acquiring new mental traits and sometimes not.

    4. The process of cultural evolution is much faster than biological evolution (in multi-cellular organisms) for two reasons:
      • It doesn't have to be transmitted only "vertically" (i.e. to descendants) like biological evolution. It can be transmitted "horizontally" (i.e. to people of the same generation) and "diagonally".
      • It is partly Lamarckian, because people can think about and affect what they are doing. As a result, the "mutations" can be large. In biological evolution, large mutations are fatal, so biological evolution is restricted to small changes.
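    The effect of horizontal transmission on speed (point 4 above) can be illustrated with a minimal sketch. All the rates here are hypothetical; the only point is the difference in speed between a trait transmitted only to children each generation and one that is also copied between peers.

```python
# Minimal sketch of vertical-only vs vertical-plus-horizontal trait spread.
# All rates are hypothetical; the point is only the relative speed.

def generations_to_fixation(horizontal_rate, growth_per_generation=1.2,
                            start=0.01, fixation=0.99):
    """Fraction of the population carrying a trait.  Vertical transmission
    multiplies the fraction by a growth factor each generation; horizontal
    transmission additionally converts a share of the rest of the population."""
    frac, gens = start, 0
    while frac < fixation:
        frac = min(1.0, frac * growth_per_generation)                # vertical
        frac = min(1.0, frac + horizontal_rate * frac * (1 - frac))  # horizontal
        gens += 1
    return gens

vertical_only = generations_to_fixation(horizontal_rate=0.0)
with_horizontal = generations_to_fixation(horizontal_rate=1.0)
print(vertical_only, with_horizontal)   # horizontal transmission is much faster
```

    Under these assumed numbers the vertical-only trait needs dozens of generations to reach fixation, while adding horizontal copying gets there in a handful, which is the qualitative gap between biological and cultural evolution described above.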

    As I wrote above, nothing of this is really new. The concept "meme" (and "memetics") is new, and was first introduced by Dawkins less than 30 years ago. The idea is that cultural evolution is carried out by "memes" (which are analogous to genes), which are ideas and concepts that spread across people. "Memetics" (in analogy to genetics) is the science of memes.

    The interpretation of "meme" and "memetics" can be made broad enough to make them equivalent to cultural evolution and sociology. With this interpretation, they are just new words. However, most people interpret the words in a narrower way, by incorporating several assumptions about memes and memetics. Normally it is one or more of the following assumptions:

    1. There are some entities underlying all (or at least most) of cultural evolution which are well-defined enough to give them a name ("meme").
    2. The behaviour of these entities is regular enough to have a science of them.
    3. The science of memes can gain from insights in the science of genes.
    The last assumption is almost trivially false. The spread of "memes" (i.e. ideas, concepts, mental traits etc.) is so different from the spread of genes that none of the insights from genetics is useful in understanding it. Sometimes people try to pretend that you can use insights from genetics, but the result is always a ridiculous idea. For example, since ideas can spread "horizontally", they obviously are not analogous to genes. Therefore, people say that "memes" are like viruses (because these also spread from person to person). However, viruses do not become part of the functioning of their host, while ideas do, so they are completely different things.

    The first two assumptions are also false, but it requires more knowledge to know that. The problem is that the only place where the entities could reside is in the brain, and there is clearly nothing in the brain that corresponds to the entities that are typically regarded as memes (ideas, concepts). To actually know that, though, you need to have some knowledge of neuroscience and brain anatomy, and most of the promoters of the idea don't bother to learn anything about neuroscience.

    For example, here is a list of "People working in memetics". None of these is a neuroscientist. If memes were real in any sense, the majority of them should have been neuroscientists.

    The latest Nature gives a nice example. It contains a review of a book about memetics (Nature 418, 370 - 371 (2002); doi:10.1038/418370a Replication at the speed of thought, EÖRS SZATHMÁRY). According to the review, "He [the author of the book] identifies memes as dynamic neural activity patterns (states) that can be replicated, primarily within the brain." Clearly, because of the stochastic connectivity in the cortex, patterns of neural activity cannot be replicated in the brain. The author, however, is an anthropologist, so he doesn't know that, and hence feels free to speculate against the data. The funny thing is that the reviewer himself is not a neuroscientist either. For a book that presents a theory based on speculation about things in the brain, it would be natural to select a neuroscientist to comment on it, but that isn't what Nature has done. Instead, they took a theoretical biologist, who himself doesn't know neuroscience (which is clear from the fact that he refers to Calvin's fantasies as if they are serious ideas), and hence cannot actually comment on the theory itself. The readers are left with the impression that the book's ideas are serious, rather than ignorant speculations.

    The reviewer may have a feeling that something is wrong with assuming replication of some brain entity, because he seems to think that this isn't part of what memes are, and that these are defined by their phenotype only (my interpretation of "the criterion being sufficient phenotypic convergence"). This differs from saying that a meme is whatever it is that causes some behavioral trait (which is a useless concept) by the implied assumption (1) above.

    The requirement for evidence

    [17 Aug 2002]

    In an argument between two positions (outside a courtroom), it is normally expected that both sides bring evidence to support their positions. Failure by either side to bring such evidence is typically regarded as equally damaging to both sides.

    This, however, is a mistake, because it doesn't take into account the effort that is required to bring evidence. If we call the positions A and B, it is possible that it is very difficult to bring evidence for A but easy to bring evidence for B, and in this case the failure of both sides to bring evidence should be much more damaging to position B than to position A.

    For example, take the question of whether people have a coherent interpretation of the word "consciousness". Some people believe that there is such a coherent interpretation; others (including me) believe there isn't. Neither side can bring supporting evidence, so is it a stalemate? No, because of the difference in difficulty of bringing evidence. If there is a coherent interpretation, it should be reasonably easy to articulate it explicitly (by "reasonably easy" I don't mean that every person should be able to give it immediately, but that when people seriously think about it they should have an answer in a matter of hours).

    On the other hand, if people don't have a coherent interpretation of the word "consciousness", it is not easy to find any evidence for that. If people normally gave their definition of the word in texts that use it, it would be possible to compare the definitions between texts, to compare the way the word is used against the definitions, and to show the discrepancies. But people don't give definitions, and when they do, it is always in terms that are themselves ambiguous, so this route is not possible. The other route is to analyze texts that use the word, deduce the interpretations that the different texts use, and show the discrepancies. The problem with this kind of work is that nobody is going to read it. Thus there is no practical way to show the lack of a coherent interpretation of the word "consciousness".

    Hence the failure to bring evidence is much more damaging to the position that people have a coherent interpretation of the word. The situation is similar whenever the argument is about the lack of something. For example, the lack of genetic specification of connectivity inside the cortex (Stochastic connectivity) is something that many people don't accept, and they typically argue that there is no evidence for it. But if it is true, there will not be evidence for it, so this is not a valid argument. On the other hand, if there were genetic specification of the connectivity, by now there should have been evidence for it from comparing connectivities across individuals.

    Thus, when considering the strength of positions, it is important to consider not only the evidence that exists and the evidence that is lacking, but also whether the lacking evidence could exist, and how easy it would be to bring it if it did.

    The lucky-prejudice effect

    [17 Aug 2002]

    The lucky-prejudice effect happens when a person has a prejudice (i.e. a belief that is not based on supporting evidence) that happens by chance to be useful. Here I am mainly interested in its effect on breakthroughs in science.

    In any field of research there will typically be several researchers that are better than the rest, in that they have a better grip on the current understanding in the field. Note that these people are not necessarily the senior people in the field. When a breakthrough comes, it is probably achieved by one of these people. But which one?

    In most cases the difference in ability between the top people is too small to determine who will be the successful one. However, all these people are working at the edge of current knowledge, so the best direction forward is not clear from the current evidence. Under these circumstances, these people will select their line of research based on their own prejudices. One of them, by chance, will have a prejudice that leads him/her in the best direction, and that is the one who will make the next breakthrough.

    Several points to note:

  • The lucky-prejudice effect selects among the top people in the field. It will rarely, if ever, cause a mediocre researcher to come up with a breakthrough.

  • Since the effect is by chance, it may or may not happen. There may be situations in which the prejudices of the top people in the field don't give any one of them an advantage.

  • In some rare cases the top people are not actually equal, and one of them is really better than the rest, and then the lucky-prejudice effect is less likely to be effective.

    Since the effect is by chance, it is not useful for generating predictions of future breakthroughs in science, but it is useful in explaining some phenomena:

  • The fact that most breakthroughs are achieved by scientists in their thirties, and these scientists don't achieve further breakthroughs later. It would be natural to expect that a scientist who achieves a breakthrough as a junior scientist would go on to achieve further breakthroughs once he has higher status, more experience, and many more resources. That doesn't seem to be the case, and the lucky-prejudice effect explains it: the first breakthrough was a result of a lucky prejudice. Once this breakthrough (and others) occurred, the field of research changed, and now the same prejudice is not so useful. Since the lucky prejudice was never based on evidence, it is difficult for the scientist to change it. If he/she is aware of it, they will tend to cling to it because it was successful in their first breakthrough (and will call it "intuition"). In many cases, they are not aware of it.

  • It explains why many people that look extremely promising when young never achieve any breakthrough. They happen not to have any lucky-prejudice.
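    A tiny Monte Carlo sketch (all numbers hypothetical) makes the repeat-breakthrough point concrete: with five equally plausible research directions, a researcher whose fixed prejudice happens to point at the best one wins with probability about 1/5, but the chance that the same prejudice still points at the best direction after the field has changed is only about 1/25.

```python
import random

# Monte Carlo sketch of the lucky-prejudice effect.  All numbers hypothetical.
rng = random.Random(42)
directions = 5       # possible research directions; one (unknown) is best
trials = 100_000

wins = 0             # the researcher's prejudice points at the best direction
repeat_wins = 0      # ...and is still best after the field has changed
for _ in range(trials):
    prejudice = rng.randrange(directions)   # fixed: not based on evidence
    best_now = rng.randrange(directions)    # best direction before the breakthrough
    best_later = rng.randrange(directions)  # best direction after the field changed
    if prejudice == best_now:
        wins += 1
        if prejudice == best_later:
            repeat_wins += 1

first_rate = wins / trials
repeat_rate = repeat_wins / trials
print(first_rate)     # about 1/5: any one of 5 equal candidates
print(repeat_rate)    # about 1/25: the same prejudice rarely wins twice
```

    The model is deliberately crude, but it captures the asymmetry: the first success needs one lucky draw, a repeat needs two, and the prejudice itself (being evidence-free) doesn't improve between them.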

    False "understanding".


    By 'false "understanding"' I mean a situation where people think they understand something, though they don't. The primary example I am thinking of is when a person reads some popular science book which distorts the actual situation in the field to make it look "better". To be popular, the book must give the readers the impression that they understood a lot, and normally (though not always) this is achieved by severe distortion of the actual situation in the field. The result is that the readers, who don't possess the expertise to correct the distortion, get the impression that they understand things that they don't.

    False "understanding" is extremely common, much more common than most (maybe all) of people think. Popular science books create significant part of it, but newspapers and magazines probably account for more, and people develop a lot on their own.

    The second point about false "understanding" that most people don't realise is that it is very difficult to undo its effects. The main reason is that to unlearn a false "understanding", the person needs to break existing associations. This is much, much more difficult than learning new ideas, which involves creating new associations. The difference in difficulty is so large that it offsets the advantage of learning unless false "understandings" are really rare. For example, if a person learns 10 propositions of equal significance, and one of them is false, he is actually worse off than a person that didn't learn any of these propositions, because the effort required to unlearn the wrong proposition is much larger than the effort to learn all ten propositions from scratch.
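    The ten-propositions example can be put in simple arithmetic. The cost figures below are hypothetical, chosen only to show the structure of the claim: the learner ends up worse off than a non-learner exactly when fixing the one false proposition costs more than learning all ten afresh would.

```python
# Hypothetical cost model for the ten-propositions example.
learn_cost = 1          # effort to learn one proposition (arbitrary unit)
unlearn_factor = 15     # assumed: unlearning costs 15x learning (hypothetical)

n_props = 10
n_false = 1

cost_of_learning = n_props * learn_cost                 # what the learner spent
cost_to_fix = n_false * unlearn_factor * learn_cost     # what he still owes

# Worse off than someone who learned nothing: the non-learner could learn all
# ten propositions from scratch for less than the learner's unlearning bill.
print(cost_to_fix > cost_of_learning)   # True under these assumed costs
```

    The specific factor of 15 is invented; the argument only needs the unlearning cost to exceed the cost of learning the whole set from scratch.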

    The third point that people miss is that false "understandings" cause people to misjudge evidence. In general, people have a strong tendency to pay attention to, and regard as significant, evidence that fits their beliefs, and to dismiss or misinterpret evidence that does not. Once a person has misunderstood something, it will cause him to misjudge evidence in the related field, and to "learn" more false ideas.

    And the last point to note is that even once the false "understanding" has been corrected, i.e. the person learns that what he thought he understood is wrong, the damage of the false "understanding" lingers on in derived false "understandings" (next section).

    False chains of reasoning

    [26 Jan 2003]

    Let's assume that a person learns that P1 (Px stands for an individual proposition) is true, and therefore that P2 is true, and therefore that P3 is true, and therefore that P4 is true. Thus the person has a reason to believe that P4 is supported. Let's also assume that P4 is a significant proposition, i.e. it is used often in the thinking processes of the person. The learning of P1 and making the inferences from it may be done consciously or unconsciously, by listening to some other person or independently.

    Then the person learns that P1 is false. Logically, at that point P2 becomes suspect: it may still be true, but it has less support than the person believes it does. Because P2 is suspect, P3 also becomes suspect, and hence so does P4. In principle, the person needs to go through these propositions and evaluate the level of support they have now that P1 is known to be false. I think it is clear that people rarely do this update. In most cases, people cannot actually recall the lines of inference that they used in establishing their beliefs, so they couldn't do the update even if they tried. Even when they can recall them, and actually try to update their beliefs, the update will tend to be biased: for example, they will tend to assume that P2 is still true anyway. Therefore, after a person finds that one of his beliefs is wrong, he will still be left with some unjustified beliefs.

    A potential mechanism to get over this problem is to check a proposition's status whenever it is used, e.g. whenever using P4, check whether P3 is still supported, which will mean checking P2 and hence P1. However, this mechanism also requires being able to recall the chain of inferences, and requires a large amount of work every time the person uses a proposition. Together, these make this mechanism extremely unlikely. Therefore, when a person learns something through a false reasoning chain, even after they correct the false link, they will have unjustified beliefs, which can live on for a long time, in many cases for the lifetime of the person. Note that this is not the result of an irrational or emotional response: it is simply a reflection of the fact that human thinking is not perfect.
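    The update a careful reasoner should perform can be sketched as a walk over a small dependency graph of propositions (the chain P1→P2→P3→P4 from the example): when a premise is found false, everything downstream of it should be flagged as suspect and re-evaluated.

```python
# Sketch of the belief-update mechanism: propositions and what they support.
# When a premise is found false, everything inferred from it becomes suspect.

supports = {            # proposition -> propositions inferred from it
    "P1": ["P2"],
    "P2": ["P3"],
    "P3": ["P4"],
    "P4": [],
}

def suspect_after_falsifying(premise):
    """Return all propositions whose support chain passes through `premise`."""
    suspect, frontier = set(), [premise]
    while frontier:
        p = frontier.pop()
        for q in supports.get(p, []):
            if q not in suspect:
                suspect.add(q)
                frontier.append(q)
    return suspect

print(sorted(suspect_after_falsifying("P1")))   # ['P2', 'P3', 'P4']
```

    The sketch also shows why the mechanism is so costly in practice: it presupposes that the whole `supports` graph has been recorded and is recallable, which is exactly what people cannot usually do.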

    Politicians and other users of demagogy use this principle pretty often, even if they don't actually understand it. It is quite common to see somebody presenting a false chain of reasoning which is based on a link that the listeners/readers are likely to realize is false only some time later. However, by that time the damage has already been done, and the listeners/readers are stuck with unjustified beliefs. A common manifestation of this trick in speech is when a speaker says "let me finish" (or a similar phrase). Normally that means he needs to finish a false chain of reasoning.

    Conclusions from this discussion include:

    1. When you find that any of your beliefs is wrong, it is worth spending time performing the update, i.e. checking which other beliefs are based on the false belief, re-thinking them, and then re-thinking anything that depends on them. It is important to realize that at this point you will have a bias towards keeping the affected beliefs.
    2. Avoid being convinced by suspect chains of reasoning. That may sound obvious, but it isn't. For example, journalistic commentary is unreliable and contains many false chains of reasoning, yet many people allow themselves to be convinced by it. Most of them realize that the commentary is unreliable, but don't realize how damaging it is to follow such chains of reasoning. The same is true of many 'popular science' books.
    3. When you hear somebody says "let me finish", stop listening. They are probably using a false chain of reasoning.
    4. Try to make sure that you can reproduce the chain of reasoning behind your beliefs, at least in the areas in which you want to reach good understanding.

    Bird and human brains

    [21 Feb 2003] In this Nature Science Update, some researchers describe a study that shows a similar distribution of some genes in the brains of species of birds that learn new songs. This is quite an interesting finding, though it is not obvious how significant it is.

    The funny bit is in the end of the Update, where the author of the Update says about the author of the study:

    He now plans to see if the human brain has similar patterns of molecules. "These ancient receptors could help us identify the entire system of brain regions for vocal learning and language in humans in a [new] way," he says.
    That is really nonsense, because of the difference in brain anatomy between birds and mammals. Mammals, and especially humans, think in the cerebral cortex, including the main functions of language interpretation and generation. In birds, the cortex is undeveloped, and any thinking functions happen in other parts of the brain. Thus there cannot be a relation between the distribution of genes in birds and human thinking and language learning, because the latter happen in brain parts that are small and unimportant in birds.

    It is not surprising that the author of the Update didn't know that, but what about the author of the study? If you look at the author's home page, you can see that the main structure he thinks about is something in the basal ganglia. Since damage to the basal ganglia seems never to affect perception or generation of language, it seems highly unlikely to be involved in language. That doesn't seem to bother him.

    [ 4 Mar 2004] In this page they say about the author:
    Jarvis believes that the forebrains of humans have similar loops, although they have not yet been discovered.
    This is an attribution, rather than a quote, but the context suggests that it really came from Jarvis himself.

    The authors of this article are also quite confused. They start by saying:

    Birdsong is considered a model of human speech development at behavioral and neural levels.
    At the behavioural level, you can always find matches between birdsong and human speech development, so it is not wrong to say "birdsong is a model of human speech development"; it is just useless, because it doesn't give you any useful information. But at the neural level birdsong is clearly not a model for human speech development, because it is done in different brain parts (as discussed above).

    The discussion of the similarity in this paper is all about behavioural similarities, because there aren't any neural similarities. The similarities at the behavioural level are mostly similarities of stages in learning processes, and could also match any other learning process, e.g. a person learning to drive or a cat learning to catch mice.

    The only additional feature beyond the general pattern of learning is the influence of "social partners". This feature doesn't appear in all cases of learning, but it does in many of them, in particular in almost all cases of humans learning activities that are not strongly constrained by anatomy and the laws of physics. Thus this similarity is also not very illuminating.

    It is also interesting that the novel finding of the research is that babies respond positively to mother interactions, even though to most people that will look obvious. The reason for that is that the various innatist theories of language acquisition don't really fit with this fact, so many researchers ignore it.

    [5Apr2004] The latest issue of the Journal of Neuroscience contains two articles that propagate the myth of human-speech-like birdsong, and conflate it with the myth of the language gene (Haesler et al and Teramitsu et al). In the articles themselves they don't push the point too strongly, but they do it much more bluntly in the press releases. For example, this university release is titled "A Bird’s Song May Be Key to Understanding Human Speech Disorders, UCLA Scientists Report." They are even more blatant in this university news release:

    Neurobiologists have discovered that a nearly identical version of a gene whose mutation produces an inherited language deficit in humans is a key component of the song-learning machinery in birds.
    Notice that this statement misleads not only about the relation between human speech and birdsong, but also about the deficit that the gene causes in humans (discussion here) and about what they actually found in birds, because they haven't actually shown that it is "a key component of the song-learning mechanism". The other principal author of Haesler et al says that explicitly in the penultimate sentence of her press release by the Max Planck Society. It is worth noticing that all these press releases, which are aimed at non-experts, do not mention the issue of the cerebral cortex in humans and the fact that birds don't have it, even though their audience is unlikely to be aware of this point. Even when they mention various caveats, the fundamental anatomical differences are left hidden.

    See here for their responses when I asked about it.

    In this Research interest description, it says:

    Songbirds are the preeminent animal model for human vocal learning, and they represent the only model system that allows investigation of vocal learning at cellular and molecular levels.
    This suggests that this guy genuinely believes that he can learn about human vocal learning by studying birds.

    [29Aug2005] In this article (Visually Inexperienced Chicks Exhibit Spontaneous Preference for Biological Motion Patterns, Giorgio Vallortigara, Lucia Regoli, Fabio Marconato, PLoS Biology, Volume 3, Issue 7, July 2005) they make a similar logical error about recognizing animated movement. They conclude that the fact that chicks prefer biological movement from hatching, and that human babies, after several months, also recognize biological movement, ".. suggest that a preference for biological motion may be predisposed in the brain of vertebrates." That would make sense only if we assume that the underlying mechanism is homologous between chicks and humans, and it clearly isn't, because of the anatomical differences.

    Creativity and madness

    The thinking system that humans have (whether learned or innate) is obviously not perfect. Each individual has many thoughts that are a-normal, by which I mean thoughts that do not fit the norms of society or disagree with the laws of nature. Expressing these thoughts in any way (by expression of a thought I mean any behaviour, including verbal behaviour, that derives from it) will tend to have undesirable effects, and hence individuals will try to suppress such expression.

    Individuals that are successful in this suppression are 'normal'. For individuals that do not succeed, we can think of two extreme possibilities:

  • Some individuals succeed to suppress all a-normal thoughts except some thoughts that has some value (e.g. aesthetic value). These individuals are regarded as having creative ideas.
  • Some individuals don't succeed in suppressing their a-normal thoughts at all. These individuals are what we call 'mad'.

    Most of the a-normal individuals are somewhere between these two extremes. However, for their creative ideas to be recognized, an individual also needs the skill to convert them into something that can be appreciated by other people. Thus most of the people that are somewhere between creative and mad are simply considered mad, because they are not able to convert their creative ideas into something interesting. The minority of them, though, do acquire the required skills, and then we have the combination of a creative person that is also somewhat mad.

    Madness is not really useful for acquiring skills, especially if these skills include some kind of cooperation with other individuals. So the most successful creative persons will tend to be those that are quite successful in suppressing the expression of their a-normal thoughts. They express only the ones that are actually useful, i.e. the ones that they can convert into something that other people appreciate, plus a small range of other a-normal thoughts, and will be what we call eccentric. The range of other (not useful) a-normal thoughts that they express is restricted by what is acceptable to the people that they cooperate with, and in the case of mass-artists (e.g. singers) also by what is acceptable to their audience (unless they can keep it secret).

    In short, artistic creativity can be described as "madness constrained to a useful aspect", rather than as a magical attribute, and it will tend to be strongly associated with mild madness, i.e. eccentricity.

    All this looks to me quite obvious, but it seems not to be so for many people. I suspect that this is because many people believe that the human thinking system is rational, and hence normal, unless it is damaged in some way. The stochastic connectivity of the cortex makes it clear that this is false, but most people are not aware of this, and even neuroscientists don't follow the argument properly.

    "Cortical parcellation"

    The most interesting thing about this article (PNAS, February 17, 2004, vol. 101, no. 7, 2167-2172) is how uninteresting its conclusion is. What they found is exactly what you would expect from a random network: the sensitivity to the input in the cortex is strongest in the regions where the input enters the cortex (e.g. V1 for visual input), and becomes less pronounced as you move away from the region of input. It is an 'interesting' result only in the context of believers in modularity, who expect sharp boundaries between the modalities.
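    That a graded fall-off of input sensitivity is what a randomly, locally connected network gives can be illustrated with a toy simulation. This is only a sketch, not a model of the actual data: the chain of 50 sites, the connection radius of 2, and the weight range are all arbitrary assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # a chain of cortical "sites"

# Random local connectivity: each site gets random-strength
# connections from its neighbours within a small radius.
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j and abs(i - j) <= 2:
            W[i, j] = rng.uniform(0.0, 0.2)

# External input enters at site 0 (the analogue of V1 for visual input).
ext = np.zeros(n)
ext[0] = 1.0

# Steady state of the linear dynamics x = W x + ext, i.e. x = (I - W)^-1 ext.
# (Row sums of W are below 1, so the steady state exists.)
x = np.linalg.solve(np.eye(n) - W, ext)

# Sensitivity to the input falls off gradually with distance from site 0,
# with no sharp boundary anywhere along the chain.
print([round(v, 4) for v in (x[0], x[10], x[25])])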

    Amusingly, these authors seem unable to get rid of modularity-oriented thinking. The second half of the last sentence of the abstract reads:

    and they suggest a parcellation scheme in which modality-specific cortical domains are separated from one another by transitional multisensory zones.
    What their data shows is that the "modality-specific cortical domains" are not separated: they merge into each other in a continuous way. But for these authors they are still "separated".

    "Why should neural activity produce any kind of feeling at all?"

    The quote is from a book review in a respectable journal (Seeing Through the Steam of Consciousness, Science, Vol 304, Issue 5667, 52-53, 2 April 2004), and is quite typical. The problem with this question is that it is simply not a real question: neural activity (in some specific pattern) doesn't produce a feeling, in the same sense that pieces of wood (in some specific arrangement) don't produce a chair. The actual relation is identity, i.e. pieces of wood in some specific arrangement are a chair, and neural activity of some pattern is a feeling (of some kind). By now this is actually an obvious fact.

    The problem with this fact is that, apart from making a large number of philosophers redundant, it seems to most people to diminish the value of feelings. People simply don't want to believe that their feelings are 'just' neural activity. Therefore, they insist on considering feelings (more generally, qualia) as entities which are not identical to neural activity. Hence the need for neural activity to "produce the feeling". Since we already know that the (physical) brain is all about neural activity, these extra entities must be non-physical.

    The remarkable thing about this quote is that it comes from a serious scientific journal, rather than some "popular science" writing. It shows how much dualism is alive and well even in the scientific community, though only in an implicit form.

    [ 5 Nov 2004 ] Another example again from Science: In this book review (Christof Koch, Thinking About the Conscious Mind, Science, Vol 306, Issue 5698, 979-980 , 5 November 2004) the reviewer says:

    (As anybody who has ever suffered from a tooth pain knows only too well; the sodium, potassium, calcium, and other ions sloshing around the brain that are sufficient for the pain are not the same as the awful feeling itself.)
    Obviously he takes for granted that the pain and the "ion sloshing", i.e. neural activity, are not the same thing. But science tells us that they are. The claim that they aren't is (presumably) based on subjective "knowledge" (something like: but it doesn't feel like neural activity), but this "knowledge" is simply useless, and that is all there is to it.

    The language that the reviewer uses (maybe quoting from the book he is reviewing), i.e. talking about "ion sloshing around" rather than about neural activity, is clearly intended to make the identity seem less plausible. This suggests that whoever wrote the quote above felt that saying that the pain is not neural activity doesn't sound that obvious, so he decided to use a more reductionist language rather than admit the problem. An analogy for this maneuver is for someone to argue that a chair is not the same as a large number of atoms connected by covalent bonds in some configuration, rather than arguing that a chair is not some pieces of wood in some configuration.

    "... nature has optimized the structure and function of cortical networks with design principles similar to those used in electronic networks."

    That is taken from the abstract of an article in Science (Laughlin and Sejnowski, Science, Vol 301, Issue 5641, 1870-1874, 26 September 2003, full text online). The article does not point to any kind of evidence to support this statement, which is trivially nonsense. Electronic networks don't have the complexity and plasticity of the "cortical networks", they are physically made from very different materials, and they are "designed" by a completely different kind of agent (human engineers vs. evolution + learning). To just assume that they share design principles, as the authors do here, is a kind of religious belief, rather than a scientific opinion. That this rubbish finds its way into the magazine Science shows how clueless researchers in this field are.


    Yehouda Harpaz