This is a criticism of the book At Home in the Universe by Stuart Kauffman (1995, Penguin Group, ISBN 0-14-017414-1). I am not a great fan of popular science, but I was still surprised by the amount of garbage in this book. It displays contempt for entire branches of science, uses mathematical tricks and jargon that hides a lack of content, and sometimes offers simply plain nonsense. I stopped writing this in the middle, because I don't think this book is worth more effort.
It is worth noting that Kauffman doesn't challenge those branches of science (e.g. cell biology, thermodynamics) which contradict his ideas. He simply ignores them completely, not only their theories but also their accumulated experimental knowledge.
The most amazing thing about this book is the kind of praise it got, even from people who could easily see that at least part of it is junk, because it is in their own field. In fact, I haven't seen anybody pointing out the problems of Kauffman's theory.
Kauffman repeats the idea of the edge of chaos all along, and puts a lot of emphasis on it. However, the systems he talks about clearly have some order, are clearly not fixed, and behave in a complex way. Thus saying that they are 'on the edge of chaos' is interesting only if it means something more than 'ordered but can vary in a complex way'. It doesn't. There is nothing that you can deduce from knowing that a system is 'on the edge of chaos', except that it is ordered but may change in complex ways.
What Kauffman is doing here is exploiting most readers' lack of understanding of mathematical concepts. He writes as if this were an important and very interesting proposition, and relies on the reader to accept it on trust.
Kauffman attaches significance to the possibility of approximating some phenomena by a power law. This is ridiculous, because any correlation which is not very jumpy can be approximated by a power law, simply because the power law is very versatile (if you don't believe it, try to draw a line that represents a correlation between x and y and cannot be approximated by a power law). It is not obvious whether Kauffman is aware of this and intentionally misleads the reader, or whether he is not aware of the versatility of the power law.
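The point is easy to demonstrate. The sketch below (my own illustration, not from the book) fits a power law y = a*x**b by least squares in log-log space to data generated from a plain straight line; the fit comes out nearly perfect even though the data is not a power law at all.

```python
import math

# Data from a straight line, y = 2x + 5 -- definitely not a power law.
xs = [float(x) for x in range(1, 11)]
ys = [2 * x + 5 for x in xs]

# Fit y = a * x**b by least squares in log-log space.
lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
log_a = my - b * mx

# Coefficient of determination (R**2) of the power-law fit.
pred = [log_a + b * u for u in lx]
ss_res = sum((v - p) ** 2 for v, p in zip(ly, pred))
ss_tot = sum((v - my) ** 2 for v in ly)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))  # close to 1: the power law "fits" a plain straight line
```

Almost any smooth, monotone data gives the same near-perfect fit, which is why a good power-law fit by itself tells you very little.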
It would be fair to say that Kauffman is not the only author who uses this power-law idea, but the fact that many authors write nonsense doesn't make it any more sensible.
Before presenting his ideas about the beginning of life, Kauffman goes to great lengths to discredit the RNA hypothesis. While the RNA hypothesis is still problematic (e.g. on thermodynamic grounds), the arguments that Kauffman brings against it are daft, to put it mildly.
First, Kauffman claims that there is a problem with tangled RNA (p.40, top). That is nonsense, because the tangles are not permanent. They slow reactions, but don't prevent them completely. The slowdown is a serious problem if you try to make RNA in a short time, but the origin of life may have taken hundreds of millions of years. Kauffman himself concedes this point later (p.48).
Secondly, he argues against the idea of an RNA RNA-polymerase by claiming that it will suffer an error catastrophe, because of the rise of mutants that copy incorrectly (p.42, top). That is simply nonsense. All that is required is that the polymerase copies itself better than it copies mutants (an obvious assumption). Then mutants that are not efficient copiers will be eliminated, because they are not copied efficiently, and mutants which are efficient copiers will survive. Obviously, this is simple natural selection, and that is why people favour this hypothesis, but it seems too complicated for Kauffman.
It is possible in principle that Kauffman is thinking here of Eigen's error catastrophe, which he discusses later (p.186). However, on page 42 he says (end of first large paragraph): "I do not know of a detailed analysis of this specific problem...". Since he obviously knows about Eigen's work, it seems that on p.42 he refers to something else.
The worst argument is about what Kauffman considers "the most insurmountable problem": the minimum of complexity (pp.42-43). This is trivial nonsense, and Kauffman knows part of the answer (p.43): RNA replicators evolved into cellular organisms. The way he argues against it is quite amazing. First he says (p.43):
"But there is nothing deep about this explanation." So what? An explanation does not have to be deep. It has to be correct. This comment is simply idiotic. He then continues (p.42):
"We have instead another just-so story, plausible but not convincing, and, as with all just-so stories, the implication is that things could easily be another way." This one is only slightly better than the previous one. The explanation that is needed is a plausible explanation, and Kauffman admits it is one. Whether it is convincing or not is a subjective matter, and has no relevance to the plausibility of the explanation. Sticking on the label of just-so story is good demagoguery, but not a serious argument.
The most bizarre argument is left to the end: that if this explanation (start with RNA and evolve into cells) is correct, then things could have been different. Kauffman continues this argument in the following several sentences. Apparently, he believes that it is impossible for things to have been different, e.g. that by another route "we might indeed have horny protrusions on our foreheads." Note that this belief is used as a postulate rather than being deduced from anything.
This belief probably stems from a strong anthropocentric drive, which seems to underlie large parts of the book, including the book's title motif. It is made more visible by the way Kauffman describes the other route: a better description would be that we would not exist, and instead there would be other intelligent species with horns. The way Kauffman describes it suggests he cannot even imagine a world without humans.
I wrote above that Kauffman knows part of the answer. The part that he misses is that in pre-biotic times, the proto-life molecules (whatever they were) did not have any competition, so quite inefficient "forms of life" could survive. Once cellular organisms arose, they eliminated every other form of life. Thus it is quite possible that there is now a threshold of complexity, below which no form of life can compete with cellular life, but this threshold is irrelevant to the question of the origin of life.
In closing the chapter, Kauffman presents a statistical argument from Hoyle and Wickramasinghe, who claimed (according to Kauffman) to show that life is extremely improbable. I am not sure that Kauffman represents Hoyle and Wickramasinghe correctly, because the argument is another piece of nonsense. They (according to Kauffman) select a quite complex bacterium (E. coli) as their example, which is clearly nonsense. Since the leading hypothesis is the RNA world, a better choice would be an RNA of typical length (100-200 nucleotides), and then the probability becomes much more reasonable.
Kauffman makes his anthropocentrism (or maybe better described as life-centrism) explicit at the top of p.48. He considers a world in which RNA and DNA do not work, and whether life would be possible then. Then he writes:
"I do not want to think that we were quite so lucky. I hope we can find a basis for life that lies deeper than template self-complementarity." This is clearly a 'religious' belief, without any logical or experimental basis. For some reason, Kauffman believes that being the result of chance events is inappropriate (in some sense), and hence we need to find other solutions.
On pp.53-54 Kauffman presents in a favourable light an analogy that Winfree draws between the Belousov-Zhabotinsky reaction and the heart. Just in case the stupidity of the analogy between a reaction of a few kinds of molecules and the complexity of the control of the heartbeat is not obvious, he writes (p.54):
"Thus Winfree has suggested that simple perturbations can switch a normal heart to the spiral chaotic pattern and lead to sudden death." Really? So why does the normal heart beat more than 2 billion times before stopping? This is a typical example of theoretical scientists being unable to see the limits of their theories. On the same page Kauffman claims that the same reaction "may foretell the stripes of the zebra," a somewhat less stupid idea, but still very unrealistic.
On p.62 Kauffman shows that as the number of different molecules increases, the number of reactions that make them grows faster. He uses this to argue that as the number of different molecules grows, the chance that the formation of some molecule is catalysed grows very fast. The problem with this argument is that it does not take into account the fact that the additional reactions require more steps, and each step needs to be catalysed. As an example Kauffman gives the formation of ABBB from ABBBA, which requires at least one unique reaction (ABBBA => ABBB), and also assumes that ABBBA already exists in the set.
A more realistic consideration is that each molecule requires at least one unique step. Assuming that the set is made of 'neighbours', i.e. each molecule can be made in one step from other molecules in the set, the number of reactions that need to be catalysed is the number of molecules in the set minus the number of simple building blocks that are available from the start. That means, on average, a little less than one reaction per molecule. However, the molecules in a network where each molecule is one reaction away from at least one other molecule in the set are not going to catalyse a large number of different reactions, so this kind of network is unrealistic.
The possibility of a sparser network, i.e. one where there are molecules that require more than one step to make, is even worse, because now molecules in the set will have to catalyse more than one reaction, and molecules that do that are rare enough that we can simply ignore this possibility.
The problem becomes obvious when looking at the example that Kauffman gives in figure 3.7 (p.65). This contains 21 molecules, made of two available building blocks, and is a 'neighbouring' network, i.e. each molecule can be made by one reaction from other molecules in the set. This set requires 19 reactions, and clearly there is no way that 21 such real molecules will catalyse 19 different reactions. The example is made even more ridiculous by the fact that Kauffman actually makes each of the two building blocks catalyse 2 reactions. It may be fun to play with these models on a computer, but they obviously have nothing to do with the real world.
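To put a number on this: taking Kauffman's own 1-in-a-million catalysis figure at face value (an assumption, applied here per molecule-reaction pair, with reactions treated as independent), the chance that the 21 molecules of figure 3.7 cover all 19 required reactions can be sketched as follows.

```python
# Back-of-envelope check on figure 3.7, using Kauffman's own 1-in-a-million
# catalysis probability (an assumed per molecule-reaction figure).
p = 1e-6          # probability that a given molecule catalyses a given reaction
molecules = 21    # molecules in the figure 3.7 set
reactions = 19    # reactions that must all be catalysed for the set to close

# Chance that at least one of the 21 molecules catalyses a given reaction:
p_covered = 1 - (1 - p) ** molecules
# Chance that all 19 reactions are covered (treating them as independent):
p_closed = p_covered ** reactions
print(f"{p_closed:.0e}")  # astronomically small
```

Even with Kauffman's optimistic probability, a set like figure 3.7 essentially never closes; with anything like 10**-30 per reaction the question does not even arise.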
On p.63, Kauffman discusses the 'match catalysis rule', and mentions a chance of 1 in a million for a molecule catalysing a reaction. Is this realistic? Consider this experiment: select 1000 reactions. Now take all polypeptides of 20 amino acids, of which there are ~10**26. Now, for each reaction, check how many of the polypeptides will catalyse it. If the probability is 1 in a million, then we should expect to find around 10**20 catalysts for each reaction. This is obviously nonsense. It is more likely that most of the reactions will not be catalysed at all. Of course, maybe peptides are a bad example, but there isn't any better one. In general, something like 10**-30 to 10**-100 seems much more likely to be correct.
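The arithmetic of this thought experiment is trivial to check (the 1-in-a-million figure is Kauffman's; the 20-residue peptide space is the illustration used above):

```python
n_peptides = 20 ** 20   # all 20-residue polypeptides (20 amino acids per position), ~1e26
p_catalysis = 1e-6      # Kauffman's assumed catalysis probability

# Expected number of catalysts for any single reaction under that assumption:
catalysts_per_reaction = n_peptides * p_catalysis
print(f"{catalysts_per_reaction:.1e}")  # ~1e20 catalysts for EVERY reaction
```

A figure of 10**20 distinct catalysts per arbitrary reaction is what the 1-in-a-million assumption actually commits you to.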
Kauffman suggests that with RNA catalysing RNA ligation we have a better chance, but that is not based on any evidence, and is unlikely to be correct. With chains of, say, 50 nucleotides, there are around 10**30 possible chains. How many of these are going to catalyse ligation? The difficulty of finding RNA catalysts suggests not many.
A hint of where the one-in-a-million figure comes from is given on p.120, where Kauffman claims that a random antibody has a chance of 1 in a million of catalysing a random reaction. If this is what the number on p.63 is based on, it is a double cheat: (1) the reactions on which Kauffman based his estimate are all relatively simple, and were selected because they were easy to catalyse; (2) it is based on antibodies, which evolved specifically to be a good framework for binding random shapes.
Kauffman continues on p.63 to claim that:
"No matter which of these "catalyst" rules we use, when the set of model molecules reaches a critical diversity, a giant "red" component of catalysed reactions crystallises, and so collectively autocatalytic sets emerge." But can it reach critical diversity? According to Kauffman, if we assume a 1 in a million chance, we need a million different molecules to get a high probability of such a reaction. Even this diversity is problematic, considering that each molecule has to find by chance the right components of the reaction it catalyses. With a probability of 10**-30, it is obviously not possible to put 10**30 different molecules together and still expect each one of them to find the right match to react with during the lifetime of the universe (which has existed for less than 10**18 seconds).
Kauffman writes that they did a computer simulation of the process, but does not tell us what the assumed probability of catalysis, rates of diffusion, and concentrations were. It seems most likely they simply assumed that any molecule is in contact with any other molecule. He does not give exact details of the number of molecules in their sets, but says at the bottom of p.64: "More complex autocatalytic sets have hundreds or thousands of molecular components". Obviously, these networks cannot work even with the optimistic assumption of a 1 in a million probability of catalysis, so they must have used even more optimistic assumptions. Thus the simulations are useless.
The most appalling aspect of Kauffman's presentation of his idea is the way he misleads the reader about thermodynamics, and that is also where he is most definitely wrong. During most of the discussion, he simply ignores the thermodynamics of the system, even though thermodynamics is the most serious problem in the question of the origin of life, because starting life requires moving a large distance from equilibrium. Only after he finishes the description of the model, and concludes victoriously (p.66) "A self-reproducing chemical system, alive by these criteria, springs into existence", does he start to discuss thermodynamics. He clearly relies on the exhausted reader to skip this section, or at least to read it without paying attention.
The first thing Kauffman does in this section is to mislead the reader about the equilibrium between a polymer and its constituents. He claims that this depends on the energy of the bond, and that for peptide versus amino acids it is 1:10 (pp.67-68). That is, of course, nonsense. For a reaction where the number of molecules changes, the equilibrium also depends on the absolute concentrations of the reactants and the products. At realistic concentrations, this shifts the equilibrium strongly towards the separate units, because they have much more entropy (another way to think about this is that the di-peptide can fall apart on its own whenever it feels like it, while to form it the two amino acids first need to come together in the right conformation, which is a low-probability event). The result is that in a solution of amino acids plus a catalyst of formation/opening of peptide bonds, there will not even be any tri-peptide molecules, let alone anything longer. It is an interesting question whether Kauffman knows this and intentionally misleads the reader, or is simply ignorant of thermodynamics.
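The concentration dependence is easy to illustrate with a toy dimerisation A + A <=> AA. The association constant K below is an arbitrary assumed value for illustration, not a measured one; the point is only how the dimerised fraction collapses on dilution even though K (the bond-energy part) stays fixed.

```python
import math

K = 0.1  # assumed association constant [AA]/[A]**2, in 1/M (illustrative only)

def dimer_fraction(total):
    """Fraction of monomer units bound in dimers at a given total concentration.

    Solves [AA] = K*[A]**2 together with the mass balance [A] + 2*[AA] = total,
    i.e. the quadratic 2*K*[A]**2 + [A] - total = 0 for the free monomer [A].
    """
    a = (-1 + math.sqrt(1 + 8 * K * total)) / (4 * K)  # free monomer, M
    return 2 * K * a * a / total

for total in (1.0, 1e-3, 1e-6):  # total monomer concentration, M
    print(f"total = {total:.0e} M  ->  dimerised fraction = {dimer_fraction(total):.1e}")
```

Diluting the solution a million-fold cuts the dimerised fraction by roughly a million-fold, with the bond energy untouched; this is the entropy effect the 1:10 figure hides.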
Kauffman proceeds to suggest that increasing the concentration of the building blocks can get over the problem of the difference in energy, but of course this actually gets over the problem of entropy (which he ignored before), and does not affect the energy difference. For that, the polymer has to be made somehow lower in energy, which requires even more fantasy chemistry. Kauffman mentions on page 68 that the polymer can be removed from the solution, but this is not useful if we want it to catalyse anything in the solution, and requires some mechanism to achieve it.
While increasing the concentration of the building blocks shifts the equilibrium towards longer polymers, it does not help in getting over the main obstacle, which is moving away from equilibrium. Without some additional factors, the system will still be in equilibrium, with long repetitive polymers and no diversity. Again, it is an interesting question whether Kauffman knows this or not.
Of course, living organisms do keep themselves away from equilibrium, and we know how they do it: by coupling reactions away from equilibrium to reactions towards equilibrium. The latter reactions are ultimately reactions in the processing of the source of energy (absorbing light, oxidation of organic molecules, etc.). Kauffman does reach this point, but he presents it as just another mechanism (p.68), rather than The mechanism. Maybe he does it because he feels that his ideas about coupling are weak. In reality, they are nonsense. He says (p.69) "all that is required is a sufficient diversity of molecules". However, coupling two reactions together requires a much more complex molecule than doing each reaction on its own, and therefore the probability of 'couplers' is much lower than the probability of 'non-couplers'. Hence for each 'coupler' there are going to be many 'non-couplers', which will catalyse the reaction in the 'wrong' direction (towards equilibrium), and the system will stay in equilibrium. In living organisms, coupling works because the components of the system are defined by the genome, which codes only for the right enzymes. In this case, it seems clear that Kauffman failed to see this problem, probably because of his contempt for thermodynamics.
Kauffman wants to impress on us that there is a lot of 'order for free' in living organisms. For example, he explains on p.106 that the 'networks of genes' lie in the ordered regime because of the 'canalising' nature of the enzymes, and then he says (p.106):
"Vast order for free abounds for selection's further sifting." That is simply ridiculous. To function, living systems have to have many orders of magnitude more order than is required for a system to be in the ordered regime. Even a single-celled organism needs to maintain all its cellular organelles, collect food, and reproduce, and it needs to generate all the molecules to do all of this. Compared to the 'amount of order' that is required to implement all these functions, putting the system in the ordered regime is a negligible contribution, and can be achieved easily by the system that implements the other functions (i.e. the genome and the expression system). Kauffman simply ignores the complexity of life to make it look plausible that his ideas actually contribute anything.
Kauffman is so keen to offer insights from his idealised models that on page 107 he actually suggests that a cell type corresponds to a state cycle with about 317 states of cell activity, and that the cell cycle (of duplication) corresponds to going through a state cycle. In other words, he thinks that every cell in the human body goes through a cycle of about 317 states of activity, independently of external effects. Any cell biologist could have told Kauffman that this is obviously false, but obviously Kauffman did not bother to ask any cell biologist. Once again, Kauffman displays his contempt for the branch of science that is most relevant to the point he discusses.
Rather than consulting cell biologists, he calculates that the time to go through the cycle is (p.107) "precisely in the plausible range for cell behaviour!" - which is even more ridiculous considering that the time he computes is 317 to 3170 minutes, which is not 'precise' by any standard. On the next two pages (pp.108-109) he uses the power-law trick (see section 2 above) with selected data to present some correlations, and then claims that (p.110) "It is hard not to be impressed."
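For what it's worth, the source of these numbers is easy to reconstruct, assuming (as Kauffman did) the mid-1990s estimate of about 100,000 human genes and a median state-cycle length of roughly the square root of the number of genes for his networks; the small discrepancy with the published 317 presumably comes from rounding.

```python
import math

n_genes = 100_000                  # mid-1990s estimate of the human gene count
cycle_states = math.sqrt(n_genes)  # Kauffman's sqrt(N) state-cycle length
print(round(cycle_states))         # ~316 -- the origin of the "317 states"

# At an assumed 1 to 10 minutes per state transition, the full cycle takes:
print(f"{cycle_states:.0f} to {10 * cycle_states:.0f} minutes")
```

A range spanning a full order of magnitude (roughly 5 hours to 2 days) is what the book calls "precisely in the plausible range".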
On pp.119-122 Kauffman suggests the possibility of making a 'supracritical soup' using antibodies. First, he calculates that there is a huge number of possible reactions between pairs of molecules. He suggests using proteins as enzymes, and then restricts the choice to antibodies. He then claims that, based on experiments with catalysis by antibodies, it makes sense to assume a probability of 1 in a billion for catalysis. He then computes that it is quite easy to achieve a 'supracritical soup', which keeps generating new molecules.
This, of course, is nonsense, both because of the thermodynamics of the system, which will keep it in equilibrium (section 7 above), and because the probability of an antibody catalysing any random two-molecule reaction is far below 1 in a billion (point (1) in section 6). As far as I know, catalytic antibodies do not yet catalyse bimolecular reactions.
This example is much worse than the autocatalytic set of chapter 3, because it can easily be tested experimentally. Libraries of more than 10**9 different antibodies were already available at the time the book was written, and people did (and still do) all kinds of interesting things with them. According to Kauffman, such a library should go supracritical with fewer than 10 molecules (figure 6.2, p.121). However, none of the researchers saw anything like the creation of a 'supracritical soup', and if you ask any of them about it they will probably think that you are mad. Again, Kauffman shows total contempt for entire branches of science, in this case both theoretical and experimental.
On p.123 Kauffman considers why we don't eat by cell fusion. He completely ignores the many obvious reasons (to be used, the material needs to be cut into small molecules, and if this is done inside the cell it will digest the cell itself; it is a bad idea to allow foreign DNA into the cell; transportation to all parts of the body). Instead, he believes that this is because such fusion "would unleash a cataclysmic supracritical explosion" (p.123). Nice idea, but cell fusion is a common technique these days, and has been since the 70's, and nobody has ever seen a cataclysmic supracritical explosion, or anything similar. Again Kauffman shows a total lack of touch with reality.
Kauffman continues by suggesting that we mix all the proteins in the world (his estimate: 10**12) with all the molecules (his estimate: 10**7), and calculates that as a result "A vast explosion of diversity would carom off the perplexed walls of Noah's groaning vessel" (p.124). That is just a repeat of the autocatalytic-set idea from chapter 3, and suffers from all the deficiencies that were listed in sections 5-7 above.
Kauffman uses his ideas on supracritical soup to suggest that local ecosystems will tend to be on the boundary between the supracritical and subcritical states (pp.128-129). This is nonsense because it is based on the nonsense idea of autocatalytic sets, but it is also clearly false experimentally. If it were true, we should expect that in many cases, introducing a new organism into an ecosystem would push it into a supracritical state, and cause the generation of many new molecules, until some of the local species died. This (the generation of new molecules) has never been seen. Again, Kauffman lives in his fantasy world, completely out of touch with reality.
On p.143 Kauffman presents an argument by Perelson and Oster that 100 million different antibody molecules would saturate the 'shape space'. This argument is mathematically odd and biologically nonsense.
Mathematically, if we toss balls randomly, then on average each will reduce the uncovered volume by a fraction equal to the volume of a ball expressed using the whole space as a unit (Vb). For example, if each ball is 0.1 of the volume (Vb = 0.1), each ball (on average) will reduce the free volume by 0.1, i.e. to 0.9 of its previous value. That is because the volume of the new ball would be divided (on average, in proportion to their sizes) between the free space and the space that is already occupied. Thus the free volume is reduced exponentially with the number of balls N, as (1 - Vb)**N. It is known that this equals 1/e ~= 0.37 when N = 1/Vb. Perelson and Oster (according to Kauffman) assume that Vb = 0.000037, so for N = 1/0.000037 ~= 27,000 the free volume will be 0.37. For N = 270,000, the free volume will be 0.37**10 ~= 0.00005, and for N = 2,700,000 the free volume will be 0.37**100 ~= 4*10**-44. In other words, the shape space will be covered long before 10**8.
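The free-fraction formula (1 - Vb)**N is easy to evaluate directly, using the Vb that Kauffman attributes to Perelson and Oster:

```python
import math

Vb = 0.000037  # fraction of shape space covered by one antibody (per Kauffman)
for n in (27_000, 270_000, 2_700_000):
    free = (1 - Vb) ** n  # uncovered fraction of shape space after n antibodies
    print(f"N = {n:>9,}  free fraction ~ {free:.1e}")
```

At N = 27,000 the free fraction is about 1/e; by N in the millions it is vanishingly small, i.e. the space is saturated long before the claimed 10**8 antibodies.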
The biological error is much more serious. It is the assumption that the immune system has to cover 1/e of the total shape space before it becomes useful. To become significantly beneficial, all that the immune system needs is to cover a significant fraction of the shape space of a single part of a single molecule of a single serious pathogen. Obviously, some of these have a shape space that is many orders of magnitude smaller than the total shape space.
On p.171, Kauffman tries to convince the reader that his NK model, where the connectivity is random, is a good model for the interactions between genes. He advances three arguments (p.171):
"First, we may as well admit our ignorance in the biological cases at the present moment."
This is blatant nonsense, on two accounts. First, even if we don't know anything about some phenomenon, that does not mean it can usefully be modelled by random models. Secondly, while there are many things we don't know about genes and their interactions, there are many things that we do know, and one of them is that their connectivity is not random. As usual, Kauffman shows total contempt for an entire field of research. He continues (p.171):
"Second, we are trying to build rather general models of rugged but correlated landscapes to begin to understand what such landscapes look like and what organismal features bear on landscape ruggedness. If we model the fitness effects of epistatic coupling "at random," we will obtain the kind of general models of landscapes we seek." Another piece of blatant nonsense. Building random models gives you random models, not general models. Kauffman's idea can be compared to learning about house building by making random heaps of bricks and investigating their properties. And then (p.171):
"Third, if we are really lucky, we will find that real landscapes in some cases look very much like our model landscape." It is nice to be an optimist, but Kauffman never bothers to consider the possibility that he is not 'lucky'.