"We behold the face of nature bright with gladness, we often see the superabundance of food; we do not see or we forget, that the birds which are idly singing around us mostly live on insects or seeds, and are thus constantly destroying life; or we forget how largely these songsters, or their eggs, or their nestlings, are destroyed by birds and beasts of prey; we do not always bear in mind, that, though food may be now superabundant, it is not so at all seasons of each recurring year." (1970, 40)
"Brian Goodwin, a developmental biologist who has studied the spatial and temporal structures and patterns resulting from self-organization in biological systems, has observed:
What counts in the production of spatial and temporal patterns is not the nature of the molecules and other components involved, such as cells, but the way they interact with one another in time (their kinetics) and space (their relational order—how the state of one region depends on the state of neighboring regions). These two properties together define a field. . . . What exists in the field is a set of relationships among the components of the system. (1996, 51)
This field is sometimes referred to as an excitable medium because a collection of potentially interacting components may start out in a homogeneous state. (It may exhibit spatial and temporal symmetry, so one part looks pretty much like any other part.) This homogeneous condition will remain as the system is taken away from equilibrium by an input of usable energy. But the resulting nonequilibrium system is then poised to generate spatial and temporal patterns. It is said to be excitable. Excitation of the system, through the introduction of a local inhomogeneity, can break the initial spatial and temporal symmetry by inducing (through coupling of parts) excitations in adjacent parts of the medium, which in turn induce further excitations. By amplifying small fluctuations in the environment, positive feedback mechanisms can break the initial homogeneity of the excitable medium.
The result is that the initial disturbance propagates through the system, driving complex global behaviors of the system as a whole, because the behavior of any part of the system is constrained by the neighbors to which it is coupled (and their behavior in turn is similarly constrained). Recall, for example, that when the energetic conditions are right, a region of low pressure—an environmental inhomogeneity—can form the seed for the emergence of a hurricane. A hurricane, as we have seen, is a complex, self-organizing dynamical structure involving coherent motions of matter on an enormous scale.
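The propagation just described can be caricatured in a few lines of code. The sketch below uses the Greenberg–Hastings cellular automaton, a standard toy model of an excitable medium (my illustration, not drawn from the text): a single excited cell in an otherwise homogeneous row excites its neighbors, and the disturbance propagates outward as a wave through purely local coupling.

```python
# Greenberg-Hastings excitable medium (1-D, periodic boundaries).
# States: 0 = quiescent, 1 = excited, 2 = refractory.

def step(cells):
    """Advance the medium one time step using purely local rules."""
    new = []
    n = len(cells)
    for i, state in enumerate(cells):
        if state == 1:            # excited cells become refractory
            new.append(2)
        elif state == 2:          # refractory cells recover
            new.append(0)
        else:                     # quiescent cells fire if a neighbor is excited
            left, right = cells[i - 1], cells[(i + 1) % n]
            new.append(1 if 1 in (left, right) else 0)
    return new

# A homogeneous medium seeded with one local inhomogeneity.
medium = [0] * 21
medium[10] = 1

for t in range(5):
    print("".join(".*o"[s] for s in medium))   # . quiescent, * excited, o refractory
    medium = step(medium)
```

Each cell responds only to its immediate neighbors, yet the excitation spreads symmetrically through the whole medium, breaking the initial homogeneity.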
The spatial and temporal order, patterns, and structure we can see in the behavior of self-organizing systems are not imposed from outside, nor do they arise from centralized control from within. The environment merely provides the energy to run the process, and environmental fluctuations are the usual sources of the initial local inhomogeneity that acts as a seed for the formation of the system in an initially homogeneous excitable medium. The patterns result from dynamical interactions internal to the system.
There can be no doubt that there is energy-driven interactive complexity in nature, giving rise to organized systems without intelligent design. And it is in this context that it is worth mentioning once again the distinction between appearance and reality that we discussed in the last chapter. As Seeley has recently noted, self-organization can give rise to the appearance of intelligence:
We often find that biological systems function with mechanisms of decentralized control in which the numerous subunits of the system— the molecules of a cell, the cells of an organism, or the organisms of a group—adjust their activities by themselves on the basis of limited, local information. An apple tree, for example, ‘‘wisely’’ allocates its resources among woody growth, leaves, and fruits without a central manager. Likewise, an ant colony ‘‘intelligently’’ distributes its work force among such needs as brood rearing, colony defense, and nest construction without an omniscient overseer of its workers. (2002, 314)
Self-organization is not merely a process whereby complex organized systems can emerge and sustain themselves without intelligent design; it is a process that can generate problem-solving systems out of dumb components, or out of components whose limited cognitive abilities are not up to the task of coordinating systemwide behaviors.
A good example here is afforded by the study of social insects. Colonies of social insects are open-dissipative systems. The component insects are dumb, yet by their mutual interactions they are capable of generating global, colony-level, problem-solving collective behaviors, with enormous implications for their survival and reproduction. The broader implications of these matters have recently been discussed under the heading of swarm intelligence. Thus Bonabeau, Dorigo, and Theraulaz observe:
The discovery that SO (self-organization) may be at work in social insects not only has consequences on the study of social insects, but also provides us with powerful tools to transfer knowledge about social insects to the field of intelligent system design. In effect a social insect colony is undoubtedly a decentralized problem-solving system, comprised of many relatively simple interacting entities. The daily problems solved by the colony include finding food, building or extending a nest, efficiently dividing labor among individuals, efficiently feeding the brood, responding to external challenges, spreading alarm, etc. Many of these problems have counterparts in engineering and computer science. One of the most important features of social insects is that they can solve these problems in a very flexible and robust way: flexibility allows adaptation to changing environments, while robustness endows the colony with the ability to function even though some individuals may fail to perform their tasks. Finally, social insects have limited cognitive abilities: it is, therefore, simple to design agents, including robotic agents, that mimic their behavior at some level of description. (1999, 6–7)
Self-organizing systems made of unintelligent components can thus exhibit global, adaptive, purposive behaviors as a consequence of the effects of the collective interactions of their parts.
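A minimal sketch can make this concrete. The toy model below is loosely inspired by the "double bridge" experiments often used to motivate the swarm-intelligence literature (the parameters and branch names are illustrative assumptions, not taken from Bonabeau, Dorigo, and Theraulaz): each simulated ant chooses a branch in proportion to its pheromone level, shorter trips lay pheromone at a higher rate, and the colony converges on the shorter path without any individual ever comparing the two routes.

```python
import random

random.seed(1)
pheromone = {"short": 1.0, "long": 1.0}    # initially undifferentiated
trip_length = {"short": 1.0, "long": 2.0}  # the long branch takes twice as long

for _ in range(2000):
    total = pheromone["short"] + pheromone["long"]
    # each ant follows the local pheromone cue, nothing more
    branch = "short" if random.random() < pheromone["short"] / total else "long"
    pheromone[branch] += 1.0 / trip_length[branch]  # shorter trips reinforce faster
    for b in pheromone:                             # slow evaporation
        pheromone[b] *= 0.999

share = pheromone["short"] / sum(pheromone.values())
print(f"traffic share on short branch: {share:.2f}")
```

Positive feedback amplifies a small initial asymmetry in deposit rates into a colony-level "decision," with no overseer and no individual representation of the problem.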
Moreover, these naturally occurring systems can serve as models that enable us to intelligently design artificial, soulless systems that will exhibit similar sorts of problem-solving activity. No ghost is needed in the collective machine, just interactions powered by usable energy in accord with mechanisms operating by the laws of nature. Prior to the study of self-organization, it used to be supposed either that social insects had some sort of collective ‘‘group-mind’’ that intelligently guided their behavior, or, alternatively, as Bonabeau, Dorigo, and Theraulaz have noted, that individual insects possessed internal representations of nest structure, like human architects.
Neither assumption is warranted. The appearance of intelligent group behavior is the result of interaction dynamics internal to the colony of insects, duly modulated by environmental influences. Appearances can thus be deceiving. As Seeley has observed:
No species of social insect has evolved anything like a colony-wide communication network that would enable information to flow rapidly and efficiently to and from a central manager. Moreover, no individual within a social insect colony is capable of processing huge amounts of information. (Contrary to popular belief, the queen of a colony is not an omniscient individual that issues orders; rather she is an oversized individual that lays eggs.) The biblical King Solomon was correct when he noted, in reference to ant colonies, there is ‘‘no guide, overseer or ruler’’ (Proverbs 6:7). (2002, 315)
We should not let our natural propensities for anthropomorphic thinking lead us into seeing intelligence and intelligent design where they do not exist.
Karsai and Penzes (1998, 2000), for example, have shown that the adaptive nest shapes of certain species of wasps emerge from simple rules governing the purely local interactions of individual wasps with each other and with the emerging nest structure. To build a compact nest, the wasps, unlike intelligent human architects, do not need to know the global shape of the nests, they do not need to measure the compactness of the structure, and they do not build the nest in such a way that the final shape is the end or goal of their behavior, either singly or collectively.
In other words, they do not build with a goal in mind. As a matter of fact, the emerging nest organizes its own construction as part of a self-organizing process in which the present state of the nest provides local cues to the dumb wasps about where to apply the next dollop of pulp. After the pulp is applied, this will change the local configuration of a given site on the nest, and this in turn changes the pattern of attractive local building positions on the developing nest. Karsai and Penzes have demonstrated that a wide variety of nest shapes, from complex twiglike structures to more spherical structures (depending on environmental circumstances), can be explained in this way.
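The stigmergic logic of this process can be sketched in code. The rule below is my illustration, loosely in the spirit of the Karsai and Penzes models rather than their actual equations: pulp is deposited at whichever empty site has the most already-built neighbors. No builder knows the global shape or measures compactness, yet a compact cluster emerges.

```python
# A toy stigmergic builder on a square grid. Each deposition decision
# uses only the local cue (how many adjacent cells are already built).
import random

NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def build(n_cells, seed=0):
    random.seed(seed)
    nest = {(0, 0)}                      # the founding cell
    for _ in range(n_cells - 1):
        # candidate sites: empty cells touching the existing structure
        candidates = {(x + dx, y + dy)
                      for (x, y) in nest
                      for dx, dy in NEIGHBORS} - nest
        # local cue: number of already-built neighbors at each candidate
        def cue(site):
            x, y = site
            return sum((x + dx, y + dy) in nest for dx, dy in NEIGHBORS)
        best = max(cue(s) for s in candidates)
        nest.add(random.choice([s for s in candidates if cue(s) == best]))
    return nest

nest = build(40)
```

The present state of the structure determines where the next dollop goes, and each deposition reshapes the field of attractive sites, exactly the feedback loop described above.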
Self-organization is not the only way to get complex structures. The simpler phenomenon of self-assembly is important, too. It is a process capable of producing organized three-dimensional structures, and its fruits may be of use to more sophisticated self-organizing systems. For example, proteins are made up of chains of amino acids. Which protein you get depends on the sequence of its component amino acids. But proteins achieve their biological functions, perhaps enhancing chemical reactions or inhibiting them, by virtue of their three-dimensional structure.
These three-dimensional structures result from elaborate and intricate folding. The folding is achieved through physical and chemical interactions between the amino acids in the sequence constitutive of the protein. Once the amino acids are present in sequence, the protein self-assembles its three-dimensional configuration. The folding does not require intervention by external mechanisms or agents. Systems of self-assembled proteins may then go on to interact among themselves either to form protein complexes or to self-assemble into more complex, nucleoprotein structures such as viruses (Gerhart and Kirschner 1997, 146). They may even participate in self-organizing biochemical systems. A good introduction to molecular self-assembly—in soap bubbles and proteins—can be found in Cairns-Smith (1986, 69–73)." [Niall Shanks, God, the Devil, and Darwin]
Darwin showed how Paley’s argument from design failed to establish its conclusion, and Behe’s response has been to claim that there were things Darwin was unaware of but which show that in biology there is evidence of intelligent design. The evidence concerns a type of biochemical complexity that Behe has termed irreducible complexity. Behe’s claim is that systems exhibiting irreducible complexity do not admit of a plausible Darwinian explanation and are better explained as the fruits of supernatural intelligent design.
Behe initially characterized an irreducibly complex system as follows: ‘‘a single system which is composed of several well-matched, interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning’’ (1996, 39). In a later work, Behe put it this way: ‘‘But what type of biological system could not be formed by ‘numerous, successive, slight modifications’? A system that is irreducibly complex. Irreducible complexity is just a fancy phrase I use to mean a single system that is composed of several interacting parts, where removal of any one of the parts causes the system to cease functioning’’ (2001b, 93).
Notice that in this latter formulation, the requirement that the parts of irreducibly complex systems be well-matched has been dropped. Such irreducibly complex systems are alleged to pose a difficulty for Darwinian theory because: ‘‘An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional’’ (1996, 39). Later, Behe put it this way: ‘‘An irreducibly complex biological system, if there is such a thing, would be a powerful challenge to Darwinian evolution. Since natural selection can only choose systems that are already working, then, if a biological system cannot be produced gradually, it would have to arise as an integrated unit for natural selection to have anything to act on’’ (2001b, 94). Put this way, the problem is indeed like the old puzzle about the eye: How could something whose function (and value to the organism) requires the coordinated action of many parts arise gradually from the action of unguided natural processes that have no view to future utility or purpose?
Is it absolutely impossible for Darwinian mechanisms to explain irreducible complexity? Behe observes:
Demonstration that a system is irreducibly complex is not a proof that there is absolutely no gradual route to its production. Although an irreducibly complex system cannot be produced directly, one can’t definitively rule out the possibility of an indirect circuitous route. However, as the complexity of an interacting system increases, the likelihood of such an indirect route drops precipitously. (2001b, 94)
Given the weight that irreducible complexity carries with proponents of intelligent design, this is a significant admission. Whether indirect pathways to irreducible complexity are unlikely is nowhere demonstrated by Behe. So we must pay careful attention in what follows to unsubstantiated claims about lack of likelihood, lest they turn out to be mere indicators of ignorance or prejudice on the part of Behe, and not real measures of what can and cannot happen in the world.
A decade before Behe laid out his biochemical challenge to evolution in terms of irreducible complexity, another biochemist, A. G. Cairns-Smith, had laid out the same problem, albeit in slightly different language. Cairns-Smith observed:
The bit that is not so clear about the eye—and a favorite challenge to Darwin—is how its components evolved when the whole machine will only work when all the components are in place and working.
Not that this problem is peculiar to the eye. Organisms are full of such machinery, and it is a widely held view that this appearance of having been designed is the key feature of living things. . . . How can a complex collaboration between components evolve in small steps? (1986, 58)
Cairns-Smith, comparing the pathways of the central biochemistry of organisms with stone arches in which all the components of the arch depend on each other, observed:
Nowhere is a collaboration of components tighter than in central biochemistry. Pull out a molecule—any molecule . . . you will find that every molecule is required in some way or other by every other molecule. . . . Nothing can be touched or the whole edifice will collapse. Looking at the structure of interdependencies in central biochemistry it is not at all difficult to see why central biochemistry is now so fixed and has been for so long. The difficult question is how such a complexity of arching evolved stone by stone. (1986, 60)
Behe’s puzzle about irreducible complexity was evidently being bounced around in the popular biochemistry literature at least ten years before he wrote about it.
More than this, Cairns-Smith even speculated about the possibility of intelligent design. Indeed, chapter 3 of his book is devoted to an exploration of the problems confronting a would-be intelligent designer of an E. coli bacterium—a discussion that brings to the fore the importance of thermodynamic considerations. Later in the same book, Cairns-Smith remarked: ‘‘We may make a machine by first designing it, then drawing up a list of components that will be needed, then acquiring the components, and then building the machine. But that can never be the way evolution works. It has no plan. It has no view of the finished system. It would not know in advance which pieces would be relevant. . . . It is the whole machine that makes sense of its components’’ (1986, 39). The very difference between evolutionary explanations and explanations in terms of intelligent causes is already there in Cairns-Smith’s work. But Cairns-Smith wisely rejected appeals to the supernatural simply to fill in gaps in our knowledge:
It is a sterile stratagem to insert miracles to bridge the unknown. Soluble problems often seem to be baffling to begin with. Who would have thought a thousand years ago that the size of an atom or the age of the Earth would ever be discovered? . . . It is silly to say that because we cannot see a natural explanation for a phenomenon that we must look for a supernatural explanation. (It is usually silly anyway.) With so many past scientific puzzles now cleared up there have to be very clear reasons not to presume natural causes. (1986, 6)
William Dembski in particular has devoted much effort to the construction of what he terms the complexity-specification criterion, which is designed to help us detect design (see Dembski 1999, 2001b). The criterion is important in the present context because a crucial application singles out Behe’s irreducibly complex systems as the fruits of intelligent design. Dembski observes:
The connection between Behe’s notion of irreducible complexity and my complexity-specification criterion is now straightforward. The irreducibly complex systems Behe considers require numerous components specifically adapted to each other and each necessary for function. On any formal complexity-theoretic analysis, they are complex in the sense required by the complexity-specification criterion. Moreover, in virtue of their function, these systems embody patterns independent of the actual living systems. Hence these systems are also specified in the sense required by the complexity-specification criterion. (1999, 149)
Because the complexity-specification criterion underlies the design inferences that intelligent design theorists wish to foist on us, and because the criterion singles out irreducibly complex biochemical systems as intelligently designed systems, a critical examination of Behe’s claims about irreducible complexity will have broad implications for the intelligent design movement as a whole. If it should turn out that irreducibly complex systems could plausibly have evolved without intelligent design, this would show not only that there is something defective in the account intelligent design theorists offer of biological phenomena but also that there is something profoundly deceptive about the methods they use to detect intelligent design.
The reason for this is that scientists themselves have employed mechanical metaphors repeatedly in their explanations of how biological systems work. Here is Bruce Alberts, president of the National Academy of Sciences:
The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of large protein machines. . . . Why do we call the large protein assemblies that underlie cell function machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts. (quoted in Dembski 1999, 146–147)
For Alberts, the mechanical metaphors are a façon de parler, a convenient way of talking for the purposes of explanation that should not be taken too literally. Behe takes mechanical metaphors more literally as being indicative of the nature of reality. Machines are designed systems, and biochemical systems, by comparison, don’t just look as if they are designed, they are designed. Behe cannot be faulted for drawing on our cultural experiences with mechanical contrivances of varying degrees of complexity. It is something all of us do. There are more important issues, however, and we must now examine them.
A special case here concerns the supernatural designer itself. Responding to objections in which critics ask about the design of the intelligent designer itself, Dembski has observed:
The who-designed-the-designer question invites a regress that is readily declined. The reason this regress can be declined is because such a regress arises whenever scientists introduce a novel theoretical entity. For instance, when Ludwig Boltzmann introduced his kinetic theory of heat back in the late 1800s and invoked the motion of unobservable particles (what we now call atoms and molecules) to explain heat, one might just as well have argued that such unobserv- able particles do not explain anything because they themselves need to be explained. (2002, 354)
The other side of this tale from the history of science (with the devil lurking in the details) is that there was a century-long controversy about the very existence of atoms and molecules, which was initiated by Dalton’s use of the atomic hypothesis in chemistry in 1808 and Avogadro’s differentiation between atoms and molecules in 1811.
This debate continued throughout the nineteenth century and culminated in a bitter dispute between Boltzmann and Mach over evidence for the very existence of unobservable atoms and molecules. It was not settled until after the turn of the twentieth century, when evidence began to accumulate from many independent quarters, such as the study of radioactive decay, the quantum theory, and early experimental efforts to determine a value for the Avogadro number (see Mason 1962, ch. 39). Independent evidential warrant, not Boltzmann’s say-so or even mere explanatory utility, was what settled the issue in favor of atoms and molecules.
In the case of supernatural intelligent designers of unknown constitution using unknown methods and materials to unknown ends, we have neither independent evidential warrant nor even mere explanatory utility. Dembski’s suggestion that we stop and content ourselves with the progress we have made is utterly fatuous.
Following St. Thomas Aquinas, Dembski directs our attention to lessons derivable from archery. Suppose I am alone and shooting arrows at the side of a large barn. Suppose I wish to impress my friends with my skill at archery. I shoot from a distance of fifty feet. Having hit the wall of the barn with all my arrows (even I can’t miss the side of a barn at fifty feet), I then go and paint bull’s-eyes around them and call my friends over to have a look. What can my friends conclude when they arrive at the barn? Dembski tells us:
Absolutely nothing about the archer’s ability as an archer. Yes a pattern is being matched, but it is a pattern fixed only after the arrow has been shot. The pattern is thus purely ad hoc.
But suppose instead the archer paints a fixed target on the wall and then shoots at it. Suppose the archer shoots a hundred arrows, and each time hits a perfect bull’s-eye. What can be concluded from this second scenario? Confronted with this second scenario we are obligated to infer that here is a world class archer, one whose shots cannot legitimately be referred to luck, but must rather be referred to the archer’s skill and mastery. Skill and mastery are of course instances of design. (2001b, 180, my italics)
This is all well and good if we see the archer do the trick. The trouble here should be obvious. We have to ask what happens if we do not see the archer shoot and we do not see the arrows in flight. We have to ask how we would react to being presented simply with a target whose bull’s-eye was chock full of arrows. My friends, who were not impressed with the first case described, would be even less impressed with my ability as an archer if I simply confronted them with a straw target whose bull’s-eye was chock full of arrows and asked them what they thought of my skill and mastery as an archer.
They would no doubt say that they could not see evidence of skill and mastery at archery. My friends would want to see me hit the bull’s-eye several times from fifty feet; they would want appropriate analogs of the intelligent design questions answered through the provision of high-quality evidence of intelligent, skillful archery. The bull’s-eye full of arrows would not, in and of itself, be enough. This is why the origins of pattern, not simply the pattern itself, are important. Without this, all you have is anomalous data. Behe is simply wrong when he says, ‘‘The inference to design can be held with all the firmness that is possible in this world, without knowing anything about the designer’’ (1996, 107, my italics).
Here is an example borrowed from Marian Dawkins’s discussion of these issues. A bird visits the territories of several males and selects one of them as a mate. It might be that she is visiting each male, consciously assessing him for genetic worth, and comparing him with others she has observed. Television shows about animals routinely describe their amazing behaviors in similar terms. But it might be that she is simply disposed (perhaps by inheritance) to mate with the male with the longest tail or the loudest call. As Dawkins herself observes:
Faced with the complexity of animal behavior (and animals are genuinely far more complex than any man-made machine so far devised), we have a tendency to jump to the conclusion that it is much more complicated and mysterious than it really is. Because we don’t understand fully how animal bodies function, we tend to assume that they achieve their complexity by thinking and working things out. But before we are entitled to conclude that that is really what they are doing . . . we must be sure that what we are looking at could not be explained much more simply with a rule of thumb. A switch in a hormone level or a greater response to a long tail than a short one is a much simpler explanation of why a female mates with one male rather than another than would be implied by saying that she ‘‘assesses every male in turn.’’ These rules of thumb can be very difficult to spot and very deceptive in leading us to think that something is complex when it really turns out not to be after all. (1998, 86–87)
Cairns-Smith’s complexity problem was discussed under the heading of the unity of biochemistry, but Behe’s problem, stated a decade later, is clearly very similar. Thus Cairns-Smith comments:
For example, proteins are needed to make catalysts, yet catalysts are needed to make proteins. Nucleic acids are needed to make proteins, yet proteins are needed to make nucleic acids. Proteins and lipids are needed to make membranes, yet membranes are needed to provide protection for all the chemical processes going on in the cell. . . . The interlocking is tight and critical. At the center everything depends on everything. (1986, 39)
Cairns-Smith thinks this complexity must be explained. However, unlike Behe, Cairns-Smith thinks a natural, rather than a supernatural, explanation will suffice.
It is at this point that we are invited to consider a free-standing arch of stones. It manifests irreducible complexity in that the keystone at the top of the arch is supported by all the other stones in the arch, yet these stones themselves cannot stand without the keystone. In other words, the arch stands because all the component stones depend on each other. Take away a stone, and the arch collapses. However, Cairns-Smith notes, not all the stones, nor all the functional biological structures, must be there from the beginning:
It is clear that not all such functions were hit on at once. Some would have been later discoveries. If new uses may be found for old structures, so, too, can old needs be met by more recently evolved structures. There is plenty of scope for the accidental discovery of new ways of doing things that depend on two or more structures that are already there. . . . This is typical at all levels of organization, from organs to molecules. (1986, 59)
He adds: ‘‘There is plenty of scope for accidental discoveries of effective new combinations of subsystems. It seems inevitable that every so often an older way of doing things will be displaced by a newer way that depends on a new set of subsystems. It is then that seemingly paradoxical collaborations may come about’’ (1986, 59). Why does he think these collaborations are paradoxical?
Referring back to the stone arch, Cairns-Smith anticipates Behe by observing, ‘‘This might seem to be a paradoxical structure if you had been told that it arose from a succession of small modifications, that it had been built one stone at a time’’ (1986, 59, my italics). This is especially true if, as in biochemistry, the arch is multidimensional, with central ‘‘stones’’ each touching more than the two stones touched by the keystone in our arch (1986, 60).
Nevertheless, it is possible to construct an arch in gradual stages. You cannot, of course, gradually build a self-supporting, free-standing arch by using only the component stones, piling them up, one at a time. But if you have scaffolding—and a pile of rocks will suffice to support the growing structure—you can build the arch one stone at a time until the keystone is in place, and the structure becomes self-supporting. When this occurs, the (now redundant) scaffolding can be removed to leave the irreducibly complex, free-standing structure. In this way, the redundant complexity of biochemical systems, whose existence Behe concedes, can be employed to explain the origins of irreducibly complex systems.
Natural, mindless evolutionary processes give rise to the redundant complexity we observe in biochemical systems. These redundancies then provide, in concert with extant functional systems and structures, the biochemical and molecular scaffolding to support the gradual evolution of systems that ultimately manifest irreducible complexity when the scaffolding is removed or reduced. The resulting biochemical arches may then achieve functions as integrated wholes that could not be achieved by the parts acting independently. Natural selection will result in some of these biochemical arches being retained for further evolutionary elaboration, while others will be eliminated by the same mechanism. Irreducibly complex systems can simply be viewed as limiting cases of redundantly complex systems. Reduce redundancy to the point where further reduction results in loss of function, and the system is now irreducibly complex.
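This scaffolding argument can be rendered as a toy computation (an illustration of the logic, not a biochemical model; the "tasks" and "components" are invented placeholders). Start from a redundantly complex system in which several components can cover each required function, then prune components one at a time, keeping only removals that preserve overall function. The surviving core is irreducibly complex: removing any remaining part destroys function.

```python
def functional(components, tasks):
    """The system works iff every required task is covered by some part."""
    return all(any(t in c for c in components) for t in tasks)

def prune(components, tasks):
    """Strip redundant 'scaffolding' parts while function is preserved."""
    remaining = list(components)
    changed = True
    while changed:
        changed = False
        for c in list(remaining):
            trial = [x for x in remaining if x is not c]
            if functional(trial, tasks):   # c was redundant scaffolding
                remaining = trial
                changed = True
                break
    return remaining

tasks = {"bind", "cut", "seal"}                   # required functions
redundant = [{"bind"}, {"bind", "cut"}, {"cut"},  # a redundantly complex system:
             {"seal"}, {"seal", "bind"}]          # several parts per task
core = prune(redundant, tasks)

assert functional(core, tasks)
# the pruned system is irreducibly complex: every remaining part is essential
assert all(not functional([x for x in core if x is not c], tasks) for c in core)
```

No step in the pruning looks ahead to the final structure; irreducibility is simply the limiting state reached when redundancy has been whittled away.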
Irreducible complexity was supposed to be something that could not, even in principle, be explained by Darwinian methods. It is now clear that it can indeed be explained in principle by using Darwinian methods. And there matters would rest, had it not been for the recent intervention of Dembski, who has deftly tried to move the target that Darwinians are supposed to hit with their scientific arrows: The scaffolding objection has yet to demonstrate causal specificity when applied to actual irreducibly complex biochemical systems. The absence of detailed models in the biological literature that employ scaffoldings to generate irreducibly complex biochemical systems is therefore reason to be skeptical of such models. If they were the answer, then one would expect to see them in the relevant literature, or to run across them in the laboratory. But we do not. That, Behe argues, is good reason to think they are not the answer. (2002, 254, my italics)
The study of developmental processes suggests that an important biological role is indeed played by removable scaffolding in the formation of all manner of elaborate structures, including body parts and neural pathways. For example, developmental scaffolding, in the form of an initial superabundance of cells, can be removed by apoptosis, the biochemical activation of self-destruction genes, and this process plays a crucial role in the developmental sculpting of such structures as fingers and toes (Campbell 1996, 980; Lewis 1995, 15).
Developmental biologists are very interested in characterizing developmental pathways by which an organism gets from one developmental state to another. In such a pathway, we might have a gene-mediated step as follows:
 A --a--> B,
where A and B are successive developmental states and a is the gene whose biochemical product mediates the step. But we can also have redundancy:
 A --(a + b)--> B.
Of this case, Wilkins has recently observed:
If, however, two gene products contribute to the same step, and their activities are similar and additive at this step, then mutational inactivation of one gene will often be masked by the continued activity of the other....The consequence is that mutational inactivation of either gene is frequently insufficient to block the sequence, and correspondingly, activity of both genes must be eliminated to prevent step B from occurring. In general, pathway steps with dual, or multiple, inputs of this kind will be missed in conventional mutant hunts, since, in general, only a single gene of the pathway is affected in each mutant line. (2002, 114)
So not only do we learn here that redundant scaffolding is important but also we learn that scientists in the laboratory do look for disruption of function in otherwise irreducibly complex systems by performing mutant hunts on carefully inbred strains of research animals. These hunts are complicated by the existence of redundancy. Wilkins (2002, 114–116) provides some examples. The point is that irreducibly complex developmental pathways can be viewed simply as limiting cases of redundantly complex pathways. Reduce redundancy to the point where further reduction results in loss of function, and the pathway is now irreducibly complex.
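The masking effect Wilkins describes can be put in schematic form with a toy model (the genes a and b and the single redundant step are hypothetical, purely illustrative):

```python
# Toy model of a redundantly complex pathway step: state A advances to
# state B if at least one of the gene products mediating the step is present.
def step_occurs(functional_genes):
    """The A -> B step occurs if any mediating gene is still functional."""
    return len(functional_genes) > 0

# Redundant case: two genes, a and b, with similar, additive activity.
redundant = {"a", "b"}
assert step_occurs(redundant - {"a"})           # knocking out a is masked by b
assert step_occurs(redundant - {"b"})           # knocking out b is masked by a
assert not step_occurs(redundant - {"a", "b"})  # only a double knockout blocks B

# Limiting case: reduce redundancy to a single gene, and the pathway is now
# "irreducibly complex" -- removing any part (the one gene) destroys function.
irreducible = {"a"}
assert not step_occurs(irreducible - {"a"})
```

This is why single-gene mutant hunts miss redundantly mediated steps: only the double knockout reveals that the step was ever gene-dependent at all.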
As noted previously, gene duplication is one route to redundant complexity, but how could redundancy be reduced to give rise to irreducible complexity? One way is through the transformation of functional genes into pseudogenes—nonfunctional members of gene families. Molecular biologist H.-S. Li observes, ‘‘Pseudogenes are DNA sequences that were derived from functional genes but have been rendered nonfunctional by mutations that prevent their proper expression. Since they are subject to no functional constraints, they are expected to evolve at a high rate’’ (1997, 187).
If a functional gene becomes a pseudogene, its product will no longer be available to the biochemical pathways in which it formerly participated. The transformation of a gene to a pseudogene will not have catastrophic consequences if the biochemical pathways in which its product formerly participated are redundantly complex—other products can take over the role of the missing product. Perhaps not as efficiently, but efficiency is something that can be improved by selection. In this way, redundant scaffolding can be reduced, ultimately to the point where a system or pathway is irreducibly complex." [Shanks, Dawkins, God, The Devil, and Darwin]
"The big bang was not like this. Weinberg puts it this way: ‘‘In the beginning there was an explosion. Not an explosion like those familiar on Earth, starting from a definite center and spreading out to engulf more and more of the circumambient air, but an explosion that occurred simultaneously everywhere, filling all space from the beginning, with every particle of matter rushing apart from every other particle’’ (1988, 5). How could this happen? Robert Wald, in his textbook on the general theory of relativity, observes that in the primal singularity state in which the universe began, the distance between all points of space was zero. It is not merely that the distance between objects was zero, but the very distance between the points of space was zero.
Einstein’s famous equation, E = mc², to be discussed shortly, tells us that energy and matter are different sides of the same coin, and because of this, physicists sometimes speak of mass-energy. Roughly speaking, the greater the concentration of mass-energy in a region of spacetime, the greater the curvature in that region. Locally, spacetime in the vicinity of the mass-energy of the Earth is curved, and the motion of the moon around the Earth reflects this curvature. On a larger scale, the motion of the Earth around the Sun reflects the greater curvature of spacetime brought about by the much larger localized concentration of mass-energy constitutive of the Sun.
In like manner, the distribution of mass-energy throughout the universe has global implications for the curvature of spacetime for the universe as a whole. Evidence from several quarters has indicated that our universe is expanding. It is now getting bigger, and thus it must have been smaller at earlier times. In turn, this means that mass-energy must have been more concentrated at earlier times. The curvature of spacetime must thus have been greater at earlier times.
Our universe has structure (planets orbit stars, stars belong to galaxies, and so on), and matter appears to be clumped on the small scales where you can, so to speak, see the individual trees in the wood. Nevertheless, if you take a bigger, global view of the universe and look at large scales, so you can see the wood as a whole, the distribution of mass-energy is very nearly uniform. Hogan has recently commented on the significance of these observations as follows:
A long time ago, in the early Big Bang, the universe was uniform on much smaller scales, and even very small bits of it were flying apart; very early, even things a few inches apart were flying away from each other. Today, matter on small scales has congealed into stable systems that no longer expand, because over small regions, where the expansion is not too fast, forces have reversed the expansion. On these small scales, things are no longer uniform; matter is in stable ‘‘lumps’’ (galaxies and their contents), which are flying apart from each other but are themselves not expanding. (1998, 48–49)
In the singularity state from which the big bang began, space itself would have had zero size, and so, as Wald has pointed out:
The big bang does not represent an explosion of matter concentrated at a point of a preexisting, nonsingular spacetime, as it is sometimes depicted and as its name may suggest. Since spacetime structure itself is singular at the big bang, it does not make sense, either physically or mathematically, to ask about the state of the universe ‘‘before’’ the big bang: there is no natural way to extend the spacetime manifold and metric beyond the big bang singularity. Thus general relativity leads to the viewpoint that the universe began at the big bang. (1984, 99)
What unfolds and emerges from the primal singularity is spacetime itself—the very spacetime in which our Sun, our planet, and we ourselves would eventually come to be located. Until spacetime unfolds, there are literally no places (identified by coordinates x, y, z, t) for events and happenings to be located. There can be no history because there are no stretches of time for historical events to occur in." [Shanks, Dawkins, God, The Devil, and Darwin]
"What now of chance? Suppose I take a deck of cards that has been shuffled very well. I tell you I am going to lay out the cards one after the other, dealing from the top of the deck. I ask you to guess the sequence in which I will lay them out. You write down your guess, and then I deal out the cards. No doubt we soon find that your guess and the actual sequence differ. There are 52 cards in the deck, and the chance that you will guess the exact sequence is 1 in 52! [52 factorial = 52 X 51 X 50 X ... X 1]. This chance is approximately equal to 1.2 X 10^-68 — a very small chance indeed. Though you almost certainly couldn’t guess the sequence in advance, once I have laid all the cards out, the probability that the sequence is that particular sequence is 1 (or 100%). The deck was shuffled fairly; any one sequence was as likely as any other. And it is worth adding that merely knowing that the probability of getting a given sequence of cards in a given trial, for example, is 1 in 52! tells you nothing about where, in a run of many trials, you will get that particular sequence. (If you roll a die, the probability of getting a 3 is 1 chance in 6 or 1/6. This doesn’t convey information about which particular roll of the die, in a run of several rolls of the die, will yield a 3.)
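The card-deck arithmetic is easy to verify; a quick sketch using Python's exact integer factorial:

```python
import math

# Number of distinct orderings of a 52-card deck.
orderings = math.factorial(52)   # 52! is roughly 8.07e67

# Chance of guessing the exact sequence in advance: 1 in 52!.
p_guess = 1 / orderings          # roughly 1.24e-68

print(f"52! = {orderings:.3e}")
print(f"P(correct guess) = {p_guess:.3e}")
```

Note the asymmetry the passage trades on: p_guess is the probability before the deal; after the cards are laid out, the probability that this sequence occurred is simply 1.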
The first lesson is that we should not confuse probabilities of events that have already occurred (we are here in the universe today) with probabilities of events before they occur (who could have guessed at the moment the big bang took place, before the cosmic deck is dealt, that we would be here today?). Unlikely events do happen, just by chance. This is one of the reasons why the best laid, intelligently designed plans of mice and men so often go astray, especially when they turn probabilities reflective of their ignorance of how various states of affairs may be realized into probabilities for the actual occurrence of those states of affairs.
You and I are here today. Never mind the universe and the anthropic coincidences, for the moment. Just consider how improbable it is that you are situated where you currently are. With respect to the universe, we are told that the probability for anything other than intelligent design is vanishingly small. If the analysis is rooted in data, a different conclusion emerges. As noted by Stenger (1998, 2000), the number of observed universes, No, is 1. The number of observed universes with life, Nl, is 1. The probability that any universe has life, based on observed data, is given by Nl / No = 1 (i.e., 100%). The statistical error is, needless to say, large.
There are other probabilities that need to be carefully differentiated. Stenger (1998) observes that we should not confuse the probability that one universe selected at random from a set of all possible universes would be our particular universe with the very different probability that a universe selected from this set of all possible universes could sustain some form of life (maybe very different from our own carbon-based forms). While the value of the first probability is no doubt vanishingly small, for all we know, the value of the second might be close to 1 (i.e., 100%).
But let’s play a game known as misanthropic roulette. Like the Russian version, it is played with a revolver; unlike the Russian version, all but one of the six chambers are loaded. When you play the game, you spin the cylinder on the revolver, put the barrel to your head, and pull the trigger. What is your chance of surviving if you play the game? You have a chance of 1/6 (i.e., 16.7%) of surviving and 5/6 (i.e., 83.3%) of dying. What is your chance of playing twice and surviving to tell the tale?
Your chance of surviving the second round is independent of your chance of surviving the first round. If the second round is played, your chance of survival in that round of the game is 1/6. In this case, to calculate your chance of surviving both rounds of the game, we must multiply the probabilities of surviving each round separately. The probability of surviving both rounds is thus 1/6 x 1/6 = .028 (i.e., a little less than 3%). Suppose you decide to play the game five times. Your chance of surviving all five rounds is 1/6 x 1/6 x 1/6 x 1/6 x 1/6 = .00013 (i.e., about 1 chance in 7,800). While this is a small number indeed, and you might decide not to play the game in the light of this information, this sequence of outcomes is as likely as any other sequence of outcomes, all of which involve at least one bullet being fired. The only difference is that a sequence of five clicks on an empty chamber is a sequence in which you survive to tell the tale. That you might, anthropically, care about this sequence of five clicks does not change its likelihood of occurrence, nor, if you play the game five times and survive, does it mean that the hand of providence had intervened (notwithstanding all the rabbits’ feet, four-leaf clovers, and other assorted charms you may have placed into your pockets before playing the game).
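The multiplication of independent per-round probabilities can be checked directly (a minimal sketch of the calculation just described):

```python
# Probability of surviving one round of misanthropic roulette:
# one empty chamber out of six.
p_round = 1 / 6

# Independent rounds multiply: P(survive n rounds) = (1/6)**n.
for n in (1, 2, 5):
    p = p_round ** n
    print(f"survive {n} round(s): {p:.5f}  (about 1 in {round(1 / p)})")
```

Running this gives about 0.02778 (1 in 36) for two rounds and about 0.00013, exactly 1 in 7,776, for five rounds.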
One hard-headed response to the fine-tuning arguments—and I think it is the right response, for what that is worth—is that it was just a matter of luck, and maybe not so hard-headed, if the only alter- native should turn out to be incoherent. We know unlikely events do happen. We have no reliable evidence for the existence of a super- natural cosmic universe-tuner, except as an explanation for what might be attributed to luck. This latter, however, seems to be little more than a cosmological version of the gambler’s fallacy, manifesting itself here in the urge to offer causal explanations for the lucky streak of coincidences we had with the values of the cosmological variables.
Yet the explanation of coincidences is a big part of the issue. Thus intelligent design theorist Bruce Gordon has recently written, ‘‘The most intuitive explanation for this incredible string of coincidences, is, of course, design.’’ He adds, ‘‘So far, the scientific community has rejected such a response as being outside the pale of science because it is interpreted as violating the canons of methodological naturalism’’ (2001, 203). It is a fact that many scientists (though not all, by any means) reject design because it is perceived as involving a craven appeal to the supernatural without adequate supporting evidence (other than to explain coincidences).
Gordon goes on to suggest a naturalistic design hypothesis that he thinks might enable us to overcome these worries:
Perhaps our universe is embedded in another, much larger, physical universe and exists as the result of an experiment conducted by highly intelligent embodied beings who live in this larger universe. Of course, this only pushes the design problem back one step: where did their universe come from, and what are the conditions that made it possible? To avoid the specter of design altogether, a thoroughly naturalistic account of the origin of all possible physical universes, and of our own in particular, would have to be devised. (2001, 203)
This is highly reminiscent of Behe’s willingness to consider alien designers (as opposed to supernatural designers) in the context of biochemistry, and so is Gordon’s problem as to who designed the designers.
Luckily, design, be it natural or supernatural, is not needed—that is, not mandated by the available evidence. Some cosmologists have indeed considered the possibility that our universe might be part of a larger structure—but they have done so without appeals to intelligent designers. These theoretical views need to be discussed not to show that design hypotheses are false but to show that there are other naturalistic explanations of the same facts that lead design theorists to postulate a supernatural designer. The choice may not be simply a matter of one universe by chance or one universe by design.
To see what is going on here, consider what happens when, instead of having just one person play five rounds of misanthropic roulette, we have many millions or billions of people trying to play five rounds. On the first round, we will lose 5/6 of the players, but 1/6 will go on to play the second round. Of these survivors, 1/6 will survive to play the third round, and so on. The more players who start out, the more players there will be who, by chance alone, survive five rounds of the game. This leads us to a consideration of Barrow and Tipler’s option (C): the multiple-universes option (see Gale 1990 for a taxonomy of theories about multiple universes).
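The population version of the game is just an expected-value calculation; a small sketch (the starting pool of 6^9 players is an arbitrary convenient choice):

```python
# Expected survivors of repeated rounds of misanthropic roulette,
# starting from a large pool of players.
players = 6 ** 9   # about 10 million players (a convenient power of 6)
for round_number in range(1, 6):
    players //= 6  # on average only 1 in 6 survive each round
    print(f"after round {round_number}: {players} players remain")

# 6**9 starters leave an expected 6**4 = 1296 five-round survivors --
# survivors of all five rounds arise by chance alone, given enough starters.
```

This is the same logic the multiverse option applies to universes: with enough "players," some lucky survivors are expected, not surprising.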
Some modern cosmologists are currently investigating the concept of the multiverse. This is the idea that our universe may just be one of many universes. Rees explains the idea as follows:
There may be many ‘‘universes’’ of which ours is just one. In the others some laws and physical constants would be different. But our universe would not be just a random one. It would belong to an unusual subset that offered a habitat conducive to the emergence of complexity and consciousness. The analogy of the watchmaker could be off the mark. Instead, the cosmos may have something in common with an off-the-rack clothes shop: if the shop has a large stock, we are not surprised to find one suit that fits. Likewise, if our universe is selected from a multiverse, its seemingly designed or fine-tuned features would not be surprising. (2001, 164–165)
On this view, our universe, which we have traditionally thought of as everything, is really a mere part of something much, perhaps infinitely, bigger.
The big bang, far from being a special and unique creative event in which everything begins, might only be an event of relative insignificance in a much larger structure. Our laws of physics, far from being universal, might be mere local bylaws. The multiverse hypothesis does to the anthropic universe what Copernicus’s heliocentric hypothesis did to the cosmological vision of the Earth as a fixed center of the universe. Since the time of Copernicus, it has emerged that we live on one planet orbiting an insignificant star in an insignificant galaxy. The hypothesis before us is that our universe is one (potentially insignificant) universe among many. Perhaps we can out-Copernicus Copernicus!
The hypothesis of a multiverse, which would offer a naturalistic explanation of the anthropic coincidences, is certainly worthy of consideration. For one thing, it suggests that the old dichotomy of chance or design is not quite right. An obvious objection springs from something known as Occam’s razor, named after the medieval philosopher William of Occam. Occam’s razor, as it appears in scientific debates, is the principle that we should not multiply entities unnecessarily. In these terms, the introduction of a multiverse to explain the anthropic coincidences smacks of a liberal application of what might be termed Occam’s hair-restorer.
It might be argued, however, that the design hypothesis (assuming it even makes sense) is at least as hairy as the multiverse hypothesis, since both hypotheses explain the anthropic coincidences through the invocation of additional entities. Explanation by the hypothesis of one universe and blind chance or luck then emerges as the smart and clean-shaven hypothesis. This is certainly my view. However, the multiverse hypothesis does have one advantage over the design hypothesis in that it at least accords with our intuitions about the sorts of things—physical objects—that exist. Postulating a supernatural designer (assuming it even makes sense to do so) requires the introduction of a new type of object and causality into science: supernatural causes and supernatural objects. These latter are matters we know absolutely nothing about.
We may as well be clear about this: There is no independent evidential warrant for the postulation of nonphysical, supernatural objects whose only role is to serve as components of an (as yet unformulated) alternative explanation of phenomena for which a (religiously unpalatable) naturalistic explanation is possible. At least this latter kind of explanation manages to appeal to the kinds of objects—physical objects—for which there is independent evidential warrant.
Thus Rees observes:
Our Earth traces out one ellipse among an infinity of possibilities, its orbit being constrained only by the requirement that it allows an environment conducive for evolution (not getting too close to the Sun, not too far away). Likewise our universe may be just one of an ensemble of all possible universes, constrained only by the requirement that it allows our emergence. So I’m inclined to go easy with Occam’s razor: a bias in favor of ‘‘simple’’ cosmologies may be as short-sighted as was Galileo’s infatuation with circles. (1999, 156)
If there was indeed an ensemble of universes, described by different ‘‘cosmic numbers,’’ then we would find ourselves in one of the small and atypical subsets where the six numbers permitted complex evolution. The seemingly designed features of our universe shouldn’t surprise us, any more than we are surprised at our particular location within our universe. We find ourselves on a planet with an atmosphere, orbiting at a particular distance from its parent star, even though this is a very ‘‘special’’ atypical place. A randomly chosen location in space would be far from any star—indeed, it would be in the intergalactic void millions of light-years from the nearest galaxy. (1999, 156–157)
As things stand, our universe has a beginning in the big bang. If there is a multiverse, the beginning of our universe, which we call its creation, might really be merely a process of change occurring in part of this preexisting, larger structure (where our ideas of space and time would not apply). This structure might have always been there, undesigned and with no beginning, undergoing complex processes of change.
But in the end, there is always the question of evidence. Intelligent design theorists tell us nothing about the designer, save that they think it ought to be the God of Christianity. The methods and materials employed by the designer and any account of supernatural objects themselves (how they differ from physical objects, how they bring about effects in the physical world) are apparently beyond the scope of human knowledge. Though highly speculative, the multiverse hypothesis is an alternative to the hypothesis of one designed universe and the hypothesis of one universe simply due to chance alone (my preferred view). It is at least conceivable that there are features in our own universe that might carry information about the multiverse itself (or hypotheses that are formulated concerning its characteristics). In this sense, the idea of a multiverse may not be entirely beyond the scope of human ken. It frankly remains to be seen whether the idea of a multiverse will do useful work in science.
At rock bottom, however, it is a fact that the cosmological numbers are not fixed by current physical theory and have to be empirically determined. Perhaps other values of these variables are possible. This is not currently known. One certainly cannot infer from the fact that the values of these variables are not fixed by current physical theory that they could have taken any value whatsoever, other than those they did take as a matter of fact in our universe. Maybe they could; maybe they couldn’t. We don’t know. Nothing follows about the nature of reality (and the ways it could have been different) from our ignorance of it.
It is possible that one day we will discover a theory that predicts the values taken by these cosmological variables in our universe. Of this possibility, Rees comments:
If the underlying laws determine all the key numbers uniquely, so that no other universe is mathematically consistent with those laws, then we would have to accept that the ‘‘tuning’’ was a brute fact, or providence. On the other hand, the ultimate theory might permit a multiverse whose evolution is punctuated by repeated Big Bangs; the underlying physical laws, applying throughout the multiverse, may then permit diversity in the individual universes. (1999, 157)
Either way, we have no such theory now.
And of course, even if the universe was designed—let us suppose, contrary to fact, that the cosmological design arguments actually worked—it would not follow that the universe was a cozy, warm, fuzzy place, a sort of intelligently designed Disneyland writ large. It could be designed and indifferent to human life. There is, moreover, no reason to believe that the alleged cosmological designer was a moral being. For all we know (and the problem of evil that worried Darwin—the existence of extensive suffering and misery in the world—is surely relevant here), it is a cruel joke or a malicious experiment. These sorts of design arguments, even if they were adequate in showing the bare fact of design, would leave us light years away from Michael Behe’s optimistic hopes for a cozy cosmos." [Shanks, Dawkins, God, The Devil, and Darwin]
"In his predilection for God, Whitehead follows in the footsteps of Leibniz. Of course the God of Leibniz is a species of Aristotelian unmoved Mover. Contrastingly for Whitehead, God is not a metaphysical principle standing outside actual entities and controlling them.
... God is not to be treated as an exception to all metaphysical principles, invoked to save their collapse. He is their chief exemplification ... . he is not before all creation but with all creation.
In Leibniz’s time, such a conception of God as created would have seemed blasphemous.
More fundamental than God on Whitehead’s view is creativity.
‘Creativity’ is the universal of universals, characterizing ultimate matter of fact. It is that ultimate principle by which the many, which are the universe disjunctively, become the one actual occasion, which is the universe conjunctively. It lies in the nature of things that the many enter into complex unity.
Creativity is holonomic. In the creative process conceived by Whitehead there is the becoming of actual entities/events. God’s role in this is two-fold: “primordial” and “consequential.”
The primordial nature of God is the constraint within the creative process and has no Being. God primordially is pre-Being, pre-actual, pre-spacetime, pre-conscious, pre-existential.
...we must ascribe to him neither fullness of feeling, nor consciousness.
God imposes order on the ontologically primary process of creation. He makes certain eternal potentials relevant. God’s conception is “a free creative act,” an act “deflected neither by love nor hatred”. God is the principle that initiates a particular concretion from the infinite wealth of potentiality. God as primordial principle expresses what for Bohm is “the law of overall necessity.”
God for Whitehead is in one aspect both primordial and highly abstract. His feelings are only conceptual and so lack the fullness of actuality.
Conceptual feelings are “devoid of consciousness in their subjective forms”. In stark contrast to Berkeley, nothing appears to a primordial God “who” is insentient and unconscious, “who” creates a particular outcome from the infinite wealth of potentiality in an unceasing creative advance.
For both Whitehead and Bohm God constrains the dynamics. Whitehead’s “creative advance” with its continual flow of concretions does the same theoretical work as Bohm’s continual explications from the holomovement under the law of overall necessity. However the idea of interpenetration of “eternal objects” is not available to Whitehead. Platonic values in Whitehead are physically unlocated whereas Bohm’s way of thinking provides a potential physical home for them, as implicate tendencies inherent to the law of overall necessity. Platonic values enfolded a priori to the holomovement participate in structuring the flow of explicate orders, or in Whitehead’s terms, God constrains the flow of concretions by weighting the eternal objects. The divine ordering conditions creative advance.
What has just been discussed is the “primordial” side of God, but Whitehead conceives a “consequent” side to God as well.
... the nature of God is dipolar. He has a primordial nature and a consequent nature.
Every concrescence is objectified in God. The world reacts on God and consequently completes his nature. If God were only primordial, then Whitehead’s ontology would consist in unchanging abstract eternal objects and a pure flux in which nothing lasts, so another aspect of God is called for which is more than conceptual — “completing the deficiency of his mere conceptual actuality” — an aspect that is everlasting. This derivative nature of God “is consequent upon the creative advance of the world”.
The property of combining creative advance with the retention of mutual immediacy is what ... is meant by the term ‘everlasting’.
... the image under which this operative growth of God’s nature is best conceived, is that of a tender care that nothing be lost.
The consequent nature of God is devoid of “perpetual perishing”; concretions are never lost. In a contemporary word, the consequent nature of God is trace, a trace that saves.
He does not create the world, he saves it.
Trace is a “perfected actuality” in which the many of the world become one everlastingly, that is, the permanent trace of the world’s many is continually and seamlessly unified and traced in God’s consequent nature. “The many become one, and are increased by one”. This is an “objective immortality ... everlasting in the Being of God”.
So the primordial nature of God is to constrain the relentless creative advance by weighting the eternal objects. The consequent nature of God is to retain traces of the concretions achieved in creative advance. Without the primordial God, there would be no concretions that comprise world. Without the consequent God there would be only the forgetting of incessant flux.
Whitehead thus distinguishes a cycle in creative advance “in which the universe accomplishes its actuality”.
There is first the phase of conceptual origination, deficient in actuality, but infinite in its adjustment of valuation.
This phase of the cycle lacks consciousness and is entirely conceptual. It provides “an order in the relevance of eternal objects to the process of creation”, tuning (determining the relative relevance of) the eternal objects. The next phase brings an ununified multiplicity of full actualities in accordance with the primordial God’s valuation of the eternal objects. The third phase is holonomic such that multiplicity is both unified and everlastingly traced. Then the cycle is begun again in retuning (“valuating”) the eternal objects.
The introduction of God is not conceptually required by the cycle of creative advance, as noted in the introduction to this section. It is Whitehead’s predilection to bring in God, to delineate the phases of creative advance in God’s name, to recognize God’s patience, fellow-suffering and love in the process of creative advance. In this regime Whitehead follows in Leibniz’s footsteps. But we might also speak Godlessly, with Bohm, of a “law of overall necessity” that includes Platonic values, of explication and re-implication. What is important for process thought is the dynamical ontology, and here Whitehead and Bohm appear to converge.
Whitehead’s God has strongly religious connotations, whereas the equivalent concept in Bohm is Nature.
The consequent nature of God is his judgment on the world. He saves the world as it passes into the immediacy of his own life. It is the judgment of a tenderness which loses nothing that can be saved. It is also the judgment of a wisdom...
God is patient and tenderly saving.
He does not create the world, he saves it: or, more accurately, he is the poet of the world, with tender patience leading it by his vision of truth, beauty and goodness.
In the apotheosis of Process and Reality Whitehead writes that “the kingdom of heaven is with us today,” writes of “the love of God for the world” and that “God is the great companion — the fellow-sufferer who understands”." [Gordon Globus, The Transparent Becoming of World]
"If evolution is to work smoothly, consciousness in some shape must have been present at the very origin of things. Accordingly we find that the more clear-sighted evolutionary philosophers are beginning to posit it there. Each atom of the nebula, they suppose, must have had an aboriginal atom of consciousness linked with it. . . . Some such doctrine . . . is an indispensable part of a thorough-going philosophy of evolution. According to it there must be an infinite number of degrees of consciousness, following the degrees of complication and aggregation of the primordial mind-dust."
"Panpsychism as a philosophical doctrine does not attribute any specific experiences to members of this or that species. Its claim is instead that mentality in general, that is, having a point of view, a perspective on things with qualitative and spontaneous aspects, can be attributed to all natural forms having an appropriate level of unified structural organization that maintain themselves over a period of time against their environments. The basis for this extended claim would seem to be an analogical inference generalized beyond applications to creatures such as dogs and fish for which there are behavioral and anatomical similarities to ourselves. Insects such as beetles, wasps, and bees have sense receptors and exhibit exploratory, communicative, and aggressive behavior. Even amoebas and protozoa exhibit learning behavior that we seem to be able to use as the basis for attributing sensitivity in the form of primitive tactile sensations. But for extensions of mentality to the molecular and atomic level we have only unity of structural organization and homeostasis as a feature shared by our bodies, those of infrahuman species, including mammals, fish, insects, and protozoa, and finally the suborganic forms to which unrestricted panpsychism attributes mentality. The persisting unity of these natural bodies constitutes by itself the base for the analogical inference to the presence of mentality.
How strong is this inference? Rather weak is the quite obvious reply.
Aristotle thus advances a form of restricted panpsychism that extends to plants, but not to any natural bodies more primitive than they. The soul is defined as the principle of life, and all natural forms other than plants, lower animals, and humans are relegated to the realm of inanimate nature as material elements of their combination.4 His Metaphysics expresses a generalized contrast between form and matter that is applied to all substances, and this might seem to suggest a generalization of panpsychism to more primitive forms.
The form of a substance, or its substantial form, is the actuality of the substance, as contrasted with the potentiality of matter, and because the soul is identified with a thing’s actuality, it would seem to follow that every substance with a form has a soul. But the examples Aristotle gives clearly indicate that he would want to withhold the term psyche from substances that are neither plants nor animals. One commonly used example of a substance is a statue with a form as its actuality that is distinguishable from the bronze as the matter from which it is made. But this actuality is not of a thing “with life potentially in it,” nor is it intrinsic to the substance itself. Rather, as for all artifacts, it is a form externally imposed by the artisan who created the statue out of the bronze, what Aristotle describes as the definition or formula we give to a thing. There are thus for Aristotle substances with forms, namely artifacts, which lack souls.
So far we have been describing Aristotle as advocating a form of restricted panpsychism. But even for this restricted version, we must be careful not to attribute to him the modern doctrine that dates from Leibniz. The modern doctrine insists on applying mental attributions to various natural forms. Aristotle, however, only states that we can attribute the capacity for characteristic forms of activity to selected natural forms. For animals, this includes the capacity for having sensations, which are clearly mental, and for self-locomotion as a form of spontaneity.
For plants, however, the description of the characteristic “second grade of actuality” is not formulated in mental terms. The capacity for ingesting selective nutrients and excreting wastes may require mental sensations of some kind, but there is no suggestion in De Anima that Aristotle thought that this is essential to plants’ level of soul. Exercise of the capacity for nutrition is necessary for a plant to stay alive, that is, to maintain itself in equilibrium with its environment against potentially destructive elements. Loss of this capacity brings about the withering and death of the plant. But for a plant to exhibit such self-maintenance is only for it to demonstrate a capacity for homeostasis, a capacity that is at least logically independent of a mental capacity of any kind. Of course, Aristotle was unaware of the invisible elements involved in nutrition—the cells that compose a plant, their cell walls, the transport of molecules through these cell walls, the conversion of these molecules into usable forms of energy. If he had shared our knowledge of these elements, would he then have attributed mentality to cells in their selection of molecules to ingest? Perhaps, but this is only the wildest speculation. The macrobehavior of homeostasis of plants that he was aware of provided no grounds for the attribution of mentality in any form.
The version of panpsychism put forward by Aristotle we can refer to as classical panpsychism, as contrasted with the later modern form. For classical panpsychism, as we have just seen, things with souls have a capacity for homeostasis and/or mental activity. In animals both are combined; in plants only the capacity for homeostasis is present. Of the two capacities, homeostasis is the more basic because it is shared at all levels of life, and we can perhaps regard the advanced capacities for sensation, self-locomotion, and rational thought as the means by which homeostasis is maintained at the higher levels, what for later evolutionary theory was to be understood as their adaptation necessary for survival. Understanding nutrition as a form of homeostasis allows us to extend classical panpsychism in a way not found in Aristotle’s writings. No matter how primitive the organization of a natural body may be, if homeostasis is present, it can be said to have a soul in a sense generalized from what we find in Aristotle’s De Anima.
In the Monadology we find a dogmatic statement of panpsychism as understood in modern times, free from the qualifications and doubts expressed in his earlier writings. The soul of a thing is alternatively referred to in this later work as its “monad,” its principle of indivisible unity, and its “entelechy,” its vital, self-sustaining principle. There is, he writes,
a world of created things, or living beings, of animals, of entelechies, of souls, in the minutest particle of matter.
Every portion of matter may be conceived as like a garden full of plants and like a pond full of fish. But every branch of a plant, every member of an animal, and every drop of the fluids within it, is also such a garden or such a pond.
Every appropriately organized body has what Leibniz calls a “dominant monad” or “dominating entelechy,” and the parts of this body in turn have their dominant monads: “It is evident, then, that every living body has a dominating entelechy, which in animals is the soul. The parts, however, of this living body are full of other living beings, plants and animals, which in turn have each one its entelechy or dominating soul.”
Leibniz thus claims a regress of parts within wholes, which in turn are wholes relative to other parts that are themselves composed of parts, as an animal is described in modern biology as composed of cells, which are composed of molecules, which are composed of atoms, which are composed of particles. If continued, this obviously leads to an infinite regress. For Leibniz, matter constitutes a continuum, and is infinitely divisible; he denies the existence of absolutely simple material elements without parts, what in his day were called “atoms,” what we refer to now as “fundamental particles.” To avoid the irrationality of an infinite regress, he resorts to the concept of a monad as a “simple substance,” a constituent of other substances but itself without parts and hence not a “composite.” These simple monads are immaterial elements, the most primitive of souls, without extension or form, and are described as the “true atoms of nature” that can neither be created nor destroyed.
This is obviously an impossible solution to the problem of an infinite regress, for it is utterly mysterious how monads without extension can combine to form an extended body.
Indeed, in letters to Arnauld in defense of his earlier Discourse, Leibniz indicates doubts about the truth of unrestricted panpsychism. Here only bodies with dominant monads or souls are said to be true substances, that is, bodies that constitute unities. Regarding bodies such as blocks of marble and machines, in a draft of a letter to Arnauld he writes,
they might perhaps be units by aggregation, like a pile of stones, but . . . they are not substances. The same can be said of the sun, of the earth, of machines; and with the exception of man, there is no body, of which I can be sure that it is a substance rather than an aggregate of several substances.
In the final version of this letter he expresses doubts only about attributing mentality to nonliving things, saying “I cannot tell exactly whether there are other true corporeal substances beside those which have life. But souls serve to give us a certain knowledge of others at least by analogy.”10
In the Discourse and accompanying correspondence, the problem of mental attribution is stated in terms of a concept of substantial form originating with Aristotle. What distinguishes an individual substance such as a particular human or a cow from a mere aggregate such as a pile of rocks, block of marble, or machine is the presence of a substantial form. Such a form provides the individuating criteria that distinguish the man or cow from all others of the same kinds. It also seems to provide a principle of organization that preserves the identity of a thing through changes in its parts. A human body can replace its cells and still remain the same body, but if the individual rocks of a pile were to be replaced, we would seem to have a different pile. Leibniz seems to be saying that it is only to organized bodies with substantial forms that mentality can be attributed, or in his terms, it is only to them that we can assign a dominant monad. But if the substantial form of a substance is its soul—its actuality—then he is claiming, in effect, that what has a soul can be said to have a soul.11 This is not very helpful in providing us with a guide in identifying those things to which we should attribute mentality.
At this primitive level, appetition accompanies perception, but Leibniz’s descriptions are exclusively directed toward the special type of perception that is present. It is said to be like those we have when we are in a dreamless sleep or in a stupor, and he suggests that our experience of such states makes it possible to conceive the experiences characteristic of this level. When we have perceptions in which there is “nothing distinctive, or so to speak prominent, and of a higher flavor of our perceptions, we should be in a continual state of stupor. This is the condition of Monads which are wholly bare [toutes nues].” Although our perceptions are typically distinct and accompanied by self-awareness and memory, in special situations they are not, and from these we are led to conceive of “wholly bare” monads that constitute the most basic level of mentality.
Leibniz thus suggests a method for describing the forms of mentality present in primitive natural forms. We can describe the basic features of our own mental life: the fact that we can reason discursively by means of language, reach conclusions through reflection on our mental operations, perceive objects and be aware of what we are perceiving, and remember what has been perceived in the past. By successively excluding certain of these features, we can reach conclusions about the mentality characteristic of forms more primitive than ourselves. By excluding discursive reasoning, reflection, and self-awareness, we are able to isolate certain features of our own experience, for instance, sensations of which we may not have been aware when we had them, and use these as a means of describing the mentality of lower animals. By further excluding memory, remembering the nature of the vague stupor of just awakening, and conceiving of a type of experiencing had during dreamless sleep, we are able to describe still more primitive forms of mentality at a protoexperiential level, although as noted previously, for nonliving forms he seems to concede this is a conjecture by means of a remote analogy. At all stages, our own experience is thus the starting point for mental ascriptions.
In Science and the Modern World, Whitehead states that events constitute the final termination of analysis.
The organisms of biology include as ingredients the smaller organisms of physics; but there is at present no evidence that the smaller of the physical organisms can be analyzed into component organisms. It may be so. But anyhow we are faced with the question as to whether there are not primary organisms which are incapable of further analysis. It seems very unlikely that there should be any infinite regress in nature. Accordingly, a theory of science which discards materialism must answer the question as to the character of these primary entities. There can be only one answer on this basis. We must start with the event as the ultimate unity of natural occurrence.
To deny materialism for Whitehead requires postulating psychic events as the “primary entities,” events that were later to become the “actual entities” of Process and Reality. The “root doctrine of materialism,” he tells us in this later work, is the view that the material enduring substance “is the ultimate actual entity.” The panpsychist alternative is to postulate mental events as the ultimate parts. But just as for Leibniz, it is difficult to conceive how such parts can serve as “ingredients” in the formation of physical bodies such as atoms or in organisms such as cells as the constituents of plants and animals. How can mental elements, whether events or monads, combine to form the natural forms we observe around us?
Perhaps more tractable is the problem of determining which things we are justified in ascribing mentality to, or in Whitehead’s terms, which material bodies that we observe will have an associated causally ordered sequence of actual entities. An electromagnetic field by itself, he says, is not the proper object of such ascription. What is required is that there be structured wholes with a degree of persisting organization, or what he refers to as “structured societies.” He gives some examples: “Molecules are structured societies, and so in all probability are separate electrons and protons. Crystals are structured societies. But gases are not structured societies in any important sense of that term; although their individual molecules are structured societies.”26 What of the physicists’ fundamental particles, those elements that by definition lack parts and structure? Whitehead may be claiming that what is substantialized by the noun “particle” should be conceived instead as what he refers to as “electromagnetic occasions,” events that simply occur in an electromagnetic field without any association of an ordered sequence of actual entities.
His inclusion of crystals as structured societies suggests that although organization may be a necessary condition for mental ascription for him, it is not sufficient by itself. Indeed, he claims that even at the level of multicellular organisms, there can be organized wholes that lack what he calls a “center of experience” of a dominating sequence of actual entities.
There are centres of reaction and control which cannot be identified with the centre of experience. . . . For example, worms and jellyfish seem to be merely harmonized cells, very little centralized; when cut in two, their parts go on performing their functions independently.
Insects are said to have “some central control,” but clearly plants, like worms and jellyfish, do not, and for such organisms we would attribute mentality to their cells, but not to the wholes of which these cells are parts. In Leibniz’s terms, for the purposes of ascribing mentality they are “mere aggregates,” though with a minimal form of organization. It is even more obvious that crystals as ordered arrays of atoms would not qualify as appropriate objects of mental ascription, although their constituent atoms might.
Finally, in this brief survey of Whitehead’s version of panpsychism, mention should be made of his very cursory descriptions of primitive levels of mentality. All actual entities, he says, are characterized by subjective aims driven by ideal “lures of feeling” for which there is a striving for satisfaction. At different levels of complexity of organization, there are “gradations of intensity in the satisfactions of actual entities,” ranging (at least on this planet during this current epoch) from near-zero intensity at the most primitive levels to the joys and sorrows, the senses of fulfillment and frustration, experienced by humans. The “lure of feeling” is said to be “the germ of mind,” with the suggestion being made that this “germ” might be present at very primitive levels in the absence of a capacity for anything remotely related to animal perception.
How are we to conceive of such primitive feeling? For Whitehead it is a type of experiencing not accompanied by consciousness—“consciousness presupposes experience, and not experience consciousness.”
In the sense of consciousness as wakefulness, Whitehead could be comparing primitive experiences without consciousness to those he apparently thinks we have in dreamless sleep, as Leibniz did in his Monadology. In the sense of consciousness as a focus of attention, he may be comparing preconscious experiences to subliminal experiences. But because Whitehead fails to make explicit the analogies from our own experience he is appealing to, we can only guess at his intentions.

Hartshorne’s most detailed attempts to analogically extend such aesthetic experiences to infrahuman species can be found in his studies of bird songs. Such songs, he argues, illustrate basic aesthetic principles that are evident in our aesthetic experiences. Experiences with positive affective tone are marked by a balance between order and variety. Too much order is dull and monotonous; too much variety is chaotic and discordant. The successful work of art, whether literary, pictorial, or musical, achieves a delicate balance between the two, a balance that can change with the sensibilities of the century or decade in which the works are created. Hartshorne finds a similar balance between order and variety in bird songs, and reasons by analogy that birds experience enjoyment similar to our own in response to such balance, and frustration when it fails to occur.
Let’s say that we agree to attribute mentality to single-celled organisms such as amoebas, which have been shown to learn to reject pieces of glass that they have previously ingested, and protozoa, which learn to orient themselves away from a toxic liquid environment. With reference to this learning behavior we then take ourselves to be justified in attributing tactile feelings that serve to attract and repel. Then by analogy we extend mental attributions to the cells of a plant whose location is fixed, unlike the mobile amoeba and protozoa. Perhaps we attribute this mentality to stationary cells in the form of tactile feelings of nutrients and some selection among those passing through their cell walls. So far we have specific analogies as the basis for attribution. But now we proceed to attribute mentality to macromolecules such as viruses that exhibit no learning behavior, or at least none that we are presently aware of, and then to their molecular constituents, to their atomic constituents, and finally to some bodies that terminate the regress. At these progressively more primitive stages, we attribute feelings of a more attenuated kind that asymptotically approach zero as we reach the terminating stage. But what now is the basis for such attribution?
More plausible is Hartshorne’s appeal to unity of organization as a basis of mental attribution, an appeal that has the effect of excluding fundamental particles. “From man to molecules and atoms we have a series of modes of organization,” he says, and for each level in the series there is a distinctive “mode of experiencing.” Like Leibniz, he distinguishes between aggregates such as a heap of stones or a crystal as an ordered array of atoms and unified wholes with specialization of parts. Only to the latter can mentality be attributed. All individuals, he claims, have a degree of mentality in the form of what he sometimes refers to as “sentience,” but “of course aggregates of individuals need not themselves be sentient individuals.” The presence of internal organization thus seems to provide, for Hartshorne, the principal grounds for attributing mentality to primitive natural bodies. They may fail to exhibit behavior similar to our own and may lack sense receptors. Because of this, we have no basis for attributing sensations to them, which we regard as specific to animal, reptile, and insect species. Nevertheless, they at least have—in common with these organisms and ourselves—an internal organization with diversified parts that persists through environmental changes, and this similarity becomes the basis for attributing “sentience” to them and analogically extending feelings to them as what Whitehead had earlier called the “germ of mind.”
Such, then, is a brief summary of the version of panpsychism formulated by its most forceful advocate in recent American philosophy. How persuasive is it? There does seem to be much merit in his use of feelings with affective tone as the basis for analogical extensions to primitive natural forms. The amygdala as the portion of the brain associated with emotions is located at the base of the brain and close to the centers of proprioceptive sensation. At more peripheral locations in the cortex are the centers of perception in the various modalities of vision, hearing, taste, touch, and smell, and even more remote from the brain’s base in the neocortex are areas of memory and cognition. In embryonic development the cells for these emotional centers are laid down earlier than for portions of the brain associated with these other capacities. There is a measure of truth in the saying, “Ontogeny recapitulates phylogeny,” that from the progressive stages in the development of an organism, both physiological and behavioral, we can trace its evolutionary origins. Where mentality of the protoexperiential variety exists, we would expect it to be in the form of feelings associated with systems of cells in the brain’s base in us. This is perhaps the wisdom behind the popular notion that the “heart,” located more to the center of the human body and associated with the emotions, is more central to us than the “head” with its more peripheral location because such location is the means by which we trace our evolutionary lineage. In this sense, then, Hartshorne seems correct in singling out “feeling” as the most appropriate term for attributing mentality to natural bodies existing in the early stages of evolution.
Despite the apparent appropriateness of his selection of a term for widest extension, however, Hartshorne, like his predecessors, leaves us with a speculative doctrine whose rational grounds for acceptance seem very uncertain.
Observed uniform correlations between events occurring in some whole, Nagel notes, are regarded by us as evidence of underlying processes within component parts of this whole that provide a necessary explanation of the correlations. When we light the fire under the kettle, the water in it regularly boils within a restricted temporal interval. This correlation calls for an explanation, and physics provides it by citing the motions of H₂O molecules as the constituent parts of the water. Given the increase of kinetic energy imparted by the lighting of the fire under the kettle, the water must as a matter of natural necessity exhibit the observed motions that we describe as boiling. An underlying process within water molecules as the liquid’s constituent parts thus provides an explanation of the necessity of increase in heat being followed by boiling in the water as the whole.
Nagel reasons that correlations we observe between physical and mental events or states should also be regarded as evidence of some underlying necessity. Psychologists observe a uniform correlation between pains experienced by us and patterns of neurophysiological activity in our brains. We tend to accept these correlations as a matter of brute fact, and hence content ourselves with the existence of a contingent relation between brain event and mental event. But as for the case of boiling water, we should regard the physical–mental correlations as presupposing, Nagel contends, some underlying processes that confer on them necessity. Thus, there is a requirement to postulate some explanation in terms of component parts that will convert the contingent relation between the physical and mental into a necessary one.
Nagel argues that such an explanation cannot be provided simply in terms of physical processes within component parts. The brain is composed of neurons as its constituent brain cells. Processes within these cells may explain the particular pattern of brain activity that we correlate with the experience of pain. But they cannot explain the correlation of this activity with the experienced pain, and thus the physical–mental correlation remains a contingent one. The only explanation of the correlation that can convey necessity, he argues, must be one that ascribes mentality to the brain’s component neurons, for this alone avoids an inexplicable physical–mental correlation. Nagel declines to tell us what form this explanation takes, but at least we know that the mentality of parts must be somehow included in whatever explanation might be given, and this is sufficient to establish a form of restricted panpsychism that extends mentality to component cells.
An amoeba presented with small fragments of glass will ingest the fragments and then eject them. As this is repeated, the temporal interval between ingestion and ejection is reduced, until finally there is no ingestion. Such behavior seems similar to how more complex organisms, including ourselves, learn to reject irritating substances, and on the basis of this comparison it seems plausible to attribute irritation to the amoeba. Now cells within multicellular organisms don’t exhibit learning behavior because of their lack of mobility. But their structures are very similar to single-celled organisms such as amoebas and protozoa that do exhibit such behavior, and this similarity of structure leads us by another analogical inference to conclude that in some manner they can select for ingestion or reject substances as potential nutrients to which they are exposed in their environments. Where there is rejection, we can now analogically extend the term “irritation” as originally applied to the amoeba.
Using such reasoning, we can arrive at the conclusion that we can attribute mentality to the neurons of the brain as fixed cells with specialized functions. But however this mentality is described, it seems restricted to those specific to the level of cells, namely sensitivity to the potential nutrients and toxic substances that may pass through cell walls. If this is correct, it would seem totally independent of any pains or pleasures of those persons as wholes for which the neurons are constituent parts, contrary to the views of Nagel, Hartshorne, and Griffin. The sensitivity to potential nutrients surely cannot be used as part of the explanation of why brain processes are correlated with certain kinds of experiences. This is precluded by the independence between the two levels with respect to their characteristic forms of mentality.
The earlier version can be found in Arthur Schopenhauer’s declaration that “we shall judge all objects which are not our own body according to the analogy to this body.” The analogy used by Schopenhauer begins with the direct awareness we all have of our own experiencing with its qualitative aspects. We know also that we have a body. We know, therefore, that we have two aspects: the experiential one of which we are aware and the bodily one that both we and others can observe. Each individual can then infer by analogy that because he or she has these double aspects, every natural body that can be observed has them also. The aspect of which we are aware in our own case is described as the “phenomenal,” “inner,” or “subjective” aspect. In contrast, we observe only the “outer” or “objective” form of natural bodies; their “inner” experiences are for us their inaccessible “noumenal” mental aspect that we know of only indirectly by means of an analogical inference. On the basis of this inference, we conclude that all natural bodies have, in addition to that “outer” aspect that we observe, a mental aspect, and this conclusion is now understood to be the thesis of panpsychism. Among those holding this double aspect theory, there was disagreement over what these aspects are aspects of. Idealists held that it is the mental that has inner and outer aspects. Materialists, on the other hand, regarded material bodies as having an inner aspect as a kind of “epiphenomenon.” Experiences were regarded by them as epiphenomenal in the sense that they were caused by bodily processes, as when a person feels a pain when stuck with a pin, but these experiences themselves exerted no causal powers. Others, labeled “neutral monists,” thought that some “neutral” stuff, neither mental nor physical, constituted the underlying “reality.”
The psychologist Gustav Fechner endorsed what is called the “double aspect” view of the relation between the mental and physical. “What will appear to you as your mind from the internal standpoint, where you yourself are this mind,” he says in his early Elements of Psychophysics, “will, on the other hand, appear from the outside point of view as the material basis of this mind.”
Fechner assumes that analogical inferences are used for the minds of other humans, for animals, and for extensions beyond animal species. We are certain of our own minds, he thinks, by virtue of Descartes’ cogito, ergo sum argument, but it is only by what he describes as an act of “faith” that we conclude that other humans and animals have minds or souls. This faith is based on the similarity between our bodies and behavior when we have certain experiences and thoughts and the bodies and behavior of others: “My brother is very much like me and expresses himself like me; I therefore believe most firmly that he is animate.” Observed similarities are in this way evidence for what is unseen and in principle incapable of observation.
The existence of those souls in which we are compelled to believe by reasons which lie nearest to hand will have to serve us as examples, foundation, and support for the further extension of the realm of souls.
Fechner uses analogical reasoning to then infer that plants have souls, though they are dissimilar in many respects from ourselves and other animals. Animals differ from us, he argues, and yet we are confident they have souls.
It is true that the animals are quite different from us in appearance; yet like us they move about, seek their food, generate offspring, even utter cries upon similar provocation—or if all of them do not do all of these things, they do some of them. Consequently we ascribe to them a somewhat similar soul, subtracting only reason in view of the differences we observe. But in the case of plants we subtract definitely the whole soul; and if we have a right to do this, we can justify it only by alleging that in build and behavior the plants are too dissimilar to us and to the animals analogous to us.
But this denial of souls to plants is mistaken, he concludes, because of the many similarities they have to animals—the fact that, like animals, they live and die and exhibit regular behavior in maintaining themselves.
But in addition to the souls [of animals] which run about and cry and devour might there not be souls which bloom in stillness, which exhale their fragrance, which satisfy their thirst with the dew and their impulses by burgeoning? I cannot conceive how running and crying have a peculiar right, as against blooming and the emission of fragrance, to be regarded as indications of psychic activity and sensibility, nor why the finely constructed and graceful form of the cleanly plant should be thought less worthy to contain a soul than the unshapely form of a dirty earthworm.
How should we reply to such exuberant affirmation? The difficulty lies in the fact that all things are in some respects both similar to and different from others. The analogical inference to the conclusion that mentality is present in plants requires selecting those shared features that are relevant to this conclusion. But how do we decide what is relevant or irrelevant? Plants live and die, and exhibit tropistic behavior. There is some similarity between the means they have for transporting nutrients to their parts and nutrient transport in animal circulatory systems. Are such features to be relevant similarities? Most of us would probably agree with Whitehead and Hartshorne that there is no justification for attributing mentality to trees as wholes, because they lack the requisite central organization, although we would be willing to concede that plant cells are apt candidates. But are such judgments matters of taste? On what basis do we decide to extend mentality to plant cells but not to plants as wholes? Why is lack of central organization a relevant dissimilarity, while transition from life to death and tropistic behavior are judged irrelevant?
Like Fechner, Chalmers uses a double-aspect theory of the mental and the physical. Each of us has an observable body that processes information in the form of sensory inputs, including social inputs from speech and writing. This is our fundamental “extrinsic” aspect. We also have an “intrinsic” aspect that is described in terms of phenomenal properties such as having qualitative experiences. This relation between information processing and experiencing in our own case is the basis for what Chalmers describes as the “grander metaphysical speculation concerning the nature of the world.” According to this speculation, “information is truly fundamental, and . . . has two basic aspects, corresponding to the physical and phenomenal features of the world.” There arises, then, the question whether all information has a phenomenal aspect. One possibility is that we need a further constraint on the fundamental theory, indicating just what sort of information has a phenomenal aspect. The other possibility is that there is no such constraint. If not, then experience is much more widespread than we might have believed, as information is everywhere. This is counterintuitive at first, but on reflection I think the position gains a certain plausibility and elegance. Where there is complex information processing, there is complex experience. A mouse has a simpler information-processing structure than a human, and has correspondingly simpler experience; perhaps a thermostat, a maximally simple information-processing structure, might have maximally simple experience?
Besides the possibility of attributing experience to thermostats, Chalmers envisions attributing it to particles described in quantum mechanics on the grounds that the states of these particles can be described in the terminology of information theory.
Chalmers’s extension of mentality to mechanical information-processing devices seems to be derived from ambiguities in the term “information,” which has three basic senses. “Information” can be understood as syntactic information as understood in engineering applications. Here there are sequential or synchronous arrays defined within information spaces for which there are alternative possibilities. For example, a 4-bit information space with two alternative values 1 and 0 defines 2^4 = 16 possibilities. A specific sequence such as 1001 would represent a selection from these possibilities, and hence convey information in this syntactic sense. In an 8-bit space with 2^8 = 256 possibilities, the sequence 11001100 would be a selection from the greater number of possibilities. It is therefore more improbable and would convey more information than would the 1001 of the 4-bit system. Our auditory system is trained to discriminate the vowels and consonants of our system of spoken language. Within this system there is a certain range of phonemes from which any given sequence of sounds is a selection. Thus, the word “red” is discriminated from “yellow” and conveys less information than the latter, which as the longer sequence of sounds is the more improbable. Information in this syntactic sense is processed within the body at the levels of both sense receptors and the central nervous system, and this information processing is similar to that for mechanical artifacts such as thermostats and computers. There is no evidence of such information processing at the atomic and subatomic levels.
“Information” is also used in the sense of semantic information. For this use there are also semantic spaces or fields defining a range of possibilities as meanings in relation to an interpreter of a sign.
Thus, the word “red” has a meaning for a native speaker of English that excludes the meanings of alternative color words such as “yellow,” “green,” “blue,” and so on. The semantic information conveyed by a word is a function of the number of such alternatives it excludes. Semantic information is not confined to human language systems: the signs interpreted by organisms capable of learning would seem to exclude alternatives and thus convey this type of information for their interpreters. For bodies such as mechanical artifacts, in contrast, such interpretation does not seem present. Thermostats and computers may process information, just as we do, but unlike ourselves and a wide range of animal species, they do not interpret what they process.
Finally, there is a sense of “information” applied to systems that is defined in terms of their degrees of organization. Entropy is the tendency within systems toward increasing disorganization in accordance with the Second Law of Thermodynamics. Any persisting organization is an offsetting of this tendency, and thus we can refer to this organization as negative entropy or negentropic information. The constituent molecules within a container of gas tend over time to distribute themselves equally within the container in such a way that there is an equal probability of finding a molecule within any given volume within the container. This equal distribution is the system’s most probable state. In contrast, the constituent molecules of a cell have a complex form of organization. We can therefore contrast the high degree of the cell’s negentropic information, or alternatively, its highly improbable state, with the absence of this information in the gas container. When the term “information” is applied to the level of subatomic particles, it would seem to be in this third sense. Because of the tendency of signals to lose their cohesion over time, information in the negentropic sense has engineering applications, and these lead to its sometimes being confused with syntactic information. Negentropic information is, of course, not processed by either organisms or machines; it simply exists as a measure of a system’s degree of organization.
Chalmers seems to be led to his conclusion that thermostats might be conscious by confusions between these syntactic, semantic, and negentropic varieties of information. We are aware of our ability to discriminate speech sounds, and we can regard this as a form of syntactic information processing. We also understand certain sequences of these sounds in the form of morphemes, sentences, and blocks of discourse, and thus interpret signs with semantic information. We do have bodies that process sensory inputs and that maintain a degree of organization. Having persistent organization, our bodies realize a certain form of negentropic information, and our activities function to maintain this against environmental forces." [Clarke, Panpsychism and the Religious Attitude]
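Clarke's bit-space arithmetic (a 4-bit space defines 2^4 = 16 possibilities, an 8-bit space 2^8 = 256, and the longer selection conveys more information) can be sketched numerically. This is only an illustration of the standard log2 measure of syntactic information, not anything from Clarke's text; the function name is my own:

```python
import math

def syntactic_bits(n_symbols: int, alphabet_size: int = 2) -> float:
    """Information (in bits) conveyed by selecting one particular sequence
    of length n_symbols from the alphabet_size ** n_symbols equally likely
    sequences in the information space."""
    return n_symbols * math.log2(alphabet_size)

# A 4-bit space has 2^4 = 16 possible sequences; an 8-bit space has 2^8 = 256.
assert 2 ** 4 == 16 and 2 ** 8 == 256

# A selection like 1001 from the 4-bit space conveys 4 bits; a selection
# like 11001100 from the 8-bit space is more improbable and conveys more.
print(syntactic_bits(4))  # 4.0
print(syntactic_bits(8))  # 8.0
```

The same measure lies behind Clarke's phoneme example: a longer, less probable sequence of sounds excludes more alternatives and so conveys more information in this purely syntactic sense, without any appeal to meaning.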
How Panpsychism is justified by the claim that natural evolutionary models are insufficient:
"The mentality we attribute to ourselves and other forms of life are thus accounted for by slow accretions in mental capacities at these successive stages through the mechanisms of evolution over hundreds of millions of years.
Fred Dretske provides an evolutionary explanation of the origination of mentality, which he describes as “consciousness.” We don’t “have to start with consciousness,” he says, to understand how, through a process of natural selection, it comes into being. What natural selection starts with as raw material are organisms with assorted needs and variable resources for satisfying these needs. You don’t have to be conscious to have needs. Even plants have needs. They need sunlight and water. . . . For creatures capable of behaving in need-satisfying ways, . . . the benefits of information are clear. Information about external (and internal) affairs is necessary in order to coordinate behavior with the circumstances in which it has need-fulfilling consequences. . . . What natural selection does with this raw material is to develop and harness information-carrying systems to the effector mechanisms capable of using information to satisfy needs by appropriate directed and timed behavior. Once an indicator system is selected to provide the needed information it has the function of providing it. . .
As a result, the organisms in which these states occur are aware of the objects and properties their internal representations represent. They see, hear, and smell things. Through a process of selection they have become perceptually conscious of what is going on around them.
Such an account seems to render obsolete Locke’s problem of mind originating out of matter. For universal mechanism, mentality is the causal effect of material processes that are described by physics, chemistry, and evolutionary biology. To know the laws of these sciences and the chemistry of our planet from about four billion years ago is to know exactly how mentality originated from material elements and their combinations in the law-governed progression from molecular compounds, to replication of these compounds, and then to those organisms that Dretske calls “indicator systems” that became aware of representing their environments as a means of fulfilling their needs.
But is this really a solution to Locke’s problem? The capacity to differentially respond to objects in their environments certainly did convey an adaptive advantage to some organisms. A system capable of differential responses can be said to “process information.” But there is no necessary relation between this capacity and having the qualitative perspective that is characteristic of mentality. Mechanical devices can be constructed, after all, that differentially respond to environmental changes, such as servomechanisms like thermostats that respond to temperature changes by sending signals to furnaces, or signaling devices that turn on lights outside when a guest is detected. Mental attributes have application to such devices only in metaphorical senses.
Why did there not evolve robot-like combinations of molecules with the capacity for differential responses that also lacked mentality? One answer may be that mechanical devices are relatively inflexible, and this inflexibility would pose an adaptive disadvantage. But this answer is unconvincing because inflexibility is characteristic only of primitive devices, not of more sophisticated devices recently developed that can adapt to changes over time. Evolutionary theory should predict the emergence of organic zombie-like equivalents to these sophisticated devices as a means for coping with a more demanding environment. The conclusion we seem forced to draw is that the laws of physics, chemistry, and biology can perhaps explain the evolution of some form of complex organization enabling differential responses to environmental conditions. But in themselves they do not explain how anything similar to the sensations and feelings that we are aware of came into existence.
Dretske contends that organisms with needs existed before the advent of mentality (for him, consciousness) and that mentality arose as a way of satisfying these needs. If this were so, then these needs must have been satisfied prior to the introduction of mentality, for if not satisfied, these organisms would have been eliminated by the forces of natural selection. But then we can ask what advantage mentality itself could have conferred, if the needs were already being met. Further, any naturalistic, nondualistic account of the mental must deny causal efficacy to it. The explanation of why an animal responds to a given stimulus in a certain way is to be traced to the nature of this stimulus and the structures of the animal’s sense receptors, central nervous system, and motor response mechanisms. That the animal has a certain experience, say pain, is thus causally irrelevant to a given response, such as withdrawal from a pain-inducing object. But if mentality per se is causally irrelevant, it is again difficult to see how its presence conveys any evolutionary advantage. When introduced it would simply be an adventitious addition without consequences for survival.
Under discussion so far have been primitive feelings and sensations as means of providing information about an environment. But similar considerations also apply to the evolution of learning and communication through signals and language, all of which convey an adaptive advantage. Learning behavior, both conditioned reflex and instrumental, can be simulated with computer and robotic technology to produce machines that change their responses as the environment changes. Yet we do not attribute either consciousness or mentality in general to these machines (although we can imagine them having mentality, as in Stanley Kubrick’s film 2001). That communication in the form of signals and even linguistic expressions can be simulated is shown by the construction of computers that, at least for restricted questions, pass the Turing test of providing answers to questions that can’t be distinguished from those given by human subjects. Again, applying simply the laws of physics, chemistry, and biology may explain the evolution of organic zombie-like equivalents of these machines, but this again fails to provide us with an explanation of the origination of intentionality as a form of mentality. As for differential responses to the environment, to explain the evolution of communicative capacities conveying an adaptive advantage is to fall short of an explanation of the origination of mentality from matter, because mechanical simulation of these capacities demonstrates they can be present without the presence of mentality.
That genetic variation within a population and natural selection provide an implausible explanation of the origination of mentality becomes clearer when we attempt to conceive how the first form of mentality came into existence. Let’s suppose that mentality originated with the emergence of eukaryotes as single-celled organisms capable of exploiting the energy resources of their environment. One such organism reproduced asexually a number of replicas of itself among which there was variation, and one of the variant replicas, let’s suppose, was endowed with a faint trace of feeling and hence could be said to have its own perspective, or as some prefer to say, its unique “subjectivity.” Let’s label this variant Alpha. Would the feeling perspective of Alpha confer on it a competitive advantage? It is difficult to see why, because the primitive feeling would be sufficient unto itself and without instrumental value, in this respect like the sunset we enjoy for its own sake or the taste of a good wine we savor. And if this original primitive feeling were an isolated occurrence and self-sufficient, it would not confer an adaptive advantage on the organism experiencing it. It may have been the precursor to the experience of signs, which, when interpreted cognitively and dynamically, could be used to anticipate and to differentially respond to the environment. This successor experience and its attendant intentionality might be thought to have adaptive value, although I have just argued it is the behavioral capacity rather than the experience itself that is adaptive, and this capacity can be mechanically simulated. But the original experience itself would seem to have conferred absolutely no adaptive advantage, and thus its continuance and later elaboration remain unexplained.
Some may want to reply that this shows simply that Alpha’s experience must have had intentional aspects, that simultaneous with that original feeling there were anticipation and differential responses conferring adaptive advantage. But it is most implausible that full-fledged intentionality would have suddenly appeared in Alpha as a mysterious novelty without precedent. Such sudden transitions certainly do not seem characteristic of other stages of evolution, and it seems arbitrary to postulate it for Alpha.
It is, of course, a familiar problem for evolutionary theory to explain how traits evolved whose early antecedents have no adaptive value. Wings convey an adaptive advantage to birds, but the same cannot be said of the stumps we see on young chicks, which we can suppose are the wings’ more primitive antecedents. Indeed, the stumps seem to be an impediment, and of negative value. Here evolutionists have a ready explanation in terms of what is called “exaptation”; they can claim that a feature with adaptive value at one stage can be co-opted and utilized later. Thus, fins on fish or balancing appendages on amphibians can be hypothesized as the materials from which wings gradually evolved. The situation is different, however, when we attempt to explain the origination of mentality. So unlike is this from what preceded that there is no question of utilizing what went before. We are confronted with a totally new aspect apparently without any biological explanation for its origination.
Critics of panpsychism have attempted to reply to various versions of the argument just outlined. They grant that the introduction of mentality represents a novelty. They insist, however, that this does not preclude a causal explanation of this introduction. With sufficient knowledge of the laws of nature and circumstances prevailing at the time of introduction, they claim, we could provide this explanation. But they never specify the form this explanation could take. It is the great merit of Dennett’s attempt at an explanation that it clearly identifies random variation and natural selection as providing the only possible explanation. Everyone concedes, including the most convinced mechanists, that a satisfactory explanation has not actually been produced. But the failure of Dennett’s attempt goes further. It enables us to see that no such explanation is possible.
The arguments just given against universal mechanism’s explanations of the origination of mentality are an essential first stage to the strongest positive argument available to panpsychism, the Origination Argument. If the laws of physics, chemistry, and evolutionary biology cannot explain the origination of mentality, how can we account for its existence?
The most plausible alternative open to us would be to endorse panpsychism’s claim that mentality has always been present at levels predating those at which life arose, that is, at the molecular, atomic, and particle levels where there is the appropriate form of organization. No Alpha could ever have existed, because mentality was never generated from matter, just as matter itself was never generated from nothing. And without such generation, there is no need to continue the futile search for an explanation of mentality’s origination.
Some may be willing to concede that spontaneity is present in animal courtship and fleeing behavior, but will balk at panpsychism’s extension of this spontaneity to all natural forms. Again, the reply must be that, having made the concession at the aviary and mammal levels, it is arbitrary to assign within nature a demarcation between what occurs by either chance or causal determination and what occurs when mental spontaneity is present. Just as for mentality, we lack a biological explanation for the origination of spontaneity at any stage in evolution. This is because spontaneity in itself seems to have no adaptive value. Flexibility of response certainly has adaptive value: stereotyped behavior within a changing environment can lead to species extinction. But flexibility can be simulated by mechanical devices of increasing complexity, and we lack an explanation of why such complexity rather than spontaneity evolved. Just as for mentality, genetic variation and natural selection cannot explain the appearance of spontaneity rather than simply the appearance of traits capable of mechanical simulation. The conclusion drawn by panpsychism is that there was never a time when spontaneity was entirely absent.
The Origination Argument presents panpsychism as a solution to the problem arising from the impossibility of ever providing an explanation of how mentality arose from bare matter.
As for the analogical inference to the most primitive of natural bodies, we must emphasize again that the inference is only to the conclusion that there is some form of perspective with some qualitative aspect, however intermittent, and some spontaneity, although perhaps only infinitesimal. So removed from our own forms of experience is this primitive form of mentality that it is impossible to imagine what it would be like to have such a perspective. But although it is beyond our powers to imagine, we can conceive its presence, and reason to this presence from premisses on which there is agreement. The objection that “mentality” as used in the analogical inference becomes too indefinite to be useful only represents an undue reliance on what is imaginable." [Clarke, Panpsychism and the Religious Attitude]
How Process Philosophy and Panpsychism come together:
David Skrbina wrote:
"Qualities relocated in physical processes lead to a panpsychic view
There are two conflicting views which I will briefly summarize.
On one side, the Galilean view assumed that 1. reality is made of self-consisting and autonomous individuals; 2. such individuals can be known only by means of relational/quantitative properties; 3. intrinsic properties of objects are beyond our grasp; 4. qualities and relations ‘emerge’ inside the subject.
On the other side, the process view suggests that 1. reality has a relational nature based on processes singling out portions of reality; 2. such processes are known because they are part of subjects; 3. qualities and relations are identical with these processes; 4. qualities and relations are not inside the subject but rather in the world.
It is a view that can be considered a kind of panpsychism, at least according to the broad definition suggested by Skrbina (2005: 15–22), since it suggests that qualities and relations are not located inside the nervous system but rather take place in the environment. Yet panexperientialism is perhaps a better term, albeit with some minor modifications from Griffin (1998). According to Griffin’s definition, panexperientialism means that ‘everything experiences’. I maintain that ‘everything is an experience’ in the sense suggested by James (1909/1996) or Mach (1886). The difference is that I emphasize a neutral ontological framework in which there is no need to bring out the qualities that, after Galileo, have been localized inside. The world is made of occurrences that, when part of the experience of a certain subject, are described either in relation with other occurrences (as in objective knowledge) or directly (as in phenomenal experience). There are occurrences. These occurrences sometimes coalesce in a whole that is the subject. When they are part of the subject they are not different from what they are when they are taking place individually.
To experience something means that that something is part of the subject. Therefore qualities are no longer phenomenal or subjective, they are part of the physical structure of the world. Inside the subject, each occurrence can be experienced directly (as an intrinsic quality or content) or by means of comparisons and relations with other occurrences (as in the objective/quantitative/relational description).
I see a certain shade of some color while Sabrina sees a different one. How is that possible under the suggested view? The classic answer would suggest that my brain concocts a different phenomenal quality from that concocted by Sabrina’s brain. My answer is that I single out a certain relational structure in light reflectances while Sabrina singles out a different relational structure. In both cases, the color we see has not been created inside our brain, but it is a physical process taking place partially inside our body and partially in the environment.
To recap the defended view:
– There is no difference between a pattern/object and the mental representation of that pattern;
– The two are incomplete and partial perspectives on the process by which that pattern could take place – the process being identical with the pattern itself;
– The pattern would not exist independently of the process;
– The pattern does not exist out of the relation/process that allows the pattern to take place;
– The observer does not exist out of the relation/process that allows the observer of that pattern to take place.
Thus, everything could unify, and everything could be externalized ‘mind’ in this sense. Human subjects are just the greater unifiers that we know of. A human brain is what it is because it is the center of a hurricane of a very huge number of unifying processes, and the mind is the part of the universe which is taking place due to them. Hence it should be clear that the view presented here is a kind of externalism grounded in process philosophy – in other words, a process externalism. Qualities and relations are not a product of the internal activity of neural systems; they are processes taking place in the world. It is equally plain that this view endorses a panpsychic stance." [The Mind that Abides]
"These martyrs of ontology want to pull off the trick of dissolving the non-idiocy of the human condition in the idiocy of pure being. If philosophy has its own form of piety, it is found in such sacrifices. Heidegger's well-known statement against the god of the philosophers – namely that, being the fetish of the self-spawning substance, it is a god to whom one cannot pray – omits the possibility of dissolving oneself in this very god. It is furthermore, with all due respect, an objection of limited wisdom, for the feeling of belonging to a great whole and the anticipation of returning to it are the natural prayer of contemplative intelligence." [God's Zeal]
"Even though all of human life (and the social status of evangelicals) is going to hell in a handbasket in the confrontation of humanity with modernity, true believers have an escape clause, a secret out available only to God’s elite that permits them to escape the hell on earth and advance directly to some sacred realm by means of the literal bodily “Rapture” into heaven in the first stage of Christ’s return.
The later twentieth century witnessed the doctrinal reorientation of a major segment of American Protestantism. A new and influential category in popular culture of great significance emerged: end-time prophecy belief as a central tenet of faith for millions of Americans. This was doubtless fed by the belief that religion and society were corrupt beyond imagining, but it also was based on the special significance of the concept of dispensations.
This term implies a great intensification of classic Christian thinking about a series of divine covenants or agreements between God and humanity. Specifically, dispensationalism signifies time-limited eras of divine dealing with humanity according to a specific principle (for instance, divine law), predominant in that time block but potentially superseded in later periods. Moreover, it now appeared, the age of the normal existence of Christianity and the church was drawing to an end. Soon the hidden mysteries of the book of Revelation would be unfolded. God would soon dispense end-time judgment to unbelievers and salvation to believers.
These beliefs gained wide currency in the last three decades of the twentieth century, overleaping the denominational confines of yesteryear and (at the local level) infiltrating huge segments of denominations that formally repudiated them (for example, the Southern Baptist Convention, which resists dispensational thinking at the official level, but obviously cannot rein in all of the popular beliefs of the “people in the pews”). That the spread of this outlook reflected a larger pessimism about the future is axiomatic.
Further, this Protestant reorientation toward cultural pessimism took on even more distinct characteristics. Its prophecy-believing adherents tended to hold to doctrines that had traditionally been out of favor. It was not optimistic by early nineteenth-century standards. The new outlook emphasized not just the near approach of the end and the heretical and moral depravity of most forms of Christianity along with the corruption of most institutions, but also the near approach of a lengthy (seven-year) and horrendous period of foretold Tribulations. Most notably it stressed the bodily removal of believers from the world immediately before the onset of these horrors (a rare opinion until recently, and a view that immediately transports contemporary dispensationalists into an expectation of security from suffering utterly at odds with the dominant mentality of ancient and medieval Christians). That is, in accord with one verse of the New Testament, the faithful were to be secretly taken up into the air to be with Christ—leaving doubters and unbelievers to continue the struggle below on extremely unfavorable terms. This extraction of true believers sealed with the Spirit of God came to be known as the Rapture.
The spread of interest in the Rapture, the impending Tribulation, and the near end of normal history meant, among other things, that a view of Protestant Christianity once stigmatized as bizarre and working-class now moved out of the assemblings of a small number of vehement adherents into paradenominational organizations flourishing within myriad host bodies (as well as within all manner of nonchurch-connected individuals). Some estimates put the number of dispensationalist evangelicals in the United States today at twenty million; others estimate that perhaps fifty percent of the American population has some rudimentary identification with the dispensationalist or Rapture-oriented position. Both numbers seem unlikely, as they vary according to how one asks the question. We can approximate, but to insist upon exact figures is risky. For what it may be worth, the publishing industry estimates the total sales of the Tim F. LaHaye and Jerry B. Jenkins fictional rendering of the matter at near seventy million, if not more. This does not give us the number of adherents, of course, but it does give us a sense of the movement’s scale.
The New Age movement has become a byword for elusiveness, not surprisingly so since its currents of pessimism intermingle with fantastic optimism in a manner rendering easy diagnosis difficult—even as that mix makes it hard for the novice to assess the degree to which a particular New Age strand represents major fragmentation of society and its culture.
This elusiveness marks the key feature of expectation of a turn in the ages, a turn from old to new. Specialist Olav Hammer points out that the movement’s earlier unifying belief in a dawning new epoch keyed to the precession of the equinox has abated, so the movement has somewhat changed from its original instantiation as a cultural prolongation of the political counterculture of the sixties—yet the foundation on the worldview of sixties critique remains. Hammer’s turn would mark a second attenuation—having turned from hardcore political protest, New Age currents then began to stray from the obsession with ages of the signs of the zodiac into ever more diffuse versions of worldview, yet always retaining the aura of “alternative” and “countercultural” that allows it an eternal character of being antibourgeois and anti-establishment, capable of giving off a vague sense of being against the System, whether in religion or medicine or in ecological issues at the ballot box. Elusive, yet not incapable of being read.
The same elusive impression characterizes the way varied positions intermingle with New Age phenomena. We find here anti-institutional outlooks, strong interest in pantheism or Eastern mysticism, often with a reliance as well on both neoplatonic and quantum vibrational models. Yet finally it is a Western movement, stressing the assimilation of such outlooks to a modern mentality that recasts the entire business in the language of scientific paradigms. Running throughout this potpourri is the Western desire to retrieve personal identity in the face of death (reincarnation in a specifically Western or personality-conserving sense). This special blend constitutes a distinctive package of New Age movement markers that point to real changes in the movement’s character.
With regard to contemporary or recent versions of the New Age outlook, moreover, researcher David Tacey affirms their kinship to the cultural pessimisms discussed here in detail, and in particular to the pessimism that finally fails to deliver real escape or rescue:
. . . the New Age movement, which offers glimpses of lost unity and harmony, will prosper and grow. . . . In discussing the various phenomena of the New Age we are not necessarily talking about a consciously adopted ideology or an intellectual philosophy, but about very basic forms of spiritual longing that are denied expression in the dominant secular culture and in the mainstream institutions of faith. . . .
The New Age is a living reminder that the intellectual Enlightenment no longer delivers its promises of liberation and freedom, that the secular humanist experiment has been found wanting, and that people are demanding a newer, more profound kind of existence.
Tacey shares the standard view that the New Age movement has become largely ineffectual politically, often ending in a kind of entrepreneurial capitalism unlikely to challenge real contemporary enemies in any significant way. Nonetheless for Tacey, New Age necessarily and by definition retains a key significance as it constitutes the New Age version of Left-wing cultural pessimism’s typical strategy, the strategy of pseudoescape in the face of diminished political effectiveness. He writes:
The New Age compensates our consciousness in different ways. It compensates our secular, materialist society by nakedly displaying the powerful longings of the human spirit, even if this compensation is unsuccessful, i.e. unacceptable to mainstream secular attitudes. It also compensates our established religious traditions by forcing us to attend to what has been repressed or ignored by Western religion: the sacred feminine, the body, nature, instincts, ecstasy and mysticism. . . .
The New Age, however, does not compensate our consumerist society but simply reproduces several of its features in its industry and enterprise, creating a spiritual consumerism. It turns the spiritual realm into a commodity, packaging ancient wisdoms, indigenous cosmologies and spiritual psychologies in order to satisfy our spiritual longing. But because the New Age operates in this consumerist mode, it rarely meets our spiritual needs, often providing a ‘fast food’ service, a kind of McSpirit that fails to satisfy. The human spirit calls for an authentic response, not simply for a symptomatic or artificial quick fix. Therefore the New Age itself is a kind of parody of the authentic spiritual life that longs to be realized in our time.
[Glenn Shuck, John Stroup; Escape into the Future]
"Contemporary New Age thinkers interest us here on account of the way their effort at some kind of “compensation” under current circumstances necessarily involves two elements. First comes the built-in respect for science-related paradigms in all New Age thought. Then comes, at the same time, the distinct requirement of retrieving premodern versions of holistic ascription of overarching meaning or ultimate significance—precisely the kind of (un)holy union between the two cultures that has proven so problematically unattainable since the triumph of positivistic science in the nineteenth century.
The last truly great challenger of the clean separation of meaning from science was the German thinker Johann Wolfgang von Goethe, who died in 1832. Recent high-end representatives of this genre are scarce—and no wonder, for we speak here of a genre in New Age that, at its most demanding, seems to place impossible demands on anybody. For if properly carried out, a work in this genre would involve the self-conscious combining of science in an academic framework with New Age models and narratives both primal and personal that can supply higher meaning.
Now certainly there are any number of physicists and cosmologists who regularly make the cover of TIME by discovering vague traces of divine order in remote galaxies and in the unpredictably synchronized meandering of subatomic particles. But this kind of recombining meaning and science is so vague and so weak-toothed that it offers little payoff to anybody save the physicists and the science journalists intent on yet another rediscovery of God.
Here, however, we are interested in recombiners whose work has teeth that bite into the contemporary world and its worldviews. In this sense the list of current candidates is short. On the list would be the gracious and learned William A. Tiller, a lattice crystallographer at Stanford whose spiritual journey in its earlier phase involved meditation in the style of Edgar Cayce. But the title of most sweeping New Age theorist goes not to a gentleman at Stanford, but to one formerly in the philosophy department at Georgia State, Professor Mark Woodhouse. Dr. Woodhouse’s work brings together not only the effort at reintegrating science and superscientific taste for ultimate meaning, but as well ambiguous themes of cultural collapse and sources for renewal—the mix of pessimism and optimism that finally makes New Age materials important. At the same time, though, it makes them elusive and potentially the markers of a kind of cultural fragmentation that insists on donning the vestments of sociocultural reintegration. Woodhouse, in a word, takes up the same themes of decay and hope for dramatic escape as do pessimistic novels and scripts, but he does so in the nonfiction section of your local bookstore. His work constantly attempts to connect with real-world correlates yet in the end proposes kinds of escape that edge into the fantastic. In this way, it points as well to the increasingly ambiguous character of all countercultural protest these days, an ambiguity and lack of definition increasingly shared by Left-wing political groups deprived of clear identity given the ebbing of sixties protest and the collapse of Marxism.
New Age materials depend for their overall impact as a worldview on a sense of pessimism that its various outlooks and nostrums can help overcome, or at least compensate for — a grand theoretical vision that at the same time explicitly invokes this-worldly concern for practical issues of “empowerment” and reform… in a vein combining science and philosophy of science with grand narratives of cosmic significance and hints of intensely personal spiritual exploration.
Amid the speculation, the irony was that the new physics had generously supplied a metaphoric base that could lead some to the reconstitution of the mesmeric and Swedenborgian worlds. If matter and energy were moments in a continuous natural process, phases or appearances of an essential and dynamic substrate of the world, then—for some—body and mind, substance and spirit, could be construed as part of a single continuum. The invisible fluid [of Mesmer—JS] could emerge as highly visible, and its pulsing, wavelike motion in the old etheric world could be reborn in the vibrating quanta of the twentieth century.
On this basis, concludes Albanese,
[the] stage was set for a latter-day synthesis that would make the past its prologue to a dawning millennium. In this synthesis, the blurring of matter and energy at the subatomic level would be linked in principle to the occult romanticism of the mesmeric-Swedenborgian habit of mind. The manipulative potential of minds that could control self and others would be joined to a matter that followed laws of harmony. Thus, acts of harmony would become, simultaneously, acts of power and control. And the world in which these things would happen by the late twentieth century would belong to the New Age.
From current scholarship it seems evident that the multitudinous currents of twentieth-century New Age share certain features, each supporting the other. First is the willingness to move from quantum physics as science to a metaphorically linked domain of strongly held worldviews—worldviews that tend toward a comprehensive overcoming of the troubles of conventional modernity.
There are overstatements and logical leaps in all this, for example imputing to Descartes an attainment of certainty far greater than any of which he would have approved. Then there is the willingness to rely on an appeal to an undifferentiated and problematic category of experience, held up to transcend the religion-science gap so long as the religion is non-Christian, a view stemming from an anticolonial ideology pioneered in so-called Neohinduism.
Second, it is evident that this outlook mixing spiritual and worldly attitudes readily gives rise to what historian Christoph Bochinger terms a new professional group, “the group of secular mediators of religious contents.” Thus one encounters figures like physicist turned spiritual teacher Fritjof Capra, author of The Tao of Physics. Similar secular credentials and a similar role describe the stance of Professor Woodhouse.
Third, New Age writing on science even at a high theoretical level is just as likely as its crasser relatives to contain myths of history, the millennium, the apocalypse, or some kind of big, overall, perceptible transformation or so-called “Turning Point”—both the stick and the carrot, hell and heaven, catastrophe and utopia, just as with any typical religious big narrative.
Whether accentuating the bad news or the good, New Agers can claim to offer a multifaceted account of what is really going on, an adequate truth. In a time of pluralism, relativism, postmodernism, distrust of institutions, and voter disaffection, New Agers using variations on their theme of vibrational advance to universal harmony claim to offer scientifically grounded yet religiously satisfying (and comfortably anti-institutional) explanations—a truth progressive and yet ancient, predicting eventual convergence and multidimensional transformation.
Obviously, the variability of the New Age message extends to its view of the difficulties believed to lie in the foreseeable future. While some scholarship suggests that most New Age scripts do not expect so grim a set of tribulations as do the grimmer of the American dispensationalists, there is no shortage of New Age scripts calling for shifts in plate tectonics, invasion by aliens with or without the assistance of human traitors high up in governments, and assorted calamities.
By this juncture, one point is evident: New Age is a term that denotes a variable category of worldview. It encompasses metaphysical, cosmological, historical, anthropological, medical, and metapolitical works, along with fiction and prophecy. New Age materials, though rooted in currents from antiquity, the Renaissance, the Enlightenment, and the nineteenth century, in their mature form represent a cultural or spiritual and lifestyle (rather than explicitly political) continuation of the activist counterculture of the 1960s and 1970s, a theme explored at length by historian Christoph Bochinger.
Thus, although New Age material is exceedingly varied and although it pays attention to astrological theories of time from the nineteenth and early twentieth century and to distinctive themes from Swedenborg and William Blake, in recent practice it constitutes a kind of attenuated prolongation of the imaginative and critical impulses of the Vietnam generation among the educated young. As such it provides an optimal field of potential examples of critical and alternative-seeking thought and diagnosis in and about real-world millennial America. In material from this genre, we can find as good a field as possible for sounding the depth and nature of American counterimaginings and dream relocatings today as anywhere else. The darker and lighter sides of contemporary cultural pessimism are as likely to emerge in this domain of investigation as anywhere.
New Age materials build on a foundational sense that official institutions and worldviews have failed in realizing the promise of modernity and the Enlightenment, so that therefore help needs to come from exotic quarters, such as Tibet, meditation, crystals, alternative science, and (with Immanuel Velikovsky or Zecharia Sitchin) the revival of officially rejected catastrophist hypotheses about prehistory.
The sense of deep crisis and catastrophe affecting the physical as well as the cultural and political world, together with the attendant need for extreme forms of hope, deeply stamps New Age materials and of course connects them with cultural pessimism in general—though cultural pessimism certainly need not involve alien implants, mysterious rays, and impending sudden shifts in plate tectonics." [Shuck, Stroup; Escape into the Future]
"Typically these documents have portrayed the millennial American self as coming under strong, varied, and foundationally significant attack, an essential and persistent threat connected with an intensifying sense of impotence and disorientation on the part of individuals, a sense that never fully leaves even once the protagonist undergoes a kind of negative illumination. The nature of the danger here usually registers as fatalistic, deterministic, mechanistic, cyborg-information-technological, or even divinely scripted: a sense of utter powerlessness bordering on a sense of loss of meaning (or leading directly to it, as with Fight Club). And while some characters acclimate themselves and turn toward activism, the overall outlook for a reconstitution of society and identity along precrisis lines appears grim in the pessimistic genre as a whole.
We have hitherto uncovered a variety of perceived underlying sources for this impotence, this complex of popular cultural threats to any continuing American sense of morally responsible choice with regard to what passes for the human self as an agent increasingly deprived of traction. Portrayed is a self in crisis and plunged into crises. Typical threats include the conspiratorial manipulation of the population’s DNA by government scientists collaborating with powerful and exceedingly malignant aliens aiming at the enslavement or even extermination of the human race; the apocalyptic scripting carried out by an omnipotent God obsessed with torturing his conscripted human actors in an eternally unvarying drama enacted for his good pleasure by his helpless but allegedly responsible mortal pawns; and the info-techno redefinition of human beings as replicants and cyborgs infinitely fungible and mutable, existing ultimately to serve the financial well-being of a transnational elite of exceedingly wealthy and select leading shareholders acting in concert with the purportedly democratic government of the United States.
Similar perceptions of deep crisis shape many New Age documents.
Thus we turn to New Age proposals for a revision of positivist science, such as the one embedded in Woodhouse’s book. Such proposals understand themselves as scientific, so they do not exclude verifiability and intersubjective checking on principle—indeed, since they are part of a movement that has embraced scientific language and models, they call for it. (To be sure, some New Age theoreticians do call upon subtle extra senses for additional monitoring of data.) The most interesting difference lies elsewhere. It is rather that the use or misuse of quantum metaphors, scalar and vibratory paradigms, resonance language, and the like is with New Age writers set within a script or narrative of human labor, suffering, and elite victory, a script far more overtly religious or meaning-laden than the carefully pruned script of human evolution and progress fostered in positivistic official science. The conventional and austere drama of human advance in mainstream science, on the other hand, is often hardly a drama at all—more a kind of neutral wisdom that sometimes seems maddeningly devoid of human satisfaction or point.
On the other hand, the New Age script chronicling scientific progress is far vaster and extends much further forward and backward in time. Its heroes and villains can include cosmic entities, interdimensional beings, helpers and hinderers from distant universes, times, dimensions—the flavor of the whole thing is redolent of Hellenistic Alexandria, or perhaps a bit reminiscent of Renaissance Florence. It is pretty well the opposite of the atmosphere of the Scottish Enlightenment! New Age revisions of science are full of a sense of mission to limited and suffering intelligences (ours, to be exact) and must be read as such—as Steve Bruce puts it, “New Age science has more in common with religion than with science.”
Like the ancient spirituality of Hellenistic Gnosticism, which specialist Eduard Schweizer argued was a response to the failures of earlier politics and religion in the Hellenistic world, with the gods of the polis inaccessible and imperialism making government less and less accountable, so also now, it might be held, does a religioscientific worldview attempt to overcome the material world. New Age does this not so much by a gnostic escape of the soul from the body as by a thoroughgoing transcending of matter itself. This it achieves not only by techniques of meditation and channeling but by invocation of quantum formulae of indeterminacy in which the dualisms of Plato and Descartes dissolve along with the modern reliability of the world altogether. In this sense, at least, political and economic failures are overcome in the rhetorical world of the New Age.
Woodhouse on science will sound familiar—the stress on quantum thinking and energy, the distrust of mere matter. The villain is “mechanistic science.” The solution is “vibrational modeling,” “frequency modeling,” and attention to “deep patterns” of “resonance, harmony, or interference” rather than “changes in the spatial configuration of discrete underlying things.”
Along with this goes a kind of obsession with “the quantum connectedness of distant particles,” that is, Bell’s theorem (the thesis “that the predictions of quantum mechanics [QM] differ from those of intuition”) taken to delegitimate local causation (in the sense that “deep reality” is asserted to be of necessity “non-local”). For Woodhouse Bell’s theorem is important because he thinks it “may force us to acknowledge domains outside of conventional space time to account for presumed action-at-a-distance exceeding the speed of light.” For Woodhouse all this advances “a convergence of science and spirituality” based on the work of the physicist David Bohm. This Woodhouse unfolds as “Energy Monism,” which “views energy and consciousness as dual aspects of a neutral ground.” Here he finds himself allied with Buddhism, Taoism, and Spinozism, not to mention Carl Jung, Carlos Castaneda, Paracelsus, Wilhelm Reich, and Swedish radiologist Björn Nordenström, all of whom he invokes leading up to affirmation of a “bipolar” oscillating life “energy or ch’i visible” to “psychically sensitive individuals.”
Now such a revision of science makes sense only within a metanarrative once more invoking distant intelligences and a mighty process of human transformation into “a new human species . . . Homo noeticus,” “on the planet after the millennium manifesting greater wisdom, love, empowerment, and psychic awareness.” A new science and a new consciousness are required if we are to “enter into constructive relations . . . with positively oriented extraterrestrials” while being able to “live at peace” and “find our place in the larger galactic community.” As more people grow in consciousness, Woodhouse finds cause for hope in objective data: sensitive individuals report what is confirmed by “telepathic transmission” from friendly aliens “indicating their willingness to assist in human progress at this ‘evolutionary turning point.’”
Woodhouse marshals the Bilderbergers, Nazi occultists, the Templars, the Illuminati, alien genetic experimentation in ancient Sumer, and ancient advanced civilizations possessing atomic energy, all in support of his grand alternatives by way of conclusion:
It is the immense and ancient power of the [alien-derived—JS/GS] knowing elite . . . that has sought to usurp and control virtually every major movement toward the development of full human potential, from long before early Christianity to the New Age. Since it has been demonstrated that this knowledge—or view of the world—is still tightly held within the inner sanctums of the secret societies, there appear to be but three possibilities: the small inner elite continues to accumulate wealth and power in the hope of contacting our ancient creators (nonhuman intelligences); or they have already achieved such contact and are being guided or controlled; or they are the ancient creators. . . .
However “extreme” his strategies may be, and however empirically or realistically conditions may be envisioned, in the New Age variant the strategies of rescue ordinarily do far more, it would appear, to siphon off discontent and keep hope alive within the system of transnational capitalism than they do to foster overt revolt—which, indeed, they more often than not tend to retard." [Shuck, Stroup; Escape into the Future]
"Intelligent design presents itself as a respectable scientific alternative to Darwinian evolution and natural selection. Hence, although they intend to exclude intelligent design altogether, philosophical critiques emphasizing naturalism in science have paradoxically given intelligent design a measure of intellectual legitimacy despite its overwhelming scientific failure. Too often, intelligent design has become a philosophical perspective to be debated in typically inconclusive fashion, with only passing reference to the decisive answers from mainstream science.
The integrity of science education is best supported by presenting the successes of actual science rather than highlighting philosophical attempts to define the boundaries of proper science. Intelligent design, like older versions of creationism, is not practiced as a science. Its advocates act more like a political pressure group than like researchers entering an academic debate. They seem more interested in affirming their prior religious commitments than in putting real hypotheses to the test. They treat successful scientific approaches—for example, a preference for naturalistic explanations—as mere prejudices to be discarded on a metaphysical whim.
Pointing out such dubious practices in the intelligent-design camp must remain an important part of any critique. It is even more important, however, to show how mainstream science explains complexity much more successfully, even without invoking a mysterious intelligent designer. We know how Darwinian mechanisms generate information. We know how evolutionary biology fits in with our modern knowledge of thermodynamics. We know that computer science and information theory give creationism no comfort. In the end, scientists reject claims of intelligent design because of their failures, not because intelligent design is indelibly stamped with a philosophical scarlet letter.
Intelligent Design is the successor to old-fashioned creationism but dressed in a new coat—its hair cut, its beard trimmed, and its clock set back 10 or 15 billion years. It is nevertheless a hair's-breadth away from creationism in its insistence that everyone is wrong but its proponents, that science is too rigid to accept what is obvious, and that intelligent-design advocates are the victims of a massive conspiracy to withhold the recognition that their insights deserve.
Creationism, though very popular in its young-earth version, has failed as a strategy for introducing religious beliefs into the science curriculum. Enter neocreationism, or intelligent design. Not as obviously a religious conceit as creationism, intelligent-design creationism has made a case that, to the public, appears much stronger. Pertinently, its proponents are sometimes coy about the identity of their designer. They admit to the age of the earth or set aside the issue, and some even give qualified assent to pillars of evolutionary theory, such as descent with modification. They have therefore been able to feign a scientific legitimacy that creationism was never able to attain.
This aura of legitimacy has enabled the proponents of intelligent design to appeal to the public's sense of fairness and ask that intelligent design be added to school curricula, alongside Darwinian evolution, as an intellectually substantial alternative. Intelligent design, however, has found no support whatsoever from mainstream scientists, and its proponents have not established a publication record in recognized and peer-reviewed scientific journals. They have nevertheless raised a significant sum of money and embarked on a single-minded campaign to inject intelligent design into the science curriculum.
Biblical literalism, in its North American form, took shape in the 1830s. One impetus was the attack on slavery by religious abolitionists. Slave owners or their ministers responded by citing biblical passages, notably Genesis 9:24–27, as justification for enslaving black people: “And Noah awoke from his wine, and knew what his younger son [Ham, the supposed ancestor of black people] had done to him. And he said, Cursed be Canaan [son of Ham], a servant of servants shall he be unto his brethren.... Canaan shall be [Shem's] servant ... and [Japheth's] servant” (King James version).
At about the same time, the millennialist strain in Christianity began a resurgence in Britain and North America. This movement, the precursor of modern fundamentalism, also stressed the literal truth of the Bible (Sandeen 1970). Most millenarians and their descendants, however, adjusted their “literal” reading of Genesis to accommodate the antiquity of the earth. Some accepted the gap theory: that God created the heavens and the earth in the beginning but created humans after a gap of millions or billions of years. Others accepted the day-age theory, which recognized the days mentioned in Genesis as eons rather than literal 24-hour days. There was, therefore, no contradiction between science and their religious beliefs. Many evangelical thinkers went as far as to accept not only an old earth but even biological evolution, provided that evolution was understood as a progressive development guided by God and culminating in humanity (Livingstone 1987).
Evolution education did not become a fundamentalist target until the early twentieth century. Then, in the aftermath of the Scopes trial, literalist Christianity retreated into its own subculture. Even in conservative circles, the idea of a young earth all but disappeared (Numbers 1992).
The pivotal event behind the revival of young-earth creationism was the 1961 publication of The Genesis Flood, co-authored by hydraulic engineer Henry M. Morris and conservative theologian John Whitcomb. Morris resurrected an older theory called flood geology and tried to show that observed geological features could be explained as results of Noah's flood. In Morris's view, fossils are stratified in the geological record not because they were laid down over billions of years but because of the chronological order in which plants and animals succumbed to the worldwide flood. To Morris and his followers, the chronology in Genesis is literally true: the universe was created 6000 to 10,000 years ago in six literal days of 24 hours each. With time, Morris's young-earth creationism supplanted the gap theory and the day-age theory, even though some denominations and apologists, such as former astronomer Hugh Ross, still endorse those interpretations (Numbers 1992, Witham 2002).
Creationists campaigned to force young-earth creationism into the biology classroom, but their belief in a young earth, in particular, was too obviously religious. A few states, such as Arkansas in 1981, passed “balanced-treatment” acts. Arkansas's act required that public schools teach creation science, the new name for flood geology, as a viable alternative to evolution. In 1982, Judge William Overton ruled that creation science was not science but religion and that teaching creation science was unconstitutional. Finally, the 1987 Supreme Court ruling, Edwards v. Aguillard, signaled the end of creation science as a force in the public schools (Larson 1989).
The intelligent-design movement sprang up after creation science failed. Beginning as a notion tossed around by some conservative Christian intellectuals in the 1980s, intelligent design first attracted public attention through the efforts of Phillip Johnson, the University of California law professor who wrote Darwin on Trial (1993). Johnson's case against evolution avoided blatant fundamentalism and concentrated its fire on the naturalistic approach of modern science, proposing a vague “intelligent design” as an alternative. Johnson was at least as concerned with the consequences of accepting evolution as with the truth of the theory.
In 1996, Johnson established the Center for Science and Culture at the Discovery Institute, a right-wing think tank. In 1999, the center had an operating budget of $750,000 and employed 45 fellows (Witham 2002, 222). Johnson named his next book The Wedge of Truth (2000) after the wedge strategy, which was spawned at the institute. According to a leaked document titled “The Wedge Strategy” (anonymous n.d.), whose validity has been established by Barbara Forrest (2001), the goal of the wedge is nothing less than the overthrow of materialism. The thin edge of the wedge was Johnson's book, Darwin on Trial.
The wedge strategy is a 5-year plan to publish 30 books and 100 technical and scientific papers as well as develop an opinion-making strategy and take legal action to inject intelligent-design theory into the public schools. Its religious overtone is explicit: “we also seek to build up a popular base of support among our natural constituency, namely, Christians.... We intend [our apologetics seminars] to encourage and equip believers with new scientific evidence's [sic] that support the faith” (anonymous n.d.). Johnson remains a leader of the movement, although he is the public voice of intelligent design rather than an intellectual driving force. That role has passed to Michael Behe, William Dembski, and others. As the wedge document acknowledges, however, reaching beyond conservative Christian circles has been a problem. Success evidently requires a semblance of scientific legitimacy beyond lawyerly or philosophical arguments." [Matt Young, Taner Edis; Why Intelligent Design Fails: A Scientific Critique of the New Creationism]
"Intelligent design is not bad science like cold fusion or wrong science like the Lamarckian inheritance of acquired characteristics, although it probably lies farther along the same continuum. Criticizing it gives it no more scientific legitimacy than… the magazine Skeptical Inquirer gives to quack medicine when it exposes such practices as phony.
Looking for the footprints of the deity is not necessarily unscientific. What is unscientific is to decide ahead of time on the answer and search for God with the determination to come up with a positive result. That is precisely what William Dembski, Michael Behe, and other ID advocates seem to be attempting. Knowing the answer in advance and being immune to contradictory evidence are typical of pseudoscience.
Perhaps we should be hesitant to use a label such as pseudoscience or crank science; after all, such terms are no longer favored among philosophers of science. It has become increasingly clear (Laudan 1988) that there is no clean way of separating scientific claims from nonscientific ones just by applying principles like falsifiability or methodological naturalism. Additionally, labeling a rival idea as pseudoscientific may well replace real argument with a political attempt to deny it legitimacy.
Nevertheless, we argue that pseudoscience can be a useful term. If the intelligent-design advocates advertise themselves as doing science, even when their practices are far from the customary intellectual conduct of mainstream science, we can and should suspect that intelligent design is not legitimately science.
A pseudoscientist tries to prove that something is true; a good scientist tries to find out whether it is true. This distinction is important.
William Dembski (1998d) forfeits his credibility as a scientist, or ought to, when he says, “As Christians, we know” (14). Sorry, but we don't know. What Dembski ought to say is “As Christians, we hypothesize,” and then go out and test his hypothesis. Instead, he seems to have the answer and therefore only pretends to be searching for it.
Pseudoscientists seem to think that everyone is wrong but them. Indeed, they may dare you to prove them wrong, little realizing that the burden of proof is usually on the person who makes the claim. Often they imply a conspiracy among their opponents to silence them. Many pseudoscientists make grandiose claims and think they are misunderstood geniuses; they compare themselves to Galileo, a man persecuted by the Church for his scientific discoveries (Friedlander 1995). Behe (1996) claims that his thesis of irreducible complexity “must be ranked as one of the greatest achievements in the history of science. The discovery rivals those of Newton and Einstein, Lavoisier and Schrödinger, Pasteur and Darwin” (233). Dembski (1991), at the beginning of his career, had already compared himself to Kant and Copernicus and now claims (1999, 2002b) he has discovered a new law of thermodynamics. The philosopher Rob Koons compares Dembski to Newton (Dembski 1999, jacket blurb).
Intelligent design is the argument from design in new clothing. Advocates of intelligent design, such as Dembski, claim to look for evidence of design, but they simply estimate probabilities and use them to eliminate chance or necessity. They ignore other alternatives and have no positive criterion for identifying a designer; their combination of low probability (often miscalculated) with a dubiously defined and often misused concept of specification provides no real evidence.
It is not scientific." [Mark Perakh and Matt Young, in Young and Edis, Why Intelligent Design Fails]
"Many ID proponents not only sport Ph.D.s but have also done research in disciplines such as mathematics, philosophy, and even biology. They disavow overly sectarian claims, steering away from questions such as the literal truth of the Bible. And instead of trafficking in absurdities like flood geology, they emphasize grand intellectual themes: that complex order requires a designing intelligence, that mere chance and necessity fall short of accounting for our world (Moreland 1994, Dembski 1998a, Dembski 1999, Dembski and Kushiner 2001). They long to give real scientific teeth to intuitions about order and design shared by diverse philosophical and religious traditions.
Stepping outside the western debate over evolution may help us put ID into perspective. Islam has lately attracted much attention as a resurgent scripture-centered faith in a time of global religious revival. It appears to be an exception to the thesis that secularization is the inescapable destiny of modernizing societies, and it impresses scholars with the vitality of its religious politics.
Less well known, however, is the fact that the Islamic world harbors what may be the strongest popular creationism in the world and that the homegrown intellectual culture in Muslim countries generally considers Darwinian evolution to be unacceptable. In Islamic bookstores from London to Istanbul, attractive books published under the name of Harun Yahya appear, promising everything from proof of the scientific collapse of evolution (Yahya 1997) to an exposition that Darwinism is fundamentally responsible for terrorist events such as that of 11 September 2001 (Yahya 2002).
In other words, a kind of diffuse, taken-for-granted version of ID is part of a common Muslim intellectual background. The grand themes of ID are just as visible in the anti-evolutionary writings of Muslims who have more stature than Yahya. Osman Bakar (1987), vice-chancellor of the University of Malaya, criticizes evolutionary theory as a materialist philosophy that attempts to deny nature's manifest dependence on its creator and throws his support behind the endeavor to construct an alternative Islamic science, which would incorporate a traditional Muslim perspective into its basic assumptions about how nature should be studied (Bakar 1999). His desire is reminiscent of theistic science as expressed by some Christian philosophers with ID sympathies, which includes a built-in design perspective as an alternative to naturalistic science (Moreland 1994, Plantinga 1991). Seyyed Hossein Nasr (1989, 234–44), one of the best-known scholars of Islam in the field of religious studies, denounces Darwinian evolution as logically absurd and incompatible with the hierarchical view of reality that all genuine religious traditions demand, echoing the implicit ID theme that ours must be a top-down world in which lower levels of reality depend on higher, more spiritual levels.
The notion of intelligent design, as it appears in the Muslim world or in the western ID movement, is not just philosophical speculation about a divine activity that has receded to some sort of metaphysical ultimate. Neither is it a series of quibbles about the fossil record or biochemistry; indeed, ID's central concern is not really biology. The grand themes of ID center on the nature of intelligence and creativity.
In the top-down, hierarchical view of reality shared by ID proponents and most Muslim thinkers, intelligence must not be reducible to a natural phenomenon, explainable in conventional scientific terms. As John G. West, Jr., (2001) asserts:
"Intelligent design ... suggests that mind precedes matter and that intelligence is an irreducible property just like matter. This opens the door to an effective alternative to materialistic reductionism. If intelligence itself is an irreducible property, then it is improper to try to reduce mind to matter. Mind can only be explained in terms of itself—like matter is explained in terms of itself. In short, intelligent design opens the door to a theory of a nonmaterial soul that can be defended within the bounds of science."
Accordingly, ID attempts to establish design as a “fundamental mode of scientific explanation on a par with chance and necessity”—as with Aristotle's final causes (Dembski 2001b, 174).
Intelligence, of course, is manifested in creativity. ID proponents believe that the intricate, complex structures that excite our sense of wonder must be the signatures of creative intelligence. The meaningful information in the world must derive from intelligent sources. The efforts of mathematician and philosopher William Dembski (1998b, 1999), the leading theorist of ID, have been geared toward capturing this intuition that information must be something special, beyond chance and necessity. The grand themes of ID resonate with a Muslim audience: they are found in much Muslim writing about evolution and how manawi (spiritual) reality creatively shapes the maddi (material). This is no surprise, because these themes are deeply rooted in any culture touched by near-eastern monotheism. They have not only popular appeal but the backing of sophisticated philosophical traditions developed over millennia.
Darwinian biology, however, strains this view since it relies on nothing but blind mechanisms with no intrinsic directionality. The main sticking point is not descent with modification or progress but mechanism: chance and necessity suffice; hence, design as a fundamental principle disappears. The grand themes of ID do not require that descent with modification be false, just that mere mechanisms not be up to the task of assembling functional complexity. Technically, Dembski's theories of ID do not require divine intervention all the time. The information revealed in evolution could have been injected into the universe through its initial conditions and then left to unfold (Edis 2001). So there is at least the possibility of some common ground.
For their part, liberal religious thinkers about evolution usually do not treat ID as a religious option worth exploring. One exception is Warren A. Nord (1999), who has included ID among the intellectually substantive approaches he thinks biology education should acknowledge alongside a Darwinian view:
"Yes, religious liberals have accepted evolution pretty much from the time Charles Darwin proposed it, but in contrast to Darwin many of them believe that evolution is purposeful and that nature has a spiritual dimension.... Biology texts and the national science standards both ignore not only fundamentalist creationism but also those more liberal religious ways of interpreting evolution found in process theology, creation spirituality, intelligent-design theory and much feminist and postmodern theology."
Such acknowledgment of ID is notably rare.
Liberal religion not only adapts to the modern world but is, in many ways, a driving force behind modernity. It has embraced modern intellectual life and ended up much better represented in academia than among the churchgoing public. By and large, it has been friendly to science, preferring to assert compatibility between science and a religious vision mainly concerned with moral progress. One result has been a theological climate in which the idea of direct divine intervention in the world, in the way that ID proponents envision, seems extremely distasteful.
The debate over ID easily falls into well-established patterns. ID arose from a conservative background, and conservatives remain its constituency. Its perceived attack on science triggers the accustomed political alignments already in place during the battle over old-fashioned creationism, when liberal theologians were the most reliable allies of mainstream science. What is at stake in this battle is not so much scientific theory as the success of rival political theologies and competing moral visions.
But if science is almost incidental to the larger cultural struggle, it is still crucial to how ID is perceived. In our culture, science enjoys a good deal of authority in describing the world; therefore, ID must present a scientific appearance. Although liberal religious thought has been influenced by postmodern fashions in the humanities and social sciences, resulting in some disillusionment with science, liberals still usually seek compatibility with science rather than confrontation.
So what scientists think of ID is most important for its prospects, more important than its fortunes in the world of philosophy and theology.
One important way for unorthodox ideas to gain a hearing is through scientific criticism. But ID does not seem to be moving forward at all in the scientific world. The movement today looks more like an interest group trying to find political muscle than a group of intellectuals defending a minority opinion. Like their creationist ancestors, they continually make demands on education policy. Similarly, their arguments against evolution do not build a coherent alternative view but collect alleged “failures of Darwinism.” Unfortunately for ID, there is no crisis in Darwinian evolution.
From speculations in physical cosmology (Smolin 1997) to influential hypotheses in our contemporary sciences of the mind, variation-and-selection arguments have come to bear on many examples of complex order in our world. To some, this suggests a universal Darwinism that undermines all top-down, spiritual descriptions of our world (Dennett 1995, Edis 2002), while others argue that the Darwinian view of life is no threat to liberal religion (Ruse 2001, Rolston 1999).
ID, however, is not part of this debate. Darwinian ideas spilling out of biology can only confirm the suspicions of ID proponents that Darwinism is not just innocent science but a materialist philosophy out to erase all perceptions of direct divine action from our intellectual culture. So they have plenty of motivation to continue the good fight. In the immediate future, however, the fight will not primarily involve scientific debate or even a wider philosophical discussion but an ugly political struggle." [Taner Edis, Why Intelligent Design Fails]
"Behe says, in essence, that a system is irreducibly complex if it includes three or more parts that are crucial to its operation. The system may have many more than three parts, but at least three of those parts are crucial. An irreducibly complex system will not just function poorly without one of its parts; it will not function at all. Behe claims that the mousetrap is irreducibly complex—that is, that it cannot function without all of its parts. The statement is entirely wrong.
Irreducible complexity, to Dembski, means that a given part has to be removed with no changes to any of the others. And that points out exactly what is wrong with the concept of irreducible complexity.
In biology, parts that were used for one purpose may be co-opted and used for another purpose. A well-known example is the development of the mammalian ear from reptilian jaw bones. Specifically, Stephen Jay Gould (1993) gives good evidence that bones that originally supported the gills of a fish evolved first into a brace for holding the jaw to the skull and later into the bones in the inner ear of mammals.
Those jaw bones did not just suddenly one day re-form themselves and decide to become ear bones; because mammals did not need unhinging jaws, the jaw bones gradually changed their shape and their function until they became the bones of the inner ear. The ear today may be irreducibly complex, but once it was not. You might, however, be fooled into thinking that the ear could not have evolved if you did not know exactly how it originated.
What is the difference between a mouse and a mousetrap? Or, more precisely, how do mice propagate, and how do mousetraps propagate (Young 2001a)?
Mousetraps are not living organisms. They do not multiply and do not carry within them the information necessary to propagate themselves. Instead, human engineers propagate them by blueprints, or exact specifications (see table 2.1). Each mousetrap in a given generation is thus nominally identical to every other mousetrap. If there are differences among the mousetraps, they are usually not functional, and they do not propagate to the next generation. Changes from one generation to the next, however, may be very significant, as when the designer triples the strength of the spring or doubles the size of the trap and calls it a rattrap.
The genome, by contrast, is a recipe, not a blueprint. The genome tells the mouse to have hair, for example, but it does not specify where each hair is located, precisely as a recipe tells a cake to have bubbles but does not specify the location of each bubble. Additionally, the specifications of each mouse differ from those of each other mouse because they have slightly different genomes. Thus, we could expect a mouse to evolve from a protomouse by a succession of small changes, whereas we can never expect a mousetrap to evolve from a prototrap.
This is so because the mousetrap is specified by a blueprint, the mouse by a recipe. If improvements are made to a mousetrap, they need not be small. It is therefore … Behe … who has erred in using the mousetrap as an analog of an evolving organism, precisely because the parts of an evolving system change as the system evolves.
Behe argues that an irreducibly complex system cannot evolve by small changes. His preferred example is the flagellum, and he asks, in essence, “What good is half a flagellum?” A flagellum without its whiplike tail or without its power source or its bearing cannot work. Behe cannot imagine how each part could have evolved in concert with the others, so he decides it could not have happened. In this respect, he echoes the smug self-confidence of the creationist who asks, “What good is half an eye?” An eye, according to the creationist, has to be perfect or it has no value whatsoever.
This logic is easily debunked (Young 2001b, 59–62, 122–23). As any near-sighted person will tell you, an eye does not have to be perfect in order to have value. An eye does not even have to project an image to have value. Indeed, the simplest eye, a light-sensitive spot, gives a primitive creature warning of an approaching predator.
Biologists Dan Nilsson and Susanne Pelger (1994) have performed a sophisticated calculation to show that an eye capable of casting an image could evolve gradually, possibly within a few hundred thousand years, from a simple eye spot, through a somewhat directional eye pit, to a spherical eye that cannot change focus, and finally to an eye complete with a cornea and a lens. Because the eye is composed of soft tissue, we do not have fossil evidence of the evolution of eyes in this way. Nevertheless, every step that appears in the calculation is represented in some animal known today. The inference that eyes evolved roughly as suggested in the calculation is therefore supported by hard evidence. (See Berlinski [2002, 2003] and Berlinski and his critics [2003a, 2003b] for a surprisingly intemperate attack against Nilsson and Pelger's paper.)
The eye is not irreducibly complex. You can take away the lens or the cones, for example, and still have useful if impaired vision. Nevertheless, it was used for years as an example of a system that was too complicated to have evolved gradually. It is not, and neither is the eukaryotic flagellum (Stevens 1998, Cavalier-Smith 1997) or the bacterial flagellum." [Matt Young, Why Intelligent Design Fails]
"William Dembski (1999) invites us to consider an archer who shoots at a target on a wall. If the archer shoots randomly and then paints a target around every arrow, he says, we may infer nothing about the targets or the archer. On the other hand, if the archer consistently hits a target that is already in place, we may infer that he is a good archer. His hitting the target represents what Dembski calls a pattern, and we may infer design in the sense that the arrow is purposefully, not accidentally, centered in the target.
Using the archer as an analogy, Dembski notes that biologists find genes that are highly improbable yet not exactly arbitrary, not entirely gibberish. That is, the genes contain information; their bases are not arranged arbitrarily.
Dembski calls such improbable but nonrandom genes complex because they are improbable and specified because they are not random. A gene or other entity that is specified and complex enough displays specified complexity. Arrows sticking out of targets that have been painted around them are complex but not specified; arrows sticking out of a target that has been placed in advance are specified.
According to Dembski, natural processes cannot evolve information in excess of a certain number of bits—that is, cannot evolve specified complexity. His claim is not correct. We can easily see how specified complexity can be derived by purely natural means—for example, by genes duplicating and subsequently diverging or by organisms incorporating the genes of other organisms. In either case, an organism whose genome has less than the putative upper limit, 500 bits, can in a single stroke exceed that limit, as when an organism with a 400-bit genome incorporates another with a 300-bit genome (Young 2002).
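The arithmetic behind the gene-incorporation point can be sketched in a few lines. This is a minimal editorial illustration: the 400- and 300-bit figures are the ones used in the passage, and the 500-bit limit is Dembski's putative bound; none of the numbers are real genome sizes.

```python
# Dembski's putative upper limit on naturally evolvable information.
DEMBSKI_BOUND_BITS = 500

host = 400      # bits of "specified complexity" in the host genome (illustrative)
acquired = 300  # bits in the genome of an incorporated organism (illustrative)

# Each genome alone falls below the putative limit...
assert host < DEMBSKI_BOUND_BITS and acquired < DEMBSKI_BOUND_BITS

# ...but a single incorporation event (e.g., lateral gene transfer or
# endosymbiosis) combines them, exceeding the bound in one stroke.
combined = host + acquired
print(combined, combined > DEMBSKI_BOUND_BITS)  # 700 True
```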
Consider a biological compound such as chlorophyll. Chlorophyll provides energy to plant cells, and most (but not all) of life on earth either directly or indirectly depends for its existence on chlorophyll. The gene that codes for chlorophyll has a certain number N of bits of information. Dembski would calculate the probability of that gene's assembling itself by assuming that each bit has a 50-percent probability of being either 0 or 1 (Wein 2002a). As noted in connection with a book by Gerald Schroeder (Young 1998), such calculations are flawed by the assumption of independent probabilities—that is, by the assumption that each bit is independent of each other bit. Additionally, they assume that the gene in question has a fixed length and that the information in the gene has been selected by random sampling, whereas most biologists would argue that the gene developed over time from less-complex genes.
But Dembski makes a more fundamental error: he calculates the probability of occurrence of a specific gene (T-urf13) and also considers genes that are homologous with that gene. In other words, he calculates the probability of a specific gene and only those genes that are closely related to that gene. In terms of the archer analogy, Dembski is saying that the target is not a point but is a little fuzzy. Nevertheless, calculating the probability of a specific gene or genes is the wrong calculation, and the error is exemplified in Dembski's archer analogy. (He makes another interesting conceptual error: On page 292 of No Free Lunch, Dembski (2002b) calculates the probability that all the proteins in the bacterial flagellum will come together “in one spot.” Besides the assumptions of equal and independent probabilities, that is simply the wrong calculation. He should have calculated the probability of the genes that code for the flagellum, not of the flagellum itself.)
Let us do a Dembski-style analysis using the example of chlorophyll. According to the Encyclopedia Britannica, there are at least five different kinds of chlorophyll. There may be potentially many more that have never evolved. Thus, the archer is not shooting at a single, specific target on the wall but at a wall that may contain a very large number of targets, any one of which will count as a bull's-eye. Dembski should have considered the probability that the archer would have hit any one of a great number of targets, not just one target.
Chlorophyll, moreover, is not necessary for life. We know of bacteria that derive energy from the sun but use bacteriorhodopsin in place of chlorophyll. Other bacteria derive their energy from chemosynthesis rather than photosynthesis. If we are interested in knowing whether life was designed, then we have to calculate the probability that any energy-supplying mechanism will evolve, not just chlorophyll. Thus, photosynthesis, chemosynthesis, and all their variants must show up as targets on the wall, as well as other perhaps wholly unknown mechanisms for providing energy to a cell.
I do not think Dembski is arguing that life takes a single shot at a target and either hits it or not; he knows very well that complexity was not born instantaneously. The target is a distant target, and the path is tortuous. But by using his archer analogy, Dembski implies that life is very improbable and the target impossible to hit by accident. It may or may not be: there are more galaxies in the known universe than there are stars in our galaxy. Life has arguably had a great many opportunities to evolve. That it evolved complexity here is no doubt improbable; that it evolved complexity somewhere is very possibly not. Dembski has, in effect, calculated the probability that a certain woman in New Jersey will win the lottery twice, whereas it is more meaningful to calculate the (much higher) probability that someone, somewhere will win the lottery twice.
In terms that Dembski knows very well, his rejection region should have included many more possibilities than just a handful of homologous genes. In my example, the rejection region should have included a target for chlorophyll, a target for bacteriorhodopsin, a target for chemosynthesis, and so on.
Now, it is entirely possible that even such an extended rejection region (on every planet in every universe) will yield a very low net probability, but Dembski has shown no such thing. And he cannot since we do not know just how much of the wall is covered with targets, nor how many arrows the archer has launched to get just one hit, nor how many archers there are in the universe, nor even how many universes there are.
Behe and Dembski have an agenda: to prove that an intelligence guides evolution rather than find out whether an intelligence does so." [ibid.]
"Both Behe and Dembski seem to think that evolution must be shooting at a present-day, preset target from a great distance in the past. Dembski argues that the archer intelligently designs the trajectories of his arrows to hit the target from a great distance, which assumes, for example, that the remote ancestors of birds were evolving toward flight from the start.
Evolution, however, does not make such claims about the origin of complex features. Dinosaurs did not begin by evolving a route to flight, thinking, “If I give up the normal use of my hands now, my descendants will be able to fly.” The archer of evolution shoots its genes only as far as the next generation. How well it succeeds depends on the morphology of the archer: any heritable feature that helps pass on those genes will survive in its offspring. In fact, evolution is much more like shooting arrows and then painting the bull's-eyes, because what is considered a hit is determined by the success of the next generation—and, contrary to Dembski, only after that arrow has been fired.
A system that evolved for one purpose can later be co-opted to serve some other purpose. So numerous are the examples of co-optation that biologists have coined a term for them: exaptations (Gould and Vrba 1982). An exaptation is a feature that originally evolved for one function but is now used for a different function. After a feature has been exapted, it may evolve, or adapt to its new function.
It is a mistake to assume that irreducibly complex systems could not have been assembled from other systems that performed functions other than their current function. It is also unreasonable to expect irreducibly complex systems to be limited to the microbiological level of organization; they should exist at the level of the organism as well.
Behe (1996, 41) has been dismissive of the organismal level, arguing that we don't know all the parts of complex organismal systems. But organismal biology and the fossil record are important, because what matters in evolution is not limited to proteins: it encompasses the ability of the whole organism to survive and produce offspring. Accordingly, if we find irreducibly complex systems at the organismal level, we should be able to investigate their structure and evolution just as well as we can in molecular systems, if not better.
Contrary to Behe, I argue that we know the parts of systems at the organismal level at least as well as we do those at the biochemical level. At the organismal level, we have the additional advantage of a historical record for those structures and functions; thus, we can observe how they were assembled for different functions and only exapted later and canalized, or developmentally and structurally locked, into a seemingly irreducible form for their current function.
Evolution often works through exaptation, assembling systems from disparate parts and not following a plan. Darwin (1859) himself realized that selective extinction removed transitional stages. Thus, when the system lacks a historical (fossil) record, it may be impossible to see how such a system evolved, and the system may actually appear unevolvable.
Paleontologists like to say that evolution erases its history. If you were to look only at the present-day world, you would have a hard time telling how different groups of animals are related because some groups, such as birds, turtles, and whales, have evolved morphologies so different from other animals that it is hard to compare them or to reconstruct the morphological path they took to get to their current state.
Evolution may have erased much of its history in living organisms, but the fossil record preserves some of the documents; with these documents, we can reconstruct the evolution of many groups of organisms. The avian flight system provides an excellent example of how an irreducibly complex system evolves by small steps through selection for different functions, a series of functional and morphological changes detailed in the fossil record of dinosaurs." [Alan Gishlick, in Young and Edis, Why Intelligent Design Fails]
"Stigmergic accounts of nest construction recognize that there are many construction pathways to a nest of a given general shape. The pathway actually taken (which wasp does what in response to local cues and when) reflects both the internal states of the wasps and the many chancy, unpredictable contingencies associated with the actual construction.
The result of this dumb process is a complex structure that gives the misleading appearance of being intelligently designed. Here is how it happens.
Hexagonal cells are the basic unit of the wasp comb. Hexagonal cells are a very efficient way to fill a two-dimensional space and also very economical. But how, exactly, do these regular structures emerge? Detailed observations show that hexagonal forms are a predictable by-product of cell-building activity and do not require any higher-level rule or information (West-Eberhard 1969, Karsai and Theraulaz 1995).
The hexagonal cells emerge from wasps' attempts to make conelike structures. When a wasp lengthens a given cell, it also tries to increase its diameter. Imagine that the wasp builds a cone by adding a small quantity of material to the lower edge of the cone. Several cones are linked, however; and if the wasp detects another cell adjacent to the cell it is building, it slightly modifies its posture and does not extend the cell in that direction. The result of this behavior can be seen very clearly in the cells that are on the periphery of the comb. They have two or three neighbors, and all sides facing outward are curved (see figure 7.3). Later, when new cells are added to the comb, these outer cells become inner cells and are turned into hexagonal cells. The hexagonal shape emerges without a blueprint, as a result of a simple building rule that is based only on local information. The hexagonal cell is just one of the emergent regular characteristics of the wasp nests. These cells form a comb, which has a definite (generally regular) structure. One of the most common comb shapes is a hexagonally symmetrical shape.
The hexagonally symmetrical comb shape has several adaptive advantages: it requires less material per cell, is better in terms of heat insulation, and, because of its small circumference, can be protected easily. But the adaptive explanation of this compact cell arrangement will not tell us how wasps built this structure. Philip Rau (1929) concluded from his experiment that the hexagonal symmetry is learned. Istvan Karsai and Zsolt Pénzes (1993) analyzed the nest structures and the behavior of wasps and argued that the construction is based on stigmergy.
In a stigmergic type of construction, the key problem is to understand how stimuli are organized in space and time to ensure coherent building. The hexagonally symmetrical structure emerges as a global pattern without deliberate planning. It is a by-product of simple rules of thumb that are triggered on the basis of local information (the wasps do not experience or conceive the shape of the comb).
Karsai and Pénzes (2000) examined several candidate rules of thumb and compared predicted nest forms to natural nest forms. They found that not all of the nest forms in nature have an optimal shape. These suboptimal forms could be explained away as anomalies, or they could be consequences of the rules of thumb. Karsai and Pénzes considered these “faulty” nests to be real data and the inevitable consequence of the rule of thumb actually used.
The rule can be described in functional terms as follows: construct a new cell at the site where the summed age of the new cell's neighbors is maximal. This rule gives rise to the maximum age model.
Karsai and Pénzes showed that a beautiful, regular, and adaptive structure emerges even if the builders are unaware of this regularity. The builders follow simple rules. As the nest grows and changes during construction, the nest itself provides new local stimuli to which the rule-following builders respond. As the builders respond to changing local stimuli, a globally ordered structure emerges. It is as if the developing nest governs its own development; the builders are only the tools. The wasps do not literally track and sum the ages of cells when they decide where to build. In fact, several parameters correspond to the age of a cell: cells become longer and wider as they age, and they absorb more chemicals. These constitute the local information that the wasps can sense.
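The maximum-age rule lends itself to a toy simulation. The sketch below is a minimal illustration of the idea, not Karsai and Pénzes's actual model: it grows a comb on a hexagonal lattice by repeatedly adding a cell at the empty site whose occupied neighbors have the greatest summed age, then aging every cell by one step. The tie-breaking order (first by wall count, then toward the seed, then lexically) is an arbitrary modeling choice of this sketch.

```python
# Hexagonal lattice in axial coordinates; the six neighbour offsets.
DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def hex_dist(site):
    """Hex-grid distance from the seed cell at (0, 0)."""
    q, r = site
    return (abs(q) + abs(r) + abs(q + r)) // 2

def grow(n_cells):
    """Grow a comb of n_cells cells by the maximum-age rule."""
    ages = {(0, 0): 1}  # seed cell, age 1
    while len(ages) < n_cells:
        # Candidate sites: empty neighbours of existing cells (local info only).
        cands = {(q + dq, r + dr) for (q, r) in ages for dq, dr in DIRS} - set(ages)
        def key(site):
            occ = [(site[0] + dq, site[1] + dr) for dq, dr in DIRS
                   if (site[0] + dq, site[1] + dr) in ages]
            # Maximise summed neighbour age, then number of shared walls;
            # break remaining ties toward the seed, then lexically.
            return (-sum(ages[c] for c in occ), -len(occ), hex_dist(site), site)
        ages[min(cands, key=key)] = 0
        for c in ages:  # every cell ages by one step
            ages[c] += 1
    return ages
```

Tracing the first six placements shows the purely local score closing the full first ring around the seed before any second-ring cell is begun, so `grow(7)` returns the seed plus its six neighbors: compact, ordered growth with no global plan.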
Now that we have explained how the regular hexagonally symmetrical comb shape emerges, it is natural to try to understand how other comb shapes emerge.
Does every shape need a unique rule of thumb? Using the stigmergy approach, Karsai and Pénzes (1998) showed that the variability of comb forms can be deduced from the same construction algorithm. Tweaking a single parameter of the model, the authors generated all forms found in nature and, interestingly, only those. This shows that variability and complexity may emerge in a very simple system in which interacting units follow simple rules and make simple decisions based on the contingencies of local information.
Communities of nest-building wasps are open-dissipative systems. The internal dynamics of these systems is driven by flows of energy through the system and constrained by parameters derived from the environment with which the insects interact. The elaborate, structurally coherent nests are highly improbable forms that could not have arisen by chance. In fact, these orderly, low-entropy structures emerge as the products of interactions between the insects that constitute the nest-building community and their immediate environments. These structures require no intelligent design from outside the system, nor do they require a guiding intelligence, be it a single individual or a collective of individuals, operating within the system. The orderly, complex structures emerge as the consequence of the operation of blind, unintelligent, natural mechanisms operating in response to chancy, contingent, and unpredictable environments." [Young, Edis; Why ID Fails]
"Irreducible Complexity" is the new 'atomism' or another word for God...
"Phillip Johnson is one of the leaders of the intelligent-design movement. His opinion about common descent is stated most clearly in his well-known Darwin on Trial (1993):
“[Creationists'] doctrine has always been that God created basic kinds, or types, which subsequently diversified. The most famous example of creationist microevolution involves the descendants of Adam and Eve, who have diversified from a common ancestral pair to create all the diverse races of the human species.”
This is an intriguing passage for several reasons. It suggests that humans are a basic kind and were created as such. By switching from basic kind to microevolution, Johnson avoids explicitly stating that humans are a basic kind and thus created but strongly suggests that they are. Johnson's example of basic kind is at the species level and implicitly affirms that the meaning of microevolution is “change within a species.”
If this is Johnson's view, then by implication all species are created, and microevolution is allowed to create only minor modifications within species. Whatever the definition of basic kinds (be it on the species or family level), microevolution by definition produces no new species. Any creation model that ends up with more species than the number of species it started with needs a natural mechanism to produce new species.
We can see that Johnson would love to believe in the special creation of humans:
"We observe directly that apples fall when dropped, but we do not observe a common ancestor for modern apes and humans. What we do observe is that apes and humans are physically and biochemically more like each other than they are like rabbits, snakes, or trees. The ape-like common ancestor is a hypothesis in a theory, which purports to explain how these greater and lesser similarities came about. The theory is plausible, especially to a philosophical materialist, but it may nonetheless be false. The true explanation for natural relationships may be something much more mysterious."
With humor, Johnson remarks that “descent with modification could be a testable scientific hypothesis”.
But suggesting a mysterious cause for natural relationships is not “a testable scientific hypothesis.” Johnson does not propose a nonmaterialist explanation. He evidently does not like the hypothesis of common descent but is unable to find good reasons to reject it and fails to present an alternative.
This is science by personal preference." [Gert Korthoff/Young, Edis; Why ID Fails]
"This claim of evidence for a divine cosmic plan is based on the observation that earthly life is so sensitive to the values of the fundamental physical constants and properties of its environment that even the tiniest changes to any of these would mean that life, as we see it around us, would not exist. The universe is then said to be exquisitely fine-tuned: delicately balanced for the production of life. As the argument goes, the chance that any initially random set of constants would correspond to the set of values that we find in our universe is very small, and the universe is exceedingly unlikely to be the result of mindless chance. Rather, an intelligent, purposeful, and indeed caring personal creator must have made things the way they are.
Some who make the fine-tuning argument are content to suggest merely that intelligent, purposeful, supernatural design has become an equally viable alternative to the random, purposeless, natural evolution of the universe and humankind suggested by conventional science. This mirrors recent arguments for intelligent design as an alternative to evolution.
A few design advocates, however, have gone further to claim that God is now required by scientific data. Moreover, this God must be the God of the Christian Bible. They insist that the universe is provably not the product of purely natural, impersonal processes. Typifying this view is physicist and astronomer Hugh Ross (1995), who cannot imagine fine tuning happening any other way than by a “personal Entity ... at least a hundred trillion times more 'capable' than are we human beings with all our resources.” He concludes that “the Entity who brought the universe into existence must be a Personal Being, for only a person can design with anywhere near this degree of precision”.
The formation of chemical complexity is likely only in a universe of great age. The element-synthesizing processes in stars depend sensitively on the properties and abundances of deuterium and helium produced in the early universe. Deuterium would not exist if the difference between the masses of a neutron and a proton were just slightly displaced from its actual value. The relative abundances of hydrogen and helium also depend strongly on this parameter. They, too, require a delicate balance of the relative strengths of gravity and the weak force—the force responsible for nuclear beta decay. A slightly stronger weak force, and the universe would be 100 percent hydrogen. A slightly weaker weak force would have led to a universe that was 100 percent helium, with no hydrogen to fuel the fusion processes in stars. Neither of these extremes would have allowed for the existence of stars and life as we know it based on carbon chemistry (Livio et al. 1989).
Many religious thinkers see the anthropic coincidences as evidence for a purposeful design of the universe. They ask, How can the universe possibly have obtained the unique set of physical constants it has, so exquisitely fine-tuned for life as they are, except by purposeful design—design with life and perhaps humanity in mind (Swinburne 1998; Ellis 1993; Ross 1995)?
First and foremost, and fatal to the design argument all by itself, is the wholly unwarranted assumption that only one type of life is possible—the particular form of carbon-based life we have here on Earth.
Carbon seems to be the chemical element best suited to act as the building block for the complex molecular systems that develop lifelike qualities. Even today, new materials assembled from carbon atoms exhibit remarkable, unexpected properties, from superconductivity to ferromagnetism. But to assume that only carbon life is possible is tantamount to “carbocentrism,” which results from the fact that you and I are structured on carbon.
Given the known laws of physics and chemistry, we can easily imagine life based on silicon (computers, the Internet?) or other elements chemically similar to carbon. These still require cooking in stars and thus a universe old enough for star evolution. The N1 = N2 coincidence would still hold in this case, although the anthropic principle would have to be renamed the cyberthropic principle or some such, with computers rather than humans, bacteria, and cockroaches the purpose of existence.
Only hydrogen, helium, and lithium were synthesized in the early big bang. They are probably chemically too simple to be assembled into diverse structures. So it seems that any life based on chemistry would require an old universe, with long-lived stars producing the needed materials. Still, we cannot rule out forms of matter other than molecules as building blocks of complex systems. While atomic nuclei, for example, do not exhibit the diversity and complexity seen in the way in which atoms assemble into molecular structures, perhaps they might be able to do so in a universe with different properties and laws.
Sufficient complexity and long life may be the only ingredients needed for a universe to have some form of life. Those who argue that life is highly improbable need to open their minds to the possibility that life might be likely with many different configurations of laws and constants of physics. Furthermore, nothing in anthropic reasoning indicates any special preference for human life or indeed intelligent or sentient life of any sort—just an inordinate fondness for carbon.
The media have reported a new harmonic convergence of science and religion (Begley 1998). This is more a convergence between theologians and devout scientists than a consensus of the scientific community. Those who deeply need to find evidence for design and purpose in the universe now think they have done so. Many say that they see strong hints of purpose in the way in which the physical constants of nature seem to be exquisitely fine-tuned for the evolution and maintenance of life. Although not so specific that they select out human life, various forms of anthropic principles have been suggested as the underlying rationale.
Design advocates argue that the universe seems to have been specifically designed so that intelligent life would form. These claims are essentially a modern, cosmological version of the ancient argument from design for the existence of God. The new version, however, is as deeply flawed as its predecessors were, making many unjustified assumptions and being inconsistent with existing knowledge. One gross and fatal assumption is that only one kind of life, ours, is conceivable in every possible configuration of universes. But a wide variation of the fundamental constants of physics leads to universes that are long-lived enough for life to evolve, even though human life need not exist in such universes.
Someday we may have the opportunity to study different forms of life that evolved on other planets. Given the vastness of the universe and the common observation of supernovae in other galaxies, we have no reason to assume life exists only on earth. Although it hardly seems likely that the evolution of DNA and other details were exactly replicated elsewhere, carbon and the other elements of our form of life are well distributed throughout the universe, as evidenced by the composition of cosmic rays, meteors, and the spectral analysis of interstellar gas.
We also cannot assume that life would have been impossible in our universe had the physical laws been different. Certainly we cannot speak of such things in the normal scientific mode in which direct observations are described by theory. But at the same time, it is not illegitimate, not unscientific, to examine the logical consequences of existing theories that are well confirmed by data from our own universe.
Multiple universes are certainly a possible explanation; but a multitude of other, different universes is not the sole naturalistic explanation available for the particular structure of our universe.
But if many universes beside our own exist, then the anthropic coincidences are a no-brainer. Within the framework of established knowledge of physics and cosmology, our universe could be one of many in a super-universe, or multiverse. Andrei Linde (1990, 1994) has proposed that a background space-time “foam” empty of matter and radiation will experience local quantum fluctuations in curvature, forming many bubbles of false vacuum that individually inflate into mini-universes with random characteristics. Each universe within the multiverse can have a different set of constants and physical laws. Some might have life in a form different from ours; others might have no life at all or something even more complex or so different that we cannot even imagine it. Obviously we are in one of those universes with life.
Although not required to negate the fine-tuning argument, which collapses under its own weight, other universes besides our own are not ruled out by fundamental physics and cosmology. The theory of a multiverse composed of many universes with different laws and physical properties is actually more parsimonious, more consistent with Occam's razor, than a single universe. Specifically, we would need to hypothesize a new principle to rule out all but a single universe. If, indeed, multiple universes exist, then we are simply in that particular universe of all the logically consistent possibilities that had the properties needed to produce us.
The fine-tuning argument and other recent intelligent-design arguments are modern versions of God-of-the-gaps reasoning, in which a God is deemed necessary whenever science has not fully explained some phenomenon. When humans lived in caves, they imagined spirits behind earthquakes, storms, and illness. Today, we have scientific explanations for those events and much more. So those who desire explicit signs of God in science now look deeper, to highly sophisticated puzzles like the cosmological-constant problem. But once again, science continues to progress, and we now have a plausible explanation that does not require fine tuning. Similarly, science may someday have a theory from which the values of existing physical constants can be derived or otherwise explained.
The fine-tuning argument would tell us that the sun radiates light so that we can see where we are going. In fact, the human eye evolved to be sensitive to light from the sun. The universe is not fine-tuned for humanity. Humanity is fine-tuned to the universe." [Victor Stenger/Young, Edis; Why ID Fails]
Whatever Penrose's theories of the [link] may be, when it comes to his theories on consciousness, I think one needs to read him with caution.
There's definitely Spinoza's influence in his [link] - I [link] with the non-computability aspect [consciousness cannot be explained as some digital or computer analog], but the rest sounds fishy:
"The lead article is by Stuart Hameroff and Roger Penrose, who have reviewed and updated their controversial Orch-OR theory of consciousness. Here is the abstract:
The nature of consciousness, the mechanism by which it occurs in the brain, and its ultimate place in the universe are unknown. We proposed in the mid-1990s that consciousness depends on biologically ‘orchestrated’ coherent quantum processes in collections of microtubules within brain neurons, that these quantum processes correlate with, and regulate, neuronal synaptic and membrane activity, and that the continuous Schrödinger evolution of each such process terminates in accordance with the specific Diósi–Penrose (DP) scheme of ‘objective reduction’ (‘OR’) of the quantum state. This orchestrated OR activity (‘Orch OR’) is taken to result in moments of conscious awareness and/or choice. The DP form of OR is related to the fundamentals of quantum mechanics and space–time geometry, so Orch OR suggests that there is a connection between the brain's biomolecular processes and the basic structure of the universe. Here we review Orch OR in light of criticisms and developments in quantum biology, neuroscience, physics and cosmology. We also introduce a novel suggestion of ‘beat frequencies’ of faster microtubule vibrations as a possible source of the observed electro-encephalographic (‘EEG’) correlates of consciousness. We conclude that consciousness plays an intrinsic role in the universe.
And this is how Hameroff (1998) puts it in a peer-reviewed article titled [link]:
Perhaps panpsychists are in some way correct and components of mental processes are fundamental, like mass, spin or charge. Following the ancient Greek panpsychists, Spinoza (1677) saw some form of consciousness in all matter. Leibniz (1766) portrayed the universe as an infinite number of fundamental units (“monads”) each having a primitive psychological being. Whitehead (e.g. 1929) was a process philosopher who viewed reality as a collection of events occurring in a basic field of protoconscious experience (“occasions of experience”). Abner Shimony observed that Whitehead’s occasions were comparable to quantum state reductions-actual events in physical reality. It is possible, in other words, that we are connected to a field matrix and that individual consciousness participates in and contributes to this matrix. There are some metaphysical traditions, particularly eastern ones, that have long made similar assertions and actively encourage adherents to engage with this field matrix, whatever it may be (we currently have only a hypothetical understanding of this proposed field)."
If the leading panpsychist Strawson [link] on Penrose, which I haven't gotten into and so can't comment on conclusively, then there's something that's perhaps not right. Can't be sure if he's testing or [link]…
"1. Mind is the sensible relation of aesthetic qualities. 2. Sense is the intervention of becoming upon what has become. 3. Sense is a continuum of ‘minding’ and relatively mindless perspectives. 4. Sense precedes being, existence, or matter."
The hedonistic eroticization of wisdom that goes under the garb of Pan-psychism or Intelligent Design or Value Ontology or Experientialism, etc. as "philosophy" is more properly termed Erosophy: "A more intimate love of wisdom", found in the title of one [link].
Ainsophy is an erosophy that loves its pre-possessed truth - truth is love is value.
"Eros was the term more closely associated with erotic love. It is a possessive love. It is the love that wants to have its object for itself. The eros lover objectifies his object and subjectifies himself. "She is mine." "Philosophy" is not "erosophy" because the love of wisdom we intend is not one which presumes to have wooed and caught wisdom, like a trophy wife, like an achievement.
Agape is the selfless type of love. It is the love that wants to be had by its object. The agape lover subjectifies her object and objectifies herself. "Take me. I give myself to you." "Philosophy" is not "agaposophy" because the love of wisdom we intend is not one which places wisdom on such a high pedestal that it sacrifices everything else for it.
Storge is the love for your family, which came automatically to you from childhood. It is instinctual allegiance. It is natural bias, unintentional preference. "We are together because we have always been together. I have always loved you. I never had to begin to love you." "Philosophy" is not "storgesophy" because the love of wisdom we intend is not something which comes naturally to us, as though we did not have to be deliberate and patient and thoughtful.
So, we begin to see a picture of that love of wisdom which really is meant by compounding philos and sophia.
Philosophy is not about taking knowledge, as though you could actually attain it and own it, as certain, as a possession of your own.
It is not about giving up your individuality in the search for knowledge, submitting yourself to serve reason, to be mastered by logic.
And it is not something you do not have to will, or to work for, which you have automatically, from birth.
Philosophy is about coming alongside Wisdom (Knowledge, Reason, Logic, etc.) as a peer, rising to the challenge — what is neither Wisdom nor against Wisdom: uncertainty and will."