nettime's_recombinant on Tue, 9 Sep 2003 07:34:11 +0200 (CEST)

<nettime> DNA and computers digest [keim, thacker]

DNA and Computers -- Genetic Reductionism 
     Brandon Keim <>
RE: <nettime> DNA and computers
     "Eugene Thacker" <>

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Date: Mon, 08 Sep 2003 11:52:11 -0400
Subject: DNA and Computers -- Genetic Reductionism 
From: Brandon Keim <>

-- "From: "Eduardo Navas" <>
subject: Re: <nettime> DNA and computers
date: Sat, 6 Sep 2003 03:51:15 -0600


Here is another interesting summary
on Slashdot on DNA, spaghetti code and computers:
which some nettimers may already be aware of, but it is definitely worth
keeping in mind." --

On the Slashdot page is a quote from an NPR discussion of DNA, in which
someone says 'For many, the best analogy for the way DNA works is that it's
like a computer program at the heart of every cell.'

This statement is fairly representative of the mainstream understanding of
DNA, as evidenced so ubiquitously during the 50th anniversary of Watson &
Crick's 'discovery'.  "The building blocks of life" . . . "the code of life"
. . . it goes on and on:  the attribution to DNA of a central role in
organismal development.

This myth has been seized upon in part, I think, because it provides a
ready-made master narrative; it is linear, it is causal, complex but
conceivable.  However, it is wrong.  DNA is a fragment of the whole; it is
not even the most important fragment.  It does not 'replicate itself', and
only very rarely can a clear link be made between even a group of genes, in
a particular environment, and a particular trait.  Understanding the complex
relationships between cellular machinery, of which DNA is a part, the
organismal whole, and the organism's physical and social environment -- and
ultimately the society or ecosystem composed by those organisms -- is not
possible while we concentrate on DNA's mythical central role.

Below are several articles from GeneWatch, the publication of the Council
for Responsible Genetics, which were intended to provide a counterbalance to
the 50th anniversary's revival of reductionism.  One is by Stuart Newman, a
systems biologist who has worked with Jeremy Rifkin on life patent issues
(google them); another by Dick Lewontin, author of 'The Triple Helix'; and
Barry Commoner, one of the founders of modern environmentalism.


The DNA Era

by Richard C. Lewontin


No one who reads the newspapers or scientific journals can have missed the
fact that this is the 50th anniversary of the publication of the correct
three-dimensional structure of DNA. That structure, a double helix of two
chains of nucleotides, has become a popular icon and the very phrase,
"double helix" has been spoken and written so often as to become part of
ordinary discourse.

The fact that genes were composed of DNA had already been established nine
years before the publication of Watson and Crick's paper on its structure,
and the chemical, as opposed to the spatial, configuration of DNA was also
well known before 1953. Yet, despite the obvious importance of DNA in
understanding the molecular details of both heredity and development, it was
not until after the publication of the proposed double helical structure
that DNA started increasingly to occupy the interest of biologists and
finally became the focus of the study of genetics and development. The last
fifty years have seen the reorganization of most of biology around DNA as
the central molecule of heredity, development, cell function and evolution.
Nor is this reorganization only a reorientation of experiment. It informs
the entire structure of explanation of living processes and has become the
center of the general narrative of life and its evolution. An entire
ideology has been created in which DNA is the "Secret of Life", the "Master
Molecule", the "Holy Grail" of biology, a narrative in which we are
"lumbering robots created, body and mind" by our DNA. This ideology has
implications, not only for our understanding of biology, but for our
attempts to manipulate and control biological processes in the interests of
human health and welfare, and for the situation of the rest of the living
world.

The first step in building the claim for the dominance of DNA over all
living processes has been the assignment of two special properties to DNA,
properties that are asserted over and over again, not only in popular
expositions but in textbooks. On the one hand, it is said that DNA is
self-replicating; on the other, that DNA makes proteins, the molecular
building blocks of cells. But both of these assertions are false -- and what
is so disturbing is that every biologist knows they are false.

First, DNA is not self-replicating. It is manufactured out of small
molecular bits and pieces by an elaborate cell machinery made up of
proteins. If DNA is put in the presence of all the pieces that will be
assembled into new DNA, but without the protein machinery, nothing happens.
What actually happens is that the already present DNA is copied by the
cellular machinery so that new DNA strands are replicas of the old ones. The
process is analogous to the production of copies of a document by an office
copying machine, a process that would never be described as
"self-replication". In fact, many errors are made in the DNA copying
process; there is protein proofreading machinery devoted to comparing the
newly manufactured strands to the old ones and correcting the errors. An
office copier that made such mistakes would soon be discarded.
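The copier analogy can be caricatured in a few lines of code. Everything here is invented for illustration (the sequence, the error rate, the function names); the point it makes is the essay's own: the copying and proofreading logic lives entirely in the "machinery" functions, while the template string stays passive.

```python
import random

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def copy_strand(template, error_rate=0.01, rng=random):
    """Naive 'polymerase': complement each base, occasionally misincorporating."""
    new = []
    for base in template:
        if rng.random() < error_rate:
            new.append(rng.choice("ATGC"))  # random misincorporation
        else:
            new.append(COMPLEMENT[base])
    return "".join(new)

def proofread(template, new_strand):
    """'Proofreading' machinery: re-check each new base against the template."""
    return "".join(
        COMPLEMENT[t] if n != COMPLEMENT[t] else n
        for t, n in zip(template, new_strand)
    )

random.seed(1)
template = "ATGCGTACGTTAGC" * 10
raw = copy_strand(template)
fixed = proofread(template, raw)
errors_before = sum(n != COMPLEMENT[t] for t, n in zip(template, raw))
errors_after = sum(n != COMPLEMENT[t] for t, n in zip(template, fixed))
```

Note that the template never copies itself: both functions take it as inert input, which is the asymmetry the paragraph insists on.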

Second, DNA does not make anything, certainly not proteins. New proteins are
made by a protein synthesis machinery that is itself made up of proteins.
The role of the DNA is to provide a specification of the serial order of
amino acids that are to be strung together by the synthetic machinery. But
this string of amino acids is not yet a protein. To become a protein with
physiological and structural functions, it must be folded into a three
dimensional configuration that is partly a function of the amino acid
sequence, but is also determined by the cellular environment and by special
processing proteins that, among other things, may cut out parts of the amino
acid chain and splice what remains back together again.

The other function of DNA is to provide a set of "on-off" switches that are
responsive to cellular conditions so that different cells at different times
will produce different proteins. When the conditions of the cell set a
switch associated with a particular gene to the "on" position, then the
protein manufacturing machinery of the cell will read that gene. Otherwise
the cell will ignore it.
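That switching behavior amounts to a conditional read. A minimal sketch, with invented gene names loosely modeled on the lac operon and the heat-shock response -- a caricature of the idea, not real regulatory logic:

```python
# Toy regulatory model: each "switch" is a predicate over the cell's state;
# the synthesis machinery reads a gene only when its switch evaluates to on.

def lactose_switch(state):
    # loosely modeled on the lac operon: on when lactose is present and glucose is not
    return state.get("lactose_present", False) and not state.get("glucose_present", True)

GENES = {
    "lacZ": lactose_switch,
    "heat_shock": lambda s: s.get("temperature", 37) > 42,
}

def expressed_genes(cell_state):
    """The machinery, not the DNA, decides what gets read."""
    return [name for name, switch in GENES.items() if switch(cell_state)]

expressed_genes({"lactose_present": True, "glucose_present": False})  # -> ['lacZ']
expressed_genes({"temperature": 45})                                  # -> ['heat_shock']
```

The same "genome" (the `GENES` table) yields different readouts in different cellular conditions, which is all the paragraph claims for the switches.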

In this mechanical description of the relation of DNA to the rest of the
cellular machinery there is no "master molecule", no "secret of life." The
DNA is an archive of information about amino acid sequences to which the
synthetic machinery of the cell needs to refer when a new protein molecule
is to be produced. When and where in the organism that information is read
depends on the physiological state of the cells. An organism cannot develop
without its DNA, but it cannot develop without its already existing protein
machinery (unless it is a parasite like a virus that has no synthetic power
of its own but gets a free ride on its host's protein machinery).

The unjustified claim for special autonomous powers of DNA is the prelude to
the next step in building a picture of a DNA-dominated world. This picture
is simply the molecular version of a biological determinism that has
dominated explanations of the properties of organisms, and especially of
humans, since the nineteenth century. Differences in temperament, talents,
social status, wealth, and power were all said to reside "in the blood." The
physical manifestations of these claimed hereditary differences could be
seen by criminal and racial anthropologists in the shapes of noses and heads
and the color of skins. With the rise of Mendelian genetics, genes were
substituted for blood in the explanations, but they remained, for the fifty
years of genetics, merely formal entities with no concrete description
beyond the fact that they were some bit of a chromosome. The discovery that
DNA is the material of the gene, and the subsequent determination of the
correspondence between nucleotide sequences of genes and amino acid
sequences of proteins, then provided a concrete molecular basis for a total
scheme of explanation of the organism. The fact that organisms are built
primarily of proteins and that DNA carries the archive of information for
the amino acid sequence of the proteins gave an immense weight to the
conclusion that the organism as a whole is coded in its DNA. A manifestation
of this view is the claim made, at a symposium in commemoration of the 100th
anniversary of the death of Darwin, by a founder of the molecular biology of
the gene: that if he were given the DNA sequence of an organism and a large
enough computer, he could compute the organism. One is reminded of
Archimedes' claim that, given a long enough lever and a place to stand, he
could move the earth. But while Archimedes may have at least been right in
principle, the molecular biologist was not. An organism cannot be computed
from its DNA because the organism does not compute itself from its own DNA.

It is a basic principle of biology, known to all biologists but ignored by
most of them as inconvenient, that the development of an organism is the
unique consequence of its genes and the temporal sequence of environments in
which it developed. The current fascination of developmental genetics is
with the way in which information from different genes enters into the
formation of the major features of an organism. How does the front end of
the animal become differentiated from the back end? Why does the egg of a
horse develop into an animal with four legs while the egg of a bird produces
an organism with two legs and two wings, and the egg of a butterfly results
in an animal with six legs and two sets of wings? This concentration on the
major differences and similarities between different species has resulted in
a genetically determinist view of development that ignores the actual
variation among individuals. There is an immense experimental literature in
plants and animals showing that individuals of the same genetic constitution
differ widely from each other in physical characteristics if they develop in
different environments. Moreover, the relative ranking in some physical
trait of individuals of different genotypes changes from environment to
environment. Thus, a genetic type that is the fastest growing at one
temperature may be the slowest at another. But even genes and environment
together do not determine the organism. All "symmetrical" organisms show a
fluctuating asymmetry between their two sides and the variation between left
and right sides is often as great as the difference between individuals. For
example, the fingerprint patterns on the left and right hands of a human
individual are not identical; on some fingers, they may be extremely
dissimilar. This variation is the manifestation of random growth differences
that arise from small differences in the local tissue and cell conditions in
different parts of the body, and from the fact that there is random
variation in the number of copies of particular molecules in different
cells. A consequence is that two individuals with identical genes and
identical environments will not develop identically. If we want to
understand human variation, we need to ask far more subtle and complex
questions than is the rule in DNA-dominated biology.
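The rank-reversal point -- a genetic type that is the fastest growing at one temperature may be the slowest at another -- can be made concrete with hypothetical numbers (the genotypes, temperatures, and rates below are all invented):

```python
# Invented norm-of-reaction data: relative growth rates for three genotypes
# raised at two temperatures (degrees C).
growth_rate = {
    "A": {18: 1.9, 25: 1.1},
    "B": {18: 1.4, 25: 1.4},
    "C": {18: 1.0, 25: 1.8},
}

def ranking(temp):
    """Order genotypes from fastest- to slowest-growing at a given temperature."""
    return sorted(growth_rate, key=lambda g: growth_rate[g][temp], reverse=True)

ranking(18)  # -> ['A', 'B', 'C']
ranking(25)  # -> ['C', 'B', 'A']
```

No single ranking of the genotypes exists independent of environment, so asking which genotype is "best" is ill-posed without specifying where it develops.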

The other side of the movement of DNA to the center of attention in biology
has been the development of tools for the automated reading of DNA
sequences, for the laboratory replication and alteration of DNA sequences
and for the insertion of pieces of DNA into an organism's genome. Taken
together, these techniques provide the power to manipulate an organism's DNA
to order. The three obvious implications of this power are in the detection
and possible treatment of diseases, the use of organisms as productive
machines for the manufacture of specific biological molecules, and the
breeding of agricultural species with novel properties.

The Human Genome Project has been largely justified by the promise that it
will now be possible to locate genes that cause human disease by comparing
the DNA sequences of affected and unaffected individuals. Once the
nucleotide difference has been established, that difference can be used as a
diagnostic criterion, as a predictor of a future onset of the disease, and
as a basis for a cure by gene replacement therapy. It is undoubtedly true
that some fraction of human ill health is a consequence of deleterious
mutations. However, while family studies can strongly suggest that a disease
is being inherited as a single Mendelian gene difference, the determination
that it is a consequence of mutation of a particular gene is not a trivial
problem. A blind search for a genetic difference that is common to all
affected individuals is impractical given that, on the average, any two
humans differ from each other at 3 million nucleotide sites. On the other
hand, if the biochemistry of the disease is sufficiently well understood, it
may be that a few candidate genes can be singled out for investigation.
Alternatively, studies of the pattern of inheritance may show that the
disorder is inherited coordinately with an associated gene of known location
in the genome, greatly narrowing down the search for the DNA variation
implicated in the disease.
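A back-of-envelope calculation shows why linkage narrowing matters. The numbers are assumed round figures, not measurements: a ~3.2 billion base-pair genome, the essay's ~3 million differing sites between any two people, and a hypothetical 1 Mb linkage interval.

```python
# Why mapping helps: a blind scan must weigh all differing sites, while a
# linkage interval leaves only the variants expected to fall inside it.
GENOME_BP = 3.2e9          # approximate haploid human genome size (assumed)
DIFFERING_SITES = 3e6      # per the essay: ~3 million sites between two people
MAPPED_REGION_BP = 1e6     # hypothetical 1 Mb linkage interval

candidates = DIFFERING_SITES * (MAPPED_REGION_BP / GENOME_BP)
# roughly a thousand sites to examine instead of three million
```

Even this three-orders-of-magnitude reduction still leaves hundreds of variants, which is the essay's point about the problem being non-trivial.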

As in all other species, for any given gene, human mutations with
deleterious effects almost always occur in low frequency. Hence specific
genetic diseases are rare. Even in the aggregate, genes do not account for
most of human ill health. Given the cost and expenditure of energy that
would be required to locate, diagnose and genetically repair any single
disease, there is no realistic prospect of such genetic fixes as a general
approach for this class of diseases. There are exceptions, such as sickle
cell anemia and conditions associated with other abnormal hemoglobins, in
which a non-negligible fraction of a population may be affected, so that
these might be considered as candidates for gene therapy. But for most
disease that represents a substantial fraction of ill health and for which
some evidence of genetic influence has been found, the relation between
disease and DNA is much more complex and ambiguous. Claims for the discovery
of "genes for" schizophrenia and bipolar syndrome have repeatedly been made
and retracted. It is generally accepted that cancer is a consequence of
mutations in a variety of genes related to the control of cell division, but
even in the strongest individual case, the breast cancer-inducing BRCA1
mutations, only about 5% of such cancers are linked to these specific
mutations.

Up to the present we do not have a single case of a successful cure for a
disease by means of gene therapy. All successful interventions, whether in
genetically simple disorders like phenylketonuria or in complex cases like
diabetes, have been at the level of biochemistry and were in place well
before anything was known about DNA. Of course, a successful gene therapy
for some disease may be produced in the future, but the claim that the
manipulation of DNA is the path to general health is unfounded. In fact, on
a world scale, most ill-health and premature death is caused by a
combination of infectious disease and undernourishment -- factors which
genetic manipulation will never solve.

The second implication, the possibility of using genetically transformed
organisms as factories for the commercial production of biologically useful
molecules, has been realized in practice. The most famous case, the mass
production of human insulin by bacteria, is particularly instructive.
Insulin for diabetics was originally extracted from cow and pig pancreases.
This molecule, however, differed in a couple of amino acids from human
insulin. Recently, the DNA coding sequence for human insulin has been
inserted into bacteria, which are then grown in large fermenters; a protein
with the amino acid sequence of human insulin is extracted from the liquid
culture medium. But amino acid sequence does not determine the shape of a
protein. The first proteins harvested through this process, though they
possessed the correct amino acid sequence, were physiologically inactive.
The bacterial cell had folded the protein incorrectly.

A physiologically active molecule was finally produced by unfolding the
bacterially produced protein and refolding it under conditions that are a
trade secret known only to the manufacturer, Eli Lilly. This success,
however, has a severely negative consequence. For some diabetics this
"human" insulin produces the symptoms of insulin shock, including loss of
consciousness. Whether this effect is caused by a manufacturing impurity, or
because the insulin is not folded in the same way as in the human pancreas,
or because the molecule is simply too physiologically active to be taken in
large discrete doses rather than internal, continuously released amounts
calibrated by a normal metabolism, is unknown.

The problem is that Eli Lilly, which holds the patent on the extraction of
insulin from animal pancreases, no longer produces pig or cow insulin.
Hypersensitive diabetics for whom Eli Lilly's standard treatment is
dangerous no longer have an easily obtainable alternative supply. The most
widely known and contentious application of DNA technology to production is
in agriculture. The introduction of DNA sequences derived from widely
divergent species into agricultural varieties has resulted in a struggle of
immense proportions in both North America and Europe. The proximate purpose
of the creation of varieties with DNA introduced from other kinds of
organisms is to produce agricultural crops with novel features that cannot
be obtained by the usual methods of selection because the relevant genes are
not present in the agricultural species. The benefits to farmers, consumers
and commercial seed producers vary considerably from case to case, although
in every case the ultimate goal of the commercial breeder is increased
profit and the protection of their property rights. There are four cases to
be distinguished. First there is the introduction of pest and disease
resistance, as in the introduction of the BT protein from Bacillus
thuringiensis into maize. This is intended to reduce the labor, chemicals
and machinery needed by the farmer for pest control. Some of the cost
reduction is lost in the higher price of the commercial seed, but saving
labor is important to farmers. Second, there is creation of varieties that
are resistant to herbicides used to control weeds. The best-known examples
are the Roundup Ready varieties produced by Monsanto, designed to coerce
farmers into purchasing Monsanto's general herbicide (Roundup) as well as
their seed. The supposed advantage to the farmer is a reduction in machinery
and labor involved in tillage, but again the cost saving is reduced by the
increased price of seed. The third case is the pure protection of property
rights of the seed producers with no benefit to farmers or consumers. The
most infamous example is the attempted introduction of "Terminator"
technology by the Delta and Pine Land Company, which was later purchased by
Monsanto. Terminator seed varieties germinate and yield a crop whose own seed
is sterile, thus forcing farmers to purchase commercial seed anew every year. (It
should be noted that this technology, of no advantage to farmers or
consumers, was produced in cooperation with the U.S. Department of
Agriculture). The fourth case is the introduction into mass produced field
crops of DNA coding for particular compounds normally only produced by
specialty species. This technology has the potential to destroy much of the
economy of Third World countries that are dependent on the export of
agriculturally produced commodities. An example is the transfer into rape
seed, a widely grown crop in North America, of the DNA coding for palmitic
acid oils that are used in industrial processes. Normally these oils are
extracted from oil palm seed grown in Southeast Asia.

While much of the opposition to transgenic agriculture has been based on the
"unnaturalness" of the process, this objection misses the point. No
agricultural variety is 'natural', but is the product of centuries of
gradual, cumulative genetic modification from its wild ancestors to produce
varieties that are utterly different from the ancestral forms. Moreover,
crosses between different species have been a standard method of plant
breeding for more than a century. The real issue is that DNA technology
provides a powerful tool for the control of agricultural production by
monopolistic producers of the inputs into agriculture with no ultimate
advantage either to farmers or consumers and with the possibility of
destroying entire national agricultural economies. All of the elements that
characterize the era of DNA have in common an underlying simplistic view of
living organisms. By concentrating in practice and in theory on the
properties and functions of a single molecule, biologists, both in their
professional work and in their public statements, reduce the extraordinary
complexity of life processes to the structure and metabolism of DNA. This
emphasis ignores the intricate and multiple ways in which organisms are
built and function. The intricacy is a consequence of the structural and
metabolic functions of proteins and the interactions of those proteins with
each other, with other molecules, and with the environment in the course of
development.

Moreover, for human life, no account at all is taken of the role of social
and economic processes in determining health and life activities and molding
the processes of industrial and agricultural production. We cannot
understand our size, shape and internal functioning except by a detailed
understanding of the extremely complex web of interactions among the various
molecules which form the body in concert with influences exerted by our
environments. We cannot understand the origin and development of our mental
states except by an understanding of the map of nervous connections and how
that map is influenced by experience. We cannot understand why agricultural
technology develops in particular directions if we do not understand the
social, political and economic interactions that drive technological
innovation. The bottom line is that life in all its manifestations is
complex and messy and cannot be understood or influenced by concentrating
attention on a particular molecule of rather restricted function.


Richard C. Lewontin is an evolutionary geneticist, philosopher of science,
and social critic. An early pioneer in the development of molecular
population genetics, his works include Biology as Ideology, The Triple
Helix: Gene, Organism, and Environment, and Not in Our Genes, co-authored
with Steven Rose and Leon Kamin. He is Alexander Agassiz Research Professor
at Harvard University, and regularly writes for the New York Review of
Books.


The Fall and Rise of Systems Biology

by Stuart A. Newman


While this year is the 50th since the discovery of the double helical
structure of DNA, the molecule of which all animal, plant, and bacterial
genes are composed, it is also only a bit more than a century since the very
notion of the gene first entered mainstream biology. Throughout the
twentieth century, increased knowledge of genes and their structure was
supposed to provide an understanding of the material basis of complex traits
of organisms, such as the fact that we humans typically have four limbs with
five digits on each, and hair all over our bodies, while fruit flies have
six legs and two wings, with bodies covered in bristles. Instead, though our
knowledge of the chemistry of cells and tissues has grown enormously in the
past hundred years, we are still at the point (except for the simplest
cases, such as eye color) of only being able to correlate the presence of
alternative versions of a gene in an organism with alternative versions of a
trait. This is all that Gregor Mendel, the originator of the gene concept,
was able to do when he studied traits of the garden pea, such as stem length
and flower position, in the 1860s, four decades before the scientific
mainstream took note of his work.

Apart from their failure to deliver on scientific promises, notions of
genetic reductionism and determinism, in the century just past, provided a
pseudoscientific gloss to divisive conceptions of human capability and
worth. In the last decade or so, similar ideas, linked to increased capacity
for genetic manipulation and computer-aided monitoring of gene activity,
have led to calls to refashion people, food crops, and animals to suit
narrowly-defined needs. However, recent scientific developments have also
afforded the possibility of a more integrated understanding of living
systems, and have led to an appreciation of the poor theoretical basis for
attempts to explain and manipulate complex traits at the level of the genes.


The reason why knowledge of genes -- no matter how detailed our sequencing of
the human genome may be -- cannot provide an understanding of an organism's
significant traits, its shape and form, its behaviors, and so forth, is that
such traits are generated during the organism's embryonic development or
later life by systems of interactions across many scales. Genes, and
particularly the RNA and protein products specified by DNA sequences, are
only a subset of the components of such systems. Moreover, these systems
have physical, as well as chemical, properties.

In the last few years, as it has become clear that the emerging human genome
sequence would not provide -- as had been promised by Human Genome Project
(HGP) administrators and scientists -- a "blueprint" or "Book of Life"
describing what it means to be human, there has been increasing discussion
of "systems biology" in the scientific literature. The new standard view, in
a turnabout by principal spokespersons of the HGP, is that an organism's
genome is only a "parts list." The real work, and real understanding, will
come only as we begin to learn how all the parts interact to generate
organismal traits.

In some ways this is a positive development, but the term "systems biology"
can be understood in different ways, not all of which represent a great deal
of conceptual or practical progress. For those with a continued stake in a
reductionist account of living organisms, the systems in question are just
collections of interacting components of a well-defined single type. Thus,
one now often hears that organisms are "systems of interacting genes" or, in
recognition of new knowledge that one gene may specify many different
proteins(1), "systems of interacting proteins." Advocates of single-level
explanations include theorists disappointed that deciphering full genomes
did not yield the expected revelations, and, importantly, corporate
stakeholders who would like to believe that a patentable entity -- a gene, a
protein, a drug that affects a metabolic step -- has a unique causal
relationship to a biological function or trait, such as blood pressure,
obesity, or depression.

Systems biology can also be understood in a much more integrative sense as
multiscale, multilevel explanations of organismal properties. Organisms
contain many different kinds of complex systems. One of these is metabolism:
the network of chemical transformations that provide cells with building
blocks for large molecules like DNA, proteins, and polysaccharides, and
permit them to extract usable energy from cellular fuel sources. Another is
the genetic network that operates during embryonic development, in which the
products of genes active at particular developmental stages induce or
repress the activity of other genes, in a sequential fashion, so as to allow
the embryo to advance to successive stages. Signaling networks are systems
by which small molecules modulate the rates of cellular processes and permit
the coordination of other systems, such as the metabolic and genetic
networks just described. The brain's neural network is still another system,
one in which the electrical activities of billions of nerve cells
are coordinated by the mutual exchange of chemical signals. Each of these
biological systems, and all others, has an evolutionary history, in which
not only its particular internal character, but its relationship to other
systems, has undergone change. This adds a further complexity: relationships
between the same systems in different organisms will not always be the same.

Multileveled, multiscale approaches have long characterized the
nonbiological sciences. When a chemist wants to understand a chemical
reaction rate -- how fast carbon dioxide and water are formed when propane is
burned, for example -- she is not only concerned with what molecules are
involved, but with the particular values of external physical parameters
such as temperature and pressure. An astronomer's assessment of the nature
of an event in a distant galaxy draws simultaneously on his knowledge of
physical systems on the scale of light years as well as systems at the
submillimeter scale. No significant phenomenon in nature can be accounted
for in terms of a single process measured on one scale of space or time. The
twentieth century notion that genes represent a privileged level of
explanation of the development and evolution of organismal traits is
therefore a fantasy, and a distraction from the development of biology as a
science.

Biological systems are currently the subject of profuse scientific efforts
involving mathematical modeling, computer simulation, and experimental
analysis. Many techniques are used to study a given system, and many systems
enter into the comprehension of any biological phenomenon -- cell division, or
an animal's feeding behavior, for example. This work is only at its
beginning stages. But understood in this sense, the new systems biology
represents a return of biology to the world of the other sciences after a
century-long focus on genetic mechanisms which, in its latter half, became a
veritable DNA binge.


Observers of the current DNA celebrations might find it surprising that
throughout the eighteenth and nineteenth centuries, biologists, despite
their ignorance of genes, saw themselves as active participants in a vibrant
scientific culture that had produced laws of mechanical stasis and motion,
the atomic theory of chemical transformations, principles of conservation
and dissipation of energy, and an understanding of electricity and
magnetism. Advances had been made in describing the microscopic structure of
cells and the macroscopic organization of tissues, such as bones and
muscles. The growing recognition that life on earth has an evolutionary
history, a notion first introduced in the modern era (the ancient Greeks had
their own version) by Lamarck in 1800, and later provided with the plausible
mechanism of natural selection -- not dependent on any particular concept of
genes -- by Darwin in 1859, led to scenarios of organismal transformation and
diversification that remain sound to this day. And the visible evidence that
complex organisms take their form in each generation by a sequence of steps
beginning with a single fertilized egg, gave rise to a descriptive and
experimental developmental mechanics of cells and tissues, for which modern
genetics has provided molecular correlates, but has not replaced.

A characteristic of most European science before the twentieth century was
that, while the preeminence of matter and its laws of motion was
acknowledged, how matter wound up assuming particular configurations and
arrangements was still a mystery. The matter described by Isaac Newton, the
great codifier of the science of mechanics, is inert. Although the motions
of billiard balls and planets are governed by mathematically precise laws,
the outcome of such motion is entirely dependent on the initial preparation
of the system -- the arbitrarily given starting position and velocity of each
particle. In order for the matter in a many-body system to become organized
in a complex fashion it would have to be 'set up' in an appropriate way.
This is why Descartes, Newton, and the other founders of the mechanistic
worldview could simultaneously be physical determinists and religious
believers: God, they opined, was in the initial conditions.

Biology in the nineteenth century developed fully within this same
tradition. Along with their recognition that living organisms were composed
of the same atomic constituents found in nonliving nature (the chemist F.
Wöhler synthesized the biological molecule urea from inorganic materials in
1828, for instance), biologists rightly noted that the arrangement and
organization of molecules in living systems was not automatically dictated
by their identity. In this regard, they were following the eighteenth
century philosopher Immanuel Kant, who dismissed the hope that the
principles upon which organisms were constructed could be derived solely
from causal analysis based on physical science(2).

On the larger scale of organismal construction -- the arrangement of bones,
muscles and other parts -- the conceptual separation between the lawful
behavior of the material constituents and the origin of those constituents
also held sway. Early in the nineteenth century, Georges Cuvier, the founder
of paleontology and comparative anatomy, held that all the parts and
functions of an organism are interrelated with one another by strict laws of
nearly mathematical regularity. Any deviation from these preordained
relationships would yield an impossible organism--one whose structure and
function did not "compute." His main intellectual rival, Etienne Geoffroy
St. Hilaire, claimed, instead, that the material nature of tissues led to
their being governed by "laws of form" analogous to, but more complex than,
those discerned by Newton for billiard balls and planets. These involved the
molding, folding and segmentation of tissue masses. In distinction to
Cuvier, who asserted that special creation devised anatomical arrangements
to suit an animal's "conditions of existence," Geoffroy held that an
animal's anatomy determined its "mode of life." Despite the evident
differences in the "form follows function" and "function follows form"
viewpoints, Cuvier and Geoffroy, like Newton and Kant before them, both
understood that the origination of the biological organizing principles they
were proposing could never be derived solely from the details of their
material constituents.

In one sense, though, Geoffroy, more than Cuvier, was a progenitor of modern
systems biology. The "systems" approach swept through the sciences during
the twentieth century, with the exception of biology, where it was derailed
by the "rediscovery" of Mendel's work by several scientists in 1900, and the
subsequent focus on genes. A systems analysis is not all-embracing: no
scientific theory has ever explained, nor claimed to explain, all aspects of
its field of discourse. Even modern fundamental particle theory, the most
sophisticated analysis of matter and its origins yet devised, cannot explain
why there is something rather than nothing, and why the various physical
constants have their particular values. But beginning in the nineteenth
century, and throughout the twentieth, physical science has incessantly
pushed back the point at which there is an opening for non-naturalistic
explanations, i.e., ones that take recourse in inexplicable, or specially
arranged, conditions. In contrast to Cuvier's notion of the "correlation of
parts," arranged at creation in conformity with the organism's essential
nature, Geoffroy's concept that tissues themselves generate different types
of organisms by means of "laws of form" implies that proximate biological
development has a naturalistic explanation. Indeed, twentieth century
advances in the sciences of complex systems and condensed materials have
permitted new insights into the laws of form that pertain to living tissues.
Before this new program took hold, however, the era of the gene intervened.


It is not that the "rational morphology" approaches associated with Cuvier,
Geoffroy, and other nineteenth century biologists, have proven incorrect. It
is well known that modern-day fossil reconstruction draws on the general
validity of the assumption that the organization of (even previously
unknown) organisms is discernible from a set of principles that make no
reference to regularities at the molecular level. Nonetheless, the major
trend of twentieth century biology was to reject the idea that a system of
global organizing principles sets the terms for more small-scale processes.
What replaced it was the opposite notion: that a privileged set of
small-scale processes -- interactions of genes and their products -- is where
one's attention must be focused when biological organizing principles are
sought.

A key figure in the turning away from a systems approach to understanding
biological organization was William Bateson. Paradoxically, Bateson began as
a strong advocate of the notion of laws of form. In his book Materials for
the Study of Variation, published in 1894, he concerned himself with the
repetitive organization of certain animal parts, such as the segments of
earthworms, the backbones of vertebrates, and the digits of the hand. He
proposed a physical metaphor for the generation of such repetitions in terms
of "Chladni figures," the wave-like patterns that form when a fine powder is
placed on a vibrating surface, such as a sounding violin. Changing the
frequency of the vibration could also change the features of the pattern in
a dramatic fashion, so the Chladni figures were not only a metaphor for the
production of repetitive patterns during individual development, but also a
metaphor for the discontinuous change of biological form during evolution.

Powder on the surface of a sounding violin is not a living embryo, however,
and knowledge at the time did not permit making a mechanistic connection
between them. In the 1950s, the mathematician Alan Turing would show that
the reacting and diffusing molecules in living tissues could spontaneously
arrange into Chladni-like concentration distributions, thus providing the
conceptual link that Bateson lacked. Though Turing's work, along with other
physics-based phenomena of "self-organization" (3), would prove seminal for
a new wave of systems biology that began to emerge later in the century, it
did not come early enough for Bateson. Once Mendel's work on the inheritance
of discrete "factors" -- genes -- gained wide attention in 1900 (due, in large
degree, to Bateson's enthusiastic promotion), Bateson's program for a
systems approach to understanding biological form was written out of the
scientific mainstream, as even he proved unable to cast it in fashionable
Mendelian terms.
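
Turing's later insight can be made concrete with a little linear-stability
arithmetic. The sketch below is a toy model -- the kinetic coefficients and
diffusion rates are illustrative assumptions, not values from Turing's 1952
paper. It computes the growth rate of a spatial perturbation of squared
wavenumber q^2 in a two-species activator-inhibitor system: with equal
diffusion every mode decays, but when the inhibitor diffuses much faster, a
band of intermediate wavelengths grows -- the diffusion-driven instability
behind Chladni-like biological patterning.

```python
import math

# Toy activator-inhibitor kinetics, linearized about a uniform steady state.
# Jacobian J = [[f_a, f_h], [g_a, g_h]] -- illustrative values chosen so
# the uniform state (q^2 = 0) is stable in the absence of diffusion.
J = ((1.0, -1.0),
     (2.0, -1.5))

def growth_rate(q2, d_act, d_inh):
    """Largest real part of the eigenvalues of J - q2 * diag(d_act, d_inh)
    for a spatial perturbation with squared wavenumber q2."""
    a = J[0][0] - q2 * d_act
    b = J[0][1]
    c = J[1][0]
    d = J[1][1] - q2 * d_inh
    tr, det = a + d, a * d - b * c
    disc = tr * tr / 4.0 - det
    if disc >= 0.0:
        return tr / 2.0 + math.sqrt(disc)  # real eigenvalues
    return tr / 2.0                        # complex pair: real part only

q2s = [i * 0.01 for i in range(200)]
# Equal diffusion rates: every spatial mode decays.
print(max(growth_rate(q2, 1.0, 1.0) for q2 in q2s))   # negative
# Inhibitor diffusing 10x faster: a band of modes grows.
print(max(growth_rate(q2, 1.0, 10.0) for q2 in q2s))  # positive
```

The design point is Turing's own: neither species alone, nor the chemistry
without differential diffusion, produces the pattern; the instability is a
property of the coupled system.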

The scientific conversion of the prominent embryologist Thomas Hunt Morgan,
who rejected his earlier systems approach to biological development in favor
of a strict Mendelian focus, was also a key turning point in early twentieth
century biology. From this point on, there was a heightened emphasis on the
rules of transmission of factors that influenced form and function
(transmission genetics), the rules by which such factors are distributed in
populations under varying conditions (population genetics), and the chemical
nature of such factors (molecular genetics). Such work met with great
success, and established itself as the mainstream of biological science.
None of these branches of genetics, however, attempted to account, in causal
terms, for biological forms and complex functions.

Less successful was early work that did attempt to formulate such accounts
in genetic terms (physiological genetics), such as that of Richard
Goldschmidt and C. H. Waddington in the 1930s to 1950s. Both these
scientists understood that a systems approach was needed to address these
questions, but, like their contemporaries, they were tied intellectually to
the gene as the privileged level of biological explanation. Real progress
required moving beyond this level.

By the end of the twentieth century, the capacity to identify genes, their
products and their interactions, had become enormous, and some claimed the
genetic approach to development had finally succeeded. Entire genomes had
been sequenced; gene interactions involved in establishing boundaries and
structures during insect development had been the subject of the shared 1995
Nobel Prize. Still, no satisfying picture had appeared of how processes
capable of generating organized forms, structures, and behaviors had emerged
over the course of evolution. Evelyn Fox Keller's recent Making Sense of
Life, focusing on the elaborate nature of genetic mechanisms of modern-day
development (but not on the origination of such mechanisms early in
evolution), concludes that organisms are, in fact, too "irreducibly complex"
to yield such a picture.


The obstacles that have stood in the way of creating a successful systems
biology are partly technical and conceptual, but to a great extent they have
also been ideological.

To take these up in order, detailed knowledge of gene sequences, processing
of gene products (RNAs and proteins), and gene-gene product interactions
will undoubtedly be essential to a comprehensive systems biology, and
neither these, nor the computers necessary to keep track of numerous
interacting components changing through time, were available in the early
part of the twentieth century. Conceptually, understanding the laws of form
for complex materials, such as living tissues, requires, in addition to
genetic information, an understanding of chemical dynamics, including
oscillations, pattern-forming processes, and chaotic behavior (3). Also
required is an understanding of the physics of condensed, viscoelastic
materials, including fractal phenomena. These were not developed until the
last quarter of the past century.

However, the ideological barriers were perhaps most important. Gene products
can participate in self-organizing physical processes, but the ideas of
pioneers in this way of thinking, such as Turing and D'Arcy W. Thompson (4),
were kept on the sidelines, muscled out by the notion that genes could do
everything by themselves. As Erwin Schrödinger, a physicist who should have
known better, wrote in his influential 1944 book, What is Life?, "The
chromosome structures are instrumental in bringing about the development
they foreshadow. They are the law-code and executive power -- or to use
another simile, they are the architect's plan and builder's craft in one."

As noted above, no scientific theory can avoid leaving the door open to some
assumptions beyond its explanatory capabilities. But by marginalizing the
role of naturalistic physical organizing principles, the gene-centered view
of biological causation has left itself susceptible to the notion of
"intelligent design," an unfortunate throwback to Cuvierian creationism.
Moreover, genetic determinism has suffused the worst political movements and
social policies of the twentieth century (5), and has pointed towards
experimental manipulations of the germ line that could be the bane of the
present one (6). 

Systems biology is on the rise, though so far it is more of an agenda than a
body of results. In response to new interest and activities in these fields
(and the failed promises of genetic reductionism), the U.S. National Science
Foundation (NSF) has begun funding multi-million dollar programs in
"integrative biology" and "biocomplexity." A recent solicitation for
applications to establish a center for "synthesis in biological evolution"
states the unexceptionable objective of fostering "synthetic, collaborative,
cross-disciplinary studies [and] further unification of the biological
sciences [by drawing] together knowledge from disparate biological fields to
increase our general understanding of biological design and function." The
National Institutes of Health, a scientifically more conservative agency, is
also, for the first time, allocating funds for systems approaches.

In its most intellectually fertile form the new systems biology is bringing
mathematical and computational methods to bear on genetics, physiology,
development and evolution, so as to deal with multiscale complexities
without losing sight of them. In its scientifically sound form, moreover,
this improved approach to biology does not seek to replace cognitive or
social sciences. If such a research program is permitted to flourish, in a
few years the twentieth century's gene bender will be just a memory, and
biology will again take its place among the subtle products of the human
mind. However, if systems biology spawns a new reductionism of social
integration through molecular manipulation [see sidebar], we may witness
another regression into oversimplification and misunderstanding that could
set back our knowledge of ourselves and the natural world by at least
another century. Another binge on reductionism could be the fatal one,
putting not only our science, but our lives and natures, at risk.

Stuart A. Newman is Professor of Cell Biology and Anatomy at New York
Medical College, where his research focuses upon cellular and molecular
mechanisms of vertebrate limb development; the physical mechanisms of
morphogenesis -- development of the size, form, or other structural features
of an organism -- and mechanisms of morphological evolution. His Origination
of Organismal Form, co-edited with Gerd B. Müller, was released in February
by the MIT Press.

FOOTNOTES

1. See Barry Commoner, "Unraveling the DNA Myth: The Spurious Foundation of
Genetic Engineering," Harper's Magazine, February 2002.
2. See Timothy Lenoir, The Strategy of Life, Univ. of Chicago Press, 1982,
for a full discussion of this "teleomechanist" framework.
3. See Gerd B. Müller and Stuart A. Newman, Eds., Origination of Organismal
Form, MIT Press, 2003.
4. D'Arcy W. Thompson authored the classic On Growth and Form, Cambridge
Univ. Press, 1917, rev. 1942.
5. See Daniel J. Kevles, In the Name of Eugenics: Genetics and the Uses of
Human Heredity, Knopf, 1985.
6. See Lee M. Silver, Remaking Eden: How Genetic Engineering and Cloning
Will Transform the American Family, Avon Books, 1998, and Gregory Stock,
Redesigning Humans: Our Inevitable Genetic Future, Houghton Mifflin, 2002,
for positive views of genetically manipulating humans.


The comprehensive, integrative perspective of systems biology is not immune
to abuse -- a fact represented all too clearly in a recent report jointly
sponsored by the NSF and the U.S. Department of Commerce, "Converging
Technologies for Improving Human Performance." According to that report's
executive summary: 

Convergence of diverse technologies is based on material unity at the
nanoscale [i.e., submicroscopic, molecular scale] and on technology
integration from that scale. The building blocks of matter that are
fundamental to all sciences originate at the nanoscale. Revolutionary
advances at the interfaces between previously separate fields of science and
technology are ready to create key transforming tools for NBIC [nano-, bio-,
information and cognitive] technologies. Developments in systems approaches,
mathematics and computation in conjunction with NBIC allow us for the first
time to understand the natural world, human society, and scientific research
as closely coupled complex, hierarchical systems. At this moment in the
evolution of technical achievement, improvement of human performance through
integration of technologies becomes possible.

The report goes on to outline a future in which the "ability to control the
genetics of humans, animals, and agricultural plants will greatly benefit
human welfare" and "[f]ast, broadband interfaces directly between the human
brain and machines will transform work in factories, control automobiles
[and] ensure military superiority...."

The report can be found in its chilling entirety at


Unraveling the Secret of Life

by Barry Commoner


The title of James Watson's new book, DNA: The Secret of Life, echoes the
boast voiced on the day, fifty years ago, when he and Francis Crick
discovered the structure of this now-famous molecule. The inexplicable
uniqueness of life has for centuries been mystery enough to elicit religious
doctrine, let alone scientific research. Therefore it is fitting that, to
celebrate the fiftieth anniversary of the double helix, Time's February 17,
2003 cover depicts an updated Adam and Eve standing before the biblical tree
of life, each entwined in the coils of a golden helix anatomically placed to
symbolize their recent loss of innocence. In the story itself, "Solving the
Mysteries of DNA", Time tells us the long-sought secret that Watson and
Crick's scientific discovery revealed: "The beauty of DNA is that its form
is its function. It's a self-reproducing molecule that carries the
instructions for making living things from one generation to the next." An
accompanying molecular diagram explains exactly "How DNA Works" by making "a
copy of itself." 

Time's story line accurately reflects Watson and Crick's original account.
Although, of all known forms of matter, only a living thing is endowed with
the prodigious power of self-replication, that power, they believed,
originates exclusively in one of its lifeless chemical components -- DNA.
This idea is embodied in the most frequently quoted sentence in their
celebrated one-page letter to Nature, published on April 25, 1953: "It has
not escaped our attention that the specific pairing we have postulated
immediately suggests a possible copying mechanism for the genetic material."
The sentence refers to a crucial feature of the DNA double helix: the two
DNA strands are so aligned that their four types of constituent nucleotides
(A, T, C and G) form complementary pairs. In the double helix, in which each
strand may be comprised of a linear array of thousands of nucleotides, the
nucleotide A in one strand is always positioned across from nucleotide T in
the other strand, and similarly, G is located opposite C. These pairings are
enforced by a particular intermolecular link -- the hydrogen bond -- between
each of the paired nucleotides.
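
The pairing rule itself is simple enough to state as code. A minimal sketch
(standard Watson-Crick pairing only; since the two strands of the double
helix run antiparallel, the biologically oriented partner strand is the
reverse complement):

```python
# Watson-Crick complementary pairing: A <-> T, G <-> C.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Nucleotide-by-nucleotide complement of one DNA strand."""
    return "".join(PAIR[base] for base in strand)

def reverse_complement(strand: str) -> str:
    """The partner strand as it actually runs in the double helix:
    complementary, and read in the opposite direction."""
    return complement(strand)[::-1]

print(complement("ATCG"))          # TAGC
print(reverse_complement("ATCG"))  # CGAT
```

Because the mapping is its own inverse, complementing twice returns the
original strand -- the formal basis of the "copy of a copy" replication
scheme Watson and Crick proposed.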

On May 30, 1953, this time in a two-page paper, also published in Nature,
Watson and Crick defined "the essential operation of a genetic material [as]
that of exact self-duplication." That the paired nucleotides are held
together by hydrogen bonds in the double helix suggested to them a plausible
mechanism for the exact self-duplication of DNA: A single parental DNA
strand's nucleotide sequence is replicated simply by attracting to itself,
by means of hydrogen bonds, the complementary nucleotides that are freely
available in the cell. These are thereby aligned and incorporated into a new
DNA strand, to form a complementary version of the parent strand's
nucleotide sequence. Later, in 1958, Crick explained that the DNA's
nucleotide sequence is the genetic information which, transferred to the
cell's proteins, determines their chemical specificity and therefore the
inherited traits they engender.

Watson and Crick were aware, however, that once a newly acquired free
nucleotide is properly lined up on the parental DNA template, the chemical
bond that links it to the next nucleotide in the growing strand must be
formed -- a biochemical task, polymerization, requiring an enzyme. In 1956,
Arthur Kornberg discovered such an enzyme, DNA polymerase, in a wide array
of organisms. His test-tube experiments showed that in a mixture containing
a pre-existing DNA template, a supply of the four types of nucleotides, and
DNA polymerase (a protein purified from tissue or bacteria), a new strand of
DNA is formed, joined to the template by the hydrogen-bonded complementary
nucleotide pairs. Kornberg concluded that in such experiments, "The unifying
base generalization about the action of this enzyme [DNA polymerase] is that
it catalyzes the synthesis of a new DNA chain in response to directions from
a DNA template; these directions are dictated by the hydrogen-bonding
relationship of adenine [A] to thymine [T] and of guanine [G] to cytosine
[C]."

By the 1960s a new and rapidly growing breed of researchers, molecular
biologists, were convinced by the Watson-Crick theories and Kornberg's
experiments. Impelled by the idea that Watson told the Time writer was "too
good not to be true," they turned DNA into an experimental powerhouse. If
the DNA of the gene for human erythropoietin, essential for red blood cell
production, contains all the genetic information needed for its own
replication, why not insert the gene into bacteria, enabling them to produce
this valuable protein, replacing the repeated transfusions needed by
anemia patients? On the same grounds, why not inject the gene itself into
patients, who could then continue to produce the protein on their own? Also,
if DNA is universally able to govern the course of inheritance, including
development, why not clone rather than breed the most productive domestic
animals? Lurking only slightly off-stage is the proposal, advanced by even a
few Nobel notables, to insert into human embryos replacements for genes
which are linked to inherited disease -- or, as Watson has suggested,
stupidity and ugliness.

All of these biological ventures were conceived in the belief that the gene
contains the only information needed to specify an inherited trait and that,
by replicating itself, it ensures its own propagation and the trait's as
well. This theoretical autonomy of the gene -- that it can maintain and
propagate its distinctive specificity in any biological context -- is a
consequence of that wholly unprecedented belief that a lifeless chemical
has, within itself, the power of self-duplication. The huge and still
growing edifice of molecular biomedical and agricultural research and
technology rests on the validity of that concept.

However, the outcome of experimental biological transformations, grounded in
the conventional interpretation of gene replication, suggests that these
applications have been troubled by inherent failures. Human erythropoietin
made by bacteria has been sufficiently unlike the protein produced in the
human body to cause critical immune reactions in some patients; gene therapy
trials have been dangerously uncertain and even fatal in their outcome; for
every transgenic crop now widely grown in the United States, 99 failed
examples of the same transformation have been discarded as "unsuitable";
more than 90% of cloned animal embryos fail to survive, and the few
surviving, like Dolly the sheep, die prematurely.

Despite these problems, the conceptual framework of molecular biology has
remained unchanged since the 1950s. Thus, the Time cover story's
illustrations, which have presumably been checked by a certified molecular
biologist, are a virtual caricature of the original Watson and Crick
description of DNA self-duplication: Molecular models show the four DNA
bases (which, in fact, should be nucleotides) propelled toward their proper
complementary partners in the parental DNA strand, with a separate box to
show that "the base pairs attach to each other with hydrogen bonds."
Apparently, the mechanism first proposed by Watson and Crick fifty years ago
to explain DNA 'self-replication' is, even today, accepted as the basic
precept of molecular biology. This is further confirmed by a prominent DNA
polymerase researcher, Myron Goodman, who has pointed out that since the
1950s "we have seen few challenges to the primacy of hydrogen bonding in the
replication hierarchy."

However, in another part of the forest, so to speak, of the vast terrain of
DNA research, there are investigators, among them Myron Goodman, who in the
last decade have put the concept of DNA self-duplication to the test of
experiment. In 1991, Professor Goodman and several colleagues, publishing in
Annual Review of Biochemistry, expressed serious doubts about the
importance of hydrogen bond contribution to the fidelity of DNA replication,
on the grounds that the bonds were energetically inadequate to distinguish
between complementary and non-complementary free nucleotides. In 1997, the
dominant role of hydrogen bonding as the cause of DNA self-replication
failed to meet an initial experimental test: It was shown that an analog of
the natural thymine nucleotide, chemically modified to eliminate its
capacity to form hydrogen bonds, is nevertheless not only incorporated by a
DNA polymerase into DNA, but is also placed in its proper position opposite
a template-borne adenine nucleotide. Since the analog does resemble the
natural thymine nucleotide in shape and size, this result suggested that
geometry, rather than hydrogen bonding, accounted for the selection of the
appropriate complementary nucleotides in polymerase-catalyzed DNA
replication.

This clue has been pursued using new physico-chemical techniques, especially
in x-ray crystallography and nuclear magnetic resonance, to bring the
analysis of DNA replication down to the level of sub-molecular structure.
Eric T. Kool of the Stanford University Department of Chemistry and Thomas
A. Kunkel of the National Institute of Environmental Health Sciences have
recently reviewed much of the relevant research in Annual Review of
Biochemistry. These studies have described the intimate relations among the
participants in test tube DNA replication: the parental DNA template, the
primer (the newly synthesized DNA strand), the free nucleotides added to it,
and the DNA polymerase enzyme.

Crystallographic analysis shows that a pocket is formed by a specific
segment of the DNA polymerase protein, which includes the enzyme's
biochemically "active site", together with a small section of the DNA
template and the primer. The pocket is just the right size and shape to
accommodate a free nucleotide, but only if that nucleotide makes the proper
complementary pair with the adjacent template-borne nucleotide. Once the
incoming free nucleotide is accepted into the pocket, a segment of the
polymerase's protein structure is rearranged, which tightens the fit of the
nucleotide pair within the pocket. This change in turn exposes the catalytic
locus of the polymerase, which thereby induces the chemical reaction that
links the new nucleotide to the end of the growing strand of complementary
replicated DNA. A video of this molecular ballet is available on the website
of Professor Joseph Kraut of the University of California, San Diego. It
shows that the hydrogen bond between the template-borne nucleotide and its
incoming complementary partner is formed only after that free nucleotide has
been accepted and closely fitted into the template/polymerase pocket.
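
The sequence of events described above -- geometric acceptance first,
hydrogen bonding only afterward -- can be caricatured in a few lines of
code. Everything here is a toy abstraction of the crystallographic picture,
not a model of any real polymerase: the hypothetical `fits_pocket` check
stands in for the steric test performed jointly by the enzyme and the
template.

```python
# Toy sketch of template-directed selection: the polymerase/template
# "pocket" accepts a free nucleotide on geometric fit, and the hydrogen
# bond is recorded only after acceptance -- consequence, not cause.
WATSON_CRICK = {("A", "T"), ("T", "A"), ("G", "C"), ("C", "G")}

def fits_pocket(template_base: str, incoming: str) -> bool:
    # Stand-in for the steric check: only a properly shaped pair fits.
    return (template_base, incoming) in WATSON_CRICK

def replicate(template: str, free_pool=("A", "T", "G", "C")) -> str:
    new_strand = []
    for base in template:
        for candidate in free_pool:
            if fits_pocket(base, candidate):  # 1. geometric selection
                # 2. conformational change tightens the fit (implicit)
                new_strand.append(candidate)  # 3. catalysis links it in
                # 4. the hydrogen bond forms only now, after acceptance
                break
    return "".join(new_strand)

print(replicate("GATTACA"))  # CTAATGT
```

Note that the selection logic lives in `fits_pocket` -- jointly defined by
template and enzyme -- which is precisely Commoner's point about where the
replicative "information" resides.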

At the end of this molecular choreography, the now-doubled DNA helix is
equipped with the Watson-Crick hydrogen bonds between the template and the
newly synthesized complementary DNA strand. But that is the consequence
rather than the cause of DNA replication. That honor belongs jointly to the
parental DNA template and the polymerase protein. Together, they form the
walls of the pocket which, by its geometry, selects the properly
complementary free nucleotides for DNA synthesis. Thus, even here, in the
germinal event of biological inheritance -- DNA replication -- it is now
evident that DNA is not the sole source of the genetic information embodied
in the newly synthesized genome. Rather, that property is possessed jointly
by DNA and the DNA polymerase protein.

Yet, to my knowledge, this conclusion -- the inescapable outcome of an
extensive series of physico-chemical studies -- has not yet found its way
into the literature of molecular genetics. As several of the field's leading
researchers have complained, outmoded belief in the Watson-Crick
hydrogen-bonding theory of DNA self-replication still "exists on the level
of teaching paradigms." Proof that this work has been ignored by molecular
geneticists has appeared in a recent issue of Nature (Jan. 23, 2003). In an
extensive series of articles to commemorate the discovery of the DNA double
helix -- "a modern icon" -- including one on the role of polymerase in DNA
replication, there is no mention of this crucial research.

What can account for this surprising lapse in the normally avid interest of
a vigorous area of research in such a new, meticulously documented and
challenging discovery? A meaningful clue is provided by Crick's germinal
1958 paper, in which he proposed fundamental precepts that have governed the
development of molecular genetics. One of these, the Sequence Hypothesis,
which has since been well established by experiment, states that the gene's
DNA nucleotide sequence codes for a protein's amino acid sequence. The
second hypothesis, which Crick called the Central Dogma, states that "Once
(sequential) information has passed into protein it cannot get out again."
Although this precept has been assiduously adopted by molecular geneticists,
in 1970 Crick attached to it an ominous warning: If even a single
observation showed that genetic information could flow from protein to DNA,
to RNA, or to another protein, "it would shake the whole intellectual basis
of molecular biology."

Crick apparently based this idea on the well-established fact that DNA's
genetic information is encoded in a protein's amino acid sequence, which
becomes inaccessible when that linear array is folded up into a
three-dimensional ball-like structure. However, the biochemical activity of
a protein, for example the catalytically active site of an enzyme, which
actually gives rise to the genetic trait engendered by the protein, usually
occurs on the surface of its folded structure. This fact alone conflicts
with the notion that the enzyme's genetic information cannot "get out" of
that molecule. After all, in Crick's scheme the genetic role of the enzyme
is precisely to transmit the genetic information of its amino acid sequence
to the chemical events that give rise to the inherited trait. But this can
only occur through the passage afforded by the active site, which is a part
of the protein's three-dimensional configuration.

Although it is not clear exactly how a protein's linear structure becomes
folded into a specific three-dimensional configuration, evidently its amino
acid sequence is a necessary -- if sometimes not a sufficient -- determinant
of that process. Therefore, it follows that the catalytically "active site"
on the enzyme's three-dimensional surface represents genetic information
that at least in part is derived from its amino acid sequence -- which in
turn is received from its gene's DNA. In the same way, the segment of the
DNA polymerase's three dimensional structure, which is necessary to the
formation of the pocket that enforces the selection of the proper incoming
nucleotide, is also a form of the polymerase protein's genetic information.
Linked by the polymerase to the growing DNA primer strand, the newly
acquired nucleotide contributes to the nucleotide sequence of the new DNA
strand. Thus, the nucleotide sequence of the newly synthesized DNA -- the
cardinal example of genetic information -- contains genetic information
transferred from DNA polymerase, a protein. This is an explicit
contradiction of Crick's Central Dogma.

Thus, the "intellectual basis of molecular biology" has indeed been shaken
by this critical analysis of DNA replication, but neither the practitioners
of that science, the biotechnologists who depend on it, nor the general
public, who will suffer the consequences, appear to be aware of it.

The process of DNA replication, so vividly dramatized in Professor Kraut's
video, exemplifies a curious irony. The course taken by molecular biology in
the last 50 years is, of course, reductionist. A strenuous effort has been
made to explain biology as an intricate form of molecular chemistry. Yet now
that Professors Kool and Kraut and their colleagues have gone even further
in the reductionist direction, and have given us a sub-molecular drama of
DNA replication, in which even the roles of individual amino acids in the
polymerase protein's structure are choreographed, the end result has the
unmistakably anti-reductionist flavor of biology.

Here, I refer to my own reaction to Kunkel's visualization of the DNA
polymerase-template pocket, with segments of the polymerase described
(albeit somewhat fancifully) as a hand with its palm, thumb and fingers --
the latter reaching toward the DNA of the parental template. It is not the
anthropomorphic symbolism that brings biology into this sub-molecular
picture; rather, it is the emergent creation of a new property -- DNA
replication -- from the intimate physical interaction of DNA and the
polymerase protein, neither of which, alone, possesses this capability.

In retrospect, it can be seen that the demise of the self-replicating DNA
molecule was foreshadowed by earlier biochemical work that, following
Kornberg's studies, has detected nearly twenty different DNA polymerases in
a wide range of living things. Certain of these enzymes, like Kornberg's
original bacterial polymerase, carry out the initial template-supported
synthesis of DNA from free nucleotides. In the test tube, none of these
polymerases comes close to the Watson-Crick ideal of "exact" replication. Yet,
as shown by the rate of spontaneous gene mutations, in living things DNA
replication is indeed remarkably exact. The probability of a gene mutating --
about one in 100,000 per cell division -- is so low as to preclude an error
rate in DNA replication of more than one wrongly placed constituent
nucleotide in about ten billion. An initial DNA polymerase typically has an
error rate of about one in ten thousand -- still a million times too great to
meet the biologically required fidelity. The remaining errors are reduced to
the required one in 10 billion by a group of "repair" enzymes -- polymerases
that detect, remove and properly replace the misplaced nucleotides or
degrade heavily damaged sections of the newly synthesized DNA. The
properties and functions of these enzymes vary
considerably among different species. The net result of such inter-species
variation is that, as Goodman has pointed out, "Different polymerases
copying the same primer-template DNA can exhibit markedly different mutation
frequencies and spectra [that is, types of mutations]."
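The fidelity arithmetic in the passage above can be checked in a few lines (a
minimal sketch; the only inputs are the figures quoted in the text):

```python
# Fidelity figures quoted above, per nucleotide per replication pass.
required_error_rate = 1e-10   # one misplaced nucleotide in ~10 billion
initial_polymerase  = 1e-4    # first-pass polymerase: ~1 error in 10,000

# The first-pass enzyme alone is about a million times too sloppy:
shortfall = initial_polymerase / required_error_rate
print(f"initial polymerase is {shortfall:.0e} times too error-prone")

# So the "repair" polymerases must catch all but roughly one in a
# million of the errors the initial enzyme makes:
repair_factor = required_error_rate / initial_polymerase
print(f"repair system may pass only {repair_factor:.0e} of initial errors")
```

The point of the calculation is the division of labor it exposes: the
biologically required fidelity is achieved not by the initial polymerase but
by the whole enzyme system acting together.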

This phenomenon can convert an inter-species transgenic experiment into a
genetic gamble. A gene that is faithfully replicated in its own species will
undergo markedly more erratic replication when it is transferred into a new
host with a different set of polymerase enzymes. Thus, in a recent study of
rice containing a corn-derived transgene, it was concluded that "[T]he
combined action of DNA repair and degradation enzymes on the introduced DNA
gives rise to rearranged transgenic [nucleotide] sequences."

The experimental evidence of DNA replication in transgenic organisms shows
that the genetic information of the newly replicated transgene is in part
derived from the new host's polymerase system. Given their separate
evolutionary histories, the transgenic DNA and the host polymerase system
are incongruent with regard to their respective influence on nucleotide
selection. As a result, the high level of fidelity typical of DNA
replication within both of the separate species breaks down in the
transgenic organism.
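The "genetic gamble" described above can be caricatured in a toy model
(entirely illustrative -- the species pairings and error rates are invented
for the sketch): fidelity is a property of the template-polymerase pair, so a
transgene copied by a foreign polymerase system sees an effective error rate
far higher than either species sees at home.

```python
import random

# Toy model: replication fidelity belongs to the (template, polymerase)
# PAIR, not to the DNA alone. Co-evolved pairs are faithful; a transgene
# handled by a foreign polymerase system is not. Rates are illustrative.
ERROR_RATE = {
    ("corn", "corn_polymerases"): 1e-10,  # co-evolved: faithful
    ("rice", "rice_polymerases"): 1e-10,  # co-evolved: faithful
    ("corn", "rice_polymerases"): 1e-6,   # transgene in a new host: erratic
}

def replicate(template_species, polymerase_system, n_nucleotides, rng):
    """Return the number of misplaced nucleotides in one replication pass."""
    p = ERROR_RATE[(template_species, polymerase_system)]
    return sum(rng.random() < p for _ in range(n_nucleotides))

rng = random.Random(0)
# A million-nucleotide corn transgene copied by rice polymerases
# accumulates errors that corn's own system would almost never make.
errors_home = replicate("corn", "corn_polymerases", 1_000_000, rng)
errors_away = replicate("corn", "rice_polymerases", 1_000_000, rng)
print(errors_home, errors_away)
```

The sketch captures only the essay's structural claim -- that the error rate
is jointly determined -- not any measured figure for actual transgenic crops.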

In every species, evolution has been at work, long ago bringing the two
participant parts of the replicative process -- the DNA genes and the
polymerase proteins that synthesize and repair them -- into harmony. The
polymerase's influence on the newly replicated DNA's nucleotide sequence and
the influence of its gene on the polymerase's nucleotide selectivity are
congruent, so that the biochemical specificity of both the DNA and the
polymerase are faithfully reproduced and are stable
over time. 

Within any given species it is possible to argue that since the DNA encodes
the polymerase protein, that enzyme cannot contribute any additional genetic
information to the replication process. But this argument cuts both ways,
for it is equally possible to say that the nucleotide sequence of the
polymerase's gene was determined, in part, by the polymerase when that DNA
was synthesized. Both of these statements represent an effort to derive a
linear relationship from one that is not. Because the molecular mechanism of
DNA replication is governed jointly by the DNA template and the polymerase,
the relationship between their respective genetic information is circular:
The biochemical specificity of the polymerase -- its genetic information -- is
governed by its gene; and the nucleotide sequence of the gene's DNA -- its
genetic information -- is influenced by the polymerase's biochemical
specificity. As a result, the genetic information that flows in this
circular pattern is necessarily a commingled mixture of influences from both
the DNA and the polymerase system. In a transgenic organism, this mutual
relation is disrupted by the evolutionary incompatibility of the two
component parts, and their separate contributions to the overall process
thereby become noticeable.

All this is to say that the living cell is not merely a sack of chemicals,
but a unique network of interacting components, dynamic yet sufficiently
stable to survive. The living cell is made fit to survive by evolution; the
marvelously intricate behavior of the nucleoprotein site of DNA synthesis is
as much a product of natural selection as the bee and the buttercup. In
moving DNA from one species to another, biotechnology has broken into the
harmony that evolution produces, within and among species, over many
millions of years of experimentation. Genetic modification is a process of
very unnatural selection, a way to perversely reinvent the inharmonious
arrangements that evolution has long ago discarded. The biotechnology
industry has stood Darwin on his head.

It is a truism that in our society, such a new industry is created not for
the purpose of enhancing scientific understanding, but in the hope of a
competitive financial return. Unfortunately, the science on which
biotechnology is founded has become, to a large extent, distorted by this
process as well, and is itself in need of critical revision. If the science
is to be redirected, and the unpredictable, uncontrolled experiment that is
biotechnology is to be sent back to the laboratory where it rightly belongs,
we will need to accept this task as our own and set Darwin back on his feet.


Author's Acknowledgment: It is my privilege to acknowledge my colleague, Dr.
Andreas Athanasiou, who has closely collaborated with me on the assembly of
the data and the development of the ideas that are the basis of this article.

Barry Commoner, Senior Scientist at the Center for the Biology of Natural
Systems, Queens College, City University of New York, directs the CBNS
Critical Genetics Project. He is a lifelong biologist, educator and
activist, and one of the guiding influences of the modern environmental
movement.

Readers may obtain a list of references for the data cited in this article
from This article is based in part on a
presentation to The Gene Futures Conference, London, UK, February 11, 2003.

Brandon Keim
GeneWatch Editor 
Director of Communications
Council for Responsible Genetics
5 Upland Rd, Suite 3, Cambridge, MA USA 02140
Phone: (617) 868-0870 / Fax: (617) 491-5344 /

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Subject: RE: <nettime> DNA and computers
Date: Mon, 8 Sep 2003 12:39:00 -0400
From: "Eugene Thacker" <>

It's amazing how much mileage nanotech (as a field and as an industry) 
has gotten out of talking about itself (perhaps that's why it reads 
better as social science fiction) - it's kind of becoming a joke (recall 
the smart-newspaper headline in Minority Report that boasts the promises 
of nanotech?) - but of course this investment fuels further visionary 

I'm only aware of a handful of actual nanotech products, and mostly in 
biotech: Nanogen's microarray, for instance, or Bell Labs' DNA tweezers, 
or DARPA's MEMS chips

Nanotech is not "post-biotech" - there's a secret link between them, and 
that's proteins - Drexler likes to use proteins as his proof-of-concept 
from nature - ribosomes are little molecular factories that churn out 
proteins in the living cells, etc. - the argument from design ("Simple 
molecular devices combine to form systems resembling industrial 
machines...Yet over three billion years before Jacquard, cells had 
developed the machinery of the ribosome. Ribosomes are proof that 
nanomachines built of protein and RNA can be programmed to build complex 
molecules" Engines of Creation, p.8)

What's interesting is that Drexler et al, from the beginning, talk about 
nanotech as the new industrial revolution - there is no separating the 
visionary science from the industrial application

Only problem is that the basis for nanotech is not mechanical or 
electrical engineering - it's biology or biology-as-technology - and it 
seems that biotech has already been creating nano-factories for some 
time: transgenics, GM foods, even gene therapy (it's your body but 
someone else's enzymes)

Nanotech intersects w/ biotech over the "ambiguity of matter" - both 
reach a certain point at which the distinction between living and 
nonliving matter is moot - to use Xerox-PARC's phrase, it's all 
"programmable matter" - saying this provides a way of side-stepping the 
often moral skirmishes over the living/nonliving distinction - think 
about debates over patenting, stem cells, cloning etc.

ps - I sent another post on DNA computing a day ago, to clarify some 
issues - did it not get thru?



Eugene Thacker, Assistant Professor
School of Literature, Communication & Culture
Georgia Institute of Technology

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: and "info nettime-l" in the msg body
#  archive: contact: