Eugene Thacker on 18 Feb 2001 02:25:18 -0000



<nettime> The Human Genome Race Considered as a High-Speed Data Dump



----------------------------------------------------------------------------------------------

The Human Genome Race Considered as a High-Speed Data Dump

Eugene Thacker



I. The Race

Celera - a latecomer to the race - got off to an energetic start. With a
souped-up high-octane arsenal of turbo-IT gene-machines at its side, it
looked as though this would be the quickest race of this type ever
sanctioned.

The public consortium, however, was not so easily put off. Their old
school DNA bi-partisan tag-team strategy served them well, leading off
from multiple points in the U.S. and U.K., as well as Japan and Germany.

While Celera had the advantage coming into the race, its youth,
inexperience, and bold, brash style drew many criticisms. The public
consortium, on the other hand, was no stranger to the challenges of
extreme big science (or XBS).

Celera had, in many ways, fired the starting shot for the race when it
aggressively tapped the "enter" key on its sequencing computer in 1998.
With robust, lean vehicles from Perkin-Elmer, the data began flushing in
almost immediately. Patent, publish, and press enter again - in that
order.

The public consortium quickly became aware of the sexy hum emanating from
Celera's gene machines. After doing some rush-delivery online shopping, it
too upgraded its tech, but without dropping its old school appeal. Some of
the researchers would actually teach and do research at universities - a
sign of bygone days.

But the decider for the race was not Celera's new-kid-on-the-block
attitude, nor was it the public consortium's old-boys-will-prevail
assurance - it was software. The public consortium relied on the
pragmatics of the GigAssembler - 10,000 lines of code written in four
weeks by a grad student last summer: string them together & do a quick
taste-test. Celera stuck to its avant-garde guns by employing its
controversial Whole Genome Shotgun sequencing method (WGS): blow it to
pieces & then cluster them together freestyle.

Although both groups published their findings simultaneously, it appears
that Celera inched through the finish line at the last moment (though
frame-by-frame analysis has yet to confirm this - ftp clocks are being
checked at the servers of both Nature and Science magazines). Reports from
both groups state that this is only the end of the beginning. Celera
continues the race wireless - but that is another story.

(For more info, see "The Assassination of John Fitzgerald Kennedy
Considered as a Downhill Motor Race" by J.G. Ballard, and "The Crucifixion
Considered as an Uphill Bicycle Race" by Alfred Jarry.)


II. Reports

Critique functions best, perhaps, when it does no work at all. That
is, when critique is embodied in the very same locus of its object, a kind
of discursive telescoping occurs, in which that which is implicit is made
explicit. The race to map the human genome has been such a process. More
than any other "big science" project, the mapping of the genome has always
partially un-done itself. It has played two roles: that of a promotional
campaign supporting a certain tradition of Western science, and that of a
highly unstable agenda for the production of a hegemonic view of health,
the body, and theories of biological "life."

As Mae-Wan Ho states in her recent critique of the genome
<http://www.i-sis.org>, a number of critics have long been suggesting what
scientists are only now beginning to consider, with the "hard evidence" at
hand: that genetic reductionism doesn't work, that "central dogma"
theories of any kind are highly limiting, and that "genes" (a still
relatively undefined category of biomolecule) may not be the central and
single most important component of molecular life. We can look to Evelyn
Fox Keller, Richard Lewontin, Susan Oyama, Donna Haraway, Critical Art
Ensemble and others for examples of such critiques.

The recent reports on the human genome actually perform this - the
articles by Celera (in Science) and the International Human Genome
Consortium (in Nature) are filled with a rhetoric of surprise at the
findings: the number of "genes" is lower than expected, the number of
unique genes to humans is fewer than expected, the "coding" portion of the
genome is only about 1.5% of the total DNA, and vast regions of the genome
are filled with "deserts" of repetitive sequence whose function is
unknown.

Three perspectives here: What did they find? What do these findings imply?
What challenges do these findings put forth?


What did they find?: Both the public consortium and Celera came to
essentially the same conclusions in their initial analyses of the genome.
And both expressed "surprise" at their findings:

1. Number of genes: approx. 30,000-40,000. This is far fewer than
expected - most prior estimates ran to 100,000 or more - and many
companies (mostly pharmaceutical or tech companies banking on a high gene
count) are still in genomic denial. Nor is it dramatically more than the
counts for other organisms: in cross-species genome comparisons, only some
300 genes were found to be unique to human beings relative to the mouse
genome.

2. Large number of repetitive sequences. Again a surprise finding: large
segments of DNA turn out to be repeat sequences - inserted by
retroviruses, remnants of fossilized DNA, or sequences whose functions in
gene expression and regulation remain undetermined. Jumping genes, or
"transposons," are beginning to be taken seriously as a primary
contingency in the dynamic aspects of the genome.

3. Modularity of genes in alternative splicing. Contrary to the
conservative one gene-one protein hypothesis, the alternative splicing of
RNA transcripts (carried out by the spliceosome) means that the
combinatorial possibilities of gene function increase greatly. In humans
at least, this means that one "gene" can have many different variations,
and can perform many different functions (a rough sketch of this
combinatorics follows below). There is no reason to think that it would be
different for proteins, or any other biomolecule in the cell.
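
(To make the combinatorics concrete: a minimal sketch in Python, with a
hypothetical six-exon "gene" and exon names chosen purely for
illustration. It models only cassette-exon skipping and ignores intron
retention, alternative splice sites, and the rest - yet even this toy case
multiplies the possible transcripts.)

    from itertools import combinations

    def splice_variants(exons):
        # Keep the first and last exon fixed; include any subset of the
        # internal ("cassette") exons in between.
        first, *internal, last = exons
        variants = []
        for k in range(len(internal) + 1):
            for subset in combinations(internal, k):
                variants.append((first, *subset, last))
        return variants

    # One hypothetical "gene" with six exons (names are illustrative only)
    gene = ["E1", "E2", "E3", "E4", "E5", "E6"]
    transcripts = splice_variants(gene)
    print(len(transcripts))          # 2**4 = 16 possible transcripts
    print("-".join(transcripts[0]))  # shortest variant: "E1-E6"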


What do these findings imply?: Reading the primary findings of the genome
analyses through the social and philosophical implications of "surprise"
discoveries reveals several implicit critiques at work in genomics
specifically, and in biotech generally:

1. The low number of genes is only surprising if it is assumed that the
quantity of genes is directly related to the complexity of the organism or
species. More genes would thus mean more complex organisms. In a
characteristically consumerist vein, the assumption here was that "more is
better."

2. The finding of repetitive sequences in the genome is only surprising if
it is assumed that each "gene" or DNA sequence has a unique function,
thereby making the genome original and unique. Repetition here is taken to
be synonymous with redundancy and a lack of novelty, while the icons of
American subjectivity - individuality, originality, novelty - are assumed
to reside in the very bodies of subjects, down to their biomolecules. As
bioinformatics is just beginning to demonstrate, the repetition of simple
elements can be a powerful means of generating complexity (whether those
elements are ones and zeros or the pairings of A-T and C-G).

3. The evidence of alternative splicing - modularity - in the production
of RNA is only surprising if it is assumed that one gene makes only one
protein. Despite continual suggestions to the contrary from both
scientists and cultural theorists, the old "central dogma" (DNA makes RNA
makes protein makes us) is still very much alive. A molecule with more
than one primary function at a time is beyond the scope of conventional
thinking in genetics and genomics.


What challenges do these findings put forth?: The initial genome analyses
make it all the more clear that different models of theorising the genome
and its significance are desperately needed, not only by scientists, but
by computer programmers, activists, artists, cultural theorists,
journalists, and policy makers.

1. Informatic ontogeny: As Susan Oyama long ago theorised (_The Ontogeny
of Information_), the debates between organism and environment are filled
with assumptions about the pre-existence of both, often at the expense of
the productive, generative "interactionisms" that occur within dynamic
organisms and dynamic environments. Genes do not simply already exist as
fixed entities, and then interact with a fluctuating environment.
Information is actually generated within the dynamisms of the
organism-environment system - genetic information thus has an ontogeny, a
self-making property that emerges out of different interactionisms. As
Oyama states, "developmental information itself, in other words, has a
developmental history." Genetic information, then, "neither preexists its
operations nor arises from random order"; it is, to paraphrase Gregory
Bateson, a difference that makes a difference, and therefore "its meaning
is dependent on its actual functioning."

Several questions here: What if genetic "information" does not simply
pre-exist in each cell, waiting still and patiently to be mapped and
annotated? What if a "gene" describes not a specific string of DNA that
codes for the production of a protein, but a particular network of
operative biomolecules embedded within various contingencies, which has a
certain biochemical effect? There is already work being done in this
direction, but it remains marginal with respect to the genome.

2. Combinatorial complexity: The emergence of complexity out of simplicity
is the result of combinatorial variations. John von Neumann's work on the
theory of automata made this point very early on: complexity of spatial
patterning and temporal variation can arise out of the simplest elements,
be they binary bits or DNA molecules. A combinatorial approach relies on
mathematics, stochastic methods, and the like, and therefore also relies
on computers: bioinformatics. This approach thus demands that bioinformatics
be more than merely a tool for reiterating the central dogma (but with
fancy molecular models); it demands that a kind of computer science
develop which takes into account the dynamic, "wet" complexity of the
molecular body.
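
(As a toy illustration of complexity arising from binary bits, here is a
minimal sketch of an elementary cellular automaton in Python. Rule 110 is
a standard textbook example; the grid width, step count, and fixed
boundaries are illustrative assumptions of mine, drawn neither from von
Neumann's own constructions nor from the genome papers.)

    # Complex spatial/temporal patterning from the simplest binary elements.
    RULE = 110
    WIDTH, STEPS = 64, 32

    def step(cells, rule=RULE):
        # Apply an elementary CA rule to a row of 0/1 cells
        # (cells beyond the edges are treated as 0).
        out = []
        for i in range(len(cells)):
            left = cells[i - 1] if i > 0 else 0
            right = cells[i + 1] if i < len(cells) - 1 else 0
            neighborhood = (left << 2) | (cells[i] << 1) | right
            out.append((rule >> neighborhood) & 1)
        return out

    # Start from a single "on" bit and watch structure emerge.
    row = [0] * WIDTH
    row[WIDTH // 2] = 1
    for _ in range(STEPS):
        print("".join("#" if c else "." for c in row))
        row = step(row)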

The intersection of bio-science and computer science is intriguing; but
bioinformatics has a long way to go before it starts functioning as more
than a tool for big science. The interstitial fields of a-life,
bio-computing, and nanotech may offer a way of seeing past bioinformatics
as merely another lab tool.

3. Systems approaches to molecular biology: The smaller number of genes
and their repetitions point to the obvious fact that a complex organism
such as the human somehow operates through apparent simplicity. Gradually
we are seeing the terms "complex" and "complexity" seep into the mainstream
discourse surrounding genetics (both Venter and the New York Times have
referred to the genome in this way). The structure of DNA itself - two
pairings of two molecules, A-T and C-G - reiterates this theme of the
simplicity of beginning conditions and their combinatorial complexity (see
point 2 above). Clearly what is needed is not a narrow, linear,
production-oriented approach (genes make proteins make us), but a wider,
more flexible, more
differentiated, systems approach to molecular biology. This would, among
other things, take into account the extra-genomic components involved in
cellular processes.

Writing about systems theory during the 1940s (_General System Theory_),
Ludwig von Bertalanffy states that "the living cell and organism is not a
static pattern or machine-like structure...It is a continuous
process...The organism's being an 'open system' is now acknowledged as one
of the most fundamental criteria of living systems..."


The initial reports from the human genome once again demonstrate that what
is needed - theoretically, technically, and politically - is something
very very simple: to think in more complex ways.
----------------------------------------------------------------------------------------------



¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬
Eugene Thacker
e: maldoror@eden.rutgers.edu
w: http://gsa.rutgers.edu/maldoror/index.html
Pgrm. in Comparative Literature, Rutgers Univ.
¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬

CURRENT:
"Molecules That Matter: Nanomedicine & the
Advent of Programmable Matter" @ nettime: 
<http://www.nettime.org>.

"Regenerative Medicine: We Can Grow It For
You Wholesale" @ Machine Times (DEAF_00,
V2 book, http://www.v2.nl/deaf)

"The Post-Genomic Era Has Already Happened"
@ Biopolicy Journal <http://bioline.bdt.org.br/py>

"SF, Technoscience, Net.art: The Politics 
of Extrapolation" @ Art Journal 59:3 
<http://www.collegeart.org/caa/publications/AJ/artjournal.html>

"Point-and-Click Biology: Why Programming is
the Future of Biotech" @ MUTE (Issue 17 - archives
at http://www.metamute.com)
¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬
also:
FAKESHOP <http://www.fakeshop.com>
¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬¬




#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: majordomo@bbs.thing.net and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net