Alexander Galloway on Sun, 12 Jan 2003 20:20:47 +0100 (CET)



<nettime> Tactical Media & Conflicting Diagrams (draft chapter)


Nettimers--I'm preparing a book manuscript on computer protocols and 
how they establish control in the seemingly anarchical Internet. I'm 
hoping that some of you will be able to read my draft chapter below on 
tactical media which tries to show how there are many interesting flaws 
in the protocological system of control. Please point out my mistakes 
before I send it to my editor! :-) thanks, -ag

+ + +

"The Internet is like the Titanic. It is an instrument which performs 
extraordinarily well but which contains its own catastrophe."[1]
—Paul Virilio

Like many interesting social movements that may manifest themselves in 
a variety of ways, tactical media has an orthodox definition and a more 
general one. The orthodoxy comes from the new tech-savvy social 
movements taking place in and around the Western world and associated
with media luminaries such as Geert Lovink, Ricardo Dominguez (with the 
Electronic Disturbance Theater) and Critical Art Ensemble (CAE). 
Tactical media is the term given to political uses of both new and old 
technologies, such as the organization of virtual sit-ins, campaigns 
for more democratic access to the Internet, or even the creation of new 
software products not aimed at the commercial market.
	"Tactical Media are what happens when the cheap 'do it yourself' 
media, made possible by the revolution in consumer electronics and 
expanded forms of distribution (from public access cable to the 
internet) are exploited by groups and individuals who feel aggrieved by 
or excluded from the wider culture,” write tactical media gurus Geert 
Lovink and David Garcia. “Tactical media are media of crisis, criticism 
and opposition."[2] Thus, tactical media means the bottom-up struggle 
of the networks against the power centers. (And of course the networks 
against the power centers who have recently reinvented themselves as 
networks!)
	But there is also a more general way of thinking about tactical 
phenomena within the media. That is to say, there are certain tactical 
effects that often only leave traces of their successes to be 
discovered later by the ecologists of the media. This might include 
more than would normally fit under the orthodox definition. Case in 
point: computer viruses. In a very bland sense they are politically 
bankrupt and certainly no friend of the tactical media practitioner. 
But in a more general sense they speak volumes on the nature of 
network-based conflict.
	For example, computer viruses are incredibly effective at identifying
anti-protocological technologies. They infect proprietary systems and
propagate through the homogeneity contained within them.
	Show me a computer virus and I’ll show you proprietary software with a 
market monopoly.
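	The claim can be made concrete with a toy simulation (a sketch only;
the platform names, host counts and infection rule below are invented
for illustration). A virus that can only execute on one platform will
saturate a software monoculture, but stalls at the boundaries of a
heterogeneous population:

    import random

    def simulate(platforms, n_hosts=1000, contacts=20000, seed=0):
        # Each host runs one randomly assigned platform; the virus can
        # only execute on the platform of its first victim.
        random.seed(seed)
        running = [random.choice(platforms) for _ in range(n_hosts)]
        infected = [False] * n_hosts
        infected[0] = True                 # patient zero
        target = running[0]
        for _ in range(contacts):
            a, b = random.randrange(n_hosts), random.randrange(n_hosts)
            # infection passes only between hosts of the same platform
            if infected[a] and running[b] == target:
                infected[b] = True
        return sum(infected) / n_hosts

    print(simulate(["monoculture"]))       # approaches 1.0
    print(simulate(["w", "x", "y", "z"]))  # capped near one quarter

	The point is structural: the reach of the infection is determined by
the homogeneity of the population, not by any property of the viral
code itself.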
	I will not repeat here the excellent attention given to the subject by 
CAE, Lovink and others. Instead in this chapter I would like to examine 
tactical media as those phenomena that are able to exploit flaws in 
protocological and proprietary command and control, not to destroy 
technology, but to sculpt protocol and make it better suited to 
people’s real desires. “Resistances are no longer marginal, but active 
in the center of a society that opens up in networks,”[3] Hardt & Negri 
remind us. Likewise, techno-resistance is not outside of protocol, but 
is at its center. Tactical media propel protocol into a state of 
hypertrophy, pushing it further, in better and more interesting ways. 

Computer Viruses

While a few articles on viruses and worms appeared in the 1970s and 
beginning of the ‘80s,[4] Frederick Cohen’s work in the early eighties 
is cited as the first sustained examination of computer viruses. He 
approached this topic from a scientific viewpoint, measuring infection 
rates, classifying different types of viruses, and so on.

"The record for the smallest virus is a Unix “sh” command script. In 
the command interpreter of Unix, you can write a virus that takes only 
about 8 characters. So, once you are logged into a Unix system, you can 
type a 8 character command, and before too long, the virus will spread. 
That’s quite small, but it turns out that with 8 characters, the virus 
can’t do anything but reproduce. To get a virus that does interesting 
damage, you need around 25 or 30 characters. If you want a virus that 
evolves, replicates, and does damage, you need about 4 or 5 lines."[5]

Cohen first presented his ideas on computer viruses to a seminar in 
1983. His paper “Computer Viruses—Theory and Experiments” was published 
in 1984, and his Ph.D. dissertation titled “Computer Viruses” 
(University of Southern California) in 1986.
     Cohen defines a computer virus as “a program that can ‘infect’ 
other programs by modifying them to include a, possibly evolved, 
version of itself.”[6] Other experts agree: “a virus is a 
self-replicating code segment which must be attached to a host 
executable.”[7] Variants in the field of malicious code include worms 
and Trojan Horses. A worm, like a virus, is a self-replicating program 
but one that requires no host to propagate. A Trojan Horse is a program 
which appears to be doing something useful, but also executes some 
piece of undesirable code hidden from the user.
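	The three definitions can be sketched schematically. The following is
an inert toy model, not working malicious code; all names are
illustrative:

    SEGMENT = "#--viral-segment--"

    def virus_infect(host_program: str) -> str:
        """A virus must attach to a host executable: return the host
        with a copy of the viral segment spliced in."""
        return host_program + "\n" + SEGMENT

    def worm_propagate(reachable_machines: list) -> list:
        """A worm needs no host program: it copies itself whole to
        every machine it can reach."""
        return [SEGMENT for _ in reachable_machines]

    def trojan_horse():
        """A Trojan Horse appears to do something useful while also
        executing undesirable code hidden from the user."""
        print("Assessing your risk profile...")  # the apparent function
        hidden_payload()                         # the concealed one

    def hidden_payload():
        pass  # inert here; in the wild, the undesirable routine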
	In the literature viruses are almost exclusively characterized as 
hostile or harmful. They are often referred to completely in the 
negative, as in “anti-virus software” or virus prevention, or as one 
author calls it, a “high-tech disease.” They are considered nearly 
exclusively in the context of detection, interception, identification, 
and removal.
     Why is this the case? Viral marketing, emergent behavior, 
self-replicating systems—these concepts are all the rage at the turn of 
the millennium. Yet computer viruses gain from none of these positive 
associations. They are thought of as a plague used by terrorists to 
wreak havoc on the network.
     So why did computer viruses become so closely connected with the 
viral metaphor in biology? Why think of self-replicating programs as a 
“virus” and not simply a parasitic nuisance, or a proper life form? 
Even the father of computer virus science, Cohen, thought of them as a 
form of artificial life[8] and recognized the limitations of the 
biological analogy. “[C]onsider a biological disease that is 100% 
infectious, spreads whenever animals communicate, kills all infected 
animals instantly at a given moment, and has no detectable side effect 
until that moment,”[9] wrote Cohen, identifying the ultimate inaccuracy 
of the analogy. How did self-replicating programs become viruses?
	For example, if viruses had emerged a decade later in the late 1990s,
it is likely that they would have acquired a completely different
socio-cultural meaning. They would more likely be thought of as
a distributed computing system (like SETI@home) or an artificial life 
experiment (like Tom Ray’s Tierra), or an artwork (like Mark Daggett’s 
email worm, vcards), or as a nuisance (spam), or as a potential 
guerilla marketing tool (adware)—not a biological infestation.
	Computer viruses acquired their current discursive position because of 
a unique transformation that transpired in the mid-1980s around the 
perception of technology. In fact several phenomena, including computer 
hacking, acquired a distinctly negative characterization during this 
period of history because of the intense struggle raging behind the
scenes between proprietary and protocological camps.
	My hypothesis is this: early on, computer viruses were identified with 
the AIDS epidemic. It is explicitly referenced in much of the 
literature on viruses, making AIDS both the primary biological metaphor 
and primary social anxiety informing the early discourse on computer 
viruses. In that early mode, the virus itself was the epidemic. Later, 
the discourse on viruses turns toward weaponization and hence 
terrorism. Here, the virus author is the epidemic. Today the moral 
evaluation of viruses is generally eclipsed by the search for their 
authors, who are prosecuted as criminals and often terrorists. The 
broad viral epidemic itself is less important than the criminal mind
that brings it into existence (or the flaws in proprietary software 
that allow it to exist in the first place).
	Thus, by the late 1990s viruses are the visible indices of a search 
for evil-doers within technology, not the immaterial, anxious fear they 
evoked a decade earlier under the AIDS crisis.
	Computer viruses appeared at a moment in history when the integrity
and security of bodies, both human and technological, was considered 
extremely important. Social anxieties surrounding both AIDS and the war 
on drugs testify to this. The AIDS epidemic in particular is referenced 
in much of the literature on viruses.[10] This makes sense because of 
the broad social crisis created by AIDS in the mid to late 1980s (and 
beyond). “In part,” writes Ralf Burger, “it seems as though a hysteria 
is spreading among computer users which nearly equals the uncertainty 
over the AIDS epidemic.”[11] A good example of this discursive pairing 
of AIDS and computer viruses is seen in the February 1, 1988 issue of 
Newsweek. Here an article titled “Is Your Computer Infected?,” which 
reports on computer viruses affecting hospitals and other institutions, 
is paired side-by-side with a medical article on AIDS.
	Consider two examples of this evolving threat paradigm. The Jerusalem 
virus[12] was first uncovered in December 1987 at the Hebrew University
of Jerusalem in Israel. “It was soon found that the virus was extremely
widespread, mainly in Jerusalem, but also in other parts of the 
country, especially in the Haifa area,”[13] wrote professor Yisrael 
Radai. Two students, Yuval Rakavy and Omri Mann, wrote a 
counter-program to seek out and delete the virus.
	Mystery surrounds the origins of the virus. As Frederick Cohen writes, 
terrorists are suspected of authoring this virus because it was timed
to destroy data precisely on the first Friday the 13th it encountered,
which fell on May 13, 1988 and coincided with the fortieth anniversary
of the last day a Palestinian state existed.[14] (A
subsequent outbreak also happened on Friday, January 13, 1989 in
Britain.) The Edmonton Journal called it the work of a “saboteur.” This 
same opinion was voiced by The New York Times, which reported that the
Jerusalem virus “was apparently intended as a weapon of political 
protest.”[15] Yet Radai claims that in subsequent, off-the-record 
correspondence, the Times reporter admitted that he was “too quick to 
assume too much about this virus, its author, and its intent.”[16]
In the end it is of little consequence whether or not the virus was 
written by the PLO. What matters is that this unique viral threat was 
menacing enough to influence the judgment of the media (and also Cohen) 
to believe, and perpetuate the belief, that viruses have a unique 
relationship to terrorists. Words like “nightmare,” “destroy,” 
“terrorist,” and “havoc” pervade the Times report.
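	The trigger logic itself is trivial to reconstruct. Below is a hedged
sketch of the reported date check only, with no payload; Python's
calendar arithmetic confirms the two dates cited above:

    import datetime

    def is_friday_the_13th(d: datetime.date) -> bool:
        # weekday(): Monday is 0, so Friday is 4
        return d.day == 13 and d.weekday() == 4

    print(is_friday_the_13th(datetime.date(1988, 5, 13)))  # True
    print(is_friday_the_13th(datetime.date(1989, 1, 13)))  # True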
	Second, consider the “AIDS Information Introductory Diskette Version 
2.0” disk. On December 11, 1989, the PC Cyborg Corporation mailed
approximately 10,000[17] computer diskettes to two direct mail lists 
compiled from the subscribers to PC Business World and names from the 
World Health Organization’s 1988 conference on AIDS held in 
Stockholm.[18] The disk presented an informational questionnaire to the
user and offered an assessment of the user’s risk level for AIDS based
on their reported behavior.
	The disk also acted as a Trojan Horse containing a virus. The virus
damaged file names on the computer and filled the disk to capacity. The
motives of the virus author are uncertain in this case, although it is 
thought to be a rather ineffective form of extortion as users of the 
disk were required to mail payment of $189 (for a limited license) or 
$378 (for a lifetime license) to a post office box in Panama.
	The virus author was eventually discovered to be an American named
Joseph Popp, who was extradited to Britain in February 1991 to face
charges; the case was eventually dismissed when he was judged
psychiatrically unfit to stand trial.[19] He was later found guilty in
absentia by an Italian court.
	Other AIDS-related incidents include the early Apple II virus
“Cyberaids” and the AIDS virus from 1989, which displayed “Your computer
now has AIDS” in large letters and was followed a year later by the AIDS
II virus, which performed a similar infraction.
	So here are two threat paradigms, terrorism and AIDS, which 
characterize the changing discursive position of computer viruses from 
the 1980s to ‘90s. While the AIDS paradigm dominated in the late ‘80s, 
by the late ‘90s computer viruses would become weaponized and more 
closely resemble the terrorism paradigm.
	The AIDS epidemic in the 1980s had a very specific discursive diagram. 
With AIDS, the victims became known, but the epidemic itself was 
unknown. There emerged a broad, immaterial social anxiety. The 
biological became dangerous and dirty. All sex acts became potentially 
deviant acts and therefore suspect.
	But with terrorism there exists a different discursive diagram. With
terror the victims are rarely known. Instead knowledge is focused on 
the threat itself—the strike happened here, at this time, with this 
weapon, by this group, and so on.
	If AIDS is an invisible horror, then terror is an irrational horror. 
It confesses political demands one minute, then erases them the next
(while the disease has no political demands). The State attacks terror 
with all available manpower, while it systematically ignores AIDS. Each 
shows a different exploitable flaw in protocological management and 
control.
	While the shift in threat paradigms happened in the late 1980s for 
computer viruses, the transformation was long in coming. Consider the 
following three dates.
	In the 1960s, in places like Bell Labs,[20] Xerox PARC, and MIT,
scientists were known to play a game called Core War. In this game two
self-replicating programs were released into a system. The programs 
battled over system resources and eventually one side came out on top. 
Whoever could write the best program would win.
	These engineers were not virus writers, nor were they terrorists or 
criminals. Just the opposite, they prized creativity, technical 
innovation and exploration. Core War was a fun way to generate such 
intellectual activity. The practice existed for several years 
unnoticed. “In college, before video games, we would amuse ourselves by 
posing programming exercises,” said Ken Thompson, co-developer of the 
UNIX operating system, in 1983. “One of the favorites was to write the 
shortest self-reproducing program.”[21] The engineer A. K. Dewdney 
recounts an early story at, we assume, Xerox PARC about a 
self-duplicating program called Creeper which infested the computer 
system and had to be brought under control by another program designed 
to neutralize it, Reaper.[22] Dewdney brought to life this battle 
scenario using his own gaming language called Redcode.
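	Both exercises are easy to restage in miniature. Thompson's "shortest
self-reproducing program" survives today as the quine; in Python one
canonical form runs to two lines:

    s = 's = %r\nprint(s %% s)'
    print(s % s)

	And Dewdney's battle scenario can be caricatured without Redcode. In
the sketch below (a drastically simplified stand-in, with arbitrary
memory size and stride), two processes share a circular memory, take
turns executing one instruction each, and die upon executing an empty
(DAT) cell. A marching "imp" faces a looping "bomber," both classic
Core War figures:

    SIZE = 64
    core = [("DAT",)] * SIZE    # empty memory kills whoever executes it
    core[0] = ("MOV", 1)        # imp: copy self one cell ahead
    core[32] = ("BOMB", 7)      # bomber: stamp DATs outward

    bombs = 0

    def step(pc):
        """Execute the instruction at pc; return the next pc, or None
        if the process died."""
        global bombs
        op = core[pc % SIZE]
        if op[0] == "DAT":
            return None                     # executed an empty cell
        if op[0] == "MOV":
            core[(pc + op[1]) % SIZE] = op  # lay a copy down, advance
            return pc + 1
        if op[0] == "BOMB":
            bombs += 1
            core[(pc + bombs * op[1]) % SIZE] = ("DAT",)
            return pc                       # loop in place, bombing

    pcs = {"imp": 0, "bomber": 32}
    for turn in range(256):
        for name in list(pcs):
            new_pc = step(pcs[name])
            if new_pc is None:
                print(name, "dies on turn", turn)
                del pcs[name]
            else:
                pcs[name] = new_pc
        if len(pcs) < 2:
            break
    print("survivor:", *pcs)

	In this configuration the bomber eventually stamps a DAT onto the cell
the imp is about to execute. Whoever's logic best exploits the shared
memory wins the system's resources, which is all Core War is about.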
	Jump ahead to 1988. At 5:01:59pm[23] on November 2, Robert Morris, a
23-year-old graduate student at Cornell University and son of a 
prominent computer security engineer at the National Computer Security 
Center (a division of the NSA), released an email worm into the 
ARPANET. This self-replicating program entered a network of
approximately 60,000[24] computers in the course of a few hours,
infecting between 2,500 and 6,000 of them. While it is notoriously
difficult to calculate such
figures, some speculations put the damage caused by Morris’s worm at 
over $10,000,000.
	On July 26, 1989 he was indicted under the Computer Fraud and Abuse 
Act of 1986. After pleading innocent, in the spring of 1990 he was 
convicted and sentenced to three years probation, fined $10,000 and 
told to perform 400 hours of community service. Cornell expelled him, 
calling it “a juvenile act,”[25] while Morris’s own father labeled it
simply “the work of a bored graduate student.”[26]
	While the media cited Morris’s worm as “the largest assault ever on 
the nation’s computers,”[27] the program was largely considered a sort 
of massive blunder, a chain reaction that spiraled out of control 
through negligence. As Bruce Sterling reports: “Morris said that his 
ingenious ‘worm’ program was meant to explore the Internet harmlessly, 
but due to bad programming, the worm replicated out of control.”[28] 
This was a problem better solved by the geeks, not the FBI, thought 
many at the time. “I was scared,” admitted Morris, “it seemed like the 
worm was going out of control.”[29]
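	The post-mortems locate the "bad programming" in the worm's
reinfection logic: to defeat administrators who might fake an "already
infected" reply, the worm is commonly reported to have reinfected a
host anyway roughly one time in seven, so copies piled up until
machines slowed to a crawl. The following toy rendering of that
arithmetic treats the host count, probe count and one-in-seven figure
as stated assumptions:

    import random
    random.seed(1)

    REINFECT = 1 / 7        # the commonly reported figure
    hosts = [0] * 500       # worm copies running on each host
    hosts[0] = 1            # the initial release

    for _ in range(20000):  # each event: a running copy probes a host
        target = random.randrange(len(hosts))
        if hosts[target] == 0 or random.random() < REINFECT:
            hosts[target] += 1  # infect, or reinfect despite the check

    print("hosts infected:", sum(1 for h in hosts if h))
    print("copies on the busiest host:", max(hosts))

	It was this accumulating load, not any destructive payload, that
brought the machines down.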
	Morris’s peers in the scientific community considered his prosecution 
unnecessary. As reported in UNIX Today!, only a quarter of those polled 
thought Morris should go to prison, and, as the magazine testified, 
“most of those who said ‘Yes’ to the prison question added something 
like, ‘only a minimum security prison—you know, like the Watergate 
people vacationed at.’”[30] Thus while not unnoticed, Morris’s worm was 
characterized as a mistake, not an overt criminal act. Likewise his
punishment was relatively lenient for someone convicted of such a 
massive infraction.
	Ten years later, in 1999, after what was characterized as the largest
Internet manhunt ever, a New Jersey resident named David Smith was
prosecuted for creating Melissa, a macro virus that spreads using the 
Microsoft Outlook and Word programs. It reportedly infected over 
100,000 computers worldwide and caused $80 million in damage (as 
assessed by the number of hours computer administrators took to clean 
up the virus). While Melissa was generally admitted to have been more 
of a nuisance than a real threat, Smith was treated as a hard criminal 
not a blundering geek. He pleaded guilty and faced up to ten years in
prison and a $150,000 fine.
	With Smith, then, self-replicating programs flipped 180 degrees. The 
virus is now indicative of criminal wrongdoing. It has moved through 
its biological phase, characterized by the associations with AIDS, and
effectively been weaponized. Moreover criminal blame is identified with 
the virus author himself who is thought of not simply as a criminal but 
as a cyber-terrorist. A self-replicating program is no longer the 
hallmark of technical exploration, as it was in the early days, nor is 
it (nor was it ever) a canary in the coal mine warning of technical 
flaws in proprietary software, nor is it even viral; it is a weapon of 
mass destruction. From curious geek to cyber-terrorist.

[...]

Conflicting Diagrams

"Netwar is about the Zapatistas more than the Fidelistas, Hamas more 
than the Palestine Liberation Organization (PLO), the American 
Christian Patriot movement more than the Ku Klux Klan, and the Asian 
Triads more than the Cosa Nostra."[61]
—John Arquilla & David Ronfeldt

Throughout the years new diagrams (also called graphs or organizational 
designs) have appeared as solutions or threats to existing ones. 
Bureaucracy is a diagram. Hierarchy is one too, so is peer-to-peer. 
Designs come and go, useful assets at one historical moment,
then disappearing, or perhaps fading only to reemerge later as useful 
again. The Cold War was synonymous with a specific military 
diagram—bilateral symmetry, mutual assured destruction (MAD), 
massiveness, might, containment, deterrence, negotiation; the war 
against drugs has a different diagram—multiplicity, specificity, law 
and criminality, personal fear, public awareness.
	This book is largely about one specific diagram, or organizational 
design, called distribution, and its relationship to a
larger historical transformation involving digital computers and 
ultimately the control mechanism called protocol.[62]
	In this diagrammatic narrative it is possible to pick sides and
describe one diagram as the protagonist and another as the antagonist. 
Thus the rhizome is thought to be the solution to the tree,[63] the 
wildcat strike the solution to the boss's control, Toyotism[64] the 
solution to institutional bureaucracy, and so on. Alternately, 
terrorism is thought to be the only real threat to state power, the 
homeless punk-rocker a threat to sedentary domesticity, the guerrilla a 
threat to the war machine, the temporary autonomous zone a threat to 
hegemonic culture, and so on.
	This type of conflict is in fact a conflict between different social 
structures, for the terrorist threatens not only through fear and 
violence, but specifically through the use of a cellular organizational 
structure, a distributed network of secretive combatants, rather than a 
centralized organizational structure employed by the police and other 
state institutions. Terrorism is a sign that we are in a transitional 
moment in history. (Could there ever be anything else?) It signals that 
historical actors are not in a relationship of equilibrium, but instead 
are grossly mismatched.
	It is often observed that, due largely to the original comments of 
networking pioneer Paul Baran, the Internet was invented to avoid 
certain vulnerabilities of nuclear attack. In Baran’s original vision, 
the organizational design of the Internet involved a high degree of 
redundancy, such that destruction of a part of the network would not 
threaten the viability of the network as a whole. After World War II, 
strategists called for moving industrial targets outside of urban cores 
in a direct response to fears of nuclear attack. Peter Galison calls 
this dispersion the “constant vigilance against the re-creation of new 
centers.”[65] These are the same centers that Baran derided as an 
“Achilles Heel”[66] and what he longed to purge from the 
telecommunications network.
	“City by city, country by country, the bomb helped drive 
dispersion,”[67] Galison continues, highlighting the power of the 
A-bomb to drive the push towards distribution in urban planning. 
Whereas the destruction of a fleet of Abrams tanks would certainly 
impinge upon Army battlefield maneuvers, the destruction of a rack of 
Cisco routers would do little to slow down broader network 
communications. Internet traffic would simply find a new route, thus 
circumventing the downed machines.[68]
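	The difference is easy to demonstrate with two toy topologies, both
invented for illustration: knock a node out of a mesh and a
breadth-first search still finds a route; knock the hub out of a star
and nothing does.

    from collections import deque

    def reachable(links, src, dst, dead=()):
        """Breadth-first search from src to dst, ignoring dead nodes."""
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for nxt in links.get(node, []):
                if nxt not in seen and nxt not in dead:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    # a distributed mesh: every node has a redundant path
    mesh = {"A": ["B", "C"], "B": ["A", "D"],
            "C": ["A", "D"], "D": ["B", "C"]}
    # a centralized star: every path crosses the hub
    star = {"A": ["HUB"], "D": ["HUB"], "HUB": ["A", "D"]}

    print(reachable(mesh, "A", "D", dead=("B",)))    # True: reroute via C
    print(reachable(star, "A", "D", dead=("HUB",)))  # False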
	(In this way, destruction must be performed absolutely, or not at all. 
“The only way to stop Gnutella,” comments WiredPlanet CEO Thomas Hale 
on the popular file sharing protocol, “is to turn off the 
Internet.”[69] And this is shown above in our examination of protocol’s 
high penalties levied against deviation. One is completely compatible 
with a protocol, or not at all.)
	Thus the Internet can survive attacks not because it is stronger than 
the opposition, but precisely because it is weaker. The Internet has a 
different diagram than nuclear attack; it is in a different shape. And 
that new shape happens to be immune to the older one.
	All the words used to describe the World Trade Center after the 
attacks of September 11, 2001 revealed its design vulnerabilities 
vis-à-vis terrorists: it was a tower, a center, an icon, a pillar, a 
hub. Conversely, terrorists are always described with a different 
vocabulary: they are cellular, networked, modular, and nimble. Groups 
like Al-Qaeda specifically promote a modular, distributed structure 
based on small autonomous groups. They write that new recruits “should 
not know one another,” and that training sessions should be limited to 
“7 - 10 individuals.” They describe their security strategies as 
“creative” and “flexible.”[70]
	This is indicative of two conflicting diagrams.
	The first diagram is based on the strategic massing of power and 
control, while the second diagram is based on the distribution of power 
into small, autonomous enclaves. "The architecture of the World Trade 
Center owed more to the centralized layout of Versailles than the 
dispersed architecture of the Internet," wrote Jon Ippolito after the 
attacks. "New York's resilience derives from the interconnections it 
fosters among its vibrant and heterogeneous inhabitants. It is in 
decentralized structures that promote such communal networks, rather 
than in reinforced steel, that we will find the architecture of 
survival."[71] In the past the war against terrorism resembled the war 
in Viet Nam, or the war against drugs—conflicts between a central power 
and an elusive network. It did not resemble the Gulf War, or World War 
II, or other conflicts between states.
	"As an environment for military conflict," the New York Times 
reported, "Afghanistan is virtually impervious[72] to American power." 
(In addition to the stymied US attempt to rout Al-Qaeda post-September
11th is the failed Soviet occupation in the years following the 1978 
coup, a perfect example of grossly mismatched organizational designs.) 
Today being “impervious” to American power is no small feat.
	The category shift that defines the difference between state power and 
guerilla force shows that through a new diagram, guerillas, terrorists
and the like can gain a foothold against their opposition.
	But as Ippolito points out, this should be our category shift too, for
anti-terror survival strategies will arise not from a renewed massing 
of power on the American side, but precisely from a distributed (or to 
use his less precise term, decentralized) diagram. Heterogeneity, 
distribution, communalism are all features of this new diagrammatic
solution.
	In short, the current global crisis is one between centralized, 
hierarchical powers and distributed, horizontal networks. John Arquilla 
and David Ronfeldt, two researchers at the RAND Corporation who have 
written extensively on the hierarchy-network conflict, offer a few 
propositions for thinking about future policy:
 
·      Hierarchies have a difficult time fighting networks. [...]
·      It takes networks to fight networks. [...]
·      Whoever masters the network form first and best will gain major 
advantages.[73]
 
These comments are incredibly helpful for thinking about tactical media
and the role of today’s political actor. They give subcultures reason to
rethink their strategies vis-à-vis the mainstream. They force us to
rethink the techniques of the terrorist. They also raise many questions,
including what happens when “the powers that be” actually evolve into 
networked power (which is already the case in many sectors).
	In recent decades the primary conflict between organizational designs 
has been between hierarchies and networks, an asymmetrical war. 
However, in the future we are likely to experience a general shift 
downward into a new bilateral organizational conflict—networks fighting 
networks.
	“Bureaucracy lies at the root of our military weakness,” wrote 
advocates of military reform in the mid eighties. “The bureaucratic 
model is inherently contradictory to the nature of war, and no military 
that is a bureaucracy can produce military excellence.”[74]
	While the change to a new unbureaucratic military is on the drawing 
board, the future network-centric military—an unsettling notion to say 
the least—is still a ways away. Nevertheless, networks of control have
invaded our lives in other ways, in the form of ubiquitous
surveillance, biological informatization, and other techniques discussed
in the earlier chapter on power.
	The dilemma, then, is that while hierarchy and centralization are 
almost certainly politically tainted due to their historical 
association with fascism and other abuses, networks are both bad and 
good. Drug cartels, terror groups, black hat hacker crews and other 
denizens of the underworld all take advantage of networked 
organizational designs because they offer effective mobility and 
disguise. But more and more we witness the advent of networked 
organizational design in corporate management techniques, manufacturing 
supply chains, advertisement campaigns and other novelties of the 
ruling class, as well as all the familiar grass-roots activist groups 
who have long used network structures to their advantage.
	In a sense, networks have been vilified simply because the terrorists, 
pirates and anarchists made them notorious, not because of any negative 
quality of the organizational diagram itself. In fact, positive 
liberatory movements have been capitalizing on network design protocols
for decades if not centuries. The section on the rhizome in A Thousand 
Plateaus is one of literature’s most poignant adorations of the network 
diagram.
 
It was the goal of this chapter to illuminate a few of these networked 
designs and how they manifest themselves as tactical effects within the 
media’s various network-based struggles. As the section on viruses (or 
the previous chapter on hackers) showed, these struggles can be lost. 
Or as in the case of the end-to-end design strategy of the Internet’s 
core protocols, or cyberfeminism, or the free software movement, they 
can be won (won in specific places at specific times).
	These tactical effects are allegorical indices that point out the 
flaws in protocological and proprietary command and control.
	The goal is not to destroy technology in some neo-Luddite delusion, 
but to push it into a state of hypertrophy, further than it is meant to 
go. Then, in its injured, sore and unguarded condition, technology may 
be sculpted anew into something better, something in closer agreement 
with the real wants and desires of its users. This is the goal of 
tactical media.

------------------------------------------------------------------------
[1] Paul Virilio, "Infowar," in Druckrey (ed.), Ars Electronica, p. 
334. One assumes that the italicized "Titanic" may refer to James 
Cameron’s 1997 film as well as the fated passenger ship, thereby 
offering an interesting double meaning that suggests, as others have 
aptly argued, that films, understood as texts like any other, contain 
their own undoing.

[2] David Garcia and Geert Lovink, “The ABC of Tactical Media,” 
Nettime, May 16, 1997.

[3] Hardt & Negri, Empire, p. 25.

[4] Ralf Burger cites two articles, “Use of Virus Functions to
Provide a Virtual APL Interpreter Under User Control” (ACM, 1974), and
John Shoch and Jon Hupp’s “The Worm Programs—Early Experience with a
Distributed Computation” (1982), which was first circulated in 1980 in
abstract form as “Notes on the ‘Worm’ programs” (IEN 159, May 1980). 
See Ralf Burger, Computer Viruses (Grand Rapids: Abacus, 1988), p. 19.

[5] Frederick Cohen, A Short Course on Computer Viruses (New York: John 
Wiley & Sons, 1994), p. 38.

[6] Ibid., p. 2.

[7] W. Timothy Polk, et al., Anti-Virus Tools and Techniques for 
Computer Systems (Park Ridge, NJ: Noyes Data Corporation, 1995), p. 4.

[8] Indeed pioneering viral scientist Fred Cohen is the most notable 
exception to this rule. He recognized the existence of “benevolent 
viruses” that perform maintenance, facilitate networked applications, 
or simply live in “peaceful coexistence” with us: “I personally believe 
that reproducing programs are living beings in the information 
environment.” See Frederick Cohen, A Short Course on Computer Viruses 
(New York: John Wiley & Sons, 1994), pp. 159-160, 15-21, and Frederick 
Cohen, It’s Alive! (New York: John Wiley & Sons, 1994). The author Ralf 
Burger is also not completely pessimistic, instructing us that when 
“used properly, [viruses] may bring about a new generation of 
self-modifying computer operating systems. ... Those who wish to 
examine and experiment with computer viruses on an experimental level 
will quickly discover what fantastic programming possibilities they 
offer.” See Ralf Burger, Computer Viruses (Grand Rapids: Abacus, 1988), 
p. 2.

[9] Fred Cohen, “Implications of Computer Viruses and Current Methods 
of Defense,” in Peter Denning, Ed., Computers Under Attack: Intruders, 
Worms, and Viruses (New York: ACM, 1990), p. 383.

[10] See Philip Fites, et al., The Computer Virus Crisis (New York: Van 
Nostrand Reinhold, 1992), pp. 28, 54, 105-117, 161-2; Ralf Burger, 
Computer Viruses (Grand Rapids: Abacus, 1988), p. 1; Charles Cresson 
Wood, “The Human Immune System as an Information Systems Security 
Reference Model” in Lance Hoffman, ed., Rogue Programs (New York: Van 
Nostrand Reinhold, 1990), pp. 56-57. In addition, the AIDS Info Disk, a 
Trojan Horse, is covered in almost every book on the history of 
computer viruses.

[11] Burger, Computer Viruses, p. 1.

[12] Also called the “Israeli” or “PLO” virus.

[13] Yisrael Radai, “The Israeli PC Virus,” Computers and Security 8:2, 
1989, p. 112.

[14] Cohen, A Short Course on Computer Viruses, p. 45.

“Computer Systems Under Siege, Here and Abroad,” The New York
Times, January 31, 1988, section 3, p. 8.

[16] Cited in Radai, “The Israeli PC Virus,” p. 113.

[17] Frederick Cohen reports the total number between 20,000 and 30,000 
diskettes. See Cohen, A Short Course on Computer Viruses, p. 50. Jan 
Hruska puts the number at 20,000. See Jan Hruska, Computer Viruses and 
Anti-Virus Warfare (New York: Ellis Horwood, 1992), p. 20.

[18] Philip Fites, et al., The Computer Virus Crisis, p. 46.

[19] Hruska, Computer Viruses and Anti-Virus Warfare, p. 22.

[20] A. K. Dewdney identifies a game called Darwin invented by M. 
Douglas McIlroy, head of the Computing Techniques Research Department 
at Bell Labs, and a program called Worm created by John Shoch (and Jon 
Hupp) of Xerox Palo Alto Research Center. See A. K. Dewdney, “Computer 
Recreations,” Scientific American, March, 1984, p. 22. For more on 
Shoch and Hupp see “The Worm Programs,” Communications of the ACM, 
March 1982. Many attribute the worm concept to the science fiction 
novel The Shockwave Rider by John Brunner.

[21] Ken Thompson, “Reflections on Trusting Trust,” in Denning, Ed., 
Computers Under Attack, p. 98.

[22] Dewdney, “Computer Recreations,” p. 14.

[23] Jon A. Rochlis and Mark W. Eichin, “With Microscope and Tweezers: 
The Worm from MIT’s Perspective,” in Peter Denning, Ed., Computers 
Under Attack, p. 202. The precise time comes from analyzing the 
computer logs at Cornell University. Others suspect that the attack 
originated from a remote login at an MIT computer.

[24] Frederick Cohen, A Short Course on Computer Viruses (New York: 
John Wiley & Sons, 1994), p. 49. The figure of 60,000 is also used by 
Eugene Spafford who attributes it to the October 1988 IETF estimate for 
the total number of computers online at that time. See Eugene Spafford, 
“The Internet Worm Incident,” in Hoffman, ed., Rogue Programs, p. 203. 
Peter Denning’s numbers are different. He writes that “[o]ver an 
eight-hour period it invaded between 2,500 and 3,000 VAX and Sun 
computers.” See Peter Denning, ed., Computers Under Attack: Intruders, 
Worms, and Viruses (New York: ACM, 1990), p. 191. This worm is 
generally called the RTM Worm after the initials of its author, or 
simply the Internet Worm.

[25] From a Cornell University report cited in Ted Eisenberg, et al., 
“The Cornell Commission: On Morris and the Worm,” in Peter Denning, 
ed., Computers Under Attack, p. 253.

[26] Cited in The New York Times, November 5, 1988, p. A1.

[27] The New York Times, November 4, 1988, p. A1.

[28] Bruce Sterling, The Hacker Crackdown (New York: Bantam, 1992), pp. 
88-9.

[29] Cited in The New York Times, January 19, 1990, p. A19.

[30] “Morris’s Peers Return Verdicts: A Sampling of Opinion Concerning 
The Fate of the Internet Worm,” in Hoffman, ed., Rogue Programs, p. 104.

[...]

[61] John Arquilla & David Ronfeldt, Networks and Netwars: The Future 
of Terror, Crime, and Militancy (Santa Monica: RAND, 2001), p. 6. A 
similar litany from 1996 reads: “netwar is about Hamas more than the 
PLO, Mexico’s Zapatistas more than Cuba’s Fidelistas, the Christian 
Identity Movement more than the Ku Klux Klan, the Asian Triads more 
than the Sicilian Mafia, and Chicago’s Gangsta Disciples more than the 
Al Capone Gang” (see John Arquilla & David Ronfeldt, The Advent of 
Netwar [Santa Monica: RAND, 1996], p. 5). Arquilla & Ronfeldt coined 
the term netwar which they define as “an emerging mode of conflict (and 
crime) at societal levels, short of traditional military warfare, in 
which the protagonists use network forms of organization and related 
doctrines, strategies, and technologies attuned to the information age” 
(see Arquilla & Ronfeldt, Networks and Netwars, p. 6).

[62] This is not a monolithic control mechanism, of course. “The 
Internet is a large machine,” writes Andreas Broeckmann. “This machine 
has its own, heterogeneous topology, it is fractured and repetitive, 
incomplete, expanding and contracting” (“Networked Agencies,” 
http://www.v2.nl/~andreas/texts/1998/networkedagency-en.html).

This is Deleuze & Guattari’s realization in A Thousand Plateaus.

[64] For an interesting description of Toyotism, see Manuel Castells, 
The Rise of the Network Society (Oxford: Blackwell, 1996), pp. 157-160.

[65] Peter Galison, “War against the Center,” Grey Room 4, Summer 2001, 
p. 20.

[66] Baran writes: “The weakest spot in assuring a second strike 
capability was in the lack of reliable communications. At the time we 
didn’t know how to build a communication system that could survive even 
collateral damage by enemy weapons. RAND determined through computer 
simulations that the AT&T Long Lines telephone system, that carried 
essentially all the Nation’s military communications, would be cut 
apart by relatively minor physical damage. While essentially all of the 
links and the nodes of the telephone system would survive, a few 
critical points of this very highly centralized analog telephone system 
would be destroyed by collateral damage alone by missiles directed at 
air bases and collapse like a house of card.” See Paul Baran, 
Electrical Engineer, an oral history conducted in 1999 by David 
Hochfelder, IEEE History Center, Rutgers University, New Brunswick, NJ, 
USA.

[67] Galison, “War against the Center,” p. 25.

[68] New Yorker writer Peter Boyer reports that DARPA is in fact 
rethinking this opposition by designing a distributed tank, “a tank 
whose principle components, such as guns and sensors, are mounted on 
separate vehicles that would be controlled remotely by a soldier in yet 
another command vehicle,” (see “A Different War,” The New Yorker, July 
1, 2002, p. 61). This is what the military calls Future Combat Systems 
(FCS), an initiative developed by DARPA for the US Army. It is 
described as “flexible” and “network-centric.” I am grateful to Jason 
Spingarn-Koff for bringing FCS to my attention.

[69] Cited in Gene Kan, “Gnutella,” in Andy Oram, ed., Peer-to-Peer:
Harnessing the Power of Disruptive Technologies (Sebastopol: O’Reilly, 
2001), p. 99.

[70] See The al-Qaeda Documents: Vol. 1 (Alexandria, VA: Tempest, 
2002), pp. 50, 62.

[71] Jon Ippolito, "Don't Blame the Internet," Washington Post, 
September 29, 2001, p. A27.

[72] Wanting instead American invulnerability to Soviet nuclear power, 
in 1964 Paul Baran writes that “we can still design systems in which 
system destruction requires the enemy to pay the price of destroying n 
of n [communication] stations. If n is made sufficiently large, it can 
be shown that highly survivable system structures can be built—even in 
the thermonuclear era.” See Paul Baran, On Distributed Communications: 
1. Introduction to Distributed Communications Networks (Santa Monica, 
CA: RAND, 1964), p. 16. Baran’s point here is that destruction of a 
network is an all or nothing game. One must destroy all nodes, not 
simply take out a few key hubs. But the opposite is not true. A network 
needs only to destroy a single hub within a hierarchical power to score 
a dramatic triumph. Thus, Baran’s advice to the American military was 
to become network-like. And once it did the nuclear threat was no 
longer a catastrophic threat to communications and mobility (but 
remains, of course, a catastrophic threat to human life, material 
resources, and so on).

[73] Arquilla & Ronfeldt, Networks and Netwars, p. 15, emphasis removed 
from original. Contrast this line of thinking with that of Secretary of 
Defense Robert McNamara in the nineteen sixties, whom Senator Gary Hart 
described as advocating “more centralized management in the Pentagon.” 
See Gary Hart & William Lind, America Can Win (Bethesda, MD: Adler & 
Adler, 1986), p. 14. Or contrast it in the current milieu with the 
Powell Doctrine, named after four-star general and Secretary of State 
Colin Powell, which states that any American military action should 
have the following: clearly stated objectives; an exit strategy; the 
ability to use overwhelming force; and that vital strategic interests 
must be at stake. This type of thinking is more in line with a 
modernist, Clausewitzian theory of military strategy, that force will 
be overcome by greater force, that conflict should be a goal-oriented 
act rather than one of continuance, that conflict is waged by state 
actors, and so on.

[74] Gary Hart & William Lind, America Can Win (Bethesda, MD: Adler & 
Adler, 1986), pp. 240, 249.

#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: majordomo@bbs.thing.net and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net