Alexander Galloway on Mon, 13 Jan 2003 16:52:28 +0100 (CET)



<nettime> Institutionalization of computer protocols (draft chapter)


Nettimers--I'm preparing a book manuscript on computer protocols and  
how they establish control in the seemingly anarchical Internet. I'm  
hoping that some of you will be able to read my draft chapter below on  
the institutionalization of protocols via standards bodies. Please  
point out my mistakes before i send it to my editor! :-) thanks, -ag

+ + +

In this day and age, technical protocols and standards are established  
by a self-selected oligarchy of scientists consisting largely of  
electrical engineers and computer specialists. Composed of a patchwork  
of many professional bodies, working groups, committees and  
subcommittees, this technocratic elite toils away, mostly voluntarily,  
in an effort to hammer out solutions to advancements in technology.  
Many of them are university professors. Almost all of them either work in industry or have some connection to it.
	Like the philosophy of protocol itself, membership in this  
technocratic ruling class is open. “Anyone with something to contribute  
could come to the party,”[1] wrote one early participant. But, to be  
sure, because of the technical sophistication needed to participate,  
this loose consortium of decision-makers tends to fall into a  
relatively homogenous social class: highly educated, altruistic,  
liberal-minded science professionals from modernized societies around  
the globe.
	And sometimes not so far around the globe. Of the twenty-five or so  
original protocol pioneers, three of them—Vint Cerf, Jon Postel and  
Steve Crocker—all came from a single high school in Los Angeles’s San  
Fernando Valley.[2] Furthermore, during his long tenure as RFC Editor,  
Postel was the single gatekeeper through whom all protocol RFCs passed  
before they could be published.
Internet historians Katie Hafner and Matthew Lyon describe this group  
as “an ad-hocracy of intensely creative, sleep-deprived, idiosyncratic,  
well-meaning computer geniuses.”[3]
	There are few outsiders in this community. Here the specialists run  
the show. To put it another way, while the Internet is used daily by  
vast swaths of diverse communities, the standards-makers at the heart  
of this technology are a small entrenched group of techno-elite peers.
The reasons for this are largely practical. “Most users are not  
interested in the details of Internet protocols,” Vint Cerf observes,  
“they just want the system to work.”[4] Or as former IETF Chair Fred  
Baker reminds us: “The average user doesn't write code. [...] If their  
needs are met, they don't especially care how they were met.”[5]
     So who actually writes these technical protocols, where did they come from, and how are they used in the real world? The protocols live in the fertile amalgamation of computers and software that constitutes the majority of servers, routers and other Internet-enabled machines. A significant portion of these computers were, and still are, Unix-based systems. A significant portion of the software was, and still is,  
largely written in the C or C++ languages. All of these elements have  
enjoyed unique histories as protocological technologies.
	The Unix operating system was developed at Bell Telephone Laboratories  
by Ken Thompson, Dennis Ritchie and others beginning in 1969 and  
continuing development into the early ‘70s. After the operating  
system’s release the lab’s parent company, AT&T, began to license and  
sell Unix as a commercial software product. But, for various legal  
reasons, the company admitted they “had no intention of pursuing  
software as a business.”[6] Unix was indeed sold by AT&T, but simply  
“as is” with no advertising, technical support or other fanfare. This  
contributed to its widespread adoption by universities who found in  
Unix a cheap but useful operating system that could be easily  
experimented with, modified and improved.
	In January 1974, Unix was installed at the University of California at  
Berkeley. Bill Joy and others began developing a spin-off of the  
operating system which became known as BSD (Berkeley Software  
Distribution).
	Unix was particularly successful because of its close connection to  
networking and the adoption of basic interchange standards. “Perhaps  
the most important contribution to the proliferation of Unix was the  
growth of networking,”[7] writes Unix historian Peter Salus. By the  
early ‘80s, the TCP/IP networking suite was included in BSD Unix.
	Unix was designed with openness in mind. The source code—written in C,  
which was also developed during 1971-1973—is easily accessible, meaning  
a higher degree of technical transparency.
The standardization of the C programming language began in 1983 with  
the establishment of an American National Standards Institute (ANSI)  
committee called “X3J11.” The ANSI report was finished in 1989 and  
subsequently accepted as a standard by the international consortium ISO  
in 1990.[8] Starting in 1979, Bjarne Stroustrup developed C++, which added the concept of classes to the original C language. (In fact, Stroustrup’s first nickname for his new language was “C with Classes.”) ANSI began work on standardizing C++ in 1989-1990, and the resulting ANSI/ISO C++ standard was ratified in 1998.
	C++ has been tremendously successful as a language. “The spread was  
world-wide from the beginning,” recalled Stroustrup. “[I]t fit into  
more environments with less trouble than just about anything else.”[9]  
Just like a protocol.
	It is not only computers that experience standardization and mass  
adoption. Over the years many technologies have followed this same  
trajectory. The process of standards creation is, in many ways, simply  
the recognition of technologies that have experienced success in the  
marketplace. One example is the VHS video format developed by JVC  
(with Matsushita), which beat out Sony’s Betamax format in the consumer  
video market. Betamax was considered by some to be a superior technology (an urban myth, claim some engineers) because it stored video in a higher-quality format. But the trade-off was that Betamax tapes tended to be shorter in length. In the late ‘70s, when VHS launched, a VHS tape allowed up to two hours of recording time while a Betamax tape allowed only one. “By mid 1979 VHS was outselling Beta by more than 2 to 1 in the US.”[10] By the time Betamax caught up in length (to three hours) it had already lost its foothold in the market, and VHS would counter by increasing to four hours and later eight.
	Some have suggested that it was the pornography industry, which favored VHS over Betamax, that provided the format with legions of early adopters and proved its long-term viability.[11]
But perhaps the most convincing argument points to JVC’s economic strategy, which included aggressive licensing of the VHS format to competitors. JVC’s behavior was pseudo-protocological: the company licensed the technical specifications for VHS to other vendors, and it immediately established supply chains for tape manufacturing, distribution and retail sales. In the meantime Sony tried to fortify its market position by keeping Betamax to itself. As one analyst writes:
 
Three contingent early differences in strategy were crucial. First,  
Sony decided to proceed without major co-sponsors for its Betamax  
system, while JVC shared VHS with several major competitors. Second,  
the VHS consortium quickly installed a large manufacturing capacity.  
Third, Sony opted for a more compact cassette, while JVC chose a longer  
playing time for VHS, which proved more important to most customers.[12]
 
JVC deliberately sacrificed larger profit margins, keeping prices low and licensing to competitors in order to grow its market share. The rationale was that establishing a standard was the most important thing; as JVC approached that goal, a positive feedback loop would take hold and further beat out the competition.
     The VHS/Betamax story is a good example from the commercial sector of how one format can beat out another and become an industry standard. It is interesting because it shows that protocological behavior (giving out your technology broadly, even if it means giving it to your competitors) often wins out over proprietary behavior. The Internet protocols function in a similar way, to the degree that they have become industry standards not as a result of proprietary market forces but through broad, open initiatives of free exchange and debate. This was not exactly the case with VHS, but the analogy is useful nevertheless.
     This type of corporate squabbling over video formats has since  
been essentially erased from the world stage with the advent of DVD.  
This new format was reached through consensus among industry leaders and  
hence does not suffer from direct competition by any similar technology  
in the way that VHS and Betamax did. Such consensus characterizes the  
large majority of processes in place today around the world for  
determining technical standards.
     Many of today’s technical standards can be attributed to the  
Institute of Electrical and Electronics Engineers, or IEEE (pronounced  
“eye triple e”). In 1963 IEEE was created through the merging of two  
professional societies. They were the American Institute of Electrical  
Engineers (AIEE) founded in New York on May 13, 1884 (by a group which  
included Thomas Edison) and the Institute of Radio Engineers (IRE)  
founded in 1912.[13] Today the IEEE has over 330,000 members in 150  
countries. It is the world’s largest professional society in any field.  
The IEEE works in conjunction with industry to circulate knowledge of  
technical advances, to recognize individual merit through the awarding  
of prizes, and to set technical standards for new technologies. In this  
sense the IEEE is the world’s largest and most important protocological  
society.
	Composed of many chapters, sub-groups and committees, the IEEE’s  
Communications Society is perhaps the most interesting area vis-à-vis  
computer networking. They establish standards in many common areas of  
digital communication including digital subscriber lines (DSLs) and  
wireless telephony.
IEEE standards often become international standards. Examples include  
the “802” series of standards which govern network communications  
protocols. These include standards for Ethernet[14] (the most common  
local area networking protocol in use today), Bluetooth, Wi-Fi, and  
others.
	“The IEEE,” Paul Baran observed, “has been a major factor in the  
development of communications technology.”[15] Indeed Baran’s own  
theories, which eventually would spawn the Internet, were published  
within the IEEE community as well as by his own employer, the RAND Corporation.
     Active within the United States are the National Institute of Standards and Technology (NIST) and the American National Standards Institute (ANSI). The century-old NIST, formerly known as the National  
Bureau of Standards, is a federal agency that develops and promotes  
technological standards. Because they are a federal agency and not a  
professional society, they have no membership per se. They are also  
non-regulatory, meaning that they do not enforce laws or establish  
mandatory standards. Much of their budget goes  
into supporting NIST research laboratories as well as various outreach  
programs.
     ANSI, formerly called the American Standards Association, is  
responsible for aggregating and coordinating the standards creation  
process in the US. They are the private sector counterpart to NIST.  
While they do not create any standards themselves, they are a conduit for accredited organizations in the field who develop technical standards. The accredited standards developers must follow  
certain rules designed to keep the process open and equitable for all  
interested parties. ANSI then verifies that the rules have been  
followed by the developing organization before the proposed standard is  
adopted.
	ANSI is also responsible for articulating a national standards  
strategy for the US. This strategy helps ANSI advocate in the  
international arena on behalf of United States interests. ANSI is the  
only organization that can approve standards as American national  
standards.
     Many of ANSI’s rules for maintaining integrity and quality in the  
standards development process revolve around principles of openness and  
transparency and hence conform with much of what I have already said  
about protocol. ANSI writes that:
 
·      Decisions are reached through consensus among those affected.
·      Participation is open to all affected interests. [...]
·      The process is transparent — information on the process and  
progress is directly available. [...]
·      The process is flexible, allowing the use of different  
methodologies to meet the needs of different technology and product  
sectors.[16]
 
Besides being consensus-driven, open, transparent and flexible, ANSI  
standards are also voluntary, which means that, like NIST, no one is  
bound by law to adopt them. Voluntary adoption in the marketplace is  
the ultimate test of a standard. Standards may disappear with the advent of a superior new technology or simply with the passage of time. Voluntary standards have many advantages. Because industry is not forced to implement a standard, the burden of success lies with the marketplace.
And in fact, proven success in the marketplace generally preexists the  
creation of a standard. The behavior is emergent, not imposed.
     On the international stage several other standards bodies become  
important. The International Telecommunication Union (ITU) focuses on  
radio and telecommunications, including voice telephony, communications  
satellites, data networks, television and, in the old days, the telegraph. Established in 1865, the ITU claims to be the world’s oldest international organization.
	The International Electrotechnical Commission (IEC) prepares and  
publishes international standards in the area of electrical  
technologies including magnetics, electronics and energy production.  
They cover everything from screw threads to quality management systems.  
The IEC is composed of national committees. (The national committee  
representing the US is administered by ANSI.) 
     Another important international organization is ISO, also known as  
the International Organization for Standardization.[17] Like the IEC,  
ISO grew out of the electro-technical field and was formed after World War II to “facilitate the international coordination and unification of industrial standards.”[18] Based in Geneva, ISO is a federation of over 140 national standards bodies, including the American ANSI and the British Standards Institution (BSI), and its goal is to establish  
vendor-neutral technical standards. Like the other international  
bodies, standards adopted by the ISO are recognized worldwide.
     Also like other standards bodies, ISO standards are developed  
through a process of consensus-building. Their standards are based on  
voluntary participation and thus the adoption of ISO standards is  
driven largely by market forces. (As opposed to mandatory standards  
which are implemented in response to a governmental regulatory mandate.)  
Once established, ISO standards can have massive market penetration.  
For example the ISO standard for film speed (100, 200, 400, etc.) is  
used globally by millions of consumers.
     Another ISO standard of far-reaching importance is the Open  
Systems Interconnection (OSI) Reference Model. Developed in 1978, the  
OSI Reference Model is a technique for classifying all networking  
activity into seven abstract layers. Each layer describes a different  
segment of the technology behind networked communication, as described  
in various chapters above.
 
     Layer 7   Application
     Layer 6   Presentation
     Layer 5   Session
     Layer 4   Transport
     Layer 3   Network
     Layer 2   Data link
     Layer 1   Physical
 
This classification helps organize the process of standardization into  
distinct areas of activity, and is relied on heavily by those creating  
standards for the Internet.
	In 1987 the ISO and the IEC recognized that some of their efforts were  
beginning to overlap. They decided to establish an institutional  
framework to help coordinate their efforts and formed a joint committee  
to deal with information technology called the Joint Technical  
Committee 1 (JTC 1). ISO and IEC both participate in JTC 1, as do liaisons from Internet-oriented consortia such as the IETF. ITU members, IEEE members and members of other standards bodies also participate here.
	Individuals may sit on several committees in several different standards bodies, or simply attend as ex officio members, to increase inter-organizational communication and reduce redundant initiatives between the various standards bodies. JTC 1 committees focus on everything from office equipment to computer graphics. One of the newest committees is devoted to biometrics.
	ISO, ANSI, IEEE, and all the other standards bodies are well-established organizations with long histories and formidable bureaucracies. The Internet community, on the other hand, has long been skeptical of such formalities and has spawned a more ragtag, shoot-from-the-hip attitude toward standards creation.[19] I will focus the rest of this  
chapter on those communities and the protocol documents that they  
produce.
     There are four groups that make up the organizational hierarchy in  
charge of Internet standardization. They are the Internet Society, the  
Internet Architecture Board, the Internet Engineering Steering Group,  
and the Internet Engineering Task Force.[20]  
     The Internet Society (ISOC), founded in January 1992, is a  
professional membership society. It is the umbrella organization for  
the other three groups. Its mission is "[t]o assure the open  
development, evolution and use of the Internet for the benefit of all  
people throughout the world."[21] It facilitates the development of  
Internet protocols and standards. ISOC also provides fiscal and legal  
independence for the standards-making process, separating this activity  
from its former US government patronage.
     The Internet Architecture Board (IAB), originally called the  
Internet Activities Board, is a core committee of thirteen nominated by  
and consisting of members of the IETF.[22] The IAB reviews IESG  
appointments, provides oversight of the architecture of network  
protocols, oversees the standards creation process, hears appeals,  
oversees the RFC Editor, and performs other chores. The IETF (as well  
as the Internet Research Task Force which focuses on longer term  
research topics) falls under the auspices of the IAB. The IAB is  
primarily an oversight board, since actually accepted protocols  
generally originate within the IETF (or in smaller design teams).
	Underneath the IAB is the Internet Engineering Steering Group (IESG),  
a committee of the Internet Society that assists and manages the  
technical activities of the IETF. All of the directors of the various  
research areas in the IETF are part of this Steering Group.
	The bedrock of this entire community is the Internet Engineering Task Force (IETF). The IETF is the core area where most protocol initiatives begin. Several thousand people are involved in the IETF, mostly through email lists but also in face-to-face meetings. “The Internet  
Engineering Task Force is,” in their own words, “a loosely  
self-organized group of people who make technical and other  
contributions to the engineering and evolution of the Internet and its  
technologies.”[23] Or elsewhere: “the Internet Engineering Task Force  
(IETF) is an open global community of network designers, operators,  
vendors, and researchers producing technical specifications for the  
evolution of the Internet architecture and the smooth operation of the  
Internet.”[24]
The IETF is best defined in the following RFCs:
 
·      “The Tao of IETF: A Guide for New Attendees of the Internet  
Engineering Task Force” (RFC 1718, FYI 17)
·      “Defining the IETF” (RFC 3233, BCP 58)
·      “IETF Guidelines for Conduct”[25] (RFC 3184, BCP 54)
·      "The Internet Standards Process -- Revision 3" (RFC 2026, BCP 9)
·      "IAB and IESG Selection, Confirmation, and Recall Process:  
Operation of the Nominating and Recall Committees" (RFC 2727, BCP 10)
·      "The Organizations Involved in the IETF Standards Process" (RFC  
2028, BCP 11)
 
These documents describe both how the IETF creates standards and how the community itself is organized and how it behaves.
	The IETF is the least bureaucratic of all the organizations mentioned  
in this chapter. In fact it is not an organization at all, but rather  
an informal community. It does not have strict bylaws or formal  
officers.  It is not a corporation (nonprofit or otherwise) and thus  
has no Board of Directors. It has no binding power as a standards  
creation body and is not ratified by any treaty or charter. It has no  
membership, and its meetings are open to anyone. “Membership” in the  
IETF is simply evaluated through an individual’s participation. If you  
participate via email, or attend meetings, you are a member of the  
IETF. All participants operate as unaffiliated individuals, not as  
representatives of other organizations or vendors.
	The IETF is divided up by topic into various Working Groups. Each  
Working Group[26] focuses on a particular issue or issues and drafts  
documents that are meant to capture the consensus of the group. Like  
the other standards bodies, IETF protocols are voluntary standards.  
There is no technical or legal requirement[27] that anyone actually  
adopt IETF protocols.
     The process of establishing an Internet Standard is gradual,  
deliberate, and negotiated. Any protocol produced by the IETF goes  
through a series of stages, called the “standards track.” The standards  
track exposes the document to extensive peer review, allowing it to  
mature into an RFC memo and eventually an Internet Standard. “The  
process of creating an Internet Standard is straightforward,” they  
write. “A specification undergoes a period of development and several  
iterations of review by the Internet community and revision based upon  
experience, is adopted as a Standard by the appropriate body [...], and  
is published.”[28]
	Preliminary versions of specifications are solicited by the IETF as  
Internet-Draft documents. Anyone may submit an Internet-Draft. They are  
not standards in any way and should not be cited as such nor  
implemented by any vendors. They are works in progress and are subject  
to review and revision. If they are deemed uninteresting or  
unnecessary, they simply disappear after their expiration date of six  
months. They are not RFCs and receive no number.
	If an Internet-Draft survives the necessary revisions and is deemed  
important, it is shown to the IESG and nominated for the standards  
track. If the IESG agrees (and the IAB approves), then the  
specification is handed off to the RFC Editor and put in the queue for  
future publication. The actual stages in the standards track are:
 
1)  Proposed Standard—The formal entry point for all specifications is the Proposed Standard level. This is the beginning of the RFC process.  
The IESG has authority via the RFC Editor to elevate an Internet-Draft  
to this level. While no prior real world implementation is required of  
a Proposed Standard, these specifications are generally expected to be  
fully formulated and implementable.  
2)  Draft Standard—After specifications have been implemented in at  
least two “independent and interoperable” real world applications they  
can be elevated to the level of a Draft Standard. A specification at  
the Draft Standard level must be relatively stable and easy to  
understand. While subtle revisions are normal for Draft Standards, no  
substantive changes are expected after this level.
3)  Standard—Robust specifications with wide implementation and a  
proven track record are elevated to the level of Standard. They are  
considered to be official Internet Standards and are given a new number  
in the “STD” sub-series of the RFCs (but also retain their RFC number).  
The total number of Standards is relatively small.
 
Not all RFCs are standards. Many RFCs are informational, experimental,  
historic, or even humorous[29] in nature. Furthermore not all RFCs are  
full-fledged Standards—they may not be that far along yet.
	In addition to the STD subseries for Internet Standards, there are two  
other RFC subseries that warrant special attention: the Best Current  
Practice Documents (BCP) and informational documents known as FYI.
	Each new protocol specification is drafted in accordance with RFC  
1111, “Request for Comments on Request for Comments: Instructions to  
RFC Authors,” which specifies guidelines, text formatting and  
otherwise, for drafting all RFCs. Likewise, FYI 1 (RFC 1150) titled  
“F.Y.I. on F.Y.I.: Introduction to the F.Y.I. Notes” outlines general  
formatting issues for the FYI series. Other such memos guide the  
composition of Internet-Drafts, as well as STDs and other documents.  
Useful information on drafting Internet standards is also found in RFCs  
2223 and 2360.[30]
	The standards track allows for a high level of due process. Openness,  
transparency and fairness are all virtues of the standards track.  
Extensive public discussion is par for the course.
 
Some of the RFCs are extremely important. RFCs 1122 and 1123 outline  
all the standards that must be followed by any computer that wishes to  
be connected to the Internet. Representing “the consensus of a large  
body of technical experience and wisdom,”[31] these two documents  
outline everything from email and transferring files to the basic  
protocols like IP that actually move data from one place to another.
     Other RFCs go into greater technical detail on a single  
technology. Released in September 1981, RFC 791 and RFC 793 are the two  
crucial documents in the creation of the Internet protocol suite TCP/IP  
as we know it today. In the early ‘70s Robert Kahn of DARPA and Vinton  
Cerf of Stanford University teamed up to create a new protocol for the  
intercommunication of different computer networks. In September 1973  
they presented their ideas at the University of Sussex in Brighton and  
soon afterwards completed writing the paper “A Protocol for Packet  
Network Intercommunication” which would be published in 1974 by the  
IEEE. The RFC Editor Jon Postel and others assisted in the final  
protocol design.[32] Eventually this new protocol was split in 1978  
into a two-part system consisting of TCP and IP. (As mentioned in  
earlier chapters TCP is a reliable protocol which is in charge of  
establishing connections and making sure packets are delivered, while  
IP is a connectionless protocol that is only interested in moving  
packets from one place to another.)
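	A minimal sketch of this division of labor, written in Python against the standard socket interface, might look as follows. The hosts and ports it expects are illustrative assumptions, not anything mandated by RFC 791 or 793, and a UDP socket stands in here for connectionless delivery, since opening a raw IP socket ordinarily requires special privileges:

import socket

def tcp_send(host: str, port: int, data: bytes) -> None:
    # TCP: a connection is negotiated first, and the bytes are then delivered
    # reliably and in order over that connection.
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(data)

def udp_send(host: str, port: int, data: bytes) -> None:
    # UDP (riding directly on connectionless IP): no handshake and no
    # guarantee the datagram ever arrives; it is simply moved along.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(data, (host, port))

In both cases the operating system’s IP layer does the packet-by-packet work invisibly beneath the call.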
     One final technology worth mentioning in the context of protocol  
creation is the World Wide Web. The Web emerged largely from the  
efforts of one man, the British computer scientist Tim Berners-Lee.  
During the process of developing the Web, Berners-Lee wrote both the  
Hypertext Transfer Protocol (HTTP) and the Hypertext Markup Language  
(HTML), which form the core suite of protocols used broadly today by  
servers and browsers to transmit and display web pages. He also created the Web address, the Universal Resource Identifier (URI), of which today’s “URL” is a variant: a simple, direct way of locating any resource on the Web.
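	The three elements can be seen working together in a short sketch like the following (my own example in Python; the URL is an illustrative assumption). The URI names the resource, an HTTP request asks a server for it over a TCP connection, and what comes back is HTML for a browser to render:

import socket
from urllib.parse import urlparse

def simple_get(url: str = "http://example.com/index.html") -> str:
    parts = urlparse(url)                                  # the URI names the resource
    request = ("GET " + (parts.path or "/") + " HTTP/1.0\r\n"
               "Host: " + parts.hostname + "\r\n\r\n")     # the HTTP request asks for it
    with socket.create_connection((parts.hostname, parts.port or 80), timeout=10) as s:
        s.sendall(request.encode("ascii"))
        reply = b""
        while True:                                        # read until the server closes
            chunk = s.recv(4096)
            if not chunk:
                break
            reply += chunk
    return reply.decode("latin-1", errors="replace")       # headers followed by HTML markup

print(simple_get()[:400])
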
     As Tim Berners-Lee explains:
 
The art was to define the few basic, common rules of “protocol” that  
would allow one computer to talk to another, in such a way that when  
all computers everywhere did it, the system would thrive, not break  
down. For the Web, those elements were, in decreasing order of  
importance, universal resource identifiers (URIs), the Hypertext  
Transfer Protocol (HTTP), and the Hypertext Markup Language (HTML).
 
So, like other protocol designers, Berners-Lee sought to create a standard language for interoperation. By adopting his language, computers would be able to exchange files. He continues:
 
What was often difficult for people to understand about the design was  
that there was nothing else beyond URIs, HTTP, and HTML. There was no  
central computer “controlling” the Web, no single network on which  
these protocols worked, not even an organization anywhere that “ran”  
the Web. The Web was not a physical “thing” that existed in a certain  
“place.” It was a “space” in which information could exist.[33]
 
This is also in line with other protocol scientists’ intentions—that  
an info-scape exists on the net with no centralized administration or  
control. (But as I have pointed out, it should not be inferred that a  
lack of centralized control means a lack of control as such.)
	Berners-Lee eventually took his ideas to the IETF and published  
“Universal Resource Identifiers in WWW” (RFC 1630) in 1994. This memo  
describes the correct technique for creating and decoding URIs for use  
on the Web. But, Berners-Lee admitted, “the IETF route didn’t seem to  
be working.”[34] 	
	Instead he established a separate standards group in October 1994  
called the World Wide Web Consortium (W3C). “I wanted the consortium to  
run on an open process like the IETF’s,” Berners-Lee remembers, “but  
one that was quicker and more efficient. [...] Like the IETF, W3C would  
develop open technical specifications. Unlike the IETF, W3C would have  
a small full-time staff to help design and develop the code where  
necessary. Like industry consortia, W3C would represent the power and  
authority of millions of developers, researchers and users. And like  
its member research institutions, it would leverage the most recent  
advances in information technology.”[35]
	The W3C creates the specifications for Web technologies, and releases  
“recommendations” and other technical reports. The design philosophies  
driving the W3C are similar to those at the IETF and other standards  
bodies. They promote a distributed (their word is “decentralized”)  
architecture, they promote interoperability in and among different  
protocols and different end systems, and so on.
In many ways the core protocols of the Internet had their development  
heyday in the ‘80s. But Web protocols are experiencing explosive growth  
today.
	The growth is due to an evolution of the concept of the Web into what  
Berners-Lee calls the Semantic Web. In the Semantic Web, information is  
not simply interconnected on the Internet using links and graphical  
markup—what he calls “a space in which information could permanently  
exist and be referred to”[36]—but it is enriched using descriptive  
protocols that say what the information actually is.
	For example, the word “Galloway” is meaningless to a machine. It is  
just a piece of information that says nothing about what it is or what  
it means. But wrapped inside a descriptive protocol it can be  
effectively parsed: “<surname>Galloway</surname>.” Now the machine  
knows that Galloway is a surname. The word has been enriched with  
semantic value. If one makes the descriptive protocols more complex,  
then one is able to say more complex things about information, i.e.  
that Galloway is my surname, and my given name is Alexander, and so on.  
The Semantic Web is simply the process of adding extra meta-layers on  
top of information so that it can be parsed according to its semantic  
value.
     Why is this significant? Before this, protocol had very little to  
do with meaningful information. Protocol does not interface with  
content, with semantic value. It is, as I say above, against  
interpretation. But with Berners-Lee comes a new strain of protocol:  
protocol that cares about meaning. This is what he means by a Semantic  
Web. It is, as he says, “machine-understandable information.”
     Does the Semantic Web, then, contradict my principle above that  
protocol is against interpretation? I’m not so sure. Protocols can  
certainly say things about their contents. A checksum does this. A  
file-size variable does this. But do they actually know the meaning of their contents? It is thus a matter of debate whether descriptive  
protocols actually add intelligence to information, or if they are  
simply subjective descriptions (originally written by a human) that  
computers mimic but understand little about. Berners-Lee himself  
stresses that the Semantic Web is not an artificial intelligence  
machine.[37] He calls it “well-defined” data, not interpreted data—and  
in reality those are two very different things. I promised in the  
Introduction to skip all epistemological questions, and will leave this  
one to be debated by my betters.
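	To make the checksum example concrete, here is a sketch in Python, in the style of the 16-bit ones’ complement checksum described in RFC 1071 and used by IP and TCP, of a protocol element that says something about its contents, namely whether they arrived intact, while remaining entirely indifferent to what they mean:

def internet_checksum(data: bytes) -> int:
    # Sum the data as 16-bit words with end-around carry, then take the
    # ones' complement of the total (cf. RFC 1071).
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold the carry back in
    return ~total & 0xFFFF

# The checksum is identical whether or not anyone "understands" the sentence.
print(hex(internet_checksum(b"Galloway is a surname.")))
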
	As this survey of protocological institutionalization shows, the  
primary source materials for any protocological analysis of Internet  
standards are the Request for Comments (RFC) memos. They began  
circulation in 1969 with Steve Crocker’s RFC “Host Software” and have  
documented all developments in protocol since.[38] “It was a modest and  
entirely forgettable memo,” Crocker remembers, “but it has significance  
because it was part of a broad initiative whose impact is still with us  
today.”[39]
	While generally opposed to the center-periphery model of  
communication—what some call the “downstream paradigm”[40]—Internet  
protocols describe all manner of computer-mediated communication over  
networks. There are RFCs for transporting messages from one place to  
another, and others for making sure they get there in one piece. There  
are RFCs for email, for webpages, for news wires, and for graphic  
design.
	Some advertise distributed architectures (like IP routing), some  
hierarchical (like the DNS). Yet they all create the conditions for  
technological innovation based on a goal of standardization and  
organization. It is a peculiar type of anti-federalism through  
universalism—strange as it sounds—whereby universal techniques are  
levied in such a way as ultimately to revert much decision-making back  
to the local level.
	But during this process many local differences are elided in favor of  
universal consistencies. For example, protocols like HTML were  
specifically designed to allow for radical deviation in screen  
resolution, browser type and so on. Yet HTML (along with protocol as a whole) also acts as a strict standardizing mechanism that homogenizes these deviations under the umbrella of a unilateral standard.
	Ironically, then, the Internet protocols which help engender a  
distributed system of organization are themselves underpinned by  
undistributed, bureaucratic institutions—be they entities like ICANN or  
technologies like DNS.
	Thus it is an oversight for theorists like Lawrence Lessig, despite  
his strengths, to suggest that the origin of Internet communication was  
one of total freedom and lack of control.[41] Instead, it is clear to  
me that the exact opposite of freedom, that is control, has been the  
outcome of the last forty years of developments in networked  
communications. The founding principle of the net is control, not  
freedom. Control has existed from the beginning.
	Perhaps it is a different type of control than we are used to seeing. It is a type of control based in openness, inclusion, universalism, and flexibility. It is control born of high degrees of technical  
organization (protocol), not this or that limitation on individual  
freedom or decision making (fascism).
	Thus it is with complete sincerity that Web inventor Tim Berners-Lee  
writes:
 
I had (and still have) a dream that the web could be less of a  
television channel and more of an interactive sea of shared knowledge.  
I imagine it immersing us as a warm, friendly environment made of the  
things we and our friends have seen, heard, believe or have figured  
out.[42]
 
The irony is, of course, that in order to achieve this social utopia  
computer scientists like Berners-Lee had to develop the most highly  
controlled and extensive mass media yet known. Protocol gives us the  
ability to build a “warm, friendly” technological space. But it becomes  
warm and friendly through technical standardization, agreement,  
organized implementation, broad (sometimes universal) adoption, and  
directed participation.
     I stated in the introduction that protocol is based on a  
contradiction between two opposing machines, one machine that radically  
distributes control into autonomous locales, and another that focuses  
control into rigidly defined hierarchies. This chapter illustrates this  
reality in full detail. The generative contradiction that lies at the  
very heart of protocol is that in order to be politically progressive,  
protocol must be partially reactionary.
	To put it another way, in order for protocol to enable radically  
distributed communications between autonomous entities, it must employ  
a strategy of universalization, and of homogeneity. It must be  
anti-diversity. It must promote standardization in order to enable  
openness. It must organize peer groups into bureaucracies like the IETF  
in order to create free technologies.
	To be sure, the two partners in this delicate two-step often exist in  
separate arenas. As protocol pioneer Bob Braden puts it, “There are  
several vital kinds of heterogeneity.”[43] That is to say, one sector  
can be standardized while another is heterogeneous. The core Internet  
protocols can be highly controlled while the actual administration of  
the net can be highly uncontrolled. Or, DNS can be arranged in a strict  
hierarchy while users’ actual experience of the net can be highly  
distributed.
	In short, control in distributed networks is not monolithic. It  
proceeds in multiple, parallel, contradictory and often unpredictable  
ways. It is a complex of interrelated currents and counter-currents.
	Perhaps I can term the institutional frameworks mentioned in this  
chapter a type of tactical standardization, in which certain short-term goals are necessary in order to realize one’s longer-term goals. Standardization is the politically reactionary tactic that enables radical openness. Or, to give an example of this analogy in technical terms: the Domain Name System, with its hierarchical architecture and bureaucratic governance, is the politically reactionary tactic that  
enables the truly distributed and open architecture of the Internet  
Protocol. It is, as Barthes put it, our “Operation Margarine.” And this  
is the generative contradiction that fuels the net.

------------------------------------------------------------------------
[1] Jake Feinler, “30 Years of RFCs,” RFC 2555, April 7, 1999.

[2] See Vint Cerf’s memorial to Jon Postel’s life and work in “I  
Remember IANA,” RFC 2468, October 1998.

[3] Katie Hafner and Matthew Lyon, Where Wizards Stay up Late: The  
Origins of the Internet (New York: Touchstone, 1996), p. 145. For  
biographies of two dozen protocol pioneers see Gary Malkin’s “Who’s Who  
in the Internet: Biographies of IAB, IESG and IRSG Members,” RFC 1336,  
FYI 9, May 1992.

[4] Vinton Cerf, personal correspondence, September 23, 2002.

[5] Fred Baker, personal correspondence,  December 12, 2002.

[6] AT&T’s Otis Wilson, quoted in Peter Salus, A Quarter Century  
of Unix (New York: Addison-Wesley, 1994), p. 59.

[7] Salus, A Quarter Century of Unix, p. 2.

[8] See Dennis Ritchie, “The Development of the C Programming Language”  
in Thomas Bergin and Richard Gibson, eds., History of Programming  
Languages II (New York: ACM, 1996), p. 681.

[9] Bjarne Stroustrup, “Transcript of Presentation” in Bergin & Gibson,  
p. 761.

[10] S. J. Liebowitz and Stephen E. Margolis, “Path Dependence, Lock-In  
and History,” Journal of Law, Economics and Organization, April 1995.

[11] If not VHS then the VCR in general was aided greatly by the porn  
industry. David Morton writes that “many industry analysts credited the  
sales of erotic video tapes as one of the chief factors in the VCR’s  
early success. They took the place of adult movie theaters, but also  
could be purchased in areas where they were legal and viewed at home.”  
See Morton’s A History of Electronic Entertainment since 1945,  
http://www.ieee.org/organizations/history_center/research_guides/entertainment, p. 56.

[12] Douglas Puffert, “Path Dependence in Economic Theory,”  
http://www.vwl.uni-muenchen.de/ls_komlos/pathe.pdf, p. 5.

[13] IEEE 2000 Annual Report (IEEE, 2000), p. 2.

[14] IEEE prefers to avoid associating their standards with  
trademarked, commercial, or otherwise proprietary technologies. Hence  
the IEEE definition eschews the word “Ethernet” which is associated  
with Xerox PARC where it was named. The 1985 IEEE standard for Ethernet  
is instead titled “IEEE 802.3 Carrier Sense Multiple Access with  
Collision Detection (CSMA/CD) Access Method and Physical Layer  
Specifications.”

[15] Paul Baran, Electrical Engineer, an oral history conducted in 1999  
by David Hochfelder, IEEE History Center, Rutgers University, New  
Brunswick, NJ, USA.

[16] ANSI, “National Standards Strategy for the United States,”  
http://www.ansi.org, emphasis in original.

[17] The name ISO is in fact not an acronym, but derives from a Greek  
word for “equal.” This way it avoids the problem of translating the  
organization’s name into different languages, which would produce  
different acronyms. The name ISO, then, is a type of semantic standard  
in itself.

[18] See http://www.iso.ch for more history of the ISO.

[19] The IETF takes pride in having such an ethos. Jeanette Hofmann  
writes: “The IETF has traditionally understood itself as an elite in  
the technical development of communication networks. Gestures of  
superiority and a dim view of other standardisation committees are  
matched by unmistakable impatience with incompetence in their own  
ranks.” See “Government Technologies and Techniques of Government:  
Politics on the Net,” http://duplox.wz-berlin.de/final/jeanette.htm

[20] Another important organization to mention is the Internet  
Corporation for Assigned Names and Numbers (ICANN). ICANN is a  
nonprofit organization which has control over the Internet’s domain  
name system. Its Board of Directors has included Vinton Cerf,  
co-inventor of the Internet Protocol and founder of the Internet  
Society, and author Esther Dyson. “It is ICANN’s objective to operate  
as an open, transparent, and consensus-based body that is broadly  
representative of the diverse stakeholder communities of the global  
Internet” (see “ICANN Fact Sheet,” http://www.icann.org). Despite this  
rosy mission statement, ICANN has been the target of intense criticism  
in recent years. It is for many the central lightning rod for problems  
around issues of Internet governance. A close look at ICANN is  
unfortunately outside the scope of this book, but for an excellent  
examination of the organization see Milton Mueller’s Ruling the Root  
(Cambridge: MIT, 2002).

[21] http://www.isoc.org.

[22] For a detailed description of the IAB see Brian Carpenter,  
“Charter of the Internet Architecture Board (IAB),” RFC 2850, BCP 39,  
May 2000.

[23] Gary Malkin, “The Tao of IETF: A Guide for New Attendees of the  
Internet Engineering Task Force,” RFC 1718, FYI 17, October 1993.

[24] Paul Hoffman and Scott Bradner, “Defining the IETF,” RFC 3233, BCP  
58, February 2002.

[25] This RFC is an interesting one because of the social relations it  
endorses within the IETF. Liberal, democratic values are the norm.  
“Intimidation or ad hominem attack” is to be avoided in IETF debates.   
Instead IETFers are encouraged to “think globally” and treat their  
fellow colleagues “with respect as persons.” Somewhat ironically this  
document also specifies that “English is the de facto language of the  
IETF.” See Susan Harris, “IETF Guidelines for Conduct,” RFC 3184, BCP  
54, October 2001.

[26] For more information on IETF Working Groups see Scott Bradner,  
“IETF Working Group Guidelines and Procedures,” RFC 2418, BCP 25,  
September 1998.

[27] That said, there are protocols that are given the status level of  
“required” for certain contexts. For example the Internet Protocol is a  
required protocol for anyone wishing to connect to the Internet. Other  
protocols may be given status levels of “recommended” or “elective” depending on how necessary they are for implementing a specific technology. The “required” status level should not, however, be confused  
with mandatory standards. These have legal implications and are  
enforced by regulatory agencies.

[28] Scott Bradner, "The Internet Standards Process -- Revision 3," RFC  
2026, BCP 9, October 1996.

[29] Most RFCs published on April 1st are suspect. Take for example RFC  
1149, "A Standard for the Transmission of IP Datagrams on Avian  
Carriers” (David Waitzman, April 1990), which describes how to send IP  
datagrams via carrier pigeon, lauding their “intrinsic collision  
avoidance system.” Thanks to Jonah Brucker-Cohen for first bringing  
this RFC to my attention. Brucker-Cohen  himself has devised a new  
protocol called “H2O/IP” for the transmission of IP datagrams using  
modulated streams of water. Consider also “The Infinite Monkey Protocol  
Suite (IMPS)” described in RFC 2795 (SteQven [sic] Christey, April  
2000) that describes “a protocol suite which supports an infinite  
number of monkeys that sit at an infinite number of typewriters in  
order to determine when they have either produced the entire works of  
William Shakespeare or a good television show.” Shakespeare would  
probably appreciate “SONET to Sonnet Translation” (April 1994, RFC  
1605) which uses fourteen line decasyllabic verse to optimize data  
transmission over Synchronous Optical Network (SONET). There is also  
the self-explanatory “Hyper Text Coffee Pot Control Protocol  
(HTCPCP/1.0)” (Larry Masinter, RFC 2324, April 1998), clearly required  
reading for any under-slept webmaster. Other examples of ridiculous  
technical standards include Eryk Salvaggio’s “Slowest Modem” which uses  
the US Postal Service to send data via diskette at a data transfer rate  
of only 0.002438095238095238095238 kb/s. He specifies that “[a]ll html  
links on the diskette must be set up as a href=’mailing address’ (where  
‘mailing address’ is, in fact, a mailing address)” (“Free Art Games #5,  
6 and 7,” Rhizome, September 26, 2000), and Cory Arcangel’s “Total  
Asshole” file compression system that, in fact, enlarges a file  
exponentially in size when it is compressed.

[30] See Jon Postel and Joyce Reynolds, "Instructions to RFC Authors,"  
RFC 2223, October 1997, and Gregor Scott, “Guide for Internet Standards  
Writers,” RFC 2360, BCP 22, June 1998.

[31] Robert Braden, “Requirements for Internet Hosts -- Communication  
Layers,” RFC 1122, STD 3, October 1989.

[32] Milton Mueller, Ruling the Root (Cambridge: MIT, 2002), p. 76.

[33] Tim Berners-Lee, Weaving the Web (New York: HarperCollins, 1999),  
p. 36.

[34] Ibid., p. 71.

[35] Ibid., pp. 92, 94.

[36] Ibid., p. 18.

[37] Tim Berners-Lee, “What the Semantic Web can represent,”  
http://www.w3.org/DesignIssues/RDFnot.html.

[38] One should not tie Crocker’s memo to the beginning of protocol per  
se. That honor should probably go to Paul Baran’s 1964 RAND publication  
“On Distributed Communications.” In many ways it serves as the origin  
text for the RFCs that would follow. Although it came before the RFCs and was not connected to them in any way, Baran’s memo essentially  
fulfilled the same function, that is, to outline for Baran’s peers a  
broad technological standard for digital communication over networks.
      Other RFC-like documents have also been important in the  
technical development of networking. The Internet Experiment Notes  
(IENs), published from 1977 to 1982 and edited by RFC editor Jon  
Postel, addressed issues connected to the then-fledgling Internet  
before merging with the RFC series. Vint Cerf also cites the ARPA  
Satellite System Notes and the PRNET Notes on packet radio (see RFC  
2555). There exists also the MIL-STD series maintained by the  
Department of Defense. Some of the MIL-STDs overlap with Internet  
Standards covered in the RFC series.

[39] Steve Crocker, “30 Years of RFCs,” RFC 2555, April 7, 1999.

[40] See Nelson Minar and Marc Hedlund, “A Network of Peers:  
Peer-to-Peer Models Through the History of the Internet,” in Andy Oram,  
Ed., Peer-to-Peer: Harnessing the Power of Disruptive Technologies  
(Sebastopol, CA: O’Reilly, 2001), p. 10.

[41] In his first book, Code and Other Laws of Cyberspace (New York:  
Basic Books, 1999), Lessig sets up a before/after scenario for  
cyberspace. The “before” refers to what he calls the “promise of  
freedom” (6). The “after” is more ominous. Although as yet unfixed,  
this future is threatened by “an architecture that perfects control”  
(6). He continues this before/after narrative in The Future of Ideas:  
The Fate of the Commons in a Connected World (New York: Random House,  
2001) where he assumes that the network, in its nascent form, was what  
he calls free, that is, characterized by “an inability to control”  
(147). Yet “[t]his architecture is now changing” (239), Lessig claims.  
We are about to “embrace an architecture of control” (268) put in place  
by new commercial and legal concerns.
	Lessig’s discourse is always about a process of becoming, not of  
always having been. It is certainly correct for him to note that new  
capitalistic and juridical mandates are sculpting network  
communications in ugly new ways. But what is lacking from Lessig’s  
work, then, is the recognition that control is endemic to all  
distributed networks that are governed by protocol. Control was there  
from day one. It was not imported later by the corporations and courts.  
In fact distributed networks must establish a system of control, which  
I call protocol, in order to function properly. In this sense, computer  
networks are and always have been the exact opposite of Lessig’s  
“inability to control.” 
	While Lessig and I clearly come to very different conclusions, I  
attribute this largely to the fact that we have different objects of  
study. His are largely issues of governance and commerce while mine are  
technical and formal issues. My criticism of Lessig is less to deride  
his contribution, which is inspiring, than to point out our different  
approaches.

[42] Cited in Jeremie Miller, “Jabber,” in Oram, Ed., Peer-to-Peer, p.  
81.

[43] Bob Braden, personal correspondence, December 25, 2002.

#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: majordomo@bbs.thing.net and "info nettime-l" in the msg body
#  archive: http://www.nettime.org contact: nettime@bbs.thing.net