NeTtImE'S_kreative_kommons on Wed, 2 Mar 2005 23:41:00 +0100 (CET)


<nettime> double-plus-unfree digest [byfield, elloi]

t byfield <>
     Re: Michael Wolff: 'Free Information is Now the Topic in the Media
Morlock Elloi <>
     Re: Internet2: Orchestrating the End of the Internet?

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Date: Wed, 2 Mar 2005 14:30:28 -0500
From: t byfield <>
Subject: Re: <nettime> Michael Wolff: 'Free Information is Now the Topic in the Media'
(Tue 03/01/05 at 06:08 PM -0500):

> michael wolff essentially takes a sledgehammer to the intellectual
> property / digital rights management culture affecting the
> mediosphere in this article located at

It all sounds strangely familiar. Jamie Boyle's contribution from
the ~same fortnight is more interesting -- and I don't think he's
using similar phraseology because it's "now the topic in the media." 
Wolff could use a refresher course in immanent critique, imo.


< >

Public information wants to be free

By James Boyle

Published: February 24 2005 16:27 
Last updated: February 24 2005 16:27

The United States has much to learn from Europe about information
policy. The scattered US approach to data privacy, for example,
produces random islands of privacy protection in a sea of potential
vulnerability. Until recently, your video rental records were better
protected than your medical records. Europe, by contrast, has tried to
establish a holistic framework: a much more effective approach. But
there are places where the lessons should run the other way. Take
publicly generated data, the huge and hugely important flow of
information produced by government-funded activities -- from ordnance
survey maps and weather data, to state-produced texts, traffic studies
and scientific information. How is this flow of information
distributed? The norm turns out to be very different in the US and in
Europe.

On one side of the Atlantic, state produced data flows are frequently
viewed as potential revenue sources. They are copyrighted or protected
by database rights. The departments which produce the data often
attempt to make a profit from user-fees, or at least recover their
entire operating costs. It is heresy to suggest that the taxpayer has
already paid for the production of this data and should not have to do
so again. The other side of the Atlantic practices a benign form of
information socialism. By law, any text produced by the central
government is free from copyright and passes immediately into the
public domain. Unoriginal compilations of fact -- public or private --
may not be owned. As for government data, the basic norm is that it
should be available at the cost of reproduction alone. It is easy to
guess which is which. Surely, the United States is the profit and
property-obsessed realm, Europe the place where the state takes pride
in providing data as a public service? No, actually it is the other
way around.

Take weather data. The United States makes complete weather data
available to anyone at the cost of reproduction. If the superb
government websites and data feeds aren't enough, for the price of a
box of blank DVDs you can have the entire history of weather records
across the continental US. European countries, by contrast, typically
claim government copyright over weather data and often require the
payment of substantial fees. Which approach is better? If I had to
suggest one article on this subject it would be the magisterial study
by Peter Weiss called "Borders in Cyberspace," published by the
National Academies of Science. Weiss suggests that the US approach
generates far more social wealth. True, the information is initially
provided for free, but a thriving private weather industry has sprung
up which takes the publicly funded data as its raw material and then
adds value to it. The US weather risk management industry, for
example, is ten times bigger than the European one, employing more
people, producing more valuable products, generating more social
wealth. Another study estimates that Europe invests EUR9.5bn in
weather data and gets approximately EUR68bn back in economic value --
in everything from more efficient farming and construction decisions,
to better holiday planning -- a 7-fold multiplier. The United States,
by contrast invests twice as much -- EUR19bn -- but gets back a return
of EUR750bn, a 39-fold multiplier. Other studies suggest similar
patterns in areas ranging from geo-spatial data to traffic patterns
and agriculture. "Free" information flow is better at priming the pump
of economic activity.
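[The multipliers quoted above are simply economic return divided by public
investment; a quick sketch of that arithmetic, using the EUR-billion figures
given in the column, makes the comparison explicit:]

```python
# Figures (EUR bn) as quoted in the column's cited study.
# multiplier = economic value returned / public investment in weather data
figures = {
    "Europe": {"investment": 9.5, "value": 68.0},
    "US": {"investment": 19.0, "value": 750.0},
}

for region, f in figures.items():
    multiplier = f["value"] / f["investment"]
    print(f"{region}: EUR{f['investment']}bn in, EUR{f['value']}bn out "
          f"-> {multiplier:.0f}-fold multiplier")
```

This reproduces the 7-fold and 39-fold figures: the US invests twice as
much but recoups roughly eleven times the economic value.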

Some readers may not thrill to this way of looking at things because
it smacks of private corporations getting a "free ride" on the public
purse -- social wealth be damned. But the benefits of open data
policies go further. Every year the monsoon season kills hundreds and
causes massive property damage in South East Asia. This year, one set
of monsoon rains alone killed 660 people in India and left 4.5 million
homeless. Researchers seeking to predict the monsoon sought complete
weather records from the US and from Europe so as to generate a model
based on global weather patterns. The US data was easily and cheaply
available at the cost of reproduction. The researchers could not
afford to pay the price asked by the European weather services,
precluding the "ensemble" analysis they sought to do. Weiss asks
rhetorically "What is the economic and social harm to over 1 billion
people from hampered research?" In the wake of the outpouring of
sympathy for the tsunami victims in the same region, this example
seems somehow even more tragic. Will the pattern be repeated with
seismographic, cartographic and satellite data? One hopes not.

The European attitude may be changing. Competition policy has already
been a powerful force pushing countries to rethink their attitudes to
government data. The European Directive on the re-use of public sector
information takes strides in the right direction, as do several
national initiatives. Unfortunately, though, most of these follow a
disappointing pattern. An initially strong draft is watered down and
the utterly crucial question of whether data must be provided at the
marginal cost of reproduction is fudged or avoided. This is a shame. I
have argued in these pages for evidence-based information policy. I
claimed in my last column that Europe's database laws have failed that
test. Sadly, up until now, its treatment of public sector data has
too. Is there a single explanation for these errors? That will be a
subject I take up in columns to come.

     The writer is William Neal Reynolds Professor of Law at Duke Law
     School, a board member of Creative Commons and the co-founder of the
     Center for the Study of the Public Domain

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Date: Wed, 2 Mar 2005 11:30:26 -0800 (PST)
From: Morlock Elloi <>
Subject: Re: <nettime> Internet2: Orchestrating the End of the Internet?

> It seems to me that a necessary (but not sufficient) condition for
> solving this problem is to use the tactic Richard Stallman came up
> with in 1984: make free content so people don't need unfree content. 
> Ignore Hollywood.  Use Creative Commons licences.  Create alternative
> funding models, as the free software movement has done.  Break out of
> the self-defeating spiral of self-reference.

There are two problems with this:


In software, releasing the *quality* code for free happens only when it's
likely it will be monetized via support contracts, promotion of the
corporation releasing it, or promotion of individuals to
paid-lecture-grade celebrity.

In media content this likelihood of monetizing is much lower. Also, any
content which is not purely textual requires investment beyond a computer,
and distribution of any video/live images requires expensive bandwidth.


'Value' of the content for the masses *is* created mostly by publishing labels.
You may have a genius in your neighbourhood who publishes his stuff for free,
but outside friends and family hardly anyone will notice. If (s)he, however,
gets picked up by a label, and millions get invested in promotion and
distribution channels, then such content becomes 'good' by sheer force of
expensive advertising. Just ask what people think of an author before and
after he becomes a label's selectee and the notion that everyone likes him
is manufactured.

So if the average bozo is incapable of finding and seeing quality in
un-labelled content, then the said bozo *should* be charged for the
manufacturing process. Very much as corporate IT bozos pay thousands for
shrink-wrapped crappy software instead of using the free alternatives.


Y-a*h*o-o (yes, they scan for this) spam follows:
Celebrate Yahoo!'s 10th Birthday! 
Yahoo! Netrospective: 100 Moments of the Web

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

#  distributed via <nettime>: no commercial use without permission
#  <nettime> is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: and "info nettime-l" in the msg body
#  archive: contact: