Patrice Riemens on Sun, 5 Apr 2009 19:07:06 +0200 (CEST)



<nettime> Ippolita Collective: The Dark Side of Google (Chapter 5, part 2)


NB this book and translation are published under Creative Commons license
2.0 (Attribution, Non Commercial, Share Alike).
Commercial distribution requires the authorisation of the copyright
holders: Ippolita Collective and Feltrinelli Editore, Milano (.it)


Ippolita Collective

The Dark Side of Google (continued)


Chapter 5 (second part)
(continued from "Technological masturbation ...")


In May 2006, Google launched the 'Google Web Toolkit' (GWT), a 'framework' which enables developers to build Ajax applications written in Java. Ajax (Asynchronous JavaScript and XML) is a technique for developing dynamic, interactive web applications using standard HTML (or XHTML) together with CSS for the visual part, and JavaScript for the dynamic display of, and interaction with, data. The result is extremely responsive sites, since it is no longer necessary to download all the information on a page afresh every time. GMail, for instance, uses Ajax. This is an important innovation which does transform the approach to creating web applications: one writes in a high-level object-oriented language (Java), which GWT then compiles into JavaScript compatible with all browsers [the French text probably alludes to the "write once, run everywhere" mantra].

But on the other hand, there is no justification for high-pitched announcements to the effect that an imaginary 'Web 2.0' has now arrived, revolutionising the Internet by making it 'machine readable'. After all, multi-platform software for bookmark sharing, social networking, automatic data aggregation, etc. has existed for years. And the hypocrisy of large corporations like Sun, hyping up an alleged entry into the 'Participation Age', obscures the fact that an aptitude for cooperation had been the hallmark of hacker culture for decades. Thus the most elementary parts of the innovations advanced by organisations such as the W3C for the Semantic Web (like XML/RDF standardisation) [*N10] are being peddled as revolutionary developments!

Of course, Ajax and affiliated technologies do solve the very recurrent problem of website portability: for the time being, sites are difficult to render consistently on all browsers. The framework code is available under an Apache license, and is thus Open Source, but - as often happens with initiatives coming out of code.google.com - a number of essential elements (in this case the Java-to-JavaScript compiler and the 'hosted web browser') are distributed as binaries only, and one has to subscribe to an ad hoc license which, for all practical purposes, prevents redistribution, the development of derivatives, and inclusion in commercial products. Furthermore, every time one uses the 'hosted web browser', which permits one to try applications out on one's own machine before launching them on the Internet, a connection is established with a Google server, officially in order to verify that the latest version of the programme is being used. It is obvious, however, that this constitutes a very efficient way of controlling developers rather than serving the interests of users. The code developers write with GWT, on the other hand, may be freely distributed, even in commercial products.
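The Ajax technique described above can be sketched in a few lines of plain JavaScript. This is a minimal illustration, not Google's actual code; the URL, the element to update, and the helper names are all hypothetical. The transport function is passed in as a parameter so the page-update logic can be exercised outside a browser; in a real page it would wrap `XMLHttpRequest`.

```javascript
// Minimal sketch of the Ajax pattern: fetch data asynchronously and
// update one part of the page without reloading the whole page.
// The URL '/inbox/unread' and all names here are hypothetical.
function updateInbox(transport, render) {
  transport('/inbox/unread', function (responseText) {
    // Only this fragment of the page changes; the rest stays in place.
    render('Unread messages: ' + responseText);
  });
}

// In a browser, the transport would be implemented with XMLHttpRequest:
function xhrTransport(url, onDone) {
  var req = new XMLHttpRequest();
  req.open('GET', url, true); // 'true' makes the request asynchronous
  req.onreadystatechange = function () {
    if (req.readyState === 4 && req.status === 200) {
      onDone(req.responseText);
    }
  };
  req.send(null);
}
```

It is precisely this partial-update loop, repeated for every user interaction, that makes a site like GMail feel responsive without full page reloads.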

GWT, in effect, is a very simple way to create sites that are perfectly compatible with Google's indexing systems. For the time being, knowledge of a specific {programming} language like Java is mandatory, but one can easily foresee the moment when new developments will enable a first-time user to put web objects like taskbars, image galleries, menus of all kinds and whatever else on her page, without having to write {or know how to write} a single line of code. Indeed, extremely simple programmes for website creation already exist (the WYSIWYG editors: What You See Is What You Get), but GWT operates directly on the Web. Such contents are thus immediately usable on fixed or mobile platforms of whatever type, provided these are able to access the Web.

Let's now imagine Google signing commercial agreements for the manufacture of bespoke devices, offering those who use its services simple instruments for making web pages that can be displayed on PCs, palmtops, etc., and which at the same time are very easy for Google's spider to index. This is because, contrary to Microsoft, Google does not sell programmes for money. It needs, as we have noted earlier, to spread its standards around in order to manage its economy of search profitably.

And now for the Nokia-GTalk agreement. GTalk is Google's VoIP service [*N11], and has recently been meshed into GMail, Google's e-mail service, so that members of the 'Google communities' can now not only mail, but also chat and talk in real time with each other. At the end of May 2006, GTalk became available on Nokia's new mobile platforms, called 'Internet tablets', a kind of mobile device specifically designed for web browsing. With this alliance, Google entered the world of mobile telephony through the main gate, with the prospect of being rapidly integrated into the public wireless (or WiMAX) networks being deployed in various cities, airports, motorway rest areas, etc. And there is also a definite outlook for an agreement on video distribution: video.google.com is a warehouse of 'tapes', and television on the mobile is {definitely} the next stage {in product evolution} coming up.

To put it differently: Google provides the instruments to create contents
according to its own standards. In the domain of content creation for the
Web we see extreme personalisation, corresponding to the 'long tail'
mechanism (which means providing each individual consumer with precisely
the product demanded): the user creates 'exactly' what she wants  - in
Google's standard format. The complete decentralisation at the level of
content creation parallels complete decentralisation of the
advertisements, and hence the delivery of 'personalised' products.

What we are seeing is an invasive system imposing its own standards under
the formal appearance of being democratic because it is {allegedly} put
into the hands of the users, one click on the browser away. What is being
peddled as electronic democracy morphs into {far-reaching}
standardisation which makes it possible to absorb the contents created by
myriads of users and to target the most appropriate advertisement at them
in return.


Browsers as development environments

The outburst of ever more powerful new web services since 2005 has transformed browsers from simple navigation instruments into {full-fledged} development tools. A whole gamut of technologies now goes beyond the current web programming standards by putting in the hands of developers a cool, easy, complete, secure and multi-platform tool: the browser itself.

For some years now there has been a new trend in website creation, driven especially by the greater importance given to the portability and accessibility of content: it is clearly marked by the use of style sheets (Cascading Style Sheets: the CSS and CSS2 standards) for formatting, separating presentation from the pure, validated HTML, or even XML, of the content itself. Graphic and web designers find their browsers to be excellent auxiliaries, as these become ever more sophisticated and ever more compliant with the various standards. This enables them to realise websites that can be displayed on various devices and platforms, while retaining, or even increasing, their range of expressive possibilities.

The birth and rapid diffusion of the Mozilla browser demonstrated the
reality of a massive interaction between site developers and browser
developers, which enabled them to do away with nearly all bugs and bridge
almost all incompatibilities on web standards in a relatively short span
of time. The incompatibility between {the browsers} Internet Explorer,
Opera, and many others, whether proprietary or not, is a well-known
problem among all webpage developers. The synergy Mozilla managed to develop, which may look simple or even trivial, is an absolute novelty in Web history. Another interesting characteristic of Mozilla products is their modular structure, built around the Gecko layout engine,
through which any functionality can be added. Real time stock market
quotes, local weather forecasts, and programmes eliminating ads from
websites are amongst the most widespread tools used.

Browsers have thus become ever more reliable instruments enabling the creation of complex websites, and now have all the characteristics of full-fledged programmes, so much so that they tend to replace more common applications. One of the more tangible examples is the office suite of tools Google offers as an alternative to Microsoft's, or even to the F/OSS OpenOffice variant [*N12]. It is now possible to use 'Writely' (a product developed by a company Google bought up) as a text processor. {Other 'Internet in the clouds' options:} Google Spreadsheets and Google Page Creator { - the names say it all}. All these services are [were in 2007] in an invitation-only beta-testing phase: needless to say, strictly for Google account holders - Mountain View control rulez!

Developers, for their part, show increasing interest in the Web side of things {the Internet}, thanks especially to instruments like GWT. Naturally, Microsoft is not taking all this lying down. Taking its cue from its competitor's beta-testing strategy /which, as we know, comes from F/OSS practice/, it has already launched the beta version of its own Office System (aka Office 2007), which integrates a lot of web-oriented tools but nonetheless remains an application that has to be installed beforehand.

Browsers are hence in the process of becoming full-fledged development environments for the creation of standard content - an SDK, or Software Development Kit. But what exactly is the innovation that made browsers morph into SDKs? One can speak of a {truly} new paradigm in programming, so much is clear: it has now become possible to create fully multi-platform, client-side distributed programmes which are hosted on a server and {therefore} need no installation of complex frameworks on the users' boxes. Contents, including personal data, are stored on ever more remote sites (on Google's own servers, for instance), accessed through the Web /, i.e. by way of the browser/.

The choice of an 'Open Source' browser like Mozilla {Firefox} is often driven by its simplicity of configuration and the many powerful extensions that come with it - for free. Developers use this {particular} browser to engage in ever more complex and structured programming. The emergence of programmes that live only on the Web has two far-reaching consequences for both the market and users: programmes with binaries that need to be installed {on the users' machines} become obsolete, while browsers {themselves} become more complex {pieces of} programme[s] - highly modular, and gaining increasing favour on the IT market. One can therefore expect to see fewer '.exe' (MS Windows), '.dmg' (Apple Macintosh) or Unix packages in future, and more browser-integrated tools and extensions, from RSS feed readers and GMail up to complete office software suites.

The detailed control {over use} that Web service providers obtain through these instruments makes these dynamics potentially fraught with danger for us all, since all parties offering [these] services know the precise digital identity of their users, the length of time spent with the software, and the content being worked on, because they control every step and know every detail of access and usage.

Seen from a technical point of view, this mechanism is based on the fact that there is a permanent connection between the 'client' (the browser) and the server ({literally,} the service provider), which enables the latter to constantly monitor the requests, the time spans, and the intentions at stake. Moreover, allegedly in order to 'protect' the service against all kinds of hacker and cracker attacks, the authentication process is no longer taken care of by a compiled programme whose source is unavailable, but is hosted directly on the providers' servers. {Malevolent hackers} bent on 'penetrating' a software programme must now 'crack' the remote server first.
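The monitoring mechanism just described can be illustrated with a toy server-side sketch. This is purely illustrative and assumes nothing about any real provider's infrastructure: every name here (the session table, the audit log, the handler) is hypothetical. The point is only that, once authentication and the application both live on the server, every user action arrives as an authenticated request, and logging identity, action, and time is trivial.

```javascript
// Illustrative sketch: a hosted web application sees every user action
// as an authenticated request, so the provider can record who did what,
// and when. All names here are hypothetical, not any real provider's API.
var sessions = { 'token-abc': 'alice' }; // session token -> authenticated user
var auditLog = [];                       // the provider's record of every access

function handleRequest(token, path, timestamp) {
  var user = sessions[token];
  if (!user) {
    return { status: 403 };              // authentication happens server-side
  }
  // Every legitimate request leaves a trace: identity, action, time.
  auditLog.push({ user: user, path: path, at: timestamp });
  return { status: 200, body: 'ok' };
}
```

Because the whole loop runs on the provider's machines, this log accrues automatically; nothing needs to be installed on, or extracted from, the user's own computer to collect it.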


Privacy, Paranoia, Power

The accumulation strategy pursued by Google has now enabled it to put {the giant} Microsoft itself in a difficult position, presaging a merciless war over standardisation and over control of access to the Web and to {all other} networks we use every day. From the moment the Google phenomenon addresses the global mediation of information, it concerns all users of digital data, that is: us all. To go through Google's history thus means to look at our own past as explorers of the Internet and of the Web. Too often have we outsourced the management of our information, our sites, our picture galleries, our blogs, our SMSs, our phone conversations, etc., to companies that are anything but free of ulterior motives.

The 'strategy of objectivity' pursued by Google emphasises {scientific} research, academic excellence, technological superiority, and sophisticated interfaces. {But} this is {merely} a veil occulting the frightening prospect of a single access point to {all} data generated by naive users.

The F/OSS strategy, then, allows Google to adopt the collaborative development methods typical of digital communities - adapting them to its own 'mission' in the process. But even here, as we have seen earlier, Google makes preposterous claims, proposing so-called new methods of exploiting well-known dynamics, the 'Summer of Code' being a prime example.

Google's activities therefore constitute a clear and present danger to all who think that privacy or, at a higher level, the whole issue of due diligence in the matter of 'being digital', is of primary importance. We are witnessing the emergence of a power conglomerate which is gaining, even as we speak, far too much influence over the lives of far too many people. Google is the holder of confidential information which it analyses continuously in order to achieve an ever more personalised distribution of that plague, advertising. And since the accumulation of power usually leads to the syndrome of domination, it becomes urgent to look at this phenomenon in depth.

There is no such thing as a global answer that resolves the issue of privacy once and for all. Big Brother does not exist, and as with all paranoia, the fear his image induces blots out possible escape routes: it only serves the dominant power that thrives on it.

Secrecy, cryptography and steganography are examples of useful practices, but they are not definitive solutions. Communication and sharing remain the objects of a desire that can only be fulfilled by 'publication', i.e. by making public. Conversely, an obsession with secrecy rapidly leads to paranoia and conspiracy theory. Seen this way, what is the purpose of constructing complicated alternatives to arrive at absolutely secure, utterly impenetrable networks? Technology offers the opportunity for openness and sharing. To make use of machines means to make use of hybrid creatures, of material artifacts (which in this sense belong to the world of 'nature') that have been invested with cultural meanings and values (something that pertains to the domain of 'culture'). Networks are the outcome of a co-evolutionary dynamic of mechanical, biological, and signifying machines: technology is essentially a mongrel. To create networks means to connect machines of different types. It means creating methods of sharing, of exchange, of translation: one cannot withdraw into one's shell. It becomes necessary to engage in self-questioning and change.

We need informed research and analysis; it has become more urgent than ever to denounce the mechanisms of technological domination. Conversely, to renounce critical thinking {and a critical attitude} amounts to giving in to the syndrome of control, which is becoming increasingly invasive. Google's history can be used in an exemplary manner to sketch out {and promote} the ideas of openness and to imagine practices towards the {autonomous} self-management of technologies. This is because Google represents the meeting point between the meritocratic habitus of the university, the {desire for} boundless {and unfettered} innovation, and the edgiest form of financial capitalism. Here then arises the occasion for the development of autonomous and decentralised networks, and the opportunity to confront the desire to 'explore' and 'navigate' the Internet with the necessity of 'accessing' the data, in order to focus attention on the trajectory rather than on the result.

END of Chapter 5

(to be continued)


--------------------------
Translated by Patrice Riemens
This translation project is supported and facilitated by:

The Center for Internet and Society, Bangalore
(http://cis-india.org)
The Tactical Technology Collective, Bangalore Office
(http://www.tacticaltech.org)
Visthar, Dodda Gubbi post, Kothanyur-Bangalore (till March 31st, 2009)
(http://www.visthar.org)
The Meyberg-Acosta Household, Pune (from April 2, 2009)


#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mail.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org