Amy Alexander on Mon, 4 Aug 2008 08:20:17 +0200 (CEST)
Re: <nettime> Between Tracking and Formulating
Apologies for coming to the thread late - in fact, I don't have Jordan's original text and can't find it online (perhaps someone could forward it?), so I'm trying to piece things together by inference, and my mental algorithm may have some bugs. </irony> (<- more irony) But I found this piece of the thread very interesting - the apparent miscommunication may reveal a lot.

What is the "power of algorithms" that's under debate here? Quite often in discussions like this, the algorithm is dehumanized. But algorithms are human decisions expressed in code. They may be implemented by a computer, but humans have commanded the computer. The fact that we don't see the human, or even anything we recognize as human language, helps us forget that.

Although code is generally written in the imperative ("blow up country x") or the declarative ("if country x has forbidden fruit then we declare country x to be a menace"), it's often perceived as passive ("it was determined that, due to country x's possession of forbidden fruit, country x would be declared a menace"). Danged computers, there they go again...

So, I'm a believer in the power of algorithms: they are powerful expressions of human will that humans can hide behind to dodge responsibility for their actions. We should keep an eye on these buggers.

I realize from the other pieces I have of the thread that the discussants (Jordan, Brian, Keith, lotu5) certainly aren't ignoring human responsibility. But as there appeared to me to be a "humans vs. algorithms" component to the discussion, I just wanted to comment on that aspect.

-Amy

2008/7/30 Jordan Crandall <jcrandall@ucsd.edu>:
> On 7/27/08 12:54 PM, "Keith Hart" <keith@thememorybank.co.uk> wrote:
>
>> But the revival of this thread encourages me to return to the central
>> premise of Jordan Crandall's original article which, in its undiluted
>> empiricism, is simply wrong. Taking issue with the rhetoric is one thing,
>> but to let a basic fallacy go unchallenged is quite another.
>> The following extract sums up his position:
>>
>> >Increasingly, the tracking apparatus is able to reach far back into the
>> past, further back than was humanly possible, through the use of
>> regressions. Regressions are statistical procedures that take raw
>> historical data and estimate how various causal factors influence a single
>> variable of interest (for example, the quality of wine, or an enemy's
>> movement). A pattern is revealed, derived from the past, and this
>> demonstrates a likelihood, a propensity, for what could happen today. This
>> pattern might be stabilized, made operational, in a formula. You just plug
>> in the specified attributes into a regression formula, and out comes your
>> prediction. A moving phenomenon -- a stock price, a biological function, an
>> enemy, a product or part -- is codified and understood in a
>> historical trajectory, in order to extrapolate its subsequent position.<
>>
>> It is a scientistic fantasy that predictions can be made on the basis of
>> statistical regularities observed in the past. Crandall's belief in the
>> power of algorithms leads him to claim that number-crunching on a massive
>> scale allows 'us' to dispense with theory altogether.
>
> Keith, thank you for your response. I want to point out that I don't
<...>

# distributed via <nettime>: no commercial use without permission
# <nettime> is a moderated mailing list for net criticism,
# collaborative text filtering and cultural politics of the nets
# more info: http://mail.kein.org/mailman/listinfo/nettime-l
# archive: http://www.nettime.org contact: nettime@kein.org
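[As a concrete gloss on the quoted extract's description of regressions - fit a formula to historical data, plug in the specified attributes, and out comes a prediction - here is a minimal sketch in plain Python. The rainfall-vs-wine-quality data, numbers, and names are invented for illustration (the extract itself mentions wine quality as an example); this is an ordinary least-squares line fit, not anything taken from Crandall's article.]

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical "raw historical data": rainfall (a causal factor)
# against wine quality (the variable of interest).
rainfall = [400.0, 450.0, 500.0, 550.0, 600.0]
quality = [7.0, 7.4, 8.1, 8.4, 9.0]

# The pattern "stabilized, made operational, in a formula".
slope, intercept = fit_line(rainfall, quality)

def predict(x):
    """Plug a new attribute value into the fitted formula."""
    return slope * x + intercept

print(round(predict(525.0), 2))  # extrapolated quality for 525 mm of rain
```

The point of the sketch is how little is in it: the "prediction" is nothing but the historical pattern replayed forward, which is exactly the move Keith is objecting to.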