tbyfield on Mon, 18 Mar 2019 21:29:32 +0100 (CET)


Re: <nettime> rage against the machine


I'm going to channel a bit of Morlock and Keith, whose emails have made barbs aimed at the list a semi-regular feature, because no one who's weighed in with an opinion seems to know much about aviation. And why would they? I'm not saying anyone should have immersed themselves in the arcana of aerial malfunctions, but, absent detailed knowledge, discussion degenerates into woolly ideological rambling and ranting.

Take this, from Brian's reply to Morlock's original message:

The automatic function is called the Maneuvering Characteristics
Augmentation System (MCAS). Its sole purpose is to correct for an
upward pitching movement during takeoff, brought on by the
decision to gain fuel efficiency by using larger engines. At stake
is a feedback loop triggered by information from Angle of Attack
sensors - nothing that could reasonably be described as AI. The
MCAS is a bad patch on a badly designed plane. In addition to the
failure to inform pilots about its operation, the sensors
themselves appear to have malfunctioned during the Lion Air crash
in Indonesia.
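
A concrete gloss for the non-aviators, since 'feedback loop' is doing a 
lot of work in that paragraph: MCAS reads the angle of attack (AoA) and, 
above some threshold, commands nose-down stabilizer trim. Here's a toy 
sketch in Python (mine, not Boeing's; the threshold, increment, and 
function names are all invented) of why trusting a single sensor is 
catastrophic:

	# A toy MCAS-style trim loop -- illustrative only, NOT Boeing's code.
	# One angle-of-attack sensor; threshold and increment values are invented.
	AOA_THRESHOLD_DEG = 14.0    # hypothetical activation threshold
	NOSE_DOWN_INCREMENT = 0.5   # hypothetical trim units per cycle

	def mcas_cycle(aoa_deg, trim):
	    """One control cycle: command nose-down trim whenever AoA reads too high."""
	    if aoa_deg > AOA_THRESHOLD_DEG:
	        trim -= NOSE_DOWN_INCREMENT
	    return trim

	# A single sensor stuck reading high keeps the loop commanding
	# nose-down trim, cycle after cycle:
	trim = 0.0
	for _ in range(10):
	    trim = mcas_cycle(aoa_deg=22.5, trim=trim)  # faulty, but trusted, reading
	print(trim)  # -5.0: steadily accumulating nose-down trim

The numbers are invented; the structural point is that the loop trusts 
one sensor absolutely, so a stuck-high reading means nose-down trim 
piling up cycle after cycle. Per the preliminary reports, that is 
roughly what happened over the Java Sea.
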
Brian's paragraph may be a nice distillation of a specific issue, but it 
lacks the kind of contextual knowledge that he values in — and often 
imposes on — areas he has thought about in depth. Like, where does this issue 
sit in a range of alternative schools of thought regarding design, 
integration, and implementation? What are the historical origins of 
Boeing's approach, and when and why did it diverge from other 
approaches? How do those other schools of thought relate to the 
different national / regional traditions and historical moments that 
shaped the relevant institutions? More specifically, how do other plane 
manufacturers address this kind of problem? Where else in the 737 might 
Boeing's approach become an issue? How do these various approaches 
affect the people, individually and collectively, who work with them? 
How do the FAA and other regulatory structures frame and evaluate this 
kind of metaphorical 'black box' in aviation design? Questions like this 
are part of the conceptual machinery of critical discussion. Without 
questions like this, specific explanations are basically an exercise in 
'de-plagiarizing' better-informed sources — rewording and reworking 
more ~expert explanations — to give illusory authority to his main 
point, that 'AI' has nothing to do with it.

But Morlock didn't say 'the relevant system directly implements AI.' He 
can correct me if I'm wrong, but he seemed to be making a more general 
point: that faith in 'AI' has fundamentally transformed aviation. More 
specifically, it has redrawn the lines between a plane's airframe 
(basically, the sum total of its mechanical infrastructure) and its 
avionics (its electronic systems, more or less) to such a degree that 
they're no longer distinct. But that convergence began decades ago; 
IIRC, as of 1980 or so some huge fraction of the US's then most 
advanced warplanes, like 30% or 60% of the fleet, were grounded at any 
given moment for reasons that couldn't be ascertained with certainty, 
because each plane needed a ground crew of 40–50 people and the 
integration systems weren't up to the challenge.

Obviously, quite a lot has happened since then, and a big part of it has 
to do with the growing reliance on computation in every aspect of 
aviation. In short, the problem isn't limited to the plane as a 
technical object: it also applies to *the entire process of conceiving, 
designing, manufacturing, and maintaining planes*. This interpenetration 
has become so deep and dense that — at least, this is how I take 
Morlock's point — Boeing, as an organization, has lost sight of its 
basic responsibility: maintaining a regime — organizational, conceptual, 
technical — that *guarantees* its planes work, where 'work' means 
reliably moving contents from point A to point B without damaging the 
plane or the contents.

OK, so AI... What we've got in this thread is a failure to communicate, 
as Cool Hand Luke put it — and one that's hilariously nettime. It 
seems like Morlock, who I'd bet has forgotten more about AI than Brian 
knows, is using it in a loose 'cultural' way; whereas Brian, whose 
bailiwick is cultural, intends AI in a more ~technical way. But that 
kind of disparity in register applies to how 'AI' is used pretty much 
everywhere. In practice, 'AI' is a bunch of unicorn and rainbow stickers 
pasted onto a galaxy of speculative computing practices that are being 
implemented willy-nilly everywhere, very much including the aviation 
sector. You can be *sure* that Boeing, Airbus, Bombardier, Embraer, 
Tupolev, and Comac are awash in powerpoints pimping current applications 
and future promises of AI in every aspect of their operations: financial 
modeling, market projections, scenario-planning, capacity buildout, 
materials sourcing, quality assurance, parametric design, flexible 
manufacturing processes, maintenance and upgrade logistics, etc, etc, 
and — last but not least — human factors. Some of it really is AI, 
some of it isn't, but whatever. One net effect is to blur one thing 
that, historically, airplane manufacturers have done well to a genuinely 
breathtaking degree: breaking down the challenge of defying gravity in 
rigorous and predictable ways that allow talented, trained, and 
experienced people to *reliably* improvise.

But I began by pointing out that most of the opinions in this thread 
have been light on facts, so let's see if we can fix that a bit.

The gold standard for thinking about this kind of problem is Richard 
Feynman's 'minority report' on the Challenger disaster. It's still 
relevant because the broad engineering approach used to define and 
design the Space Shuttle became the dominant paradigm for commercial 
aviation. In that context, AI is just an overinflated version of the 
problem that brought down the shuttle. Feynman's report is just 14 
pages long, and every word is worth reading.
	https://science.ksc.nasa.gov/shuttle/missions/51-l/docs/rogers-commission/Appendix-F.txt

For anyone who's interested in accessible nitty-gritty discussions of the hows and whys of critical malfunctions in planes, RISKS Digest (the ACM Forum on Risks to the Public in Computers and Related Systems) is a gold mine: it includes almost 35 years of extremely well-informed discussions, many of which get into the kinds of critical questions I mentioned above.
	https://catless.ncl.ac.uk/Risks/

One of my favorite papers ever, anywhere, is Peter Galison's "An Accident of History," which is in part a historiographical meditation on plane crashes. It first appeared in Galison and Alex Roland, eds., _Atmospheric Flight in the Twentieth Century_ (Dordrecht/Boston: Kluwer Academic, 2000). We need many more books like it.
	https://galison.scholar.harvard.edu/publications/accident-history

A messier example is the series of essays Elaine Scarry published in the _New York Review of Books_ in 1998-2000 about three crashes: TWA 800 off Long Island NY, Swissair 111 off Nova Scotia, and EgyptAir 990 off Nantucket MA. I have no idea if her allegations about 'external' electromagnetic interference are right — I think the consensus is that she was off on a frolic of her own — but her essays are an important marker in public discussions of plane crashes. In particular, they (including the subsequent debates) show how difficult it is for individuals outside the field to say anything useful, because the divide between expert and non-expert is so vast. As above, that divide applies equally to the countless specialities involved in aviation — which is another reason big 'AI' would be so appealing within manufacturers like Boeing: it promises to solve what management struggles with, namely integrating complexity. This article trashes her work, but I'll provide a pointer because it gives a succinct overview:
	https://www.mse.berkeley.edu/faculty/deFontaine/CommentaryIII.html

But, overall, cultural analysis of aviation is in a pitiable state. That's changing, partly driven by newer fields like STS, partly by changing attitudes to 'generational' experiences like WW2 and the Space Race. But work that combines sophisticated cultural analysis with legit aviation geekery is as rare as hens' teeth, and it's unlikely to get better as long as cult studs remains attracted, like moths to a flame, to lefty verities about how War is Evil etc — because the history of aviation can't be disentangled from war. Herman Kahn sort of nailed the problem with this infamous line (in Frank Armbruster, Raymond Gastil, Kahn, William Pfaff and Edmund Stillman, _Can We Win in Vietnam?_ [NYC: Praeger, 1968]):

Obviously it is difficult not to sympathize with those European and American audiences who, when shown films of fighter-bomber pilots visibly exhilarated by successful napalm bombing runs on Viet-Cong targets, react with horror and disgust. Yet, it is unreasonable to expect the U. S. Government to obtain pilots who are so appalled by the damage they may be doing that they cannot carry out their missions or become excessively depressed or guilt-ridden.

An analogous dilemma applies to plane crashes: just swap Boeing 
employees and tone-deaf corporate crisis management strategies for 
pilots and visible exhilaration, civilian passengers for Viet Cong, and 
— crucially — cult studs for European and American audiences. A good 
example of how to negotiate that swap is Wade Robison, Roger Boisjoly, 
David Hoeker, and Stefan Young, "Representation and Misrepresentation: 
Tufte and the Morton Thiokol Engineers on the Challenger" (_Science and 
Engineering Ethics_ 8.1 [2002]):
	https://people.rit.edu/wlrgsh/FINRobison.pdf

That's more than enough for now.

Cheers,
Ted
#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject: