Minding the Machine: Preventing Technological Disasters by W. M. Evan, M. Manion
Reviewed by: Thomas D. Beamish
Reviewed in: Journal of Contingencies and Crisis Management
Accepted online: 25/11/2005
Published in print: Volume 13, Issue 1, Pages 32-37
The stated purpose of the book
In Part I, the authors introduce their subject, distinguishing disaster types and causes. Chapters 1 and 2 are especially clear in setting up the book's intentions and provide an excellent basis for the new student of technological disasters and failures. Chapter 3, essentially a case study of Y2K, illustrates a number of points made in the opening chapters.
Part II introduces theories and "root causes" of technological disasters. In Chapter 4 the authors do a fairly good job of synopsising a major polemic in the field (normal accidents theory vs. high reliability theory) and augment it with attention to human factors and "socio-technical systems analysis". Chapter 5 further develops points made in the previous chapter, as the authors endeavour to integrate the field and identify the fundamental causes of technological disaster, organised in a fourfold typology as
Part III (Chapters 6 and 7) provides an excellent review of the history behind socio-technical revolutions, new socio-technical regimes, and correspondingly new disaster propensities, relying on a handful of classic theoretical accounts that emphasise socio-technical arguments, such as those of Lewis Mumford, Alfred Chandler, and Daniel Bell.
In Part IV, Chapter 8 presents twelve archetypal cases of technological disaster, and Chapter 9 develops the lessons learned from those cases in light of the materials presented in Chapters 1-8.
Finally, Part V supplies a reflective accounting of the roles played in technological disaster by prominent societal institutions: corporations and commercial interests, the legal system, formal risk analysis techniques, and democratic processes and technological decision-making. Specifically, Chapter 10 outlines the roles and responsibilities of engineers and scientists but, surprisingly given the coverage of earlier chapters, lacks the same depth and critical analysis (more on this below). Chapter 11 covers historical cases of technological disaster demonstrating that corporations and managers often know of hazards in advance but, for systemically based reasons, do nothing to avert disaster. Chapter 12 tackles issues of technology and governance at the U.S. federal level, focusing on the executive, legislative, and judicial branches. Chapter 13 recounts traditional risk analysis techniques. Finally, Chapter 14 discusses technological decision-making in a democratic society.
In response to the authors' request for suggestions on improving the book for future editions (p. xxi), I have three: (1) a minimalist restructuring of the tome would significantly improve its readability; (2) Chapter 10 is underdeveloped; and (3) Chapters 13 and 14 should be integrated into one stand-alone chapter. First, the placement of the history in Chapters 6 and 7 (p. 161) comes very late. In future editions, I would urge the authors to consider re-organising their efforts, placing the history nearer the opening, since it provides the student of technology and disaster a very useful frame for comparison and contrast.
What is more, the volume is extensive, but its arrangement unnecessarily exacerbates its length. The authors could develop shortened in-text references to the cases, now provided in Chapters 3, 8 and 9, where they illustrate key points, and append the full and extensive case accounts in a stand-alone "Appendix of Cases". This way, the reader or instructor of a course could devote more time and concerted attention to specific cases if desired, but the text itself would not have the cumbersome breaks in continuity it currently does. I would stress that the cases are excellent and must be included, but their inclusion in an appendix would be preferable.
Second, in Chapter 10, through its focus on ethical decision-making in engineering and science, the authors imply that greater attention to ethics would produce significant change. Placing greater emphasis on ethics would be a positive development, but realistically engineers and scientists rarely control the motivations or budgets of the companies or governments for which they conduct research or manage technologies. What is more, professional societies could no doubt put greater emphasis on ethics, but in the end they do not wield enough power to change very much as compared with the corporations and governments responsible for technological systems. The formal organisations that manage complex technological systems in industrial societies typically exercise both
Finally, the authors should integrate Chapters 13 and 14 into an overarching chapter on "Democracy, Risky Technologies, and Decision Making" (or something similar). Simple distinctions between "risk evaluation" and "risk perceptions", distinct approaches to risk studies, would nicely tie the chapter together. Each holds lessons that are interrelated and speak directly to the place of "power" in contemporary "democratic processes" in advanced industrial contexts, as the authors summarily allude to in the opening of Chapter 14. Separating them, as they currently appear, lends conventional risk evaluation methods precisely the kind of distinction that reinforces their continued power in decision-making contexts, often at the expense of other voices and interests (i.e., democracy). This is so even though research has repeatedly shown that such methods, based on the precepts they operationalise, tend to favour powerful and self-interested organisations and institutions and to work against local, egalitarian decision-making processes. By combining the two chapters, the authors would downplay the privileging of "formal risk evaluation methods" as prima facie superior, presenting them alongside lay-public perceptions and preferences and thus emphasising that both belong
Outside of these quibbles, overall