Only ten to twelve percent of Americans would voluntarily live within a mile of a nuclear plant or hazardous waste facility. But industry spokespersons claim that such risk aversion represents ignorance and paranoia, and they lament that citizen protests have delayed valuable projects and increased their costs. Who is right? In "Risk and Rationality", Kristin Shrader-Frechette argues that neither charges of irresponsible endangerment nor countercharges of scientific illiteracy frame the issues properly. She examines the debate over methodological norms for risk evaluation and finds analysts arrayed along a spectrum. Points of view extend from cultural relativists, who believe that any risk can be justified (since no rational standards are ultimately possible), to naive positivists, who believe that risk evaluation can be objective, neutral, and value free. Both camps, she argues, are wrong, because risk evaluation as a social process is rational and objective, even though all risk-evaluation rules are value-laden. Shrader-Frechette defends a middle position called "scientific proceduralism". She shows why the extreme views are unreliable, reveals misconceptions underlying current risk-evaluation methods and strategies, and sketches the reforms needed to set hazard assessment and risk evaluation on a publicly defensible foundation. These reforms involve mathematical, economic, ethical, and legal procedures. They constitute a new paradigm for determining when acceptance of public hazards is rational, one that recognizes that laypersons are often more rational in their evaluation of societal risks than either experts or governments have acknowledged. Such reforms would give citizens more influence in risk decisions and would focus on mediating ethical conflicts rather than imposing the will of experts. Science, she argues, need not preclude democracy.
"synopsis" may belong to another edition of this title.
Kristin Shrader-Frechette holds degrees in mathematics, physics, and philosophy and is Distinguished Graduate Research Professor at the University of South Florida. She has published five other books and edits the Oxford University Press series "Environmental Ethics and Science Policy."
Guerrilla action and political unrest are not limited to places like El Salvador, Nicaragua, or Angola. In Michigan, for example, local residents put nails and tacks on their highways to prevent the state from burying cattle contaminated by polybrominated biphenyls. In New Jersey, citizens took public officials hostage when they were excluded from decisionmaking regarding a hazardous waste facility in their neighborhood. And in Illinois, townspeople halted the operation of a landfill by digging trenches across its access roads.1
Citizen protests such as these have resulted, in part, from the perceived failure of government and industry to protect the health and safety of the people. Acts of civil disobedience, in turn, have also helped to mobilize public awareness of a variety of environmental risks. For example, 75 percent of residents recently surveyed in Santa Clara County, California, charged that their water was "unsafe to drink" after they discovered chemical contamination in three local public wells.2 More generally, a recent poll sponsored by the Council on Environmental Quality and funded by Resources for the Future found that only 10 to 12 percent of the U.S. population would voluntarily live a mile or less from a nuclear power plant or hazardous waste facility.3 As a result, some communities are trying to discourage the establishment of treatment or storage facilities for chemical wastes; they are charging up to $100,000 for permit application fees.4
Hazardous waste facilities are not the only environmental risks repeatedly rejected by the public. In Delaware, Shell Oil was forced to leave the state in order to find a refinery site. And Alumax abandoned Oregon after a ten-year controversy over the siting of an aluminum-smelting plant. Likewise, Dow Chemical Company gave up its proposed petrochemical-complex site on the Sacramento River in California, after spending $4.5 million in a futile attempt to gain the required approvals. In fact, in the last ten years, approximately 50 percent of attempted sitings of oil refineries have failed because of public opposition. Likewise, no large metropolitan airport has been sited in the United States since the Dallas-Fort Worth facility was built in the early 1960s.5 In a similar vein, there have been no new U.S. commercial orders for nuclear plants since 1974.6 Although the government predicted in 1973 that the United States would have one thousand commercial reactors by the year 2000, citizen opposition and rising costs make it unlikely that the country will have even two hundred of the plants.7
Aversion to Risks: Public Paranoia or Technological Oppression?
Industry spokespersons attribute the blocking of oil refineries, nuclear reactors, and toxic waste dumps to public ignorance and mass paranoia. They charge that misguided and irrational citizens have successfully delayed so many technological facilities, driving up their costs, that wise investors now avoid them.8
Pete Seeger, however, has another story. He and the members of the Clamshell Alliance, as well as many other environmental and consumer activists, would claim that, just as the people created the moral victories won by the civil rights movements and the Vietnam protests, so also the people have successfully challenged potential technological oppressors. In their view, just as the people rejected a war fought without their free, informed consent, they also are rejecting public environmental risks likewise imposed on them without their free, informed consent. For them, to delay or stop construction of risky industrial facilities is a great moral triumph for populist democracy.
Industry sympathizers do not agree. They claim that laypersons' aversion to societal risks stems not so much from any real or apparent danger, such as toxic waste contamination, but from group attitudes that are anti-industry, antigovernment, and antiscience. They charge that the paranoid, neo-Luddite baby boomers who now dominate the environmental movement cut their political teeth during the Vietnam-era protests and then went on to become Yuppie lawyers, professors, and social workers. Holding their earlier political beliefs, they have merely transferred their activism from military to environmental issues. Thus, Pete Seeger now sings about "nukes," not "Nam." And Seeger's hair has turned gray, while the baby boomers long ago cut theirs, probably for an important job interview.9
Who is right? Is public aversion to societal risks caused by mass paranoia and ignorance of science? Or by yet another form of oppression inflicted by "big industry," "big technology," and "big government"? Not surprisingly, I shall argue that the correct answer lies between these two extremes. Despite a regrettable and widespread ignorance of science, environmentalism is not merely the product of an irrational "construct." Despite rampant technological illiteracy, irrationality is not the sole explanation of typical public aversion to involuntarily imposed societal risks. Likewise, it cannot account for widespread distrust of technologies having the potential to cause catastrophic accidents and increased cancers.
The main purpose of this volume is to sketch a middle path between the industrial charges of scientific illiteracy and the populist charges of technological oppression. In so doing, I shall argue for an alternative approach to contemporary, societally imposed risks. My focus is not on personally chosen risks, like diet drinks or oral contraceptives, since each of us is able to avoid such hazards. If my analysis is correct, then we need a new "paradigm," a new account of when the acceptance of public hazards is rational. We also need to recognize that laypersons are often more rational, in their evaluation of societal risks, than either experts or governments appear to have recognized.
The Rise of Risk Assessment and Evaluation
As Chapter Four will explain in greater detail, government and industry experts perform most risk or hazard assessments. Their analyses include three main stages: (1) identification of some public or societal hazard; (2) estimation of the level and extent of potential harm associated with it; and (3) evaluation of the acceptability of the danger, relative to other hazards.10 (Most of the discussion in this volume will focus on the third stage, risk evaluation.) Once assessors have completed these three assessment tasks, policymakers then determine the best way to accomplish risk management of a particular public threat—for example, through regulation, prohibition, or taxation.
As a specific tool for societal decisionmaking, risk or hazard analysis is relatively new. Although Mesopotamian priests, before the time of Christ, regularly evaluated the impacts of proposed technological projects, risk assessment as a "developing science" did not arise until the late 1960s and the early 1970s.11 Public concern about the human and environmental risks of thousands of technologies arose in part because of tragedies like Love Canal and because of works like Rachel Carson's Silent Spring.12 Another important milestone in raising environmental consciousness was the Club of Rome's famous 1972 report, Limits to Growth. It predicted global human, economic, and environmental catastrophe in the twenty-first century unless we were able to stop exponential increases in pollution, resource depletion, population, and production.13
Widespread worries about impending environmental catastrophe and a rapidly increasing cancer rate were apparent as early as 1969, as evidenced by the passage of the U.S. National Environmental Policy Act (NEPA), "the Magna Carta of environmental protection."14 NEPA required, among other things, that all federal agencies prepare an environmental impact statement (EIS) every time they considered a proposal for federal actions significantly affecting the quality of the environment.
In addition to the passage of NEPA, much risk-analysis effort also arose as a direct consequence of the creation of new federal agencies, such as the Occupational Safety and Health Administration (OSHA). Pressured by growing public concern about environmental risks, and faced with approximately 100,000 occupation-induced fatalities per year,15 the United States created OSHA in 1970. Many of the first hazard assessments—for example, regarding asbestos—were done under the direction of OSHA or other federal regulatory agencies, such as the Food and Drug Administration (FDA) and the Nuclear Regulatory Commission (NRC).
One of the main difficulties with risk assessments done in the 1970s and 1980s, however, was that there were inadequate standards for the practice of this new set of techniques. As a consequence, some hazards, such as carcinogens, were being monitored and regulated very stringently, whereas others, equally dangerous, were evaluated more leniently. To help address these methodological inconsistencies and regulatory difficulties, in 1982 the U.S. Congress passed the Risk Analysis Research and Demonstration Act (RARADA). This bill established a program, under the coordination of the Office of Science and Technology Policy, to help perfect the use of hazard assessment by federal agencies concerned with regulatory decisions related to the protection of human life, health, and the environment.16 Numerous risk assessors, prior to the RARADA, bemoaned the fact that government safety regulations for the automobile industry, for example, presupposed an expenditure of $30,000 for the life of each automobile passenger saved, whereas analogous government regulations for the steel industry presupposed an expenditure of $5 million for the life of each steelworker saved.17
Despite the passage of the RARADA, however, quantitative risk assessment is still practiced in somewhat divergent ways; for example, the monetized "value of life" presupposed by government regulations (and used at the third, or evaluation, stage of assessment) varies dramatically from one federal agency to another.18 This value, in turn, has a great effect on the acceptability judgments associated with various risks, particularly if the evaluation is accomplished in a benefit-cost framework. Even for the same hazard, risk analyses often do not agree, in part because there are many ways to evaluate harms at the third stage of assessment. There are many ways to answer the question "How much risk (in a given area) is socially, politically, economically, and ethically acceptable?" Hazard evaluations often contradict one another, not only because scientists frequently dispute the relevant facts but also because policymakers and the public disagree about what responses to risk are rational. Some persons claim that only technical experts are capable of making rational judgments about risk acceptability, whereas others assert that only potential victims, usually laypeople, are in a position to be truly rational about evaluation of possible hazards.
"Rationality" in Risk Evaluation and Philosophy of Science
'Rational', however, is a highly normative term. Controversies about the "rationality" of various evaluations of risk are no easier to settle than analogous debates in science. Conflicts among philosophers of science (about what methodological rules, if any, guarantee the rationality of science) generate alternative accounts of scientific explanation, as well as disputes over which scientific theory is correct. Likewise, conflicts among risk assessors (about what methodological rules, if any, guarantee the rationality of responses to hazards) generate both alternative accounts of acceptable harm and disputes over whose risk-evaluation theory is correct.
In the debate over the rationality of science, philosophers and scientists are arrayed on a spectrum extending from pluralist or relativist views to logical-empiricist positions. At the left end of the spectrum, the pluralist end, are epistemological anarchist Paul Feyerabend and others who believe that there is no scientific method, that "anything goes," and that "no system of [scientific] rules and standards is ever safe."19 At the other end of the spectrum are logical empiricists, such as Israel Scheffler and Rudolf Carnap, who believe that there are at least some universal and fixed criteria for theory choice and that these criteria guarantee the rationality of science.20 Somewhere in the middle, between the relativists and the logical empiricists, are the so-called naturalists, such as Dudley Shapere, Larry Laudan, and Ronald Giere. They maintain that theory evaluation can be rational even though there are no absolute rules for science, applicable in every situation.21
The challenge, for any philosopher of science who holds some sort of middle position (between the relativists and the logical empiricists), is to show precisely how theory choice or theory evaluation can be rational, even though there are no universal, absolute rules of scientific method that apply to every situation. Perhaps the dominant issue in contemporary philosophy of science is whether, and if so how, one can successfully develop and defend some sort of naturalistic middle position, as Larry Laudan, Ronald Giere, and Thomas Kuhn, for example, have tried to do.22
An analogous problem faces the hazard evaluator trying to articulate a middle position. In the debate over what methodological norms, if any, guarantee the rationality of risk evaluation, analysts are arrayed on a spectrum extending from the relativists to the naive positivists. At the left end of the spectrum are the cultural relativists,23 such as anthropologist Mary Douglas and political scientist Aaron Wildavsky. They believe that "risks are social constructs," that "any form of life can be justified. . . . no one is to say that any one is better or worse,"24 that there is "no correct description of the right behavior [regarding risks],"25 and therefore that the third stage of risk assessment, risk evaluation, is wholly relative.26 At the other, naive-positivist, end of the spectrum are engineers such as Chauncey Starr and Christopher Whipple. They maintain that risk evaluation is objective in the sense that different risks may be evaluated according to the same rule—for example, a rule stipulating that risks below a certain level of probability are insignificant.27 They also claim that risk assessment, at least at the stage of calculating probabilities associated with harms and estimating their effects, is completely objective, neutral, and value free.28 In their view, the objectivity of risk identification and estimation guarantees the rationality of specific evaluations of various hazards.
The challenge, for any risk evaluator who holds some sort of middle position (between the cultural relativists and the naive positivists), is to show how risk evaluation (the third stage of assessment) can be rational and objective, even though there are no completely value-free rules applicable to every risk-evaluation situation. My purpose in this volume is (1) to articulate why and how both the cultural relativists and the naive positivists err in their general accounts of risk evaluation; (2) to explain the misconceptions in a number of specific risk-evaluation strategies allegedly deemed "rational"; and (3) to argue for a "middle position" on the methodological spectrum of views about how to guarantee the rationality of risk evaluation. I call this middle position "scientific proceduralism," and I defend it by means of arguments drawn from analogous debates over naturalism in contemporary philosophy of science.
Outline of the Chapters: Risk Evaluation Is Both Scientific and Democratic
In Chapter Two, "Science against the People," I introduce the problem of conflict over rational evaluations of risk. Specifically, I show how the cultural relativists and the naive positivists have wrongly dismissed lay evaluations of risk as irrational. The bulk of this chapter focuses on faulty epistemological assumptions underlying relativist and naive-positivist arguments about risk evaluation.
After defusing these arguments against "the people," in Chapter Three ("Rejecting Reductionist Risk Evaluation") I analyze in greater detail the two most basic risk frameworks out of which such antipopulist arguments arise. I show that both of these frameworks, naive positivism and cultural relativism, err in being reductionistic. The cultural relativists attempt to reduce risk to a sociological construct, underestimating or dismissing its scientific components. The naive positivists attempt to reduce risk to a purely scientific reality, underestimating or dismissing its ethical components. I argue that the sociological reductionists err in overemphasizing the role of values in risk evaluation, whereas the scientific reductionists err in underemphasizing the role of ethical values and democratic procedure in risk evaluation.
Because locating the flaws in accounts of rational risk evaluation comes down to clarifying the appropriate role of values at the third stage of hazard assessment, Chapters Four and Five attempt to provide a general overview of the various evaluative assumptions that are integral to all three stages of risk analysis. Chapter Four ("Objectivity and Values in Risk Evaluation") shows that—despite the presence of cognitive or methodological value judgments, even in pure science—science itself is objective in several important senses. After outlining the value judgments that arise in the three stages of risk assessment (risk identification, risk estimation, and risk evaluation), the chapter presents a case study from the field of energy studies. The case study shows how alternative value judgments at the first two stages of assessment can lead to radically different policy conclusions (third stage) regarding hazard acceptability.
Chapter Five ("Five Dilemmas of Risk Evaluation") shows how epistemic value judgments arise in the more scientific stages of risk assessment (viz., risk identification and estimation). It sketches some
analogous difficulties arising at the third, or risk-evaluation, stage. It argues not only that methodological value judgments are unavoidable in risk evaluation, but also that the judgments often pose both methodological and ethical dilemmas, problems for which there are no zero-cost solutions. Chapters in the last section of the volume (Part Three) argue that these five dilemmas raise troubling ethical questions and thus provide a basis for improving hazard evaluation.
Whereas the first part of the book (Chapters One through Five) provides an overview of risk analysis and evaluation and a discussion of the flaws in the two most general accounts of hazard evaluation, Part Two of the book (Chapters Six through Ten) addresses more specific difficulties in risk evaluation. Each chapter in Part Two evaluates a questionable methodological strategy common in various methods of risk evaluation.
Chapter Six, "Perceived Risk and the Expert-Judgment Strategy," argues that risk assessors' tendencies to distinguish "perceived risk" from "actual risk" are partially misguided. Typically, they claim that only "actual risk" (usually defined by experts as an average annual probability of fatality) is objective, whereas "perceived risk" (based merely on the feelings and opinions of laypersons) is subjective. I argue that both "perceived risk" and "actual risk" are partially subjective, since both involve value judgments. Further, I suggest that the evaluation stage of risk assessment will be more successful if analysts do not overemphasize the distinction between perceived and actual risk. Instead, they should focus on mediating ethical conflicts between experts and laypeople over risk evaluation.
Continuing in a similar vein, Chapter Seven also identifies a problematic strategy associated with the attempt to define risk in a largely quantitative way. This chapter, "Democracy and the Probabilistic Strategy," attacks two common methodological assumptions about risk evaluation. One is that risk abatement ought to be directed at the hazards to which persons are most averse. The other is that risk aversion ought to be evaluated as directly proportional to the probability of fatality associated with a particular hazard. After arguing against this probabilistic strategy, I show that other, nonprobabilistic criteria for risk evaluation (equity of risk distribution, for example) are equally plausible, in part because accurate knowledge of probabilities is sometimes difficult to obtain. If one employs these other criteria, I argue, one can conclude that technical experts should not be the only persons chosen to evaluate risks and therefore dictate which societal hazards are acceptable. Control of risk evaluation needs to become more democratic.
Chapter Eight, "Uncertainty and the Utilitarian Strategy," argues that, in many risk evaluations, it is more reasonable to pursue a "maximin" strategy, as most laypersons request, rather than the utilitarian (or Bayesian) strategy used by most experts. The main argument of this chapter is that, in situations of uncertainty, Bayesian accounts of risk evaluation are often unable to provide for considerations of equity and democratic process.
Chapter Nine, "Uncertainty and the Producer Strategy," addresses another problematic method of risk evaluation, one closely related to Bayesianism. The chapter asks whether, in a situation of uncertainty, one ought to implement a technology that is environmentally unsafe but not recognized as such (thereby running a "consumer risk") or fail to implement a technology that is environmentally safe but not recognized to be so (a "producer risk"). In cases of doubt, on whose side ought one to err? Chapter Nine argues that there are scientific, ethical, and legal grounds for minimizing consumer risk and maximizing producer risk, especially in cases of uncertainty.
Just as experts tend to overemphasize producer risk and underemphasize consumer risk, they also tend to discount hazards that are spatially or temporally distant. Because of this "discounting" tendency, risk assessors in developed countries often ignore the hazards their nation imposes on those in underdeveloped areas. I call this tendency the "isolationist strategy." Chapter Ten, "Third-World Risks and the Isolationist Strategy," argues that this risk-evaluation strategy is unethical.
Discussion of the isolationist strategy in Chapter Ten marks the end of the second part of the volume. Although this second section criticizes several of the problematic risk-evaluation methods (such as the probabilistic strategy, the Bayesian strategy, and the isolationist strategy) employed both by contemporary hazard assessors and by moral philosophers, it provides neither a technical nor an exhaustive account of all the questionable risk-evaluation methodologies.29 Instead, its purpose is both to provide an overview of representative risk-evaluation errors (the strategies criticized in Part Two) and to cast doubt on the thesis that expert assessment dictates the only risk evaluations that are 'rational'. Rather, rational risk evaluation and behavior may be more widely defined than has been supposed. And if so, there are grounds for doubting experts' claims that lay responses to, and evaluations of, societal risks are irrational.
Together, the chapters in the first and second sections of the volume provide an overview of much of what is wrong with contemporary hazard assessment and with allegedly rational risk evaluation. The purpose of the third section is to sketch some solutions to the problems outlined in the two earlier parts of the book. Chapter Eleven, "Risk Evaluation: Methodological Reforms," makes a number of specific suggestions in this regard. It begins by offering an alternative risk-evaluation paradigm, "scientific proceduralism." According to this paradigm, risk evaluation is procedural in that it ought to be guided by democratic processes and ethical principles. It is scientific or "objective" in at least three senses: (1) It can be the subject of rational debate and criticism. (2) It is partially dependent on probabilities that can be affected by empirical events. (3) It can be criticized in terms of how well it serves the scientific end or goal of explaining and predicting hazardous events and persons' responses to them.
After arguing that risk evaluation is largely objective, because it is based in part on probabilities, and because it is assessed on the basis of its explanatory and predictive power, I also argue that risk evaluation ought to be defined in terms of social and ethical values. Explaining how risk evaluation can be both objective and evaluative, Chapter Eleven outlines a number of specific suggestions for methodological improvements in hazard evaluation—for example, the use of ethically weighted risk-cost-benefit analysis (RCBA) and the ranking of experts' risk opinions on the basis of their past successful predictions. The chapter also substantiates the claim that hazard assessment—although burdened both with reductionist definitions of risk (Chapter Three) and with a number of biased methodological strategies (Chapters Six through Ten)—is objective in important ways. Hence, it makes sense to continue to use quantified risk analysis (QRA). That is, although in practice problems of risk evaluation have led to poor policy, in principle they are capable of being solved by means of improved risk-evaluation methods and more participatory styles of hazard management.
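For instance, an ethically weighted RCBA might multiply involuntarily imposed costs by a weight greater than one before summing. The following sketch uses an invented weighting scheme and invented dollar figures purely for illustration; it is not the author's specific proposal, but it shows how such weights can reverse the verdict of an unweighted calculation.

```python
# A minimal sketch of what an "ethically weighted" risk-cost-benefit analysis
# might look like. The weighting scheme and numbers are hypothetical, not the
# author's own proposal: costs borne involuntarily by third parties are
# weighted more heavily than voluntarily assumed costs and benefits.

project_effects = [
    # (description, dollar value: + benefit / - cost, borne voluntarily?)
    ("electricity sales",             +8_000_000, True),
    ("plant construction and fuel",   -5_000_000, True),
    ("expected health damage nearby", -2_000_000, False),
]

INVOLUNTARY_WEIGHT = 2.0  # hypothetical ethical weight on involuntary burdens

def weighted_total(effects):
    total = 0.0
    for _, value, voluntary in effects:
        weight = 1.0 if voluntary or value > 0 else INVOLUNTARY_WEIGHT
        total += weight * value
    return total

unweighted = sum(value for _, value, _ in project_effects)
print(f"Unweighted net benefit:     ${unweighted:,.0f}")
print(f"Ethically weighted result:  ${weighted_total(project_effects):,.0f}")
```

With these hypothetical figures the unweighted RCBA judges the project acceptable, while the ethically weighted version, by counting involuntary harm to nearby residents more heavily, does not.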
Having considered the methodological solutions to some of the difficulties with QRA and risk evaluation, I conclude by addressing certain procedural and institutional reforms needed to make risk management more rational. Chapter Twelve, "Risk Management: Procedural Reforms," argues that, consistent with a more naturalized view of all knowledge, we must place less emphasis on whose hazard evaluations are correct or incorrect and instead focus on negotiating workable risk-management principles and practices. In addition, we ought to make use of several insights from medical ethics, such as requiring free, informed consent prior to imposing risks; guaranteeing legal rights to due process and compensation for all unavoidable risks and harms; and applying the theory of "market share liability," as in the celebrated DES case.
These chapters come nowhere close, of course, to providing a complete explanation of what makes a risk evaluation rational. This account does not pretend to be complete, in part because the problems of risk evaluation are too numerous to be treated in a single, nontechnical volume, and in part because I attempted (as far as possible) to avoid repeating analyses given in my earlier works.30 These chapters will have accomplished their modest aim if they enable us to be more critical of existing attempts to define "risk evaluation" in highly stipulative and question-begging ways. They will have taken us in the right direction if they teach us to be suspicious whenever someone gratuitously attributes motives and causes to those allegedly exhibiting "irrational" risk evaluations. They will have helped us if they encourage us to consider alternative models of rationality and to remember that chronic errors in risk-evaluation heuristics are not limited to laypeople.31 This being so, determining when a risk evaluation is rational is as much the prerogative of the people as of the experts. Science need not co-opt democracy.
Excerpted from Risk and Rationality: Philosophical Foundations for Populist Reforms by K. S. Shrader-Frechette. Copyright © 1991 by K. S. Shrader-Frechette. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.