Precautionary Principles
The basic idea underlying a precautionary principle (PP) is often summarized as “better safe than sorry.” Even if it is uncertain whether an activity will lead to harm, for example, to the environment or to human health, measures should be taken to prevent harm. This demand is partly motivated by the consequences of regulatory practices of the past. Often, chances of harm were disregarded because there was no scientific proof of a causal connection between an activity or substance and harm, for example, between asbestos and lung diseases. When this connection was finally established, it was often too late to prevent severe damage.
However, it is highly controversial how the vague intuition behind “better safe than sorry” should be understood as a principle. Consequently, we find a multitude of interpretations, ranging from decision rules and epistemic principles to procedural frameworks. To acknowledge this diversity, it makes sense to speak of precautionary principles (PPs) in the plural. PPs are not without critics. For example, it has been argued that they are paralyzing, unscientific, or promote a culture of irrational fear.
This article systematizes the different interpretations of PPs according to their functions, gives an overview of the main lines of argument in favor of PPs, and outlines the most frequent and important objections made to them.
Table of Contents
The Idea of Precaution and Precautionary Principles
Interpretations of Precautionary Principles
Action-Guiding Interpretations
Decision Rules
Context-Sensitive Principles
Epistemic Interpretations
Standards of Evidence
Type I and Type II Errors
Precautionary Defaults
Procedural Interpretations
Argumentative, or “Meta”-PPs
Transformative Decision Rules
Reversing the Burden of Proof
Procedures for Determining Precautionary Measures
Integrated Interpretations
Particular Principles for Specific Contexts
An Adjustable Principle with Procedural Instructions
Justifications for Precautionary Principles
Practical Rationality
Ordinary Risk Management
PPs in the Framework of Ordinary Risk Management
Reforming Ordinary Risk Management
Moral Justifications for Precaution
Environmental Ethics
Harm-Based Justifications
Justice-Based Justifications
Rights-Based Justifications
Ethics of Risk and Risk Impositions
Main Objections and Possible Rejoinders
PPs Cannot Guide Our Decisions
PPs are Redundant
PPs are Irrational
References and Further Reading
1. The Idea of Precaution and Precautionary Principles
We can identify three main motivations behind the postulation of a PP. First, it stems from a deep dissatisfaction with how decisions were made in the past: Often, early warnings have been disregarded, leading to significant damage which could have been avoided by timely precautionary action (Harremoës and others 2001). This motivation for a PP rests on some sort of “inductive evidence” that we should reform (or maybe even replace) our current practices of risk regulation, demanding that uncertainty must not be a reason for inaction (John 2007).
Second, it expresses specific moral concerns, usually pertaining to the environment, human health, and/or future generations. This second motivation is often related to the call for sustainability and sustainable development: the demand not to destroy important resources for short-term gains, but to leave future generations with an intact environment.
Third, PPs are discussed as principles of rational choice under conditions of uncertainty and/or ignorance. Typically, rational decision theory is well suited for situations where we know the possible outcomes of our actions and can assign probabilities to them (a situation of “risk” in the decision-theoretic sense). However, the situation is different under decision-theoretic uncertainty (where we know the possible outcomes, but cannot assign any, or at least no meaningful and precise, probabilities to them) or decision-theoretic ignorance (where we do not know the complete set of possible outcomes). Although there are several suggestions for decision rules under these circumstances, it is far from clear what the most rational way to decide is when we lack important information and the stakes are high. PPs are one proposal to fill this gap.
Although they are often asserted individually, these motivations also complement each other: If, following the first motivation, uncertainty is not allowed to be a reason for inaction, then we need some guidance for how to decide under such circumstances, for example, in the form of a decision principle. And in many cases, it is the second motivation—concerns for the environment or human health—which makes the demand for precautionary action before obtaining scientific certainty especially pressing.
Many existing official documents cite the demand for precaution. One often-quoted example of a PP is Principle 15 of the Rio Declaration on Environment and Development, a result of the United Nations Conference on Environment and Development (UNCED) in 1992. It refers to a “precautionary approach”:
Rio PP—In order to protect the environment, the precautionary approach shall be widely applied by states according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation. (United Nations Conference on Environment and Development 1992, Principle 15)
Another prominent example is the formulation that resulted from the Wingspread Conference on the Precautionary Principle in 1998, where around 35 scientists, lawyers, policy makers, and environmentalists from the United States, Canada, and Europe met to define a PP:
Wingspread PP—When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public, should bear the burden of proof. The process of applying the precautionary principle must be open, informed and democratic and must include potentially affected parties. It must also involve an examination of the full range of alternatives, including no action. (Science & Environmental Health Network (SEHN) 1998)
Both formulations are often cited as paradigmatic examples of PPs. Although they both mention uncertain threats and measures to prevent them, they also differ in important points, for example in their strength: The Rio PP makes a weaker claim, stating that uncertainty is not a reason for inaction, whereas the Wingspread PP puts more emphasis on the fact that measures should be taken. They both give rise to a variety of questions: What counts as “serious or irreversible damage”? What does “(lack of) scientific certainty” mean? How plausible does a threat have to be in order to warrant precaution? What counts as precautionary measures? In addition, PPs face many criticisms, such as being too vague to be action-guiding, paralyzing the decision process, or being anti-scientific and promoting a culture of irrational fear.
Thus, inspired by these regulatory principles in official documents, a lively debate has developed around how PPs should be interpreted in order to arrive at a version applicable in practical decision-making. This has resulted in a multitude of PP proposals that are formulated and defended (or criticized) in different theoretical and practical contexts. Most of the existing PP formulations share the elements of uncertainty, harm, and (precautionary) action. Different ways of spelling out these elements result in different PPs (Sandin 1999, Manson 2002). For example, they can vary in how serious a harm has to be in order to trigger precaution, or in what amount of evidence is needed. In addition, PP interpretations differ with respect to the function they are intended to fulfill. They are typically classified according to their function, based on some combination of the following categories (Sandin 2007, 2009; Munthe 2011; Steel 2014):
Action-guiding principles tell us which course of action to choose given certain circumstances;
(sets of) epistemic principles tell us what we should reasonably believe under conditions of uncertainty;
procedural principles express requirements for decision-making, and tell us how we should choose a course of action.
These categories can overlap, for example, when action- or decision-guiding principles come with at least some indication of how they should be applied. Some interpretations explicitly aim at integrating the different functions, and warrant their own category:
Integrated PP interpretations: Approaches that integrate action-guiding, epistemic, and procedural elements associated with PPs. Consequently, they tell us which course of action should be chosen through which procedure, and on what epistemic basis.
This article starts in Section 2 with an overview of different PP interpretations according to this functional categorization. Section 3 describes the main lines of argument that have been presented in favor of PPs, and Section 4 presents the most frequent and most important objections that PPs face, along with possible rejoinders.
2. Interpretations of Precautionary Principles
a. Action-Guiding Interpretations
Action-guiding PPs are often seen on a par with decision rules from rational decision theory. On the one hand, authors formalize PPs by using decision rules already established in decision theory, like maximin. On the other hand, they formulate new principles. While not necessarily located within the framework of decision theory, these are intended to work at the same level. Understood as principles of risk management, they are supposed to help determine a course of action given our knowledge and our values.
i. Decision Rules
The terms used for decision-theoretic categories of non-certainty differ. In this article, they are used as follows: Decision-theoretic risk denotes situations in which we know the possible outcomes of actions and can assign probabilities to them. Decision-theoretic uncertainty refers to situations in which we know the possible outcomes, but either no or only partial or imprecise probability information is available (Hansson 2005a, 27). When we do not even know the full set of possible outcomes, we have a situation of decision-theoretic ignorance. When formulated as decision rules, the “(scientific) uncertainty” component of PPs is often spelled out as decision-theoretic uncertainty.
Maximin
The idea to operationalize a PP with the maximin decision rule occurred early within the debate and is therefore often associated with PPs (for example, Hansson 1997; Sunstein 2005b; Gardiner 2006; Aldred 2013).
In order to be able to apply the maximin rule, we have to know the possible outcomes of our actions and be able to at least rank them on an ordinal scale (meaning that for each outcome, we can tell whether it is better than, worse than, or equally as good as every other possible outcome). The rule then tells us to select the option with the best worst case in order to “maximize the minimum.” Thus, the maximin rule seems like a promising candidate for a PP: It pays special attention to the prevention of threats, and is applicable under conditions of uncertainty. However, as has repeatedly been pointed out, maximin is not a plausible rule of choice in general. Consider the decision matrix in Table 1.
             Scenario1   Scenario2
Alternative1     7           6
Alternative2    15           5
Table 1: Simplified Decision Matrix with Two Alternative Courses of Action.
Maximin selects Alternative1. This seems excessively risk-averse, because the best case of Alternative2 is much better and its worst case only slightly worse, as long as we assume (a) that the utilities in this example are cardinal utilities, and (b) that no relevant threshold is passed. If we knew that the probability of Scenario1 is 0.99 and the probability of Scenario2 only 0.01, then it would arguably be absurd to apply maximin. Proponents of interpreting a PP with maximin have thus stressed that it needs to be qualified by some additional criteria in order to provide a plausible PP interpretation.
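For concreteness, the bare, unqualified rule can be stated in a few lines of code. The following is only an illustrative sketch (the numbers are those of Table 1; the code is not part of any cited proposal):

```python
# Maximin on Table 1: evaluate each alternative by its worst outcome,
# then pick the alternative whose worst outcome is best.
alternatives = {
    "Alternative1": [7, 6],    # outcomes under Scenario1, Scenario2
    "Alternative2": [15, 5],
}

maximin_choice = max(alternatives, key=lambda a: min(alternatives[a]))
print(maximin_choice)  # Alternative1: its worst case (6) beats Alternative2's (5)
```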
The most prominent example is Gardiner (2006), who draws on criteria suggested by Rawls to determine conditions under which the application of maximin is plausible:
Knowledge of likelihoods for the possible outcomes of the actions is impossible or at best extremely insecure;
the decision-makers care relatively little for potential gains that might be made above the minimum that can be guaranteed by the maximin approach;
the alternatives that will be rejected by maximin have unacceptable outcomes; and
the outcomes considered are in some adequate sense “realistic”, that is, only credible threats should be considered.
Condition (3) makes it clear that the guaranteed minimum (condition 2) needs to be acceptable to the decision-makers (see also Rawls 2001, 98). What it means that ‘gains above the guaranteed minimum are relatively little cared for’ (condition 2) has been spelled out by Aldred (2013) in terms of incommensurability between outcome values, that is, some outcomes are so bad that they cannot be outweighed by potential gains. It is thus better to choose an option that promises only modest gains but guarantees that the extremely bad outcome cannot materialize.
Gardiner argues that a maximin rule that is qualified by these criteria fits well with some core cases where we agree that precaution is necessary, and calls it the “Rawlsian Core Precautionary Principle (RCPP)”. He names the purchase of insurance as an everyday example where his RCPP fits well with our intuitive judgments and where precaution seems already justified on its own. According to Gardiner, it also fits well with often-named paradigmatic cases for precaution like climate change: The controversy over whether or not we should take precautions in the climate case is not a debate about the right interpretation of the RCPP but rather about whether the conditions for its application are fulfilled—for example, which outcomes are unacceptable (Gardiner 2006, 56).
Minimax Regret
Another decision rule that is discussed in the context of PPs is the minimax regret rule. Whereas maximin selects the course of action with the best worst case, minimax regret selects the course of action with the lowest maximal regret. The regret of an outcome is calculated by subtracting its utility from the highest utility one could have achieved under this state by selecting another course of action. This strategy tries to minimize one’s regret for not having made, in hindsight, the superior choice. Like the maximin rule, the minimax regret rule does not presuppose any probability information. However, while for the maximin rule it is enough if outcomes can be ranked on an ordinal scale, the minimax regret rule requires that we are able to assign cardinal utilities to the possible outcomes. Otherwise, regret cannot be calculated.
Take the following example from Hansson (1997), in which a lake seems to be dying for reasons that we do not fully understand: “We can choose between adding substantial amounts of iron acetate, and doing nothing. There are three scientific opinions about the effects of adding iron acetate to the lake. According to opinion (1), the lake will be saved if iron acetate is added, otherwise not. According to opinion (2), the lake will self-repair anyhow, and the addition of iron acetate makes no difference. According to opinion (3), the lake will die whether iron acetate is added or not.” The consensus is that the addition of iron acetate will have certain negative effects on land animals that drink water from the lake, but that effect is less serious than the death of the lake. Assigning the value -12 to the death of the lake and -5 to the negative effects of iron acetate in the drinking water, we arrive at the utility matrix in Table 2.
                  (1)    (2)    (3)
Add iron acetate  -5     -5    -17
Do nothing       -12      0    -12
Table 2: Utility Matrix for the Dying-Lake Case
We can then obtain the regret table by subtracting the utility of each outcome from the highest utility in each column, the result being Table 3. Minimax regret then selects the option to add iron acetate to the lake.
                  (1)    (2)    (3)
Add iron acetate   0      5      5
Do nothing         7      0      0
Table 3: Regret Matrix for the Dying-Lake Case
Chisholm and Clarke (1993) strongly support the minimax regret rule. They argue that it is better suited for a PP than maximin, since it gives some weight to foregone benefits. They also show that even if it is uncertain whether precautionary measures will be effective, minimax regret still recommends them as long as the expected damage from not implementing them is large enough. They advocate so-called “dual purpose” policies, where precautionary measures have other positive effects, even if they do not fulfill their main purpose. One example is measures that are aimed at abating global climate change, but at the same time have direct positive effects on local environmental problems. By contrast, Hansson (1997) argues that to take precautions means to avoid bad outcomes, and especially to avoid worst cases. Consequently, he defends maximin and not minimax regret as the adequate PP interpretation. Maximin would, as Table 2 shows, select not adding iron acetate to the lake. According to Hansson, this is the precautionary choice, as adding iron acetate could lead to a worse outcome than not adding it.
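Both rules can be checked against the dying-lake case with a short computation. The following sketch merely reproduces Tables 2 and 3 and the two selections discussed above; the code itself is not part of Hansson’s or Chisholm and Clarke’s presentations:

```python
# Utilities from Table 2 (rows: acts; columns: opinions (1), (2), (3)).
utilities = {
    "add_iron_acetate": [-5, -5, -17],
    "do_nothing":       [-12, 0, -12],
}

# Regret of an outcome: best achievable utility in its column minus its utility.
columns = list(zip(*utilities.values()))
best_per_column = [max(col) for col in columns]
regret = {
    act: [best - u for best, u in zip(best_per_column, row)]
    for act, row in utilities.items()
}
print(regret)  # reproduces Table 3: add -> [0, 5, 5], do nothing -> [7, 0, 0]

# Minimax regret picks the act with the smallest maximal regret; maximin picks
# the act with the best worst case. As described above, the two rules disagree.
print(min(regret, key=lambda a: max(regret[a])))        # add_iron_acetate (5 < 7)
print(max(utilities, key=lambda a: min(utilities[a])))  # do_nothing (-12 > -17)
```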
ii. Context-Sensitive Principles
Other interpretations of PPs as action-guiding principles differ from stand-alone if-this-then-that decision rules. They stress that principles have to be interpreted and concretized depending on the specific context (Fisher 2002; Randall 2011).
A Virtue Principle
Sandin (2009) argues that one can reinterpret a PP as an action-guiding principle not by reference to decision theory, but by using cautiousness as a virtue. He formulates an action-guiding virtue principle of precaution (VPP):
VPP—Perform those, and only those, actions that a cautious agent would perform in the circumstances. (Sandin 2009, 98)
Although virtue principles are commonly criticized as not being action-guiding, Sandin argues that understanding a PP in this way actually makes it more action-guiding. “Cautious” is interpreted as a virtue term that refers to a property of an agent, like “courageous” or “honest”. Sandin states that it is often possible to identify what the virtuous agent would do: either because it is obvious, or because at least some agreement can be reached. Even the uncertain cases the VPP deals with belong to classes of situations with which we have experience, for example, failed regulations of the past, and we can therefore assess what the cautious agent would (not) have done and extrapolate from that to other cases (Sandin 2009, 99). According to Sandin, interpreting a PP as a virtue principle will avoid both objections of extremism and paralysis. It is unlikely that the virtuous agent will choose courses of action which will, in the long run, have overall negative effects or are self-refuting (like “ban activity a and do not ban activity a!”). However, even if one accepts that it makes sense to interpret “cautious” as a virtue, “the circumstances” under which one should choose the course of action that the cautious agent would choose are not specified in the VPP as it is formulated by Sandin. This makes it an incomplete proposal.
Reasonableness and Plausibility
Another important example is the PP interpretation by Resnik (2003, 2004), who defends a PP as an alternative to maximin and other strategies for decision-making in situations where we lack the type of empirical evidence that one would need for a risk management that uses probabilities obtained from risk assessment. His PP interpretation, which we can call the “reasonable measures precautionary principle” (RMPP), reads as follows:
RMPP—One should take reasonable measures to prevent or mitigate threats that are plausible and serious.
The seriousness of a threat relates to its potential for harm, as well as to whether or not the possible damage is seen as reversible (Resnik 2004, 289). Resnik emphasizes that reasonableness is a highly pragmatic and situation-specific concept. He names some neither exhaustive nor necessary criteria for reasonable responses: They should be effective, proportional to the nature of the threat, take a realistic attitude toward the threat, be cost-effective, and be applied consistently (Resnik 2003, 341–42). Lastly, that threats have to be plausible means that there have to be scientific arguments for the plausibility of a hypothesis. These can be based on epistemic and/or pragmatic criteria, including for example coherence, explanatory power, analogy, precedence, precision, or simplicity. Resnik stresses that a threat being plausible is not the same as a threat being even minimally probable: We might accept threats as plausible that we think to be all but impossible to come to fruition (Resnik 2003, 341).
This shows that the question of when a threat should count as plausible enough to warrant precautionary measures is very important for the application of an action-guiding PP. Consequently, such PPs are often very sensitive to how a problem is framed. Some authors have taken these aspects—the weighing of evidence and the description of the decision problem—to be the central points of PPs, and interpreted them as epistemic principles, that is, principles at the level of risk assessment.
b. Epistemic Interpretations
Authors who defend an epistemic PP interpretation argue that we should accept that PPs are not principles that can guide our actions, but that this is neither a problem nor against their spirit. Instead of telling us how to act when facing uncertain threats of harm, they propose that PPs tell us something about how we should perceive these threats, and what we should take as a basis for our actions, for example, by relaxing the standard for the amount of evidence required to take action.
i. Standards of Evidence
One interpretation of an epistemic PP is to give more weight to evidence suggesting a causal link between an activity and threats of serious and irreversible harm than one gives to evidence suggesting less dangerous, or beneficial, effects. This could mean assigning a higher probability to an effect occurring than one would in other circumstances based on the same evidence. Arguably, the underlying idea of this PP can be traced back to the German philosopher Hans Jonas, who proposed a “heuristic of fear”, that is, to give more weight to pessimistic forecasts than to optimistic ones (Jonas 2003). However, this PP interpretation has been criticized on the basis that it systematically discounts evidence pointing in one direction, but not in the other. This could lead to distorted beliefs about the world in the long run, being detrimental to our epistemic and scientific progress and eventually doing more harm than good (Harris and Holm 2002).
However, other authors point out that we might have to distinguish between “regulatory science” and “normal science”. Different epistemic standards are appropriate for the two contexts since they have different aims: In normal science, we are searching for truth; in regulatory science, we are primarily interested in reducing risk and avoiding harm (John 2010). Consequently, Peterson (2007a) refers in his epistemic PP interpretation only to decision makers—not scientists—who find themselves in situations involving risk or uncertainty. He argues that in such cases, decision-makers should strive to acquire beliefs that are likely to protect human health, and that it is less important whether they are also likely to be true. One principle that has been promoted in order to capture this idea is the preference for false positives, that is, for type I errors over type II errors.
ii. Type I and Type II Errors
Is it worse to falsely assert that there is a relationship between two classes of events which does not exist (a false positive), or to fail to assert such a relationship when it in fact exists (a false negative)? For example, would you prefer virus software on your computer which classifies a harmless program as a virus (a false positive) or rather one that misses a malicious program (a false negative)? Statistical hypothesis testing tests the so-called null hypothesis, which is the default view that there is no relationship between two classes of events, or groups. Rejecting a true null hypothesis is called a type I error, whereas failing to reject a false null hypothesis is a type II error. Which type of possible error should we try to minimize, if we cannot minimize both at once?
In (normal) science, it is valued more highly not to include false assertions in the body of knowledge, which would distort it in the long term. Thus, the default assumption—the null hypothesis—is that there is no connection between two classes of events, and typically statistical procedures are used that minimize type I errors (false positives), even if this might mean that an existing connection is missed (at least at first, or for a long time) (John 2010). To believe that an existing deterministic or probabilistic connection between two classes of events does not exist might slow down progress in normal science, which aims at truth. However, in regulatory contexts it might be disastrous to believe falsely that a substance is safe when it is not. Consequently, a prominent interpretation of an epistemic PP takes it to entail a preference for type I errors over type II errors in regulatory contexts (see for example Lemons, Shrader-Frechette, and Cranor 1997; Peterson 2007a; John 2010).
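The trade-off can be made vivid with a small simulation. The sketch below is purely illustrative (a one-sided z-test with known standard deviation, and an invented effect size and sample size, none of which are drawn from the cited literature); it shows how relaxing the significance level, as a precautionary standard of evidence would, trades fewer missed harms for more false alarms:

```python
# Toy simulation of the type I / type II trade-off when testing the null
# hypothesis "no harmful effect" (mean 0, sd 1) with a one-sided z-test.
import random

random.seed(0)

def rejection_rate(true_effect, z_crit, n=20, trials=10_000):
    """How often the test asserts an effect, given the true effect size."""
    count = 0
    for _ in range(trials):
        mean = sum(random.gauss(true_effect, 1.0) for _ in range(n)) / n
        count += (mean * n ** 0.5) > z_crit   # z statistic vs. critical value
    return count / trials

for z_crit, label in [(1.645, "alpha = 0.05"), (0.84, "alpha = 0.20 (precautionary)")]:
    type_i = rejection_rate(0.0, z_crit)        # harm asserted although absent
    type_ii = 1 - rejection_rate(0.5, z_crit)   # real harm missed
    print(f"{label}: type I ~ {type_i:.2f}, type II ~ {type_ii:.2f}")
```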
Merely favoring one type of error over another might not be enough. It has been argued that the underlying methodology of either rejecting or accepting hypotheses does not sufficiently allow for identifying and tracking uncertainties. If a PP is understood as a principle that relaxes the standard for the amount of evidence required to take action, then a new epistemology might be needed: one that allows for integrating the uncertainty about the causal connection between, for example, a drug and a harm into the decision (Osimani 2013).
iii. Precautionary Defaults
The use of precautionary regulatory defaults is one proposal for how to deal with having to make regulatory decisions in the face of insufficient information (Sandin and Hansson 2002; Sandin, Bengtsson, and others 2004). In regulatory contexts, there are often situations in which a decision has to be made on how to treat a potentially harmful substance that also has some (potential) benefits. Unlike in normal science, it is not possible to wait and collect further evidence before a verdict is made. The substance has to be treated one way or another while waiting for further evidence. Thus, it has been suggested that we should use regulatory defaults, that is, assumptions that are used in the absence of adequate information and that should be replaced if such information were obtained. They should be precautionary defaults, building in special margins of safety in order to make sure that the environment and human health get sufficient protection. One example is the use of uncertainty factors in toxicology. Such uncertainty factors play a role in estimating reference doses which are acceptable for humans by dividing a level of exposure found acceptable in animal experiments by a number (usually 100) (Steel 2011, 356). This takes into account that there are significant uncertainties, for example, in extrapolating the results from animals to humans. Such defaults are a way to handle uncertain threats. Nevertheless, they should not be confused with actual judgments about what properties a particular substance has (Sandin, Bengtsson, and others 2004, 5). Consequently, an epistemic PP does not have to be understood as a belief-guiding principle, but as saying something about which methods for risk assessment are legitimate, for example, for quantifying uncertainties (Steel 2011). On this view, precautionary defaults like uncertainty factors in toxicology are methodological implications of a PP that allow it to be applied in a scientifically sound way while protecting human health and the environment.
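The arithmetic behind such a default is simple; what makes it precautionary is the conventional margin of safety. A back-of-the-envelope illustration (the dose figures are invented):

```python
# Reference dose via uncertainty factors: the exposure level found acceptable
# in animal experiments is divided by a safety factor, conventionally 100.
animal_noael = 50.0              # mg/kg/day, acceptable in animal studies (invented figure)
uncertainty_factor = 10 * 10     # 10 for animal-to-human extrapolation, 10 for human variability
reference_dose = animal_noael / uncertainty_factor
print(reference_dose)            # 0.5 mg/kg/day used as the precautionary default
```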
Given this, it might be misleading to interpret a PP as a purely epistemic principle, if it is not guiding our beliefs but telling us what assumptions to accept, that is, to act as if certain things were true as long as we do not have more information. Thus, it has been argued that a PP is better interpreted as a procedural requirement, or as a principle that imposes several such procedural requirements (Sandin 2007, 103–4).
c. Procedural Interpretations
It has been argued that we should shift our attention when interpreting PPs from the question of what action to take to the question of what is the best way to reach decisions.
i. Argumentative, or “Meta”-PPs
Argumentative PPs are procedural principles specifying what kinds of arguments are admissible in decision-making (Sandin, Peterson, and others 2002). They are different from prescriptive, or action-guiding, PPs in that they do not directly prescribe actions that should be taken. Take Principle 15 of the Rio Declaration on Environment and Development. On one interpretation, it states that arguments for inaction which are based solely on the ground that we lack full scientific certainty are not acceptable arguments in the decision-making procedure:
Rio PP—“In order to protect the environment, the precautionary approach shall be widely applied by states according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.” (United Nations Conference on Environment and Development 1992, Principle 15)
Such an argumentative PP is seen as a meta-rule that places real constraints on what types of decision rules should be used: For example, by entailing that decision procedures should be used that are applicable under conditions of uncertainty, it recommends against some of the traditional approaches in risk regulation like cost-benefit analysis (Steel 2014). Similarly, it has been proposed that the idea behind PPs is best interpreted as a general norm that demands a fundamental shift in our way of risk regulation, based on an obligation to learn from regulatory mistakes of the past (Whiteside 2006).
ii. Transformative Decision Rules
Similar to argumentative principles, an interpretation of a PP as a transformative decision rule does not tell us which action should be taken, but it puts constraints on which actions can be considered as valid options. Informally, a transformative decision rule is defined as a decision rule that takes one decision problem as input, and yields a new decision problem as output (Sandin 2004, 7). For example, the following formulation of a PP as a transformative decision rule (TPP) has been proposed by Peterson (2003):
TPP—If there is a non-zero probability that the outcome of an alternative act is very low, that is, below some constant c, then this act should be removed from the decision-maker’s list of options.
Thus, the TPP excludes courses of action that could lead, for example, to catastrophic outcomes from the options available to the decision maker. However, it does not tell us which of the remaining options should be chosen.
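The transformative character of the rule is easy to exhibit in code. The following sketch (acts, outcomes, and the threshold c are all invented for illustration) shows that the TPP returns a reduced decision problem rather than a decision:

```python
# TPP as a transformative decision rule: remove every act that carries a
# non-zero probability of an outcome with utility below the constant c.
def tpp_filter(options, c):
    return {
        act: lottery
        for act, lottery in options.items()
        if all(utility >= c for utility, probability in lottery if probability > 0)
    }

options = {                                  # lotteries as (utility, probability) pairs
    "deploy":   [(10, 0.99), (-1000, 0.01)], # tiny chance of catastrophe
    "restrict": [(4, 0.9), (-20, 0.1)],
    "ban":      [(0, 1.0)],
}
print(tpp_filter(options, c=-100))  # 'deploy' is excluded; another rule must pick among the rest
```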
iii. Reversing the Burden of Proof
The requirement of a reversal of the burden of proof is one of the most prominent specific procedural requirements that are named in connection with PPs. For example, the influential communication from the Wingspread Conference on the Precautionary Principle (1998) states that “the proponent of an activity, rather than the public, should bear the burden of proof.”
One common misconception is that the proponent of a potentially dangerous activity would have to prove with absolute certainty that the activity is safe. This gave rise to the objection that PPs are too demanding, and would therefore bring all progress to a halt (Harris and Holm 2002). However, the idea is rather that we have to change our approach to regulatory policy: Proponents of an activity have to prove to a certain threshold that it is safe in order to employ it, instead of the opponents having to prove to a certain threshold that it is harmful in order to ban it.
Thus, whether or not the situation is one in which the burden of proof is reversed depends on the status quo. Instead of speaking of shifting the burden of proof, it seems more sensible to ask what has to be proven, and who has to provide what kind of evidence for it. The important point that then remains to be clarified is what standards of proof are accepted.
An alternative proposal to shifting the burden of proof is that both regulators and proponents of an activity should share it (Arcuri 2007): If opponents want to regulate an activity, they should at least provide some evidence that the activity might lead to serious or irreversible harm, even though we are lacking evidence to prove it with certainty. Proponents, on the other hand, should provide certain information about the activity in order to get it approved. Who has the burden of proof can play an important role in the production of information: If proponents have to show (to a certain standard) that their activity is safe, this generates an incentive to gather information about the activity, whereas in the other case—“safe until proven otherwise”—they might deliberately refrain from this (Arcuri 2007, 15).
iv. Procedures for Determining Precautionary Measures
Interpreted in a procedural way, a PP puts constraints on how a problem should be described or how a decision should be made. It does not dictate a specific decision or action. This is in line with one interpretation of what it means to be a principle as opposed to a rule. While rules specify precise consequences that follow automatically when certain conditions are met, principles are understood as guidelines whose interpretation will depend on specific contexts (Fisher 2002; Arcuri 2007).
Developing a procedural precautionary framework that integrates different procedural requirements is a way to enable the context-dependent specification and implementation of such a PP. One example is Tickner’s (2001) “precautionary assessment” framework, which consists of six steps that are supposed to guide decision-making as a heuristic device. The first five steps—(1) Problem Scoping, (2) Participant Analysis, (3) Burden/Responsibility Allocation Analysis, (4) Environment and Health Impact Analysis, and (5) Alternatives Assessment—serve to describe the problem, identify stakeholders, and assess possible consequences as well as available alternatives. In the final step, (6) Precautionary Action Analysis, the appropriate precautionary measure(s) are determined based on the results from the other steps. These decisions are not permanent, but should be part of a continuous process of increasing understanding and reducing overall impacts.
A big advantage of such procedural implementations of PPs is that the components are clarified on a case-by-case basis. This avoids an oversimplification of the decision process and takes the complexity of decisions under uncertainty into account. However, they are criticized for losing the “principle” part of PPs: For example, Sandin (2007) argues that procedural requirements form a heterogeneous category, and that a procedural PP would soon dissolve beyond recognition because it is intermingled with other (rational, legal, moral, and so on) principles and rules. In response, some authors try to preserve the “principle” in PPs, while also taking into account procedural as well as epistemic elements.
d. Integrated Interpretations
We can find two main strategies for formulating a PP that is still identifiable as an action-guiding principle while integrating procedural as well as epistemic considerations: either (1) developing particular principles that are specific to a certain context, and accompanied by a procedural framework for this context; or (2) describing the structure and the main elements of a PP plus naming criteria for adjusting those elements on a case-by-case basis.
i. Particular Principles for Specific Contexts
It has been argued that the general talk of “the” PP should be abandoned in favor of formulating distinct precautionary principles (Hartzell-Nichols 2013). This strategy aims to arrive at action-guiding and coherent principles by formulating particular PPs that apply to a narrow range of threats and express a specific obligation. One example is the “Catastrophic Harm PP (CHPP)” of Hartzell-Nichols (2012, 2017), which is restricted to catastrophic threats. It consists of eight conditions that specify when precautionary measures have to be taken, spelling out (a) what counts as a catastrophe, (b) the knowledge requirements for taking precaution, and (c) criteria for appropriate precautionary measures. The CHPP is accompanied by a “Catastrophic Precautionary Decision-Making Framework” which guides the assessment of threats in order to decide whether they meet the CHPP’s criteria, and guides decision-makers in determining what precautionary measures should be taken against a particular threat of catastrophe. This framework lists key considerations and steps that should be performed when applying the CHPP, for example, drawing on all available sources of information, assessing likelihoods of potential harmful outcomes under different scenarios, identifying all available courses of precautionary action and their effectiveness, and identifying specific actors who should be held responsible for taking the prescribed precautionary measures.
ii. An Adjustable Principle with Procedural Instructions
Identifying the main elements of a PP and accompanying them with rules for adjusting them on a case-by-case basis is another strategy to preserve the idea of a precautionary principle while avoiding both inconsistency and vagueness. It has been shown that, as diverse as PP formulations are, they typically share the elements of uncertainty, harm, and (precautionary) action (Sandin 1999, Manson 2002). By explicating these concepts and, most importantly, by defining criteria for how they should be adjusted with respect to each other, some authors obtain a substantial PP that can be adjusted on a case-by-case basis without becoming arbitrary.
One example is the PP that Randall (2011) develops in the context of an in-depth analysis of traditional, or as he calls it, ordinary risk management (ORM). Randall identifies the following “general conceptual form of PP”:
If there is evidence stronger than E that an activity raises a threat more serious than T, we should invoke a remedy more potent than R.
Threat, T, is explicated as chance of harm, meaning that threats are assessed and compared according to their magnitude and likelihood. Our knowledge of outcomes and likelihoods is explicated with the concept of evidence, E, referring to uncertainty in the sense of our incomplete knowledge about the world. The precautionary response is conceptualized as remedy, R, which covers a wide range of responses: averting the threat, remediating its damage, mitigating harm, and adapting to changed conditions after other remedies have been exhausted. Remedies should fulfill a double function, (1) providing protection from a plausible threat, while at the same time (2) generating additional evidence about the nature of the threat and the effectiveness of various remedial actions. The main relation between the three elements is that the higher the likelihood that the remedy-process will generate more evidence, the smaller the threat-standard and the lower the evidence-standard that should be required before invoking the remedy, even if we have concerns about its effectiveness (Randall 2011, 167).
Having clarified the concepts used in the ETR-framework, Randall specifies them in order to formulate a PP that accounts for the weaknesses of ORM:
Credible scientific evidence of plausible threat of disproportionate and (mostly but not always) asymmetric harm calls for avoidance and remediation measures beyond those recommended by ordinary risk management. (Randall 2011, 186)
He then goes on to integrate this PP and ORM into an integrated risk management framework. Randall stresses that a PP cannot determine the decision process on its own. As a moral principle, it has to be weighed against other moral, political, economic, and legal considerations. Thus, he also calls for the development of a procedural framework to ensure that its substantial normative commitments will be implemented on the ground (Randall 2011, 207).
Steel (2014, 2013) develops a comprehensive PP interpretation which is intended to be “a procedural requirement, a decision rule, and an epistemic rule” (Steel 2014, 10). Referring to the Rio Declaration, Steel argues that such a formulation of a PP states that our decision process should be structured differently, namely that decision rules should be used that can be applied in an informative way under uncertainty. However, he does not take this procedural element to be the whole PP, but interprets it as a “meta”-rule which guides the application and specification of the precautionary “tripod” of threat, uncertainty, and precautionary action. More precisely, Steel’s proposed PP consists of three core elements:
The Meta Precautionary Principle (MPP): Uncertainty must not be a reason for inaction in the face of serious threats.
The Precautionary Tripod: The elements that have to be specified in order to obtain an action-guiding version of the precautionary principle, namely: If there is a threat that meets the harm condition under a given knowledge condition, then a recommended precaution should be taken.
Proportionality: Demands that the elements of the Precautionary Tripod are adjusted proportionally to each other, understood as Consistency (the recommended precaution must not be recommended against by the same PP version) and Efficiency (among those precautionary measures that can be consistently recommended by a PP version, the least costly one should be chosen).
An application of this PP requires selecting what Steel calls a “relevant version of PP,” that is, a specific instance of the Precautionary Tripod that meets the constraints from both the MPP and Proportionality. To obtain such a version, Steel (2014, 30) proposes the following strategy: (1) select a desired safety target and define the harm condition as a failure to meet this target; (2) select the least stringent knowledge condition that results in a consistently applicable version of PP, given the harm condition. To comply with the MPP, uncertainty must neither render the PP version inapplicable nor lead to continual delay in taking measures to prevent harm.
Thus, Steel’s PP proposal guides decision-makers both in formulating the appropriate PP version and in applying it. The process of formulating the particular version already deals with many questions, like how evidence should be assessed, who has to prove what, to what kind of threats we should react, and what appropriate precautionary measures would be. Arguably, this PP can thereby be action-guiding, since it helps to select specific measures, without being a rigid prescriptive rule that is not suited for decisions under uncertainty.
In addition, proposals like those of Randall and Steel have the advantage that they are not rigidly tied to a specific category of decision-theoretic non-certainty, that is, decision-theoretic risk, uncertainty, or ignorance. They can be adjusted with respect to varying degrees of knowledge and available evidence, taking into account that we typically have some imprecise or vague sense of how likely various outcomes are, but not enough of a sense to assign meaningful precise probabilities to the outcomes. While these situations do not amount to decision-theoretic risk, they nonetheless include more information than what is often taken to be available under decision-theoretic uncertainty. Arguably, this corresponds better to the notion of “scientific uncertainty” than simply equating the latter with decision-theoretic uncertainty (see Steel 2014, Chapter 4).
3. Justifications for Precautionary Principles
This section surveys different normative backgrounds that have been used to defend a PP. It starts by addressing arguments that can be located in the framework of practical rationality, before moving to substantial moral justifications for precaution.
a. Practical Rationality
When PPs are proposed as principles of practical rationality, they are typically seen as principles of risk regulation. This includes, but is not limited to, rational choice theory. When we examine the justifications for PPs in this context, we have to do this against the background of established risk regulation practices. We can identify a rather standardized approach to the assessment and management of risks, which Randall (2011, 43) calls “ordinary risk management (ORM).”
i. Ordinary Risk Management
Although there are different understandings of ORM, we can identify a rather robust “core” of two main parts. First, a scientific risk assessment is conducted, where potential outcomes are identified and their extent and likelihood estimated (compare Randall 2011, 43–46). Typically, risk assessment is understood as a quantitative endeavor, expressing numerical results (Zander 2010, 17). Second, on the basis of the data obtained from the risk assessment, the risk management phase takes place. Here, alternative regulatory courses of action in response to the scientifically estimated risks are discussed, and a choice is made between them. While the risk assessment phase should be as objective and value-free as possible, the decisions that take place in the risk management phase should be, although informed by science, based on the values and interests of the parties involved. In ORM, cost-benefit analysis (CBA) is a powerful and widely used tool for making these decisions in the risk management phase. To conduct a CBA, the results from the risk assessment, that is, what outcomes are possible under which course of action, are evaluated according to the willingness to pay (WTP) or willingness to accept compensation (WTA) of individuals in order to estimate the benefits and costs of different courses of action. That means that non-economic values, like human lives or environmental preservation, are monetized in order to be comparable on a common ratio scale. Since we rarely if ever face cases of certainty, where each course of action has exactly one outcome which will materialize if we choose it, the utilities so reached are then probability-weighted and added up in order to arrive at the expected utility of the different courses of action. On this basis, it is possible to calculate which regulatory actions have the highest expected net benefits (Randall 2011, 47), that is, to apply the principle of maximizing expected utility (MEU) and to choose the option with the highest expected utility. CBA is seen as a tool that enables decision-makers to rationally compare costs and benefits, helping them to come to an informed decision (Zander 2010, 4).
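The final step of such a CBA can be summarized in a few lines. The sketch below (with invented monetized outcomes) only illustrates the probability-weighting and the MEU selection described above:

```python
# MEU step of a cost-benefit analysis: probability-weight the monetized
# outcomes of each policy and pick the one with the highest expected value.
def expected_utility(lottery):
    return sum(p * u for u, p in lottery)    # lottery: (utility, probability) pairs

policies = {
    "regulate":   [(-30, 0.5), (-10, 0.5)],  # certain moderate costs
    "do_nothing": [(0, 0.9), (-250, 0.1)],   # usually free, rarely disastrous
}
scores = {name: expected_utility(lottery) for name, lottery in policies.items()}
print(scores, "->", max(scores, key=scores.get))  # regulate: -20 beats -25
```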
In the context of ORM, we can distinguish two main lines of argumentation for PPs: On the one hand, authors argue that PPs are rational by trying to show that they gain support from ORM. On the other hand, authors argue that ORM itself is problematic in some respects, and propose PPs as a supplement or alternative to it. In both cases, we find justifications for PPs as decision rules for risk management as well as principles that pertain to the risk assessment stage and are concerned with problem-framing (this includes epistemic and value-related questions).
ii. PPs in the Framework of Ordinary Risk Management
To begin, here are some ways in which people propose to locate and defend PPs within ORM.
Expected Utility Theory
Some authors claim that as long as we can assign probabilities to the various outcomes, that is, as long as we are in a situation of decision-theoretic risk, precaution is already “built in” to ORM (Chisholm and Clarke 1993; Gardiner 2006; Sunstein 2007). The argument is roughly that no additional PP is necessary because expected utility theory, in combination with the assumption of decreasing marginal utility, allows for risk aversion by placing greater weight on the disutility of large damages. Not choosing options with possibly catastrophic outcomes, even if they only have a small probability, would thus be recommended by the principle of maximizing expected utility (MEU) as a consequence of their large disutility.
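A small numerical sketch (all figures invented) may clarify the claim: with a concave utility function over wealth, MEU already prefers a certain small loss to a gamble whose expected monetary value is higher but which carries a small chance of ruin:

```python
# Decreasing marginal utility (u = log of wealth) makes MEU risk-averse.
import math

safe   = [(95.0, 1.0)]                  # pay 5 for certain (the precaution)
gamble = [(100.0, 0.99), (0.01, 0.01)]  # 1% chance of near-total loss

expected_money = lambda lot: sum(p * w for w, p in lot)
expected_util  = lambda lot: sum(p * math.log(w) for w, p in lot)

print(expected_money(safe), expected_money(gamble))  # 95.0 < ~99.0: money favors the gamble
print(expected_util(safe), expected_util(gamble))    # ~4.55 > ~4.51: MEU picks 'safe'
```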
This argumentation does not go unchallenged, as the next subsection (3.a.iii) shows. In addition, MEU itself is not uncontroversial (see Buchak 2013). Still, even if we accept it, we cannot use MEU under conditions of decision-theoretic uncertainty, since it relies on probability information. Consequently, authors have proposed PPs for decisions under uncertainty in order to fill this “gap” in the ORM framework. They argue that under decision-theoretic uncertainty, it is rational to be risk-averse, and try to demonstrate this with arguments based on rational choice theory. However, it is not always clear whether the discussed decision rule is used to justify a—somehow—already formulated PP, or whether the decision rule is proposed as a PP itself.
Maximin and Minimax Regret
Both the maximin rule—selecting the course of action with the best worst case—and the minimax regret rule—selecting the course of action where, under each possible scenario, the maximal regret is the smallest—have been proposed and discussed as possible formalizations of a PP within the ORM framework. It has been argued that maximin captures the underlying intuitions of PPs (namely, that the worst should be avoided) and that it yields rational decisions in relevant cases (Hansson 1997). Although the rationality of maximin is contested (Harsanyi 1975; Bognar 2011), it is argued that we can qualify it with criteria to single out the cases in which it can—and should—rationally be applied (Gardiner 2006). This is done by showing that a so-qualified maximin rule fits with paradigm cases of precaution and commonsense decisions that we make, arguing that it is plausible to adopt it also for further cases.
Chisholm and Clarke (1993) argue that the minimax regret rule leads to the prevention of uncertain harm in line with the basic idea of a PP, while also giving some weight to forgone benefits. Against minimax regret and in favor of maximin, Hansson (1997, 297) argues, first, that minimax regret presupposes more information, since we need to be able to assign numerical utilities to outcomes. Second, he uses a specific example to show that minimax regret and maximin can lead to conflicting recommendations. According to Hansson, the recommendation made by maximin expresses a higher degree of precaution.
Quasi-Option Value
Irreversible harm is mentioned in many PP formulations, for example in the Rio Declaration. One proposal for explaining why “irreversibility” justifies precautions refers to the concept of “(quasi-)option value” (Chisholm and Clarke 1993; Sunstein 2005a, 2009), which was first introduced by Arrow and Fisher (1974). They show that when regulators are confronted with decision problems where they are (a) uncertain about the outcomes of the options, but there are (b) chances for resolving or reducing these uncertainties in the future, and (c) one or more of the options might entail irreversible outcomes, then they should attach an extra value, that is, an option value, to the reversible options. This takes into account the value of the options that choosing an alternative with an irreversible outcome would foreclose. To illustrate this, think of the logging of (a part of) a rain forest: It is a very complex ecosystem, which we could use in many ways. But once it is clear-cut, it is almost impossible to restore to its original state. By choosing the option to cut it down, all options to use the rain forest in any other way would practically be lost forever. As Chisholm and Clarke (1993, 115) point out, irreversibility might sometimes be associated with not taking action now: Not mitigating greenhouse gas (GHG) emissions means that more and more GHGs accumulate in the atmosphere, where they stay for a century or more. They argue that introducing the concept of quasi-option value supports the application of a PP even if decision makers are not risk-averse.
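The effect can be illustrated with a stylized two-period computation (the payoffs and probability below are invented, and the model is a drastic simplification of Arrow and Fisher’s analysis):

```python
# Quasi-option value, stylized: preserving now keeps the choice open until
# the uncertainty about development resolves; developing now forecloses it.
p_harmful = 0.5                          # chance that development proves harmful
dev, preserve, harm = 6.0, 4.0, -10.0    # per-period payoffs (invented)

# Develop now (irreversible): locked in for period 2, whatever we learn.
commit_now = dev + (1 - p_harmful) * dev + p_harmful * harm                     # 4.0

# Preserve now, then decide with full information in period 2.
wait = preserve + (1 - p_harmful) * max(dev, preserve) + p_harmful * preserve   # 9.0

print(wait - commit_now)  # the premium from keeping the reversible option open
```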
iii. Reforming Ordinary Risk Management
After reviewing attempts to justify a PP in the ORM framework, without challenging the framework itself, let us now examine justifications for PPs that are partially based on criticisms of ORM.
Deficits of ORM
As a first point, ORM as a regulatory practice tends toward oversimplification that neglects uncertainty and imprecision, leading to irrational and harmful decisions. This is seen as a systematic deficit of ORM itself, not only of its users (see Randall 2011, 77), and not only as a problem under decision-theoretic uncertainty, that is, situations where no reliable probabilities are available, but already under decision-theoretic risk. First, decision makers tend to ignore low probabilities as irrelevant, focusing on the “more realistic,” higher ones. This means that low but significant probabilities for catastrophe are ignored, for example, so-called “fat tails” in climate scenarios (Randall 2011, 77). Second, decision makers are often “myopic”, placing higher weight on current costs than on future benefits, avoiding high costs today. This often leads to even higher costs in the future. Third, disutilities might get calculated too optimistically, neglecting so-called “secondary effects” or “social amplifications,” for example, the psychological and social effects of catastrophes (see Sunstein 2007, 7). Lastly, since cost-benefit analysis (CBA) provides such a clear view, there is a tendency to apply it even if the conditions for its application are not fulfilled. We tend to assume more than we know, and to decide according to the MEU criterion although no reliable probability information and/or no precise utility information is available. This so-called “tuxedo fallacy” is seen as a dangerous fallacy because it creates an “illusion of control” (Hansson 2008, 426–27).
Since PPs are seen as principles that address exactly such problems—drawing our attention to unlikely catastrophic possibilities, demanding action despite uncertainty, considering the worst possible outcomes, and not assuming more than we know—they gain indirect support from these arguments. ORM in its current form lures us into applying it incorrectly and neglecting rational precautionary action. At least some sort of overarching PP that reminds us of correct practices seems necessary.
As a second point, it is argued that the regulatory practice of ORM not only has the “built-in” tendency to misapply its tools, but that it has fundamental flaws in itself which should be corrected by a PP. Randall (2011, 46–70) criticizes risk assessment in ORM on the grounds that it is typically built on simple models of the threatened system, for example, the climate system. Those neglect systemic risks like the possibility of feedback effects or sudden regime shifts. By depending on the law of large numbers, ORM is also not a decision framework that is suitable for dealing with potential catastrophes, since they are singular events (Randall 2011, 52). Similarly, Chisholm and Clarke (1993, 112) argue that expected utility theory is only useful as long as “probabilities and possible outcomes are within the normal range of human experience.” Examples of such probabilities and outcomes in the normal range of human experience are insurance products like car and fire insurance: We have statistics about the probabilities of accidents or fires, and can calculate reasonable insurance premiums based on the law of large numbers. Furthermore, we have experience with how to handle them, and have institutions in place like fire departments. None of this is true for singular events like anthropogenic climate change. Consequently, it is argued that we cannot just leave ORM relatively unaltered and support it with a PP for decisions under uncertainty, and perhaps a more general, overarching PP as a normative guideline. Rather, it is demanded that we also reform the existing ORM framework in order to include precautionary elements.
Historical Arguments for Revising ORM
In the past, failures to take precautionary measures often resulted in substantial, widespread, and long-term harm to the environment and human health (Harremoës and others 2001, Gee and others 2013). This insight has been used to defend adopting a precautionary principle as a corrective to existing practices: For John (2007, 222), these past failures can be used as “inductive evidence” in an argument for reforming our regulatory policies. Whiteside (2006, 146) defends a PP as a product of social learning from past mistakes. According to Whiteside, these past mistakes reveal (a) that our knowledge about the influences of our actions on complex ecological systems is insufficient, and (b) that the way decisions were reached was an important part of why they failed, leading to insufficient protection of the environment and human health. Thus, for Whiteside, the PP generates a normative obligation to restructure our decision procedures (Whiteside 2006, 114). The most elaborate historical argument is made by Steel (2014, chapter 5). Steel’s argument rests on the following premise:
If a systematic pattern of serious errors of a specific type has occurred, then a corrective for that type of error should be sought. (Steel 2014, 91)
By critically examining not only cases of failed precautions and harmful outcomes, but also counter-examples of allegedly “excessive” precaution, Steel shows that such a pattern of serious errors in fact exists. Cases such as the ones described in “Late Lessons from Early Warnings” (Harremoës and others 2001) demonstrate that continuous delays in response to emerging threats have frequently led to serious and persistent harms. Steel (2014, 74–77) goes on to examine cases that have been named as examples of excessive precaution. He finds that, in fact, often no regulation whatsoever was implemented in the first place. And in cases where regulations were put in place, they were mostly very restricted, had only minimal negative effects, and were relatively easily reversible. For example, one of the “excessive precautions” consisted in putting a warning label on products containing saccharine in the US. According to Steel (2014, 82), the historical argument thus supports a PP as a corrective against a systematic bias that is entrenched in our practices. This bias emerges because informational and political asymmetries make continual delays more likely than precautionary measures whenever short-term economic gain for an influential party is traded off against harms that are uncertain, spatially distant, or temporally distant (or all three).
Epistemic Implications
The justifications presented so far all concern PPs aiming at the management of risks, that is, action-guiding interpretations. But we can also find discussions of a PP for the assessment of threats, so-called “epistemic” PPs. It is not enough to just supply existing practices with a PP; clearly, risk assessment has to be changed, too, in order to be able to apply a PP. This means that uncertainties have to be taken seriously and communicated clearly, that we need to employ more adequate models which take into account the existence of systemic risks (Randall 2011, 77–78), that we need criteria to identify plausible (as opposed to “mere”) possibilities, and so forth. However, this is more a question of the implications of adopting a PP than an expression of a genuine PP itself. Thus, these kinds of arguments either concern presuppositions of a PP, because we need to identify uncertain harms first in order to do something about them, or implications of a PP, because it is not admissible to conduct a risk assessment that makes it impossible to apply a PP.
Procedural Precaution
Authors who favor a procedural interpretation of PPs stress that they are concerned especially with decisions under conditions of uncertainty. They point out that while ORM, with its focus on cost-effectiveness and maximizing benefits, might be appropriate for conditions of decision-theoretic risk, the situation is fundamentally different if we have to make decisions under decision-theoretic uncertainty or even decision-theoretic ignorance. For example, Arcuri (2007, 20) points out that since PPs are principles particularly for decisions under decision-theoretic uncertainty, they cannot be prescriptive rules which tell us what the best course of action is—because the situation is essentially characterized by the fact that we are uncertain about the possible outcomes to which our actions can lead. Tickner (2001, 14) claims that this should lead to redirecting the questions that are asked in environmental decision-making: The focus should be moved from the hazards associated with a narrow range of options to solutions and opportunities. Thus, the assessment of alternatives is a central point of implementing PPs in procedural frameworks:
In the end, acceptance of a risk must be a function not only of hazard and exposure but also of uncertainty, magnitude of potential impacts and the availability of alternatives or preventive options. (Tickner 2001, 122)
Although (economic) efficiency should not be completely dismissed and still should have its place in decision-making, proponents of a procedural PP maintain that we should shift our aim in risk regulation from maximizing benefits to minimizing threats, especially in the environmental domain where harms are often irreversible (compare Whiteside 2006, 75). They also advocate democratic participation, pointing out that a decision-making process under scientific uncertainty cannot be a purely scientific one (Whiteside 2006, 30–31; Arcuri 2007, 27). They thus see procedural interpretations of PPs as justified with respect to the goal of ensuring that decisions are made in a responsible and defensible way, which is especially important when there are substantial uncertainties about their outcomes.
Challenging the Underlying Value Assumptions
In addition to scientific uncertainty, Resnik (2003, 334) distinguishes another kind of uncertainty, which he calls “axiological uncertainty.” Both kinds make it difficult to implement ORM in making decisions. While scientific uncertainty arises due to our lack of empirical evidence, axiological uncertainty concerns our value assumptions. This kind of uncertainty can take on different forms: We can be unsure about how to measure utilities—in dollars lost/saved, lives lost/saved, species lost/saved, or something else? Then, we can be uncertain how to aggregate costs and benefits, and how to compare, for example, economic values with ecological ones. Values cannot always be measured on a common ordinal scale, much less on a common cardinal scale (as ORM requires, at least in some senses such as those including the use of a version of cost-benefit analysis). Thus, it is irrational to treat them as if they fulfilled this requirement (Thalos 2012, 176–77; Aldred 2013). This challenges the value assumptions underlying ORM, and is seen as a problem that should be fixed by a PP.
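A schematic illustration (with hypothetical quantities) may help to show why a common cardinal scale matters: an expected utility calculation sums terms such as p1 · u(o1) + p2 · u(o2). If outcome o1 is measured in dollars lost and outcome o2 in species lost, this sum is undefined until we fix a conversion rate, say one species = X dollars. But any particular value of X is itself a contested value judgment rather than an empirical datum, so the apparent precision of the calculation conceals an unresolved axiological question.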
In addition, authors like Hansson (2005b, 10) criticize the fact that costs and benefits are aggregated without regard to whose they are, and that person-related aspects, such as autonomy or whether a risk is taken willingly or imposed by others, are unjustly neglected.
To summarize, when the underlying value assumptions of ORM are challenged, either the criticism pertains to how values are estimated and assigned, or the utilitarian decision criterion of maximizing overall expected utility is criticized. In both cases, we are arguably leaving the framework of rational choice and ORM, and moving toward genuine moral justifications for PPs.
b. Moral Justifications for Precaution
Some authors stress that, regardless of whether a PP is thought to supplement ordinary risk management (ORM) or whether it is a more substantive claim, a PP is essentially a moral principle, and has to be justified on explicitly moral grounds. (Note that depending on the moral position one holds, many of the considerations in 3.a can also be seen as discussions of PPs from a moral standpoint; most prominently utilitarianism, since ORM uses the rule of maximizing expected utility.) They argue that taking precautionary measures under uncertainty is morally demanded, because otherwise we risk damages that are in some way morally unacceptable.
i. Environmental Ethics
PPs are often associated with environmental ethics and the concept of sustainable development (O’Riordan and Jordan 1995; Kaiser 1997; Westra 1997; McKinney and Hill 2000; Steele 2006; Paterson 2007). Some authors take environmental preservation to be at the core of PPs. PP formulations such as the Rio or the Wingspread PP emerged in a debate about the necessity of preventing environmental degradation, which explains why many PPs highlight environmental concerns. It seems plausible that a PP can be an important part of a broader approach to environmental preservation and sustainability (Ahteensuu 2008, 47). But it seems difficult to justify a PP with recourse to sustainability, since the concept itself is vague and contested. Indeed, when PPs have been discussed in the context of sustainability, they are often proposed as ways to operationalize the vague concept into a principle for policymaking, along with other principles like the “polluter pays” principle (Dommen 1993; O’Riordan and Jordan 1995). Thus, while PPs are partly motivated by the insight that our way of life is not sustainable, and that we should change how we approach environmental issues, it is difficult to justify them solely on such grounds. However, the hope is that a clarification of the normative (moral) underpinnings of PPs will help to justify a PP for sustainable development. In the following, we will see that it might make sense to take special precautions with respect to ecological issues, not only because they often are complex and might entail unresolvable uncertainties (Randall 2011, 64–70), but also because harm to the environment can affect many other moral concerns, for example, human rights and both international and intergenerational justice. As we will see, these moral issues might provide justifications for PPs on their own, without explicit reference to sustainability.
ii. Harm-Based Justifications
PPs that apply to governmental regulatory decisions have been defended as an extension of the harm principle. There are different versions of the harm principle, but roughly, it states that the government is justified in restricting citizens’ individual liberty only to avoid harm to others.
The application of the harm principle normally presupposes that certain conditions are fulfilled, for example, that the harms in question must be (1) involuntarily taken, (2) sufficiently severe, and (3) probable, and that (4) the prescribed measures must be proportional to the harms (compare Jensen 2002, Petrenko and McArthur 2011). If these conditions are fulfilled, the prevention principle can be applied, prescribing proportional measures to prevent the harm in question from materializing. However, PPs apply to cases where we are unsure about the extent and/or the probability of a possible harm. Consequently, PPs are seen as a “clarifying amendment” (Jensen 2002, 44) which extends the normative foundation of the harm principle from prevention to precaution (Petrenko and McArthur 2011, 354): The impossibility of assigning probabilities does not negate the obligation to act, as long as possible harms are severe enough and scientifically plausible. Even for the prevention principle, it holds that the more severe a threat is, the less probable it has to be in order to warrant preventive measures. Thus, it has been argued that the probability of high-magnitude harms becomes almost irrelevant, as long as they are scientifically plausible (Petrenko and McArthur 2011, 354–55). In addition, some harm is seen as so serious that it warrants special precaution, for example, if it is irreversible or cannot be (fully) compensated (Jensen 2002, 49–50). In such situations, the government is justified in restricting liberties by, for example, prohibiting a technology, even if there remains uncertainty about whether or not the technology would actually have harmful effects.
A related idea is that governments have an institutional obligation not to harm the population, which overrides the weaker obligation to do good—meaning that it is worse if certain regulatory decisions of the government lead to harm than if they lead to foregone benefits (John 2007).
The question of what exactly makes a threat severe enough to justify the implementation of precautionary measures has also been discussed with reference to justice- and rights-based considerations.
iii. Justice-Based Justifications
McKinnon (2009, 2012) presents two independent arguments for precautions, both of which are justice-based. These arguments are developed with respect to the possibility of a climate change catastrophe (CCC), and concern two alternative courses of action and their worst cases. The case of “Unnecessary Expenditure” means taking precautions which turn out to have been unnecessary, thereby wasting money which could have been spent on other, better purposes. “Methane Nightmare” describes the case of not taking precautions, leading to CCCs with catastrophic consequences, making survival on earth very difficult if not impossible. McKinnon argues that CCCs are uncertain in the sense that they are scientifically plausible, even though we cannot assign probabilities to them (McKinnon 2009, 189).
Playing it Safe
McKinnon’s first argument for why uncertain, yet plausible harm with the characteristics of CCCs justifies precautionary measures is called the “playing safe” argument. It is based on two Rawlsian commitments about justice (McKinnon 2012, 56): (1) that treating people as equals means (among other things) ensuring a distribution of (dis)advantage among them that makes the worst-off group as well off as possible, and (2) that justice is intergenerational in scope, governing relations across generations as well as within them.
McKinnon (2009, 191–92) argues that the distributive injustice would be so much higher if “Methane Nightmare” materialized than if “Unnecessary Expenditure” did that we have to choose to take precautionary measures, even though we do not know how probable “Methane Nightmare” is. That is, such a situation warrants the application of the maximin principle, because distributive justice in the sense of making the worst-off as well off as possible has lexical priority over maximizing the overall benefits for all. Choosing an option with a far better best case but a worst case that would lead to distributive injustice, over another option with a less good best case but a worst case that does not entail such injustice, would be inadmissible.
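To see how the maximin principle operates here, consider a schematic illustration with invented numbers. Suppose taking precautions has a worst case of −10 (“Unnecessary Expenditure”) and a best case of +5, while not taking precautions has a best case of +20 but a worst case of −1,000,000 (“Methane Nightmare”). Maximin compares only the worst cases, −10 versus −1,000,000, and therefore selects the precautionary option, without requiring any probability estimates. Maximizing expected utility, by contrast, could not even be applied in this situation, since by assumption no probabilities are available.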
Unbearable Strains of Commitment
As McKinnon notes, the “playing safe” justification only holds if one accepts a very specific understanding of distributive (in)justice. However, she claims to have an even more fundamental argument for precautionary measures in this context, which is also based on Rawlsian arguments concerning intergenerational justice, but does not rely on a specific conception of distributive justice. It is called the “unbearable strains of commitment” argument and is based on a combination of the “just savings” principle for intergenerational justice with the “impartiality” principle. It states that we should not choose courses of action that impose on future generations conditions which we ourselves could not agree to and which would undermine the bare possibility of justice itself (McKinnon 2012, 61). This justifies taking precautions against CCCs, since the worst case of that option is “Unnecessary Expenditure”, which, in contrast to “Methane Nightmare”, would not lead to justice-jeopardizing consequences.
iv. Rights-Based Justifications
Strict precautionary measures concerning climate change have been demanded based on the possible rights violations that such climate change might entail. For example, Caney (2009) claims that although other benefits and costs might be discounted, human rights are so fundamental that they must not be discounted. He argues that the possible harms involved in climate change justify precautions: Unmitigated climate change entails possible outcomes which would lead to serious or catastrophic rights violations, while a policy of strict mitigation would not involve a loss of human rights—at least not if it is carried out by the affluent members of the world. In addition, “business as usual” from the affluent would mean gambling with the conditions of those who already lack fundamental rights protection, because the negative effects of climate change would come to bear especially in poor countries. Moreover, the benefits of taking the “risk of catastrophic climate change” would accrue almost entirely to the risk-takers, not the risk-bearers (Caney 2009, 177–79). If we extrapolate from this concrete application, the basic justification for precaution seems to be: If a rights violation is plausibly possible, and there is a way to avoid this possibility by choosing another course of action which does not involve the plausible possibility of rights violations, then we have to choose the second option. It does not matter how likely the rights violations are; as long as they are plausible, we have to treat them as if they would materialize with certainty.
Thus, in this interpretation, precaution means making sure that no rights violations happen, even if we (because of uncertainty) “run the risk” of doing more than what would have been necessary—as long as we do not have to jeopardize our own rights in order to do so.
v. Ethics of Risk and Risk Impositions
Some authors see the PP as an expression of a problem with what they call standard ethics (Hayenhjelm and Wolff 2012, e28). According to them, standard ethical theories, with their focus on evaluations of actions and their outcomes under conditions of certainty, fail to keep up with the challenges that technological development poses. PPs are then placed in the broader context of developing and defending an ethics of risk, that is, a moral theory about the permissibility of risk impositions. Surprisingly, so far there are few explicit connections between the discussion of the ethics of risk impositions (see for example Hansson 2013, Lenman 2008, Suikkanen 2019) and the discussion of PPs.
One exception is Munthe (2011), who argues that before we can formulate an acceptable and intelligible PP, we first need at least the basic structure of an ethical theory that deals directly with issues of creating and avoiding risks of harm. In chapter 5 of his book, Munthe (2011) sets out to develop such a theory, which focuses on the responsibility of a decision, specifically, responsibility as a property of decisions: Decisions and risk impositions may be morally appraised in their own right. When one does not know what the outcome of a decision will be, it is important to make responsible decisions, that is, decisions that can still be defended as responsible given the information one had at the time the decision was made, even if the outcome is bad. However, even though Munthe’s discussion starts out from the PP, he ultimately concludes that we do not need a PP, but a policy that expresses a proper degree of precaution: “What is needed is plausible theoretical considerations that may guide decision makers also employing their own judgement in specific cases. We do not need a precautionary principle, we need a policy that expresses a proper degree of precaution.” Thus, the idea seems to be that while a fully developed ethics of risk will justify demands commonly associated with PPs, it will ultimately replace the need for a PP.
4. Main Objections and Possible Rejoinders
This section presents the most frequent and the most important objections and challenges PPs face. They can be roughly divided into three groups. The first argues that there are fundamental conceptual problems with PPs which make them unable to guide our decisions. The second claims that PPs, in any reasonable interpretation, are superfluous and can be reduced to existing practices done right. The third rejects PPs as irrational, saying that they are based on unfounded fears and that they contradict science, leading to undesirable consequences. While some objections are aimed at specific PP proposals, others are intended as arguments against PPs in general. However, even the latter typically hold only for specific interpretations. This section briefly presents the main points of these criticisms, and then discusses how they might be answered.
a. PPs Cannot Guide Our Decisions
There are two main reasons why PPs are seen as unable to guide us in our decision-making: They are rejected either as incoherent, or as being vacuous and devoid of normative content.
Objection: PPs are incoherent
One frequent criticism, most prominently advanced by Sunstein (2005b), is that a “strong PP” leads to contradictory recommendations and therefore paralyzes our decision-making. He understands “strong PP” as a very demanding principle which states that “regulation is required whenever there is a possible risk to health, safety, or the environment, even if the supporting evidence remains speculative and the economic costs of regulation are high” (Sunstein 2005b, 24). The problem is that every action poses such a possible risk, and thus both regulation and non-regulation would be prohibited by the “strong PP,” resulting in paralysis (Sunstein 2005b, 31). Therefore, the “strong PP” is rejected as an incoherent decision rule, because it leads to contradictory recommendations.
Peterson (2006) makes another argument that rejects PPs as incoherent. He claims that he can prove formally as well as informally that every serious PP formulation is logically inconsistent with reasonable conditions of rational choice, and should therefore be given up as a decision-rule (Peterson 2006, 597).
Rejoinder
Both criticisms have been rejected as being based on a skewed PP interpretation. In the case of Sunstein’s argument, he is attacking a straw man. His critique of the “strong PP” as paralyzing relies on two assumptions which are not made explicit, namely (a) that a PP is invoked by any and all risks, and (b) that risks of action and inaction are typically equally balanced (Randall 2011, 20). However, this is an atypical PP interpretation. Most formulations make explicit reference to severe dangers, meaning that not just any possible harm, no matter how small, will invoke a PP. And, as the case studies in Harremoës and others (2001) illustrate, the possible harms from action and inaction—or, more precisely, regulation or no regulation—are typically not equally balanced (see also Steel 2014, chapter 9). Still, Sunstein’s critique calls attention to the important point of risk-risk trade-offs, which every sound interpretation and application of a PP has to take into account: Taking precautions against a possible harm should not lead to an overall higher level of threat (Randall 2011, 84–85). Nevertheless, there seems to be no reason why a PP should not be able to take this into account, and the argument thus fails as a general rejection of PPs.
Similarly, it can be contested whether Peterson’s (2006) PP formalization is a plausible PP candidate: He presupposes that we can completely enumerate the list of possible outcomes, that we have rational preferences that allow for a complete ordering of the outcomes, and that we can estimate at least the relative likelihood of the outcomes. As Randall (2011, 86) points out, this is an ideal setup for ordinary risk management (ORM), and the three conditions for rational choice that Peterson cites, and with which he shows his PP to be inconsistent, have their place in the ORM framework. Thus, one can object that it is not very surprising if a PP, which aims especially at situations in which the ideal conditions are not met, does not do very well under the ideal conditions.
Objection: PPs are vacuous
D'autre part, it is argued that if a PP is attenuated in order not to be paralyzing, it becomes such a weak claim that it is essentially vacuous. Soleilstein (2005b, 18) claims that weaker formulations of PPs are, although not incoherent, trivial: They merely state that lack of absolute scientific proof is no reason for inaction, qui, according to Sunstein, has no normative force because everyone is already complying with it. De la même manière, McKinnon (2009) takes a weak PP formulation to state that precautionary measures are permissible, which she also rejects as a hollow claim, stating that everyone could comply with it without ever taking any precautionary action.
In addition, PPs are rejected as vacuous because of the multitude of formulations and interpretations. Turner and Hartzell (2004), examining different formulations of PPs, come to the conclusion that they are all beset with unclarity and ambiguities. They argue that there is no common core of the different interpretations, and that the plausibility of a PP actually rests on its vagueness. This makes it unsuitable as a guide for decision-making. Similarly, Peterson (2007b, 306) states that such a “weak” PP has no normative content and no implications for what ought to be done. He claims that in order to have normative content, a PP would need to give us a precise instruction for what to do for each input of information (Peterson 2007b, 306). By formulating a minimal normative PP interpretation and showing that it is incoherent, he argues that there cannot be a PP with normative content.
Rejoinder
First, let us address the criticism that PPs are vacuous because they express a claim that is too weak to have any impact on decision-making. Against this, Steel (2013, 2014) has argued that even if these supposedly “weak” or “argumentative” principles do not directly recommend a specific decision, they nonetheless have an impact on the decision-making process if taken seriously. He interprets them as a meta-principle that puts constraints on what decision rules should be used, namely, none that would lead to inaction in the face of uncertainty. Since, for example, cost-benefit analysis needs numerical probabilities to be applicable, the meta-PP will recommend against it in situations where no such probability information is available. This is a substantial constraint, meaning that the meta-PP is not vacuous. One can also reasonably doubt that Sunstein is right that everyone follows such an allegedly “weak” principle anyway. There are many historical cases where there was some positive evidence that an activity caused harm, but the fact that the activity-harm link had not been irrefutably proven was used to argue against regulatory action (Harremoës and others 2001, Gee and others 2013). Thus, in cases where no proof, or at least no reliable probability information, concerning the possibility of harm is available, uncertainty is often used as a reason not to take precautionary action. In addition, this criticism clearly does not concern all forms of PPs, and only amounts to a full-fledged rejection of PPs if combined with the claim that so-called “stronger” PPs that are not trivial will always be incoherent. And both Sunstein (2005b) and McKinnon (2009, 2012) do propose other PPs which express a stronger claim, albeit with a restricted scope (for example, only pertaining to catastrophic harm, or to damage which entails specific kinds of injustice). This form of the “vacuous” objection can thus be seen not as an attack on the general idea of PPs, but rather as the demand that the normative obligation they express should be made clear in order to avoid downplaying it.
Let us now consider the other form of the objection, namely the claim that PPs are essentially vague and that there cannot be a precise formulation of a PP that is both action-guiding and plausible. It is true that so far, there does not seem to exist a “one size fits all” PP that yields clear instructions for every input and that captures all the ideas commonly associated with PPs. However, even if this were a correct interpretation of what a “principle” is (which many authors deny; compare for example Randall 2011, 97), it is not the only one. Peterson (2007b) presumes that only a strict “if this, then that” rule can have normative force, and consequently be action-guiding. In contrast, other authors stress the difference between a principle and a rule (Fisher 2002; Arcuri 2007; Randall 2011). According to them, while rules specify precise consequences that follow automatically when certain conditions are met, principles express normative obligations that need to be specified according to different contexts, and that need to be implemented and operationalized in rules, laws, policies, and so forth (Randall 2011, 97). Authors who reject PPs as incoherent (see the objection above) might sometimes make the same mistake, confusing a general principle that needs to be specified on a case-by-case basis with a stand-alone decision rule that should fit any and all cases.
As for PPs being essentially vague: This criticism seems to presuppose that in order to formulate a clarified PP, we have to capture and unify everything that is associated with it. However, explicating a concept in a way that clarifies it and captures as many of the associated ideas as possible does not mean that we have to preserve all of them. The same is true for explicating a principle such as a PP. In addition, this article shows that many different ways of interpreting PPs in a precise way are possible, and not all of them exclude each other.
b. PPs are Redundant
Some authors reject PPs by arguing that they are just a narrow and complicated way of expressing what is already incorporated into established, more comprehensive approaches. For example, Bognar (2011) compares Gardiner’s (2006) “Rawlsian Core PP” interpretation (RCPP) with what he calls a “utilitarian principle,” which combines the principle of indifference with the principle of maximizing expected utility. He concludes that this “utilitarian principle” leads to the same results as the RCPP in the cases where the RCPP applies, but, unlike the RCPP, it is not restricted to such a narrow range of cases. His conclusion is that we can dispose of PPs, at least in maximin formulations (Bognar 2011, 345).
In the same vein, Peterson (2006, 600) asserts that if formulated in a consistent way, a PP would not be different from the “old” rules for risk-averse decision-making, while other authors have shown that we can use existing ordinary risk management (ORM) tools to implement a PP (Farrow 2004; Gollier, Moldovanu, and Ellingsen 2001). This would allegedly make PPs redundant (Randall 2011, 25; 87).
Rejoinder
Particularly against the criticism of Bognar (2011), one can counter that his “utilitarian principle” falls victim to the so-called “tuxedo fallacy” (Hansson 2008). Using the principle of indifference, that is, treating all outcomes as equally probable when one does not have enough information to assign reliable probabilities, can be seen as creating an “illusion of control.” Moreover, the “utilitarian principle” neither pays special attention to catastrophic harms, nor takes the special challenges of decision-theoretic uncertainty adequately into account.
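A small, purely illustrative example shows why the principle of indifference can mislead here: if nothing is known about whether a new substance is harmful or harmless, the principle assigns each outcome a probability of 1/2. But if “harmful” is subdivided into “mildly harmful” and “severely harmful,” the same principle now assigns 1/3 to each of three outcomes, so the probability of harm rises from 1/2 to 2/3. The probabilities, and with them the expected utilities that Bognar’s “utilitarian principle” maximizes, thus depend on an arbitrary choice of how to partition the outcomes, which illustrates the pseudo-precision that the “tuxedo fallacy” objection targets.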
More generally, one can make the following point: Even though there might be plausible ways to translate a PP into the ORM framework and implement it using ORM tools, there is more to it than that. Even if we use ORM methods to implement precaution, in the end this might still be based on a normative obligation to enact precautionary measures. This obligation has to be spelled out, because ORM can allow for precaution, but does not demand it in itself (and, as a regulatory practice, tends to neglect it).
c. PPs are Irrational
The last line of criticism accuses PPs of being based on unfounded fears, expressing cognitive biases, and therefore leading to decisions with undesirable and overall harmful consequences.
Objection: Unfounded Panic
One criticism that is especially frequent in discussions aimed at a broader audience is that PPs lead to unrestrained regulation because they can be invoked by uncertain harm. Therefore, the argument goes, PPs carry the danger of requiring unnecessary expenditures to reduce insignificant risks, of foregoing benefits by regulating or prohibiting potentially beneficial activities, and of being exploited, for example, by interest groups or for protectionism in international trade (Peterson 2006). A PP would stifle innovation, resulting in an overall less safe society: Many beneficial (risk-reducing) innovations of the past were only possible because risks had been taken (Zander 2010, 9), and technical innovation takes place in a process of trial and error, which would be seriously disturbed by a PP (Graham 2004, 5).
These critics see this as a consequence of PPs, because PPs do not require scientific certainty in order to take action, which they interpret as making merely speculative harm a reason for strict regulation. Thus, science would be marginalized or even rejected as a basis for decision-making, giving way to the cognitive biases of ordinary people.
Objection: Cognitive biases
Sunstein claims that PPs are based on the cognitive biases of ordinary people, who tend to systematically misassess risks (Sunstein 2005b, chapter 4). By reducing the importance of scientific risk assessment and marginalizing the role of experts, decisions resulting from the application of a PP will be influenced by these biases and result in negative consequences, the criticism goes.
Rejoinder
As has been pointed out by Randall (2011, 89), these criticisms seem to be misguided. Lower standards of evidence do not mean no standards at all. It is surely an important challenge for the implementation of a PP to find a way to define plausible possibilities, but this by no means requires less science. Rather, as Sandin, Bengtsson, and others (2004) point out, more, and different, scientific approaches are needed. Uncertainties need to be communicated more clearly, and tools need to be developed that allow uncertainties to be taken better into account. For decisions where we lack scientific information but great harms are possible, ways need to be found for how public concerns can be taken into consideration (Arcuri 2007, 35). This, however, seems to be a question of implementation rather than of the formulation or the justification of a PP.
5. References and Further Reading
Ahteensuu, Marko. 2008. “In Dubio Pro Natura? A Philosophical Analysis of the Precautionary Principle in Environmental and Health Risk Governance.” PhD thesis, Turku, Finland: University of Turku.
Aldred, Jonathan. 2013. “Justifying Precautionary Policies: Incommensurability and Uncertainty.” Ecological Economics 96 (December): 132–40.
Arcuri, Alessandra. 2007. “The Case for a Procedural Version of the Precautionary Principle Erring on the Side of Environmental Preservation.” SSRN Scholarly Paper ID 967779. Rochester, New York: Social Science Research Network.
Arrow, Kenneth J., and Anthony C. Fisher. 1974. “Environmental Preservation, Uncertainty, and Irreversibility.” The Quarterly Journal of Economics 88 (2): 312–19.
Buchak, Lara. 2013. Risk and Rationality. Oxford University Press.
Bognar, Greg. 2011. “Can the Maximin Principle Serve as a Basis for Climate Change Policy?” Edited by Sherwood J. B. Sugden. Monist 94 (3): 329–48. https://doi.org/10.5840/monist201194317.
Caney, Simon. 2009. “Climate Change and the Future: Discounting for Time, Wealth, and Risk.” Journal of Social Philosophy 40 (2): 163–86. http://onlinelibrary.wiley.com/doi/10.1111/j.1467-9833.2009.01445.x/full.
Chisholm, Anthony Hewlings, and Harry R. Clarke. 1993. “Natural Resource Management and the Precautionary Principle.” In Fair Principles for Sustainable Development: Essays on Environmental Policy and Developing Countries, edited by Edward Dommen, 109–22. Edward Elgar.
Dommen, Edouard, ed. 1993. Fair Principles for Sustainable Development: Essays on Environmental Policy and Developing Countries. Edward Elgar.
Farrow, Scott. 2004. “Using Risk Assessment, Benefit-Cost Analysis, and Real Options to Implement a Precautionary Principle.” Risk Analysis 24 (3): 727–35.
Fisher, Elizabeth. 2002. “Precaution, Precaution Everywhere: Developing a Common Understanding of the Precautionary Principle in the European Community.” Maastricht Journal of European and Comparative Law 9: 7.
Gardiner, Stephen M. 2006. “A Core Precautionary Principle.” Journal of Political Philosophy 14 (1): 33–60.
Gee, David, Philippe Grandjean, Steffen Foss Hansen, Sybille van den Hove, Malcolm MacGarvin, Jock Martin, Gitte Nielsen, David Quist, and David Stanners. 2013. Late Lessons from Early Warnings: Science, Precaution, Innovation. European Environment Agency.
Gollier, Christian, Benny Moldovanu, and Tore Ellingsen. 2001. “Should We Beware of the Precautionary Principle?” Economic Policy, 303–27.
Graham, John D. 2004. The Perils of the Precautionary Principle: Lessons from the American and European Experience. Vol. 818. Heritage Foundation.
Hansson, Sven Ove. 1997. “The Limits of Precaution.” Foundations of Science 2 (2): 293–306.
Hansson, Sven Ove. 2005a. Decision Theory: A Brief Introduction, Uppsala University class notes.
Hansson, Sven Ove. 2005b. “Seven Myths of Risk.” Risk Management 7 (2): 7–17.
Hansson, Sven Ove. 2008. “From the Casino to the Jungle.” Synthese 168 (3): 423–32. https://doi.org/10.1007/s11229-008-9444-1.
Hansson, Sven Ove. 2013. The Ethics of Risk: Ethical Analysis in an Uncertain World. Palgrave Macmillan.
Harremoës, Poul, David Gee, Malcolm MacGarvin, Andy Stirling, Jane Keys, Brian Wynne, and Sofia Guedes Vaz. 2001. Late Lessons from Early Warnings: The Precautionary Principle 1896-2000. Office for Official Publications of the European Communities.
Harris, John, and Søren Holm. 2002. “Extending Human Lifespan and the Precautionary Paradox.” Journal of Medicine and Philosophy 27 (3): 355–68.
Harsanyi, John C. 1975. “Can the Maximin Principle Serve as a Basis for Morality? A Critique of John Rawls’s Theory.” Edited by John Rawls. The American Political Science Review 69 (2): 594–606. https://doi.org/10.2307/1959090.
Hartzell-Nichols, Lauren. 2012. “Precaution and Solar Radiation Management.” Ethics, Policy & Environment 15 (2): 158–71. https://doi.org/10.1080/21550085.2012.685561.
Hartzell-Nichols, Lauren. 2013. “From ‘the’ Precautionary Principle to Precautionary Principles.” Ethics, Policy and Environment 16 (3): 308–20.
Hartzell-Nichols, Lauren. 2017. A Climate of Risk: Precautionary Principles, Catastrophes, and Climate Change. Taylor & Francis.
Hayenhjelm, Madeleine, and Jonathan Wolff. 2012. “The Moral Problem of Risk Impositions: A Survey of the Literature.” European Journal of Philosophy 20 (S1): E26–E51.
Jensen, Karsten K. 2002. “The Moral Foundation of the Precautionary Principle.” Journal of Agricultural and Environmental Ethics 15 (1): 39–55. https://doi.org/10.1023/A:1013818230213.
John, Stephen. 2007. “How to Take Deontological Concerns Seriously in Risk–Cost–Benefit Analysis: A Re-Interpretation of the Precautionary Principle.” Journal of Medical Ethics 33 (4): 221–24.
John, Stephen. 2010. “In Defence of Bad Science and Irrational Policies: An Alternative Account of the Precautionary Principle.” Ethical Theory and Moral Practice 13 (1): 3–18.
Jonas, Hans. 2003. Das Prinzip Verantwortung: Versuch Einer Ethik Für Die Technologische Zivilisation. 5th ed. Frankfurt am Main: Suhrkamp Verlag.
Kaiser, Mathias. 1997. “Fish-Farming and the Precautionary Principle: Context and Values in Environmental Science for Policy.” Foundations of Science 2 (2): 307–41.
Lemons, John, Kristin Shrader-Frechette, and Carl Cranor. 1997. “The Precautionary Principle: Scientific Uncertainty and Type I and Type II Errors.” Foundations of Science 2 (2): 207–36.
Lenman, James. 2008. “Contractualism and Risk Imposition.” Politics, Philosophy & Economics 7 (1): 99–122. https://doi.org/10/fqkwg3.
Manson, Neil A. 2002. “Formulating the Precautionary Principle.” Environmental Ethics 24 (3): 263–74.
McKinney, William J., and H. Hammer Hill. 2000. “Of Sustainability and Precaution: The Logical, Epistemological, and Moral Problems of the Precautionary Principle and Their Implications for Sustainable Development.” Ethics and the Environment 5 (1): 77–87.
McKinnon, Catriona. 2009. “Runaway Climate Change: A Justice-Based Case for Precautions.” Journal of Social Philosophy 40 (2): 187–203.
McKinnon, Catriona. 2012. Climate Change and Future Justice: Precaution, Compensation and Triage. Routledge.
Munthe, Christian. 2011. The Price of Precaution and the Ethics of Risk. Vol. 6. The International Library of Ethics, Law and Technology. Springer.
O’Riordan, Timothy, and Andrew Jordan. 1995. “The Precautionary Principle in Contemporary Environmental Politics.” Environmental Values 4 (3): 191–212.
Osimani, Barbara. 2013. “An Epistemic Analysis of the Precautionary Principle.” Dilemata: International Journal of Applied Ethics, 149–67.
Paterson, John. 2007. “Sustainable Development, Sustainable Decisions and the Precautionary Principle.” Natural Hazards 42 (3): 515–28. https://doi.org/10.1007/s11069-006-9071-4.
Peterson, Martin. 2003. “Transformative Decision Rules.” Erkenntnis 58 (1): 71–85.
Peterson, Martin. 2006. “The Precautionary Principle Is Incoherent.” Risk Analysis 26 (3): 595–601.
Peterson, Martin. 2007a. “Should the Precautionary Principle Guide Our Actions or Our Beliefs?” Journal of Medical Ethics 33 (1): 5–10. https://doi.org/10.1136/jme.2005.015495.
Peterson, Martin. 2007b. “The Precautionary Principle Should Not Be Used as a Basis for Decision‐making.” EMBO Reports 8 (4): 305–8. https://doi.org/10.1038/sj.embor.7400947.
Petrenko, Anton, and Dan McArthur. 2011. “High-Stakes Gambling with Unknown Outcomes: Justifying the Precautionary Principle.” Journal of Social Philosophy 42 (4): 346–62.
Randall, Alan. 2011. Risk and Precaution. Cambridge University Press.
Rawls, John. 2001. Justice as Fairness: A Restatement. Belknap Press of Harvard University Press.
Resnik, David B. 2003. “Is the Precautionary Principle Unscientific?” Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 34 (2): 329–44.
Resnik, David B. 2004. “The Precautionary Principle and Medical Decision Making.” Journal of Medicine and Philosophy 29 (3): 281–99.
Sandin, Per. 1999. “Dimensions of the Precautionary Principle.” Human and Ecological Risk Assessment: An International Journal 5 (5): 889–907.
Sandin, Per. 2004. “Better Safe Than Sorry: Applying Philosophical Methods to the Debate on Risk and the Precautionary Principle.” PhD thesis, Stockholm.
Sandin, Per. 2007. “Common-Sense Precaution and Varieties of the Precautionary Principle.” In Risk: Philosophical Perspectives, edited by Tim Lewens, 99–112. London; New York: Routledge.
Sandin, Per. 2009. “A New Virtue-Based Understanding of the Precautionary Principle.” In Ethics of Protocells: Moral and Social Implications of Creating Life in the Laboratory, 88–104.
Sandin, Per, Bengt-Erik Bengtsson, Ake Bergman, Ingvar Brandt, Lennart Dencker, Per Eriksson, Lars Förlin, and others. 2004. “Precautionary Defaults—a New Strategy for Chemical Risk Management.” Human and Ecological Risk Assessment 10 (1): 1–18.
Sandin, Per, and Sven Ove Hansson. 2002. “The Default Value Approach to the Precautionary Principle.” Human and Ecological Risk Assessment: An International Journal 8 (3): 463–71. https://doi.org/10.1080/10807030290879772.
Sandin, Per, Martin Peterson, Sven Ove Hansson, Christina Rudén, and André Juthe. 2002. “Five Charges Against the Precautionary Principle.” Journal of Risk Research 5 (4): 287–99.
Science & Environmental Health Network (SEHN). 1998. Wingspread Statement on the Precautionary Principle.
Steel, Daniel. 2011. “Extrapolation, Uncertainty Factors, and the Precautionary Principle.” Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 42 (3): 356–64.
Steel, Daniel. 2013. “The Precautionary Principle and the Dilemma Objection.” Ethics, Policy & Environment 16 (3): 321–40.
Steel, Daniel. 2014. Philosophy and the Precautionary Principle. Cambridge University Press.
Steele, Katie. 2006. “The Precautionary Principle: A New Approach to Public Decision-Making?” Law, Probability and Risk 5 (1): 19–31. https://doi.org/10.1093/lpr/mgl010.
Suikkanen, Jussi. 2019. “Ex Ante and Ex Post Contractualism: A Synthesis.” The Journal of Ethics 23 (1): 77–98. https://doi.org/10/ggjn22.
Sunstein, Cass R. 2005a. “Irreversible and Catastrophic.” Cornell Law Review 91: 841–97.
Sunstein, Cass R. 2005b. Laws of Fear: Beyond the Precautionary Principle. Cambridge University Press.
Sunstein, Cass R. 2007. “The Catastrophic Harm Precautionary Principle.” Issues in Legal Scholarship 6 (3).
Sunstein, Cass R. 2009. Worst-Case Scenarios. Harvard University Press.
Thalos, Mariam. 2012. “Precaution Has Its Reasons.” In Topics in Contemporary Philosophy 9: The Environment, Philosophy, Science and Ethics, edited by W. Kabasenche, M. O’Rourke, and M. Slater, 171–84. Cambridge, MA: MIT Press.
Tickner, Joel A. 2001. “Precautionary Assessment: A Framework for Integrating Science, Uncertainty, and Preventive Public Policy.” In The Role of Precaution in Chemicals Policy, edited by Elisabeth Freytag, Thomas Jakl, G. Loibl, and M. Wittmann, 113–27. Diplomatische Akademie Wien.
Turner, Derek, and Lauren Hartzell. 2004. “The Lack of Clarity in the Precautionary Principle.” Environmental Values 13 (4): 449–60.
United Nations Conference on Environment and Development. 1992. Rio Declaration on Environment and Development.
Westra, Laura. 1997. “Post-Normal Science, the Precautionary Principle and the Ethics of Integrity.” Foundations of Science 2 (2): 237–62.
Whiteside, Kerry H. 2006. Precautionary Politics: Principle and Practice in Confronting Environmental Risk. Cambridge, MA: MIT Press.
Zander, Joakim. 2010. The Application of the Precautionary Principle in Practice: Comparative Dimensions. Cambridge: Cambridge University Press.
Research for this article was part of the project “Reflective Equilibrium – Reconception and Application” (Swiss National Science Foundation grant no. 150251).
Author Information
Tanja Rechnitzer
Email: [email protected]
University of Bern
Switzerland