The Philosophy of Climate Science
Climate change is one of the defining challenges of the 21st century. But what is climate change, how do we know about it, and how should we react to it? This article summarizes the main conceptual issues and questions in the foundations of climate science, as well as of the parts of decision theory and economics that have been brought to bear on issues of climate in the wake of public discussions about an appropriate reaction to climate change.
We begin with a discussion of how to define climate. Even though “climate” and “climate change” have become ubiquitous terms, both in the popular media and in academic discourse, the correct definitions of both notions are hotly debated topics. We review different approaches and discuss their pros and cons. Climate models play an important role in many parts of climate science. We introduce different kinds of climate models and discuss their uses in detection and attribution, roughly the tasks of establishing that the climate of the Earth has changed and of identifying specific factors that cause these changes. The use of models in the study of climate change raises the question of how well-confirmed these models are and of what their predictive capabilities are. All this is subject to considerable debate, and we discuss a number of different positions. A recurring theme in discussions about climate models is uncertainty. But what is uncertainty and what kinds of uncertainties are there? We discuss different attempts to classify uncertainty and to pinpoint their sources. After these science-oriented topics, we turn to decision theory. Climate change raises difficult questions such as: What is the appropriate reaction to climate change? How much should we mitigate? To what extent should we adapt? What form should adaptation take? We discuss the framing of climate decision problems and then offer an examination of alternative decision rules in the context of climate decisions.
Table of Contents
Introduction
Defining Climate and Climate Change
Climate Models
Detection and Attribution of Climate Change
Confirmation and Predictive Power
Understanding and Quantifying Uncertainty
Conceptualising Decisions Under Uncertainty
Managing Uncertainty
Conclusion
Glossary
References and Further Reading
1. Introduction
Climate science is an umbrella term referring to scientific disciplines studying aspects of the Earth’s climate. It includes, among others, parts of atmospheric science, oceanography, and glaciology. In the wake of public discussions about an appropriate reaction to climate change, parts of decision theory and economics have also been brought to bear on issues of climate. Contributions from these disciplines that can be considered part of the application of climate science fall under the scope of this article. At the heart of the philosophy of climate science lies a reflection on the methodology used to reach various conclusions about how the climate may evolve and what we should do about it. The philosophy of climate science is a new sub-discipline of the philosophy of science that began to crystallize at the turn of the 21st century when philosophers of science started having a closer look at methods used in climate modelling. It comprises a reflection on almost all aspects of climate science, including observation and data, methods of detection and attribution, model ensembles, and decision-making under uncertainty. Since the devil is always in the detail, the philosophy of climate science operates in close contact with science itself and pays careful attention to the scientific details. For this reason, there is no clear separation between climate science and the philosophy thereof, and conferences in the field are often attended by both scientists and philosophers.
This article summarizes the main problems and questions in the foundations of climate science. Section 2 presents the problem of defining climate. Section 3 introduces climate models. Section 4 discusses the problem of detecting and attributing climate change. Section 5 examines the confirmation of climate models and the limits of predictability. Section 6 reviews classifications of uncertainty and the use of model ensembles. Section 7 turns to decision theory and discusses the framing of climate decision problems. Section 8 introduces alternative decision rules. Section 9 offers a few conclusions.
Two qualifications are in order. First, we review issues and questions that arise in connection with climate science from a philosophy of science perspective, with special focus on epistemological and decision-theoretic problems. Needless to say, this is not the only perspective. Much can be said about climate science from other points of view, most notably science studies, sociology of science, political theory, and ethics. For want of space, we cannot review contributions from these fields.
Second, to guard against possible misunderstandings, it ought to be pointed out that engaging in a critical philosophical reflection on the aims and methods of climate science is in no way tantamount to adopting a position known as climate scepticism. Climate sceptics are a heterogeneous group of people who do not accept the results of ‘mainstream’ climate science, encompassing a broad spectrum from those who flatly deny the basic physics of the greenhouse effect (and the influence of human activities on the world’s climate) to a small minority who actively engage in scientific research and debate and reach conclusions at the lowest end of climate impacts. Critical philosophy of science is not the handmaiden of climate scepticism; nor are philosophers ipso facto climate sceptics. Thus, it should be stressed here that we do not endorse climate scepticism. We aim to understand how climate science works, reflect on its methods, and understand the questions that it raises.
2. Defining Climate and Climate Change
Climate talk is ubiquitous in the popular media as well as in academic discourse, and climate change has become a familiar topic. This veils the fact that climate is a complex concept and that the correct definitions of climate and climate change are a matter of controversy. To gain an understanding of the notion of climate, it is important to distinguish it from weather. Intuitively speaking, the weather at a particular place and a particular time is the state of the atmosphere at that place and at the given time. For example, the weather in central London at 2 pm on 1 January 2015 can be characterised by saying that the temperature is 12 degrees centigrade, the humidity is 65%, and so on. By contrast, climate is an aggregate of weather conditions: it is a distribution of particular variables (called the climate variables) arising for a particular configuration of the climate system.
The question is how to make this basic idea precise, and this is where different approaches diverge. 21st-century approaches to defining climate can be divided into two groups: those that define climate as a distribution over time, and those that define climate as an ensemble distribution. The climate variables in both approaches include those that describe the state of the atmosphere and the ocean, and sometimes also variables describing the state of glaciers and ice sheets [IPCC 2013].
Distribution over time. The state of the Earth depends on external conditions of the system such as the amount of energy received from the sun and volcanic activity. Assume that there is a period of time over which the external conditions are relatively stable in that they exhibit small fluctuations around a constant mean value c. One can then define the climate over this time period as the distribution of the climate variables over that period under constant external conditions c [for example, Lorenz 1995]. Climate change then amounts to successive time periods being characterised by different distributions. However, in reality the external conditions are not constant, and even when there are just slight fluctuations around c, the resulting distributions may be very different. Hence this definition is unsatisfactory [Werndl 2015].
This problem can be avoided by defining climate as the empirically observed distribution over a specific period of time, where external conditions are allowed to vary. Again, climate change amounts to different distributions for successive time periods. This definition is popular because it is easy to estimate from observations, for example, from the statistics taken over thirty years that are published by the World Meteorological Organisation [Hulme et al. 2009]. A major problem of this definition can be illustrated by the example in which, in the middle of a period of time, the Earth is hit by a meteorite and becomes a much colder place. Clearly, the climates before and after the meteorite strike differ. Yet this definition has no resources to recognise this because all it says is that climate is a distribution arising over a specific time period.
To circumvent this problem, Werndl [2015] introduces the idea of regimes of varying external conditions and suggests defining climate as the distribution over time of the climate variables arising under a specific regime of varying external conditions. The challenge for this account is to spell out what exactly is meant by a regime of varying external conditions.
Ensemble Distribution. An ensemble of climate systems (not to be confused with a model ensemble) is an infinite collection of virtual copies of the climate system. Consider the sub-ensemble of these that satisfy the condition that the present values of the climate variables lie in a specific interval around the values measured in the actual climate system (that is, the values compatible with the measurement accuracy). Now assume again that there is a period of time over which the external conditions are relatively stable in that they exhibit small fluctuations around a constant mean value c. Then climate at future time t is defined as the distribution of values of the climate variables that arises when all systems in the ensemble evolve from now to t under constant external conditions c [for example, Lorenz 1995]. In other words, the climate in the future is the distribution of the climate variables over all possible climates that are consistent with current observations under the assumption of constant external conditions c.
As we have seen previously, in reality external conditions are not constant, and even small fluctuations around a mean value can lead to different distributions [Werndl 2015]. This worry can be addressed by tracing the development of the initial condition ensemble under actual external conditions. The climate at future time t then is the distribution of the climate variables that arises when the initial conditions ensemble is evolved forward for the actual path taken by the external conditions [for example, Daron and Stainforth 2013].
This definition faces a number of conceptual challenges. First, it makes the world’s climate dependent on our knowledge (via measurement accuracy), but this is counterintuitive because we think of climate as something objective that is independent of our knowledge. Second, the above definition is a definition of future climate, and it is difficult to see how the present and past climate should be defined. Yet without a notion of the present and past climate one cannot define climate change. A third problem is that ensemble distributions (and thus climate) do not relate in a straightforward way to the past time series of observations of the actual Earth, which would imply that the climate cannot be estimated from them [compare Werndl 2015].
These considerations show that defining climate is nontrivial and there is no generally accepted or uncontroversial definition of climate.
3. Climate Models
A climate model is a representation of particular aspects of the climate system. One of the simplest climate models is an energy-balance model, which treats the Earth as a flat surface with one layer of atmosphere above it. It is based on the simple principle that in equilibrium the incoming and outgoing radiation must be equal (see Dessler [2011], Chapters 3-6, for a discussion of such models). This model can be refined by dividing the Earth into zones, allowing energy transfer between zones, or describing a vertical profile of the atmospheric characteristics. Despite their simplicity, these models provide a good qualitative understanding of the greenhouse effect.
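To make the balance principle concrete, here is a minimal zero-dimensional sketch in Python. The solar constant, the albedo, and the single fully absorbing atmospheric layer are textbook idealisations chosen for illustration, not details taken from Dessler [2011].

```python
# Minimal zero-dimensional energy-balance sketch (illustrative values):
# equilibrium requires absorbed solar radiation to equal outgoing
# thermal radiation, (1 - albedo) * S / 4 = sigma * T^4.
S = 1361.0       # solar constant, W/m^2 (assumed standard value)
albedo = 0.3     # planetary albedo (assumed standard value)
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

# Effective (emission) temperature with no atmosphere.
T_eff = ((1 - albedo) * S / 4 / sigma) ** 0.25   # roughly 255 K

# A single fully absorbing atmospheric layer re-radiates half its
# energy downward, raising the surface temperature by a factor 2^(1/4).
T_surface = 2 ** 0.25 * T_eff                    # roughly 303 K

print(f"Effective temperature: {T_eff:.1f} K")
print(f"Surface temperature with one-layer atmosphere: {T_surface:.1f} K")
```

The gap between the two numbers is the simplest quantitative expression of the greenhouse effect that such models provide.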
Modern climate science aims to construct models that integrate as much as possible of the known science (for an introduction to climate modelling see [McGuffie and Henderson-Sellers 2005]). Typically, this is done by dividing the Earth (both the atmosphere and ocean) into grid cells. As of 2020, global climate models have a horizontal grid scale of around 150 km. Climatic processes can then be conceptualised as flows of physical quantities such as heat or vapour from one cell to another. These flows are mathematically described by equations, which form the ‘dynamical core’ of a general circulation model (GCM). The equations are typically intractable with analytical methods, and powerful supercomputers are used to solve them. For this reason, such models are often referred to as ‘simulation models’. To solve the equations numerically, time is discretised. Current state-of-the-art simulations use time steps of approximately 30 minutes and take weeks or months of real time on supercomputers to simulate a century of climate evolution.
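The discretise-and-iterate structure of such simulations can be conveyed by a toy example. The sketch below steps a one-dimensional diffusion equation forward over a row of grid cells; the equation, the grid, and the step sizes are simplifications of our own, not components of any actual GCM.

```python
import numpy as np

# Toy illustration of the numerical approach: heat diffusing along a
# 1-D row of grid cells, stepped forward in discrete time. Real GCMs
# solve far richer equations in three dimensions, but the structure
# of discretising space and iterating over time steps is the same.
n_cells, dx, dt, kappa = 50, 1.0, 0.1, 1.0
temperature = np.zeros(n_cells)
temperature[n_cells // 2] = 100.0   # initial condition: a single hot cell

for _ in range(1000):               # discrete time steps
    # Net flow between neighbouring cells (finite-difference Laplacian,
    # periodic boundary via np.roll).
    flux = np.roll(temperature, -1) - 2 * temperature + np.roll(temperature, 1)
    temperature = temperature + kappa * dt / dx**2 * flux

print(f"Peak temperature after diffusion: {temperature.max():.2f}")
```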
In order to compute a single hypothetical evolution of the climate system (a ‘model run’), we also require an initial condition and boundary conditions. The former is a mathematical description of the state of the climate system (projected into the model’s own domain) at the beginning of the period being simulated. The latter are values for any variables which affect the system, but which are not directly output by the calculations. These include, for example, the concentration of greenhouse gases, the amount of aerosols in the atmosphere at a given time, and the amount of solar radiation received by the Earth. Since these are drivers of climatic change, they are often referred to as external forcings or external conditions.
Where processes occur on a smaller scale than the grid, they may be included via parameterisation, whereby the net effect of the process is separately calculated as a function of the grid variables. For example, cloud formation is a physical process that cannot be directly simulated because typical clouds are much smaller than the grid. Thus, the net effect of clouds is usually parameterised (as a function of temperature, humidity, and so on) in each grid cell and fed back into the calculation. Sub-grid processes are one of the main sources of uncertainty in climate models.
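As a purely hypothetical illustration of the idea, the following sketch computes a sub-grid ‘cloud effect’ as a function of a resolved grid variable; the functional form and every coefficient are invented for the example and resemble no actual parameterisation scheme.

```python
# Illustrative (hypothetical) parameterisation: the net sub-grid effect
# of clouds is approximated as a simple function of resolved grid-cell
# variables. Real schemes are far more elaborate; all coefficients
# below are invented purely for illustration.

def cloud_fraction(relative_humidity: float) -> float:
    """Crude diagnostic cloud fraction for one grid cell."""
    # Assumption: clouds appear once humidity exceeds a 60% threshold.
    excess = max(0.0, relative_humidity - 0.6)
    return min(1.0, excess / 0.4)

def cloud_radiative_effect(relative_humidity: float) -> float:
    """Net radiative effect (W/m^2) fed back into the grid-scale budget."""
    fraction = cloud_fraction(relative_humidity)
    shortwave_cooling = -50.0 * fraction  # reflected sunlight (invented)
    longwave_warming = 30.0 * fraction    # trapped heat (invented)
    return shortwave_cooling + longwave_warming

# Net effect for one grid cell with 85% relative humidity.
print(f"{cloud_radiative_effect(0.85):.1f} W/m^2")
```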
There are now dozens of global climate models under continuous development by national modelling centres like NASA, the UK Met Office, and the Beijing Climate Center, as well as by smaller institutions. An exact count is difficult because many modelling centres maintain multiple versions based on the same foundation. As an indication, in 2020 there were 89 model-versions submitted to CMIP6 (Coupled Model Intercomparison Project phase 6), from 35 modelling groups, though not all of these should be thought of as being “independent” models since assumptions and algorithms are often shared between institutions. In order to be able to compare outputs of these different models, the Coupled Model Intercomparison Project (CMIP) defines a suite of standard experiments to be run for each climate model. One standard experiment is to run each model using the historical forcings experienced during the twentieth century so that the output can be directly compared against real climate system data.
Climate models are used in many places in climate science, and their use gives rise to important questions. These questions are discussed in the next three sections.
4. Detection and Attribution of Climate Change
Every empirical study of climate has to begin by observing the climate. Meteorological observatories measure a number of variables, such as air temperature near the surface of the Earth, using instruments like thermometers. However, more or less systematic observations are available only since about 1750, and hence to reconstruct the climate before then scientists have to rely on proxy data: data for climate variables that are derived from observing other natural phenomena such as tree rings, ice cores, and ocean sediments.
The use of proxy data raises a number of methodological problems centred on the statistical processing of such data, which are often sparse, highly uncertain, and several inferential steps away from the climate variable of interest. These issues were at the heart of what has become known as the Hockey Stick controversy, which broke at the turn of the century in connection with a proxy-based reconstruction of the Northern Hemisphere temperature record [Mann, Bradley and Hughes 1998]. The sceptics pursued two lines of argument. They cast doubt on the reliability of the available data, and they argued that the methods used to process the data were such that they would produce a hockey-stick-shaped curve from almost any data [for example, McIntyre and McKitrick 2003]. The papers published by the sceptics raised important issues and stimulated further research, but they were found to contain serious flaws undermining their conclusions. There are now more than two dozen reconstructions of this temperature record using various statistical methods and proxy data sources. Although there is indeed a wide range of plausible past temperatures, due to the constraints of the data and methods, these studies robustly support the consensus that temperatures during the late 20th century are likely to have been the warmest of the past 1400 years [Frank et al. 2010].
Do rising temperatures indicate that there is climate change, and if so, can the change be attributed to human action? These two problems are known as the problems of detection and attribution. The Intergovernmental Panel on Climate Change (IPCC) defines these as follows:
Detection of change is defined as the process of demonstrating that climate or a system affected by climate has changed in some defined statistical sense without providing a reason for that change. An identified change is detected in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small […]. Attribution is defined as ‘the process of evaluating the relative contributions of multiple causal factors to a change or event with an assignment of statistical confidence.’ [IPCC 2013]
These definitions raise a host of issues. The root cause of the difficulties is the clause that climate change has been detected only if an observed change in the climate is unlikely to be due to internal variability. Internal variability is the phenomenon that the values of climate variables such as temperature and precipitation would change over time due to the internal dynamics of the climate system even in the absence of a change in external conditions, because of fluctuations in the frequency of storms, ocean currents, and so on.
Taken at face value, this definition of detection has the consequence that there cannot be internal climate change. The ice ages, for example, would not count as climate change if they occurred because of internal variability. This is not only at odds with basic intuitions about climate and with the most common definitions of climate as a finite distribution over a relatively short time period (where internal climate change is possible); it also leads to difficulties with attribution: if detected climate change is ipso facto change not due to internal variability, then from the very beginning it is excluded that particular factors (namely, internal climate dynamics) can lead to a change in the climate, which seems to be an unfortunate conclusion.
For the case of the ice ages, many researchers would stress that internal variability is different from natural variability. Since, say, orbital changes explain the ice ages, and orbital changes are natural but external, this is a case of external climate change. While this move solves some of the problems, there remains the problem that there is no generally accepted way to separate internal and external factors, and the same factor is sometimes classified as internal and sometimes as external. For example, glaciation processes are sometimes treated as internal factors and sometimes as prescribed external factors. Similarly, sometimes the biosphere is treated as an external factor, but sometimes it is also internally modelled and treated as an internal factor. One could even go so far as to ask whether human activity is an external forcing on the climate system or an internally generated Earth system process. Research studies usually treat human activity as an external forcing, but it could consistently be argued that human activities are an internal dynamical process. The appropriate definition simply depends on the research question of interest. For a discussion of these issues, see Katzav and Parker [2018].
The effects of internal variability are present on all timescales, from the sub-daily fluctuations experienced as weather to the long-term changes due to cycles of glaciation. Since internal variability stems from processes in a highly complex nonlinear system, it is also unlikely that the statistical properties of internal variability are constant over time, which further compounds methodological difficulties. State-of-the-art climate models run with constant forcing show significant disagreements on both the magnitude of internal variability and the timescale of variations. (On http://www.climate-lab-book.ac.uk/2013/variable-variability/#more-1321 the reader finds a plot showing the internal variability of all CMIP5 models. The plot indicates that models exhibit significantly different internal variability, leaving considerable uncertainty.) A model must be deemed to simulate pre-industrial climate (including variability) sufficiently well before it can be used for such detection and attribution studies, but we do not have thousands of years of detailed observations upon which to base that judgement. Estimates of internal variability in the climate system are produced from climate models themselves [Hegerl et al. 2010], leading to potential circularity. This underscores the difficulties in making attribution statements based on the above definition, which recognises an observed change as climate change only if it is unlikely to be due to internal variability.
Since the IPCC’s definitions are widely used by climate scientists, the discussion about detection and attribution in the remainder of this section is based on these definitions. Detection relies on statistical tests, and detection studies are often phrased in terms of the likelihood of a particular event or sequence of events happening in the absence of climate change. In practice, the challenge is to define an appropriate null hypothesis (the expected behaviour of the system in the absence of changing external influences), against which the observed outcomes can be tested. Because the climate system is a dynamical system with processes and feedbacks operating on all scales, this is a non-trivial exercise. An indication of the importance of the null hypothesis is given by the results of Cohn and Lins [2005], who compare the same data against alternate null hypotheses, with results differing by 25 orders of magnitude in significance! This does not in itself show that either null is more appropriate, but it demonstrates the sensitivity of the result to the null hypothesis chosen. This, in turn, underscores the importance of the choice of null hypothesis and the difficulty of making any such choice if the underlying processes are poorly understood.
In practice, the best available null hypothesis is often the best available model of the behaviour of the climate system, including internal variability, which for most climate variables usually means a state-of-the-art GCM. This model is then used to perform long control runs with constant forcings in order to quantify the internal variability of the model (see discussion above). Climate change is then said to have been detected when the observed values fall outside a predefined range of the internal variability of the model. The difficulty with this method is that there is no single “best” model to choose: many such models exist, they are similarly well developed, but, as noted above, they have appreciably different patterns of internal variability.
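The detection logic just described can be made concrete with a small sketch: estimate the distribution of trends that internal variability alone can produce, here from a synthetic stand-in for a control run, and ask whether an observed trend falls outside a predefined range of it. All numbers, and the red-noise process standing in for GCM output, are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "control run" (constant forcing) from which internal
# variability is estimated; here simulated as red (AR(1)) noise.
# In a real study this would be GCM output, not synthetic numbers.
def ar1_series(n, phi=0.6, sigma=0.1):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

control = ar1_series(2000)   # 2000 "years" of unforced variability
window = 50                  # trend length under test (years)

# Distribution of 50-year trends arising from internal variability alone.
years = np.arange(window)
trends = np.array([
    np.polyfit(years, control[i:i + window], 1)[0]
    for i in range(len(control) - window)
])

observed_trend = 0.02  # hypothetical observed warming trend (deg/yr)

# Detection claim: the observed trend falls outside a predefined range
# (here the central 95%) of the control-run trend distribution.
low, high = np.quantile(trends, [0.025, 0.975])
detected = not (low <= observed_trend <= high)
print(f"Internal-variability 95% range: [{low:.4f}, {high:.4f}]")
print(f"Change detected: {detected}")
```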
The differences between models are relatively unimportant for the clearest detection results, such as recent increases in global mean temperature. Here, as stressed by Parker [2010], detection is robust across different models (for a discussion of robustness see Section 6), and, moreover, there is a variety of different pieces of evidence all pointing to the conclusion that the global mean temperature has increased beyond that which can be attributed to internal variability. However, the issues of which null hypothesis to use and how to quantify internal variability can be important for the detection of subtler local climate change.
If climate change has been detected, then the question of attribution arises. This might be an attribution of any particular change (either a direct climatic change such as increased global mean temperature, or an impact such as the area burnt by forest fires) to any identified cause (such as increased CO2 in the atmosphere, volcanic eruptions, or human population density). Where an impact is considered, a two-step or multi-step approach may be appropriate. An example of this, taken from the IPCC Good Practice Guidance paper [Hegerl et al. 2010], is the attribution of coral reef calcification impacts to rising CO2 levels, in which an intermediate stage is used by first attributing changes in the carbonate ion concentration to rising CO2 levels, then attributing calcification processes to changes in the carbonate ion concentration. This also illustrates the need for a clear understanding of the physical mechanisms involved, in order to perform a reliable multi-step attribution in the presence of many potential confounding factors.
In the interpretation of attribution results, in particular those framed as a question of whether human activity has influenced a particular climatic change or event, there is a tendency to focus on whether the confidence interval of the estimated anthropogenic effect crosses zero. The absence of such a crossing indicates that change is likely to be due to human factors. This results in conservative attribution statements, but it reflects the focus of the present debate where, in the eyes of the public and media, “attribution” is often understood as confidence in ruling out non-human factors, rather than as giving a best estimate or relative contributions of different factors.
Statistical analysis quantifies the strength of the relationship, given the simplifying assumptions of the attribution framework, but the level of confidence in the simplifying assumptions must be assessed outside that framework. This level of confidence is standardised by the IPCC into discrete (though subjective) categories (“very high”, “high”, and so forth), which aim to take account of the process knowledge, data limitations, adequacy of models used, and the presence of potential confounding factors. The conclusion that is reached will then have a form similar to the IPCC’s headline attribution statement:
It is extremely likely [≥95% probability] that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. [IPCC 2013; Summary for Policymakers, section D.3]
One attribution method is optimal fingerprinting. The method seeks to define a spatio-temporal pattern of change (fingerprint) associated with each potential driver (such as the effect of greenhouse gases or of changes in solar radiation), normalised relative to the internal variability, and then to perform a statistical regression of observed data with respect to linear combinations of these patterns. The residual variability after observations have been attributed to each factor should then be consistent with the internal variability; if not, this suggests that an important source of variability remains unaccounted for. Parker [2010] notes that fingerprint studies rely on several assumptions. Chief among them is linearity, that is, the assumption that the response of the climate system when several forcing factors are present is equal to a linear combination of the effects of the forcings. Because the climate system is nonlinear, this is clearly a source of methodological difficulty, although for global-scale responses (in contrast to regional-scale responses) additivity has been shown to be a good approximation.
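A stripped-down sketch of the regression step may help fix ideas. It regresses synthetic ‘observations’ onto two invented fingerprint patterns by ordinary least squares; real optimal fingerprinting additionally weights by the inverse covariance of the internal variability, which is omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fingerprints: response patterns for two drivers (say,
# greenhouse gases and solar forcing), flattened into vectors over
# space-time points. The values are synthetic.
n_points = 500
fp_ghg = rng.normal(0, 1, n_points)
fp_solar = rng.normal(0, 1, n_points)
X = np.column_stack([fp_ghg, fp_solar])

# Synthetic "observations": a mixture of the two patterns plus
# internal variability (noise).
true_amplitudes = np.array([0.8, 0.1])
observations = X @ true_amplitudes + rng.normal(0, 0.5, n_points)

# Regress observations onto linear combinations of the fingerprints.
amplitudes, *_ = np.linalg.lstsq(X, observations, rcond=None)
residual = observations - X @ amplitudes

print("Estimated scaling factors:", amplitudes)
print("Residual std (should match internal variability):", residual.std())
```

If the residual were noticeably larger than the assumed internal variability, that would signal an unaccounted-for source of variability, as described above.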
Levels of confidence in these attribution statements are primarily dependent on physical understanding of the processes involved. Where there is a clear, simple, well-understood mechanism, there should be greater confidence in the statistical result; where the mechanisms are loose, multi-factored or multi-step, or where a complex model is used as an intermediary, confidence is correspondingly lower. The Guidance Paper cautions that,
Where models are used in attribution, a model’s ability to properly represent the relevant causal link should be assessed. This should include an assessment of model biases and the model’s ability to capture the relevant processes and scales of interest. [Hegerl et al. 2010, 5]
As Parker [2010] argues, there is also higher confidence in attribution results when the results are robust and there is a variety of evidence. For example, the finding that late twentieth-century temperature increase was mainly caused by greenhouse gas forcing is found to be robust given a wide range of different models, different analysis techniques, and different forcings; and there is a variety of evidence all of which supports this claim. Thus our confidence that greenhouse gases explain global warming is high. (For further useful extended discussion of detection and attribution methods in climate science, see pages 872-878 of IPCC [2013] and the Good Practice Guidance paper by Hegerl et al. [2010]; for a discussion of how such hypotheses are tested, see Katzav [2013].)
In addition to the large-scale attribution of climate change, attribution of the degree to which individual weather events have become either more likely or more extreme as a result of increasing atmospheric greenhouse gas concentrations is now common. It attracts particular public interest because it is perceived as a way to communicate that climate impacts are happening already, to quantify risk numerically (for example, for pricing insurance), and to motivate climate mitigation. There is therefore also an incentive to conduct these studies quickly, to inform timely news articles, and some groups have formed to respond rapidly to reports of extreme weather and conduct attribution studies immediately. This approach relies on the availability of data, may suffer from unclear definitions of exactly what category of event is being analysed, and is open to criticism for publicity prior to peer review. There are also statistical implications of choosing to analyse only those events which have happened and not those that did not happen. For a discussion of event attribution see Lloyd and Oreskes [2019] and Lusk [2017].
5. Confirmation and Predictive Power
Two questions arise in connection with models: how are models confirmed, and what is their predictive power? Confirmation concerns the question of whether, and to what degree, a specific model is supported by the data. Lloyd [2009] argues that many climate models are confirmed by past data. Parker [2009] objects to this claim. She argues that the idea that climate models per se are confirmed cannot be seriously entertained because all climate models are known to be wrong and empirically inadequate. Parker urges a shift in thinking from confirmation to adequacy for purpose: models can only be found to be adequate for specific purposes, but they cannot be confirmed wholesale. For example, one might claim that a particular climate model adequately predicts the global temperature increase that will occur by 2100 (when run from particular initial conditions and relative to a particular emission scenario). Yet, at the same time, one might hold that the predictions of global mean precipitation by 2100 by the same model cannot be trusted.
Katzav [2014] cautions that adequacy-for-purpose assessments are of limited use. He claims that these assessments are typically unachievable because it is far from clear which of the model’s observable implications can possibly be used to show that the model is adequate for the purpose. Instead, he argues that climate models can at best be confirmed as providing a range of possible futures. Katzav is right to stress that adequacy-for-purpose assessments are more difficult than appears at first sight. But the methodology of adequacy for purpose cannot be dismissed wholesale; indeed, it is used successfully across the sciences (for example, when ideal gas models are confirmed to be useful for particular purposes). Whether or not adequacy-for-purpose assessment is possible depends on the case at hand.
If one finds that one model predicts specific variables well and another model does not, then one would like to know why the first model is successful and the second is not. Lenhard and Winsberg [2010] argue that this is often very difficult, if not impossible: for complex climate models, a strong version of confirmation holism makes it impossible to tell where the failures and successes of climate models lie. In particular, they claim that it is impossible to assess the merits and problems of sub-models and the parts of models. There is a question, though, whether this confirmation holism affects all models and whether it is here to stay. Complex models have different modules for the atmosphere, the ocean, and ice. These modules can be run individually and also together. The aim of the many new Model Intercomparison Projects (MIPs) is, by comparing individual and combined runs, to obtain an understanding of the performance and physical merits of separate modules, which it is hoped will identify areas for improvement and eventually result in better performance of the entire model.
Another problem concerns the use of data in the construction of models. The values of model parameters are often estimated using observations, a process known as calibration. For example, the magnitude of the aerosol forcing is sometimes estimated from data. When data have been used for calibration, the question arises whether the same data can be used again to confirm the model. If data are used for confirmation that have not already been used for calibration, they are use-novel. If data are used for both calibration and confirmation, this is referred to as double-counting.
Scientists and philosophers alike have argued that double-counting is illegitimate and that data have to be use-novel to be confirmatory [Lloyd 2010; Shackley et al. 1998; Worrall 2010]. Steele and Werndl [2013] oppose this conclusion and argue that on Bayesian and relative-likelihood accounts of confirmation double-counting is legitimate. Furthermore, Steele and Werndl [2015] argue that model selection theory presents a more nuanced picture of the use of data than the commonly endorsed positions. Frisch [2015] cautions that Bayesian as well as other inductive logics can be applied in better and worse ways to real problems such as climate prediction. Nothing in the logic prevents facts from being misinterpreted and their confirmatory power exaggerated (as in ‘the problem of old evidence’ which Frisch [2015] discusses). This is certainly a point worth emphasising. Indeed, Steele and Werndl [2013] stress that the same data cannot inform a prior probability for a hypothesis and also further (dis)confirm the hypothesis. But they do not address all the potential pitfalls in applying Bayesian or other logics to the climate and other settings. Their argument must be understood as a limited one: there is no univocal logical prohibition against the same data serving for calibration and confirmation. As far as non-Bayesian methods of model selection go, there are two cases. First, there are methods such as cross-validation where the data are required to be use-novel. For cross-validation, the data are split into two groups: the first group is used for calibration and the second for confirmation. Second, there are methods such as the Akaike Information Criterion for which the data need not be use-novel, although information-criteria methods are hard to apply in practice to climate models because the number of degrees of freedom is poorly defined.
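The contrast between the two kinds of method can be seen in a small sketch with synthetic data and a single calibrated parameter; everything specific here is our own assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for observations used to tune one model parameter.
x = np.linspace(0.1, 1.0, 40)
y = 2.0 * x + rng.normal(0, 0.3, 40)

def fit_slope(xs, ys):
    """Least-squares estimate for the one-parameter model y = slope * x."""
    return np.sum(xs * ys) / np.sum(xs * xs)

# Cross-validation: the data are split; the first half calibrates the
# parameter and the second half is reserved for confirmation, so the
# confirming data remain use-novel.
slope_cv = fit_slope(x[:20], y[:20])
confirmation_error = np.mean((y[20:] - slope_cv * x[20:]) ** 2)

# Akaike Information Criterion: all data are used for calibration, and
# the same data enter the selection score; use-novelty is not required.
k = 1                                   # number of calibrated parameters
slope_all = fit_slope(x, y)
rss = np.sum((y - slope_all * x) ** 2)  # residual sum of squares
n = len(y)
aic = 2 * k + n * np.log(rss / n)       # AIC up to an additive constant

print(f"Held-out confirmation error: {confirmation_error:.3f}")
print(f"AIC of the calibrated model: {aic:.2f}")
```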
This brings us to the second issue: prediction. In the climate context this is typically framed as the issue of projection. ‘Projection’ is a technical term in the climate modelling literature and refers to a prediction that is conditional on a particular forcing scenario and a particular initial conditions ensemble. The forcing scenario is specified either by the amount of greenhouse gas emissions and aerosols added to the atmosphere or directly by their atmospheric concentrations, and these in turn depend on future socioeconomic and technological developments.
Much research these days is undertaken with the aim of generating projections about the actual future evolution of the Earth system under a particular emission scenario, upon which policies are made and real-life decisions are taken. In such cases, it is necessary to quantify and understand how good those projections are likely to be. It is doubtful that this question can be answered along traditional lines. One such line would be to refer to the confirmation of a model against historical data (Chapter 9 of IPCC [2013] discusses model evaluation in detail) and argue that the ability of a model to successfully reproduce historical data should give us confidence that it will perform well in the future too. It is unclear at best whether this is a viable answer. The problem is that climate projections for high forcing scenarios take the system well outside any previously experienced state, and at least prima facie there is no reason to assume that success in low-forcing contexts is a guide to success in high-forcing contexts; for example, a model calibrated on data from a world with the Arctic Sea covered in ice might no longer perform well when the sea ice is completely melted and the relevant dynamical processes are quite different. For this reason, calibration to past data has at most limited relevance for the assessment of a model’s predictive success [Oreskes et al. 1994; Stainforth et al. 2007a, 2007b; Steele and Werndl 2013].
This brings into focus the fact that there is no general answer to the question of the trustworthiness of model outputs. There is widespread consensus that predictions are better for longer time averages, larger spatial averages, low specificity, and better physical understanding; and, all other things being equal, shorter lead times (nearer prediction horizons) are easier to predict than longer ones. Global mean temperature trends are considered trustworthy, and it is generally accepted that the observed upward trend will continue [Oreskes 2007], although the basis of this confidence is usually a physical understanding of the greenhouse effect with which the models are consistent, rather than a direct reliance on the output of the models themselves. A 2013 IPCC report [IPCC 2013, Summary for Policymakers, section D.1] professes that modelled surface temperature patterns and trends are trustworthy on the global and continental scale, but, even in making this statement, it assigns a probability of only at least 66% (‘likely’) to the range within which 90% of model outcomes fall. In plainer terms, this is an expert-assigned probability of at least tens of percent that the models are substantially wrong even about global mean temperature.
There are still interesting questions about the epistemic grounds on which such assertions are made (we return to them in the next section). A harder problem, however, concerns the use of models as providers of detailed information about the future local climate. The United Kingdom Climate Impacts Programme produces projections that aim to make high-resolution probabilistic projections of the local climate up to the end of the century, and similar projects are run in many other countries [Thompson et al. 2016]. The Programme’s set of projections known as UKCP09 [Sexton et al. 2012; Sexton and Murphy 2012] produces projections of the climate up to 2100 based on HadCM3, a global climate model developed at the UK Met Office Hadley Centre. Probabilities are given for events on a 25 km grid for finely defined specific events such as changes in the temperature of the warmest day in summer, the precipitation of the wettest day in winter, or the change in summer-mean cloud amount, with projections blocked into overlapping thirty-year segments which extend to 2100. It is projected, for example, that under a medium emission scenario the probability of a 20-30% reduction in summer mean precipitation in central London in 2080 is 0.5. There is a question of whether these projections are trustworthy and policy relevant. Frigg et al. urge caution on the grounds that many of the UKCP09’s foundational assumptions seem to be questionable [2013, 2015] and that structural model error may have significant repercussions on small scales [2014]. Winsberg [2018] and Winsberg and Goodwin [2016] criticise these cautionary arguments as overstating the limitations of such projections. In 2019, the Programme launched a new set of projections, known as UKCP18 (https://www.metoffice.gov.uk/research/collaboration/ukcp). It is an open question whether these projections are open to the same objections, and, if so, how severe the limitations are.
6. Understanding and Quantifying Uncertainty
Uncertainty features prominently in discussions about climate models, and yet it is a concept that is poorly understood and that raises many difficult questions. In most general terms, uncertainty is a lack of knowledge. The first challenge is to circumscribe more precisely what is meant by ‘uncertainty’ and what the sources of uncertainty are. A number of proposals have been made, but the discussion is still in a ‘pre-paradigmatic’ phase. Smith and Stern [2011] identify four relevant varieties of uncertainty: imprecision, ambiguity, intractability, and indeterminacy. Spiegelhalter and Riesch [2011] consider a five-level structure with three within-model levels (event, parameter, and model uncertainty) and two extra-model levels concerning acknowledged and unknown inadequacies in the modelling process. Wilby and Dessai [2010] discuss the issue with reference to what they call the cascade of uncertainty, studying how uncertainties magnify as one goes from assumptions about future global emissions of greenhouse gases to the implications of these for local adaptation. Petersen [2012, Chapters 3 and 6] introduces a so-called uncertainty matrix listing the sources of uncertainty in the vertical and the sorts of uncertainty in the horizontal direction. Lahsen [2005] looks at the issue from a science studies point of view and discusses the distribution of uncertainty as a function of the distance from the site of knowledge production. And these are but a few of the many proposals.
The next problem is that of measuring and quantifying uncertainty in climate predictions. Among the approaches that have been devised in response to this challenge, ensemble methods occupy centre stage. Current estimates of climate sensitivity and increase in global mean temperature under various emission scenarios, for example, include information derived from ensembles containing multiple climate models. Multi-model ensembles are sets of several different models which differ in mathematical structure and physical content. Such an ensemble is used to investigate how predictions of relevant climate variables vary (or do not vary) according to model structure and assumptions. A special kind of multi-model ensemble is known as a “perturbed parameter ensemble”. It contains models with the same mathematical structure in which particular parameters assume different values, thereby effectively conducting a sensitivity analysis on a single model by systematically varying some of the parameters and observing the effect on the outcomes. Early analyses such as the climateprediction.net simulations and the UKCP09 results relied on perturbed parameter ensembles only, due to resource limitations; international projects such as the Coupled Model Intercomparison Projects (CMIP) and the work that goes into the IPCC assessments are based on multi-model ensembles containing different model structures. The reason for using ensembles is the acknowledged uncertainty in individual models, which concerns both the model structure and the values of parameters in the model. It is a common assumption that ensembles help us understand the effects of these uncertainties, either by producing and identifying “robust” predictions or by providing estimates of the uncertainty about future climate change. (Parker [2013] provides an excellent discussion of ensemble methods and the problems that attach to them.)
A model result is robust if all or most models in the ensemble show the same result; for a general discussion of robustness analysis see Weisberg [2006]. If, for example, all models in an ensemble show more than a 4°C increase in global mean temperature by the end of the century when run under a specific emission scenario, this result is robust across the specified ensemble. Does robustness justify increased confidence? Lloyd [2010, 2015] argues that robustness arguments are powerful in connection with climate models and lend credibility at least to core claims such as the claim that there was global warming in the 20th century. Parker [2011], by contrast, reaches a more sober conclusion: ‘When today’s climate models agree that an interesting hypothesis about future climate change is true, it cannot be inferred […] that the hypothesis is likely to be true or that scientists’ confidence in the hypothesis should be significantly increased or that a claim to have evidence for the hypothesis is now more secure’ [ibid. 579]. One of the main problems is that if today’s models share the technological constraints posed by today’s computer architecture and understanding of the climate system, then they inevitably share some common errors. Indeed, such common errors have been widely acknowledged (see, for example, Knutti et al. [2010]), and studies have demonstrated and discussed the lack of model independence [Bishop and Abramowitz 2013; Jun et al. 2008a; 2008b]. But if models are not independent, then there is a question about how much epistemic weight agreement between them carries.
When ensembles do not yield robust predictions, the spread of results within the ensemble is sometimes used to estimate quantitatively the uncertainty of the outcome. There are two main approaches to this. The first approach aims to translate the histogram of model results directly into a probability distribution: in effect, the guiding principle is that the probability of an outcome is proportional to the fraction of models in the ensemble which produce that result. The thinking behind this method seems to be to invoke some sort of frequentist approach to probabilities. The appeal to frequentism presupposes that models can be treated as exchangeable sources of information (in the sense that there is no reason to trust one ensemble member any more than any other). However, as we have previously seen, the assumption that models are independent has been questioned. There is a further problem: multi-model ensembles are ‘ensembles of opportunity’, grouping together existing models. Even the best ensembles such as CMIP6 are not designed to systematically explore all possibilities. It is therefore not clear why the frequency of ensemble projections should double as a guide to probability. The IPCC acknowledges this limitation (see the discussion in Chapter 12 of IPCC [2013]) and thus downgrades the assessed likelihood of ensemble-derived ranges, deeming it only “likely” (≥66%) that the real-world global mean temperature will fall within the 90% model range (for a discussion of this case see Thompson et al. [2016]).
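The frequentist move, and the IPCC-style downgrade just mentioned, can be illustrated with invented numbers:

```python
import numpy as np

# Hypothetical end-of-century warming projections (deg C) from a
# multi-model ensemble; the values are invented for illustration.
ensemble = np.array([2.1, 2.4, 2.6, 2.8, 3.0, 3.1, 3.3, 3.7, 4.2, 4.5])

# The frequentist move discussed in the text: read the fraction of
# models producing an outcome as that outcome's probability.
p_above_3 = np.mean(ensemble > 3.0)
print(f"'Probability' of >3 degC warming: {p_above_3:.2f}")

# The IPCC-style downgrade: the central 90% model range is reported,
# but assessed only as 'likely' (>= 66%) to contain the real outcome,
# precisely because the ensemble is one of opportunity.
low, high = np.quantile(ensemble, [0.05, 0.95])
print(f"90% model range: [{low:.2f}, {high:.2f}] degC, assessed as 'likely'")
```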
A more modest approach regards ensemble outputs as a guide to possibility rather than probability. On this view, the spread of an ensemble presents the range of outcomes that cannot be ruled out. The bounds of this set of results, often referred to as a ‘non-discountable envelope’, provide a lower bound on the uncertainty [Stainforth et al. 2007b]. In this spirit, Katzav [2014] argues that a focus on prediction is misguided and that models ought to be used to show that particular scenarios are real possibilities.
While undoubtedly less committal than the probability approach, non-discountable envelopes also raise questions. The first is the relation between non-discountability and possibility. Non-discountable results are ones that cannot be ruled out. How is this judgment reached? Do results which cannot be ruled out indicate possibilities? If not, what is their relevance for estimating lower bounds? And could the model, if pushed more deliberately towards “interesting” behaviours, actually make the envelope wider? Moreover, it is important to keep in mind that the envelope just represents some possibilities. Hence it does not indicate the complete range of possibilities, making particular types of formalised decision-making procedures impossible. For a further discussion of these issues see Betz [2009, 2010].
Finally, a number of authors emphasise the limitations of model-based methods (such as ensemble methods) and submit that any realistic assessment of uncertainties will also have to rely on other factors, most notably expert judgement. Petersen [2012, Chapter 4] outlines the approach of the Netherlands Environmental Assessment Agency (PBL), which sees expert judgment and problem framings as essential components of uncertainty assessment. Aspinall [2010] suggests using methods of structured expert elicitation.
In light of the issues raised above, how should uncertainty in climate science be communicated to decision-makers? The most prominent framework for communicating uncertainty is the IPCC’s, which is used throughout the Fifth Assessment Report (AR5), is explicated in the ‘Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties’, and is further explicated in [Mastrandrea et al. 2011]. The framework appeals to two measures for communicating uncertainty. The first, a qualitative ‘confidence’ scale, depends on both the type of evidence and the degree of agreement amongst experts. The second measure is a quantitative scale for representing statistical likelihoods (or, more accurately, fuzzy likelihood intervals) for relevant climate/economic variables. The following statement exemplifies the use of these two measures for communicating uncertainty in AR5: ‘The global mean surface temperature change for the period 2016–2035 relative to 1986–2005 is similar for the four RCPs and will likely be in the range 0.3°C to 0.7°C (medium confidence)’ [IPCC 2013]. A discussion of this framework can be found in Adler and Hirsch Hadorn [2014], Budescu et al. [2014], Mach et al. [2017], and Wüthrich [2017].
At this point, it should also be noted that the role of ethical and social values in relation to uncertainties in climate science is controversially debated. Winsberg [2012] appeals to complex simulation modelling to argue that it is infeasible for climate scientists to produce results that are not influenced by their ethical and social values. More specifically, he argues that assignments of probabilities to hypotheses about future climate change are influenced by ethical and social values because of the way these values come into play in the building and evaluating of climate models. Parker [2014] contends that pragmatic factors rather than social or ethical values often play a role in resolving these modelling choices. She further objects that Winsberg’s focus on precise probabilistic uncertainty estimates is misguided; coarser estimates like those used by the IPCC better reflect the extent of uncertainty and are less influenced by values. She concludes that Winsberg has exaggerated the influence of ethical and social values here, but suggests that a more traditional challenge to the value-free ideal of science fits the climate case: namely, one could argue that estimates of uncertainty are themselves always somewhat uncertain, and that the decision to offer a particular estimate of uncertainty thus might appropriately involve value judgments [compare Douglas 2009].
7. Conceptualising Decisions Under Uncertainty
What is the appropriate reaction to climate change? How much should we mitigate? To what extent should we adapt? And what form should adaptation take? Should we build larger water reserves? Should we adapt houses, and our social infrastructure more generally, to a higher frequency of extreme weather events like droughts, heavy rainfalls, floods, and heatwaves, as well as to the increased incidence of extremely high sea levels and the more frequent occurrence of particularly hot days? The decisions that we make in response to these questions have consequences affecting both individuals and groups at different places and times. Moreover, the circumstances of many of these decisions involve uncertainty and disagreement that is sometimes both severe and wide-ranging, concerning not only the state of the climate (as discussed above) and the broader social consequences of any action or inaction on our part, but also the range of actions available to us and what significance we should attach to their possible consequences. These considerations make climate decision-making both important and hard. The stakes are high, and so too are the difficulties for standard decision theory: plenty of reason for philosophical engagement with this particular application of decision theory.
Let us begin by looking at the actors in the climate domain and the kinds of decision problems that concern them. When introducing decision theory, it is common to distinguish three main domains: individual decision theory (which concerns the decision problem of a single agent who may be uncertain of her environment), game theory (which focuses on cases of strategic interaction amongst rational agents), and social choice theory (which concerns procedures by which a number of agents may ‘think’ and act collectively). All three realms are relevant to the climate-change predicament, whether the concern is adapting to climate change or mitigating climate change or both.
Determining the appropriate agential perspective and type of engagement between agents is important, because otherwise decision-modelling efforts may be in vain. For example, it may be futile to focus on the plight of individual citizens when the power to effect change really lies with states. It may likewise be misguided to analyse the prospects for collective action on climate policy if the supposed members of the group do not see themselves as contributing to a shared decision that is good for the group as a whole. It would also be misleading to exclude from an individual agent’s decision model the impact of others who perceive that they are acting in a strategic environment. This is not, however, to recommend a narrow view of the role of decision models, namely that they must always represent the decisions of agents as they see them and can never be aspirational; the point is rather that we should not employ decision models with particular agential framings in a naïve way.
Getting the agential perspective right is just the first step in framing a decision problem so that it presents convincing reasons for action. There remains the task of representing the details of the decision problem from the appropriate epistemic and evaluative perspective. Our focus is individual decision theory, for reasons of space, and because most decision settings ultimately involve the decision of an individual, whether this be a single person or a group acting as an individual.
The standard model of (individual) decision-making under uncertainty used by decision theorists derives from the classic work of von Neumann and Morgenstern [1944] and Leonard Savage [1954]. It treats actions as functions from possible states of the world to consequences, these being the complete outcomes of performing the action in question in that state of the world. All uncertainty is taken to be uncertainty about the state of the world and is quantified by a single probability function over the possible states, where the probabilities in question measure either objective risk or the decision maker’s degrees of belief (or a combination of the two). The relative value of consequences is represented by an interval-scaled utility function over these consequences. Decision-makers are advised to choose the action with maximum expected utility (EU); where the EU for an action is the sum of the probability-weighted utility of the possible consequences of the action.
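To fix ideas, the following minimal sketch (in Python) implements the standard model just described; the states, probabilities, and utilities are invented for illustration and are not drawn from any actual assessment.

    # Invented degrees of belief over possible states of the world.
    probability = {"low_warming": 0.5, "medium_warming": 0.3, "high_warming": 0.2}

    # Interval-scaled utilities of the consequence of each action in each state.
    utility = {
        "business_as_usual": {"low_warming": 10, "medium_warming": 2, "high_warming": -20},
        "strong_mitigation": {"low_warming": 6, "medium_warming": 5, "high_warming": 3},
    }

    def expected_utility(action):
        # EU(a) = sum over states s of P(s) * u(consequence of a in s)
        return sum(probability[s] * utility[action][s] for s in probability)

    best = max(utility, key=expected_utility)
    print({a: expected_utility(a) for a in utility}, "->", best)

On these made-up numbers, mitigation maximises expected utility (5.1 against 1.6) because its payoff varies comparatively little across states.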
It is our contention that this model is inadequate for many climate-oriented decisions, because it fails to properly represent the multidimensional nature and severity of the uncertainty that decision-makers face. To begin with, not all the uncertainty that climate decision-makers face is empirical uncertainty about the actual state of the world (state uncertainty). There may be further empirical uncertainty about what options are available to them and about what the consequences of exercising each option are in each respective state (option uncertainty). In what follows we use the term ‘empirical uncertainty’ to cover both state uncertainty and option uncertainty. Moreover, decision-makers face a non-empirical kind of uncertainty, namely ethical uncertainty, about what values to assign to possible consequences.
Let us now turn to empirical uncertainty. As noted above, standard decision theory holds that all empirical uncertainty can be represented by a probability function over the possible states of the world. There are two issues here. The first is that confining all empirical uncertainty to the state space is rather unnatural for complex decision problems such as those associated with climate change. Indeed, decision models are less convoluted if we allow the uncertainty about states to depend on the actions that might be taken (compare Richard Jeffrey’s [1965] expected utility theory), and if we also permit further uncertainty about what consequence will arise under each state, given the action taken (an aspect of option uncertainty). For example, consider a crude version of the mitigation decision problem faced by the global planner: it may be useful to depict the decision problem with a state-space partition in terms of possible increases in average global temperature over a given time period. In this case, our beliefs about the states (how likely they each are) would be conditional on the mitigation option taken. Moreover, for each respective mitigation option, the consequence arising in each of the states depends on further uncertain features of the world, for instance the extent to which, on average, regional conditions would be favourable to food production and whether social institutions would facilitate resilience in food production.
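The richer structure just described can be sketched as follows; the numbers are again purely illustrative. State probabilities are conditional on the option taken (in the spirit of Jeffrey’s theory), and each option-state pair determines a consequence, such as the condition of food production, only with some probability (option uncertainty).

    # P(temperature-rise state | mitigation option): invented values.
    p_state = {
        "mitigate": {"rise_2C": 0.6, "rise_4C": 0.4},
        "continue": {"rise_2C": 0.2, "rise_4C": 0.8},
    }
    # P(consequence | option, state): further option uncertainty, also invented.
    p_outcome = {
        ("mitigate", "rise_2C"): {"resilient_food": 0.8, "stressed_food": 0.2},
        ("mitigate", "rise_4C"): {"resilient_food": 0.5, "stressed_food": 0.5},
        ("continue", "rise_2C"): {"resilient_food": 0.7, "stressed_food": 0.3},
        ("continue", "rise_4C"): {"resilient_food": 0.2, "stressed_food": 0.8},
    }
    utility = {"resilient_food": 10, "stressed_food": -5}

    def expected_utility(option):
        # EU(a) = sum_s P(s|a) * sum_c P(c|a,s) * u(c)
        return sum(
            p_state[option][s]
            * sum(p * utility[c] for c, p in p_outcome[(option, s)].items())
            for s in p_state[option]
        )

    for option in p_state:
        print(option, expected_utility(option))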
The second issue is that using a precise probability function to represent uncertainty about states (and consequences) can misrepresent the severity of this uncertainty. For example, even if one assumes that the position of the scientific community may be reasonably well represented by a precise probability distribution over the state space, conditional on the mitigation option, precise probabilities over the possible levels of food production and other economic consequences, given this option and average global temperature rise, are less plausible. Note that the global social planner’s mitigation decision problem is typically analysed in terms of a so-called Integrated Assessment Model (IAM), which does indeed involve dependencies between mitigation strategies and both climate and economic variables. There is some disparity in the representation of empirical uncertainty: Nordhaus’s [2008] reliance on ‘best estimates’ for parameters like climate sensitivity can be compared with Stern’s [2007] use of ‘confidence intervals’. But these are relatively minor differences. Critics argue that all extant IAMs inadequately represent the uncertainty surrounding projections of future wealth under the status quo and alternative mitigation strategies [see Weitzman 2009, Frisch 2013, Stern 2013]. In particular, both Nordhaus [2008] and Stern [2007] controversially assume increasing wealth over time (a positive consumption growth rate) even in the status quo scenario where nothing is done to mitigate climate change.
Popular among philosophers is the use of sets of probability functions to represent severe uncertainty surrounding decision states and consequences, whether the uncertainty is due to evidential limitations or to evidential and expert disagreement. This is a minimal generalisation of the standard decision model, in the sense that probability measures still feature: roughly, the more severe the uncertainty, the more probability measures over the space of possibilities are needed to conjointly represent the epistemic situation (see, for example, Walley [1991]). Under maximal uncertainty all possibilities are on a par: each is effectively assigned the whole probability interval [0, 1]. Indeed, it is a strength of the imprecise probability representation that it generalises the two extreme cases, that is, the precise probabilistic as well as the possibilistic frameworks. (See Halpern [2003] for a thorough treatment of frameworks, both qualitative and quantitative, for representing uncertainty.) In some contexts, it may be suitable to weight the possible probability distributions in terms of plausibility (as required for some of the decision rules discussed below). The weighting approach may in fact match the IPCC’s representation of the uncertainty surrounding decision-relevant climate and economic variables. Indeed, an important question is whether and how the IPCC’s representation of uncertainty can be translated into an imprecise probabilistic framework, as discussed here and in the next section. An alternative to the aforementioned proposal is that the IPCC’s confidence and likelihood measures for relevant variables should be combined to form an unweighted imprecise set of probability distributions, or even a precise probability distribution, suitable for input into an appropriate decision model.
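The imprecise approach can be illustrated with a toy example in which a set of three probability functions stands in for the decision-maker’s epistemic state, and each option is assessed by the interval of expected utilities it receives across the set; all numbers are invented.

    # Three admissible probability functions over two coarse states.
    priors = [
        {"rise_2C": 0.7, "rise_4C": 0.3},
        {"rise_2C": 0.5, "rise_4C": 0.5},
        {"rise_2C": 0.3, "rise_4C": 0.7},
    ]
    utility = {
        "mitigate": {"rise_2C": 5, "rise_4C": 3},
        "continue": {"rise_2C": 8, "rise_4C": -10},
    }

    def eu_interval(action):
        # Expected utility under each admissible probability function.
        eus = [sum(p[s] * utility[action][s] for s in p) for p in priors]
        return (min(eus), max(eus))

    for a in utility:
        print(a, eu_interval(a))  # roughly (3.6, 4.4) and (-4.6, 2.6)

How to choose between options on the basis of such intervals is precisely the question taken up by the decision rules discussed in the next section.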
Decision-makers face uncertainty not only about what will or could happen, but also about what value to attach to these possibilities; in other words, they face ethical uncertainty. Such value or ethical uncertainty can have a number of different sources. The most important ones arise in connection with judgments about how to distribute the costs and benefits of mitigation and adaptation amongst different regions and countries, about how to take account of persons whose existence depends on what actions are chosen now, and about the degree to which future wellbeing should be discounted. (For discussion and debate about the ethical significance of various climate outcomes, particularly at the level of global rather than regional or national justice, see the articles in Gardiner et al.’s [2010] edited collection, Climate Ethics.) Of these, the last has been the subject of the most debate, because of the extent to which (the global planner’s) decisions about how drastically to cut carbon emissions are sensitive to the discount rate used in evaluating the possible outcomes of doing so (as highlighted in Broome [2008]). Discounting thus provides a good illustration of the importance of ethical uncertainty.
In many economic models, a discount rate is applied to a measure of total wellbeing at different points in time (the ‘pure rate of time preference’), with a positive rate implying that future wellbeing carries less weight in the evaluation of options than present wellbeing. Note that the overall ‘social discount rate’ in economic models is the sum of the pure rate of time preference and a second term pertaining to the discounting of goods or consumption rather than wellbeing per se. See Broome [1992] and Parfit [1984] for helpful discussions of the reasons for discounting goods that do not imply discounting wellbeing. (The consumption growth rate is an important component of this second discounting term that is subject to empirical uncertainty, as discussed above; see Greaves [2017] for an examination of all the assumptions underlying the ‘social discount rate’ and its role in the standard economic method for evaluating policy options.) Many philosophers regard any pure discounting of future wellbeing as completely unjustified from an objective point of view. This is not to deny that temporal location may nonetheless correlate with features of the distribution of wellbeing that are in fact ethically significant. If people will be better off in the future, for example, it is reasonable to be less concerned about their interests than about those of the present generation, much as one might prioritise the less well-off within a single generation. But the mere fact of a benefit occurring at a particular time cannot be relevant to its value, at least from an impartial perspective.
Economists do nonetheless often discount wellbeing in their policy-oriented models, although they disagree considerably about what pure rate of time preference should be used. One view, exemplified by the Stern Review and representing the impartial perspective described above, is that only a very small rate (in the order of 0.5%) is justified, and this on the grounds of the small probability of the extinction of the human population. Other economists, however, regard a partial rather than an impartial point of view as more appropriate in their models. A view along these lines, exemplified by Nordhaus [2007] and Arrow [1995a], is that the pure rate of time preference should be determined by the preferences of current people. But typical derivations of the average pure rate of time preference from observed market behaviour yield rates much higher than that used by Stern (around 3% by Nordhaus’s estimate). Although the use of such data has been criticised for providing an inadequate measure of people’s reasoned preferences (see, for example, Sen [1982], Drèze and Stern [1990], Broome [1992]), the point remains that any plausible method for determining the current generation’s attitude to the wellbeing of future generations is likely to yield a rate higher than that advocated by the Stern Review. To the extent that this debate about the ethical basis for discounting remains unresolved, there will be ethical uncertainty about the discount rate in climate policy decisions. This ethical uncertainty may be represented analogously to empirical uncertainty: by replacing the standard precise utility function with a set of possible utility functions.
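The practical import of this disagreement is easily demonstrated. The following back-of-the-envelope calculation assumes a Ramsey-style decomposition of the social discount rate (pure time preference plus a growth-sensitivity term); the parameter values are merely illustrative, loosely in the range of the Stern and Nordhaus positions.

    def present_value(benefit, years, delta, eta, growth):
        # Assumed decomposition: social discount rate r = delta + eta * growth,
        # where delta is the pure rate of time preference.
        r = delta + eta * growth
        return benefit / (1 + r) ** years

    benefit = 1_000_000  # a gain accruing 100 years from now, in arbitrary units
    for label, delta in [("Stern-like", 0.001), ("Nordhaus-like", 0.03)]:
        pv = present_value(benefit, years=100, delta=delta, eta=1.0, growth=0.013)
        print(f"{label} (delta = {delta:.1%}): value today = {pv:,.0f}")

With the same assumed growth rate, the two pure rates of time preference value the same future benefit at present-day figures that differ by more than an order of magnitude, which is why drastic near-term mitigation can look clearly worthwhile on Stern-like parameters and not on Nordhaus-like ones.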
8. Managing Uncertainty
How should a decision-maker choose amongst the courses of action available to her when she must make the choice under conditions of severe uncertainty? The problem that climate decision-makers face is that, in these situations, the precise utility and probability values required by standard EU theory may not be readily available.
There are, broadly speaking, three possible responses to this problem.
(1) The decision-maker can simply bite the bullet and try to settle on precise probability and utility judgements for the relevant contingencies. Orthodox decision theorists argue that rationality requires that decisions be made as if they maximise the decision-maker’s subjective expectation of benefit relative to her precise degrees of belief and values. Broome [2012, 129] gives an unflinching defence of this approach: “The lack of firm probabilities is not a reason to give up expected value theory […] Stick with expected value theory, since it is very well-founded, and do your best with probabilities and values.” This approach may seem rather bold, not least in the context of environmental decision-making. Weitzman [2009], for example, argues that whether or not one assigns non-negligible probability to catastrophic climate consequences radically changes the assessment of mitigation options. Moreover, in many circumstances there remains the question of how to follow Broome’s advice: How should the decision-maker settle, in a non-arbitrary way, on a precise opinion on decision-relevant issues in the face of an effectively ‘divided mind’? There are two interrelated strategies: she can deliberate further and/or aggregate conflicting views. The former aims for convergence in opinion, while the latter aims for an acceptable compromise in the face of persisting conflict. (For a discussion of deliberation see Fishkin and Luskin [2005]; for more on aggregation see, for example, Genest and Zidek [1986], Mongin [1995], Sen [1970], List and Puppe [2009]. There is a comparatively small formal literature on deliberation, a seminal contribution being Lehrer and Wagner’s [1981] model for updating probabilistic beliefs.)
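As an illustration of the aggregation strategy, the following sketch implements a linear opinion pool, one of the simplest compromise rules treated in the literature cited above: the pooled probability is a weighted average of the experts’ probabilities. The opinions and reliability weights are invented.

    experts = [
        {"rise_2C": 0.8, "rise_4C": 0.2},
        {"rise_2C": 0.4, "rise_4C": 0.6},
        {"rise_2C": 0.6, "rise_4C": 0.4},
    ]
    weights = [0.5, 0.3, 0.2]  # assumed reliability weights, summing to 1

    pooled = {
        state: sum(w * e[state] for w, e in zip(weights, experts))
        for state in experts[0]
    }
    print(pooled)  # roughly {'rise_2C': 0.64, 'rise_4C': 0.36}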
(2) The decision-maker can try to delay making a decision, or at least postpone parts of it, in the hope that her uncertainty will become manageable as more information becomes available, or as disagreements resolve themselves through a change in attitudes. The basic motive for delaying a decision is to maintain flexibility at zero cost (see Koopmans [1962], Kreps and Porteus [1978], Arrow [1995b]). Suppose that we must decide between building a cheap but low sea wall and a high but expensive one, and that the relative desirability of these two courses of action depends on unknown factors, such as the extent to which sea levels will rise. In this case it would be sensible to consider building a low wall first but leave open the possibility of raising it in the future. If this can be done at no additional cost, then it is clearly the best option. In many adaptation scenarios, the analogue of the ‘low sea wall’ may in fact be social-institutional measures that enable a delayed response to climate change, whatever the details of this change turn out to be. In many cases, however, the prospect of cost-free postponement of a decision (or part thereof) is simply a mirage, since delay often decreases rather than increases opportunities due to changes in the background environment. This is often true for climate-change adaptation decisions, not to mention mitigation decisions.
(3) The decision-maker can employ a decision rule different from that prescribed by EU theory, one that is much less demanding in terms of the information it requires. A great many different proposals for such rules exist in the literature, involving more or less radical departures from the orthodox theory and varying in the informational demands they make. It should be noted from the outset that there is one widely agreed rationality constraint on these non-standard decision rules: ‘(EU-)dominated options’ are not admissible choices. That is, if an option has lower expected utility than another option according to all permissible pairs of probability and utility functions, then the former, dominated option is not an admissible choice. This is a relatively minimal constraint, but it may well yield a unique choice of action in some decision scenarios. In such cases, the severe uncertainty is not in fact decision-relevant. For example, it may be the case that, from the global planner’s perspective, a given mitigation option is better than continuing with business as usual, whatever the uncertain details of the climate system. This is all the more plausible to the extent that the mitigation option counts as a ‘win-win’ strategy [Maslin and Austin 2012], that is, to the extent that it has other positive impacts, say, on air quality or energy security, regardless of mitigation results. In many more fine-grained or otherwise difficult decision contexts, however, the non-EU-dominance constraint may exclude only a few of the available options as choice-worthy.
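The following sketch shows how the non-dominance constraint filters options: an option is excluded if some rival has strictly higher expected utility under every admissible probability-utility pair. The inputs are invented; here the filter removes one option and leaves two admissible ones between which the constraint alone cannot decide.

    import itertools

    priors = [{"s1": 0.6, "s2": 0.4}, {"s1": 0.2, "s2": 0.8}]
    utils = [{
        "mitigate": {"s1": 5, "s2": 4},
        "business_as_usual": {"s1": 15, "s2": -8},
        "do_nothing_at_all": {"s1": 1, "s2": -9},
    }]  # a single utility function here; ethical uncertainty would add more
    pairs = [(p, u) for p in priors for u in utils]

    def eu(action, prob, util):
        return sum(prob[s] * util[action][s] for s in prob)

    def admissible(actions):
        # An action is dominated if some rival has strictly higher EU
        # under every admissible probability-utility pair.
        dominated = {
            a for a, b in itertools.permutations(actions, 2)
            if all(eu(a, p, u) < eu(b, p, u) for p, u in pairs)
        }
        return [a for a in actions if a not in dominated]

    print(admissible(list(utils[0])))  # 'do_nothing_at_all' is excluded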
A consideration that is often appealed to in order to further discriminate between options is caution. Indeed, this is an important facet of the popular but ill-defined Precautionary Principle. (The Precautionary Principle is referred to in the IPCC [2014] AR5 WGII report. See, for example, Gardiner [2006] and Steele [2006] for discussion of what the Precautionary Principle does or could stand for.) Cautious decision rules give more weight to the ‘down-side’ risks, that is, the possible negative implications of a choice of action. The Maxmin-EU rule, for example, recommends picking the action with the greatest minimum expected utility (see Gilboa and Schmeidler [1989], Walley [1991]). The rule is simple to use, but arguably much too cautious, paying no attention at all to the full spread of possible expected utilities. The α-Maxmin rule, by contrast, recommends taking the action with the greatest α-weighted sum of the minimum and maximum expected utilities associated with it. The relative weights for the minimum and maximum expected utilities can be thought of as reflecting either the decision-maker’s pessimism in the face of uncertainty or else their degree of caution (see Binmore [2009]). (For a comprehensive survey of non-standard decision theories for handling severe uncertainty in the economics literature, see Gilboa and Marinacci [2012]. For applications to climate policy see Heal and Millner [2014].)
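A sketch of these two cautious rules, with invented numbers. Conventions vary; here α weights the worst-case expected utility, so a higher α corresponds to a more cautious (or pessimistic) decision-maker.

    priors = [{"s1": 0.7, "s2": 0.3}, {"s1": 0.4, "s2": 0.6}, {"s1": 0.1, "s2": 0.9}]
    utility = {
        "cautious_option": {"s1": 4, "s2": 3},
        "risky_option": {"s1": 14, "s2": -6},
    }

    def eus(action):
        # Expected utility of the action under each admissible prior.
        return [sum(p[s] * utility[action][s] for s in p) for p in priors]

    def maxmin_eu(actions):
        # Gilboa-Schmeidler: maximise the minimum expected utility.
        return max(actions, key=lambda a: min(eus(a)))

    def alpha_maxmin(actions, alpha):
        # alpha weights the worst case, (1 - alpha) the best case.
        return max(actions, key=lambda a: alpha * min(eus(a)) + (1 - alpha) * max(eus(a)))

    acts = list(utility)
    print(maxmin_eu(acts))          # cautious_option
    print(alpha_maxmin(acts, 0.9))  # cautious_option (strongly cautious)
    print(alpha_maxmin(acts, 0.3))  # risky_option (mildly cautious)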
A more informationally demanding set of rules is given by those that draw on considerations of confidence and/or reliability. The thought here is that an agent is more or less confident about the various probability and utility functions that characterise her uncertainty. For example, when the estimates derive from different models or experts, the decision-maker may regard some models as better corroborated by the available evidence than others, or some experts as more reliable than others in their judgments. In such cases, it is reasonable, ceteris paribus, to favour actions that one is more confident will have beneficial consequences. One (rather sophisticated) way of doing this is to weight each of the expected utilities associated with an action in accordance with how confident you are about the judgements supporting them and then choose the action with the maximum confidence-weighted expected utility (see Klibanoff et al. [2005]). This rule is not very different from maximising expected utility, and indeed one could regard confidence weighting as an aggregation technique rather than an alternative decision rule. But considerations of confidence may be appealed to even when precise confidence weights cannot be provided. Gärdenfors and Sahlin [1982/1988], for example, suggest simply excluding from consideration any estimates that fall below a reliability threshold and then picking cautiously from the remainder. Similarly, Hill [2013] uses an ordinal measure of confidence that allows for stake-sensitive thresholds of reliability that can then be combined with varying levels of caution. This rule has the advantage of allowing decision-makers to draw on the confidence grading of scientific claims adopted by the IPCC (see Bradley et al. [2017]).
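The following sketch flattens the confidence-weighting idea into a simple weighted average of expected utilities, one per model or expert. Klibanoff et al.’s own rule additionally applies a transformation to the expected utilities, capturing ambiguity attitude, which is omitted here for simplicity; all inputs are invented.

    # EU of each action according to three sources (models or experts),
    # together with the decision-maker's confidence in each source.
    eu_by_source = {
        "adapt_now": [6.0, 4.0, 5.0],
        "wait_and_see": [8.0, -2.0, 1.0],
    }
    confidence = [0.6, 0.3, 0.1]  # assumed normalised confidence weights

    def confidence_weighted_eu(action):
        return sum(c * eu for c, eu in zip(confidence, eu_by_source[action]))

    best = max(eu_by_source, key=confidence_weighted_eu)
    print({a: confidence_weighted_eu(a) for a in eu_by_source}, "->", best)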
One might finally distinguish decision rules that are cautious in a slightly different way: they compare options in terms of ‘robustness’ to uncertainty, relative to a problem-specific satisfactory level of expected utility. Better options are those that are more assured of having an expected utility that is good enough, or regret-free, in the face of uncertainty. The ‘information-gap theory’ developed by Ben-Haim [2001] provides one formalisation of this basic idea that has proved popular in environmental management theory. Another prominent approach to robust decision-making is that developed by Lempert, Popper and Bankes [2003]. These two frameworks are compared in Hall et al. [2012]. Recall that the uncertainty in question may be multi-faceted, concerning probabilities of states/outcomes, or values of final outcomes. Most decision rules that appeal to robustness assume that a best estimate for the relevant variables is available, and then consider deviations away from this estimate. A robust option is one that has a satisfactory expected utility relative to a class of estimates that deviate from the best one to some degree; the wider the class in question, the more robust the option. Much depends on what expected utility level is deemed satisfactory. For mitigation decision-making, one salient satisfactory level of expected utility is that associated with a 50% chance of an average global temperature rise of 2 degrees Celsius or less. Note that one may otherwise interpret any such mitigation temperature target in a different way, namely as a constraint on what counts as a feasible option. In other words, mitigation options that do not meet the target are simply prohibited options, not suitable for consideration. For adaptation decisions, the satisfactory level would depend on local context, but roughly speaking, robust options are those that yield reasonable outcomes for all the inopportune climate scenarios that have non-negligible probability given some range of uncertainty. These are plausibly adaptation options that focus on resilience to any and all of the aforesaid climate scenarios, perhaps via the development of social institutions that can coordinate responses to variability and change. (Robust decision-making is endorsed, for example, by Dessai et al. [2009] and Wilby and Dessai [2010], who indeed associate this kind of decision rule with resilience strategies. See also Linkov et al. [2014] for discussion of resilience strategies vis-à-vis risk management.)
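A toy version of robustness-as-satisficing, loosely in the spirit of information-gap theory rather than a rendering of Ben-Haim’s formal apparatus: starting from a best-estimate probability, an option’s robustness is the largest deviation from that estimate under which its expected utility still clears a ‘good enough’ threshold. All numbers are invented.

    def eu(utils, p_s1):
        # Expected utility in a two-state problem, as a function of P(s1).
        return p_s1 * utils["s1"] + (1 - p_s1) * utils["s2"]

    def robustness(utils, best_estimate, threshold):
        # Largest radius (in steps of 0.01) such that EU clears the threshold
        # for every probability within that radius of the best estimate.
        # EU is linear in the probability, so checking the endpoints suffices.
        best_r = 0.0
        for i in range(101):
            r = i / 100
            lo = max(0.0, best_estimate - r)
            hi = min(1.0, best_estimate + r)
            if min(eu(utils, lo), eu(utils, hi)) >= threshold:
                best_r = r
            else:
                break
        return best_r

    options = {
        "resilient": {"s1": 5, "s2": 4},   # decent whatever happens
        "optimised": {"s1": 9, "s2": -4},  # excellent in s1, bad in s2
    }
    for name, utils in options.items():
        print(name, robustness(utils, best_estimate=0.7, threshold=3.0))

On these numbers the ‘resilient’ option is robust to any misestimate of the probability (robustness 1.0), while the ‘optimised’ option tolerates only a small one (robustness 0.16); this is the sense in which robustness-based reasoning tends to favour resilience strategies.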
9. Conclusion
This article reviewed, from a philosophy of science perspective, issues and questions that arise in connection with climate science. Most of these issues are the subject matter of ongoing research, and they indeed deserve further attention. Rather than repeating these points, we would like to mention a topic that has not received the attention that it deserves: the epistemic significance of consensus in the acceptance of results. As the controversy over the Cook et al. [2013] paper shows, many people do seem to think that the level of expert consensus is an important reason to believe in climate change, given that they themselves are not experts; and, conversely, attacking the consensus and sowing doubt is a classic tactic of the other side. The role of consensus in the context of climate change deserves more attention than it has received hitherto; for some discussion of consensus see de Melo-Martín and Intemann [2014].
10. Glossary
Attribution (of climate change): The process of evaluating the relative contributions of multiple causal factors to a change or event, with an assignment of statistical confidence.
Boundary conditions: Values of variables that affect the system but are not directly output by the model’s calculations.
Calibration: The process of estimating values of model parameters which are most consistent with observations.
Climate model: A representation of certain aspects of the climate system.
Detection (of climate change): The process of demonstrating that climate or a system affected by climate has changed in some defined statistical sense without providing a reason for that change.
Double counting: The use of data for both calibration and confirmation.
Expected utility (for an action): The sum of the probability-weighted utility of the possible consequences of the action.
External conditions (of the climate system): Conditions that influence the state of the Earth such as the amount of energy received from the sun.
Initial conditions: A mathematical description of the state of the climate system at the beginning of the period being simulated.
Internal variability: The phenomenon that climate variables such as temperature and precipitation would change over time due to the internal dynamics of the climate system even in the absence of changing external conditions.
Null hypothesis: The expected behaviour of the climate system in the absence of changing external influences.
Projection: The prediction of a climate model that is conditional on a certain forcing scenario.
Proxy data: Data for climate variables derived from observations of natural phenomena such as tree rings, ice cores, and ocean sediments.
Robustness (of a result): A result is robust if separate (ideally independent) models or lines of evidence lead to the same conclusion.
Use-novel data: Data that are used for confirmation and have not been used for calibration.
11. References and Further Reading
Adler C. E. and G. Hirsch Hadorn. (2014). The IPCC and treatment of uncertainties: topics and sources of dissensus. Wiley Interdisciplinary Reviews: Climate Change 5.5, 663-676.
Arrow K. J. (1995b). A Note on Freedom and Flexibility. Choice, Welfare and Development (eds. K. Basu, P. Pattanaik, and K. Suzumura), 7-15. Oxford: Oxford University Press.
Arrow K. J. (1995a). Discounting Climate Change: Planning for an Uncertain Future. Lecture given at Institut d’Économie Industrielle, Université des Sciences Sociales, Toulouse.
Aspinall W. (2010). A route to more tractable expert advice. Nature 463, 294-295.
Ben-Haim Y. (2001). Information-Gap Theory: Decisions Under Severe Uncertainty, 330 pp. London: Academic Press.
Betz G. (2009). What range of future scenarios should climate policy be based on? Modal falsificationism and its limitations. Philosophia Naturalis 46, 133-158.
Betz G. (2010). What’s the worst case?. Analyse und Kritik 32, 87-106.
Binmore K. (2009). Rational Decisions, 216 pp. Princeton, NJ: Princeton University Press.
Bishop C. H. and G. Abramowitz. (2013). Climate model dependence and the replicate Earth paradigm. Climate Dynamics 41, 885-900.
Bradley R., C. Helgeson and B. Hill. (2017). Climate Change Assessments: Confidence, Probability and Decision. Philosophy of Science 84.3, 500-522.
Bradley R., C. Helgeson and B. Hill. (2018). Combining Probability with Qualitative Degree-of-Certainty Assessment. Climatic Change 149.3-4, 517-525.
Broome J. (2012). Climate Matters: Ethics in a Warming World, 192 pp. New York: Norton.
Broome J. (1992). Counting the Cost of Global Warming, 147 pp. Cambridge: The White Horse Press.
Broome J. (2008). The Ethics of Climate Change. Scientific American 298, 96-102.
Budescu, D. V., H. Por, S. B. Broomell and M. Smithson. (2014). The interpretation of IPCC probabilistic statements around the world. Nature Climate Change 4, 508-512.
Cohn T. A. and H. F. Lins. (2005). Nature’s style: naturally trendy. Geophysical Research Letters 32, L23402.
Cook J. et al. (2013). Quantifying the consensus on anthropogenic global warming in the scientific literature. Environmental Research Letters 8, 1-7.
Daron J. D. and D. Stainforth. (2013). On predicting climate under climate change. Environmental Research Letters 8, 1-8.
de Melo-Martín I., and K. Intemann (2014). Who’s afraid of dissent? Addressing concerns about undermining scientific consensus in public policy developments. Perspectives on Science 22.4, 593-615.
Dessai S. et al. (2009). Do We Need Better Predictions to Adapt to a Changing Climate? Eos 90.13, 111-112.
Dessler A. (2011). Introduction to Modern Climate Change. Cambridge: Cambridge University Press.
Drèze J. and N. Stern. (1990). Policy reform, shadow prices, and market prices. Journal of Public Economics 42.1, 1-45.
Douglas H. (2009). Science, Policy, and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
Fishkin J. S., and R. C. Luskin. (2005). Experimenting with a Democratic Ideal: Deliberative Polling and Public Opinion. Acta Politica 40, 284-298.
Frank D., J. Esper, E. Zorita and R. Wilson. (2010). A noodle, hockey stick, and spaghetti plate: A perspective on high-resolution paleoclimatology. Wiley Interdisciplinary Reviews: Climate Change 1.4, 507-516.
Frigg R. P., D. A. Stainforth and L. A. Smith. (2013). The Myopia of Imperfect Climate Models: The Case of UKCP09. Philosophy of Science 80.5, 886-897.
Frigg R. P., D. A. Stainforth and L. A. Smith. (2015). An Assessment of the Foundational Assumptions in High-Resolution Climate Projections: The Case of UKCP09. Draft under review.
Frigg R. P., S. Bradley, H. Du and L. A. Smith. (2014a). Laplace’s Demon and the Adventures of His Apprentices. Philosophy of Science 81.1, 31-59.
Frisch M. (2013). Modeling Climate Policies: A Critical Look at Integrated Assessment Models. Philosophy and Technology 26, 117-137.
Frisch, M. (2015). Tuning climate models, predictivism, and the problem of old evidence. European Journal for Philosophy of Science 5.2, 171-190.
Gärdenfors P. and N.-E. Sahlin. [1982] (1988). Unreliable probabilities, risk taking, and decision making. Decision, Probability and Utility, (eds. P. Gärdenfors and N.-E. Sahlin), 313-334. Cambridge: Pressa dell'Università di Cambridge.
Gardiner S. (2006). A Core Precautionary Principle. The Journal of Political Philosophy 14.1, 33-60.
Gardiner S., S. Caney, D. Jamieson and H. Shue (eds.). (2010). Climate Ethics: Essential Readings. Oxford: Oxford University Press.
Genest C. and J. V. Zidek. (1986). Combining Probability Distributions: A Critique and Annotated Bibliography. Statistical Science 1.1, 113-135.
Gilboa I. and M. Marinacci. (2012). Ambiguity and the Bayesian Paradigm. Advances in Economics and Econometrics: Theory and Applications, Tenth World Congress of the Econometric Society (eds. D. Acemoglu, M. Arellano and E. Dekel), 179-242. Cambridge: Cambridge University Press.
Gilboa I. and D. Schmeidler. (1989). Maxmin expected utility with non-unique prior. Journal of Mathematical Economics 18, 141-153.
Greaves, H. (2017). Discounting for public policy: A survey. Economics and Philosophy 33.3, 391-439.
Hall J. W., R. J. Lempert, K. Keller, A. Hackbarth, C. Mijere and D. J. McInerney. (2012). Robust Climate Policies Under Uncertainty: A Comparison of Robust Decision-Making and Info-Gap Methods. Risk Analysis 32.10, 1657-1672.
Halpern J. Y. (2003). Reasoning About Uncertainty, 483 pp. Cambridge, MA: MIT Press.
Heal G. and A. Millner. (2014). Uncertainty and Decision Making in Climate Change Economics. Review of Environmental Economics and Policy 8, 120-137.
Hegerl G. C., O. Hoegh-Guldberg, G. Casassa, M. P. Hoerling, R. S. Kovats, C. Parmesan, D. W. Pierce and P. A. Stott. (2010). Good Practice Guidance Paper on Detection and Attribution Related to Anthropogenic Climate Change. Meeting Report of the Intergovernmental Panel on Climate Change Expert Meeting on Detection and Attribution of Anthropogenic Climate Change (eds. T. F. Stocker, C. B. Field, D. Qin, V. Barros, G.-K. Plattner, M. Tignor, P. M. Midgley and K. L. Ebi). Bern, Switzerland: IPCC Working Group I Technical Support Unit, University of Bern.
Held I. M. (2005). The Gap between Simulation and Understanding in Climate Modeling. Bulletin of the American Meteorological Society 80, 1609-1614.
Hill B. (2013). Confidence and Decision. Games and Economic Behavior 82, 675-692.
Hulme M., S. Dessai, I. Lorenzoni and D. Nelson. (2009). Unstable Climates: exploring the statistical and social constructions of climate. Geoforum 40, 197-206.
IPCC. (2013). Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge and New York: Cambridge University Press.
IPCC. (2014). Climate Change 2014: Impacts, Adaptation, and Vulnerability. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge and New York: Cambridge University Press.
Jeffrey R. (1965). The Logic of Decision, 231 pp. Chicago: University of Chicago Press.
Jun M., R. Knutti and D. W. Nychka. (2008). Local eigenvalue analysis of CMIP3 climate model errors. Tellus A 60.5, 992-1000.
Katzav J. (2013). Severe testing of climate change hypotheses. Studies in History and Philosophy of Modern Physics 44.4, 433-441.
Katzav J. (2014). The epistemology of climate models and some of its implications for climate science and the philosophy of science. Studies in History and Philosophy of Modern Physics 46, 228-238.
Katzav, J. & W. S. Parker (2018). Issues in the Theoretical Foundations of Climate Science. Studies in History and Philosophy of Modern Physics 63, 141-149.
Klibanoff P., M. Marinacci and S. Mukerji. (2005). A smooth model of decision making under ambiguity. Econometrica 73, 1849-1892.
Klintman M. (2019). Knowledge Resistance: How We Avoid Insight From Others. Manchester: Manchester University Press.
Knutti R., R. Furrer, C. Tebaldi, J. Cermak and G. A. Meehl. (2010). Challenges in Combining Projections from Multiple Climate Models. Journal of Climate 23.10, 2739-2758.
Koopmans T. C. (1962). On flexibility of future preference. Cowles Foundation for Research in Economics, Yale University, Cowles Foundation Discussion Papers 150.
Kreps D. M. and E. L. Porteus. (1978). Temporal resolution of uncertainty and dynamic choice theory. Econometrica 46.1, 185-200.
Lahsen M. (2005). Seductive Simulations? Uncertainty Distribution Around Climate Models. Social Studies of Science 35.6, 895-922.
Lehrer K. and C. Wagner. (1981). Rational Consensus in Science and Society, 165 pp. Dordrecht: Reidel.
Lempert R. J., S. W. Popper and S. C. Bankes. (2003). Shaping the Next One Hundred Years: New Methods for Quantitative Long-Term Policy Analysis, 208 pp. Santa Monica, CA: RAND Corporation, MR-1626-RPC.
Lenhard J. and E. Winsberg. (2010). Holism, entrenchment, and the future of climate model pluralism. Studies in History and Philosophy of Modern Physics 41, 253-262.
Linkov I. et al. (2014). Changing the resilience program. Nature Climate Change 4, 407-409.
List C. and C. Puppe. (2009). Judgment aggregation: a survey. Oxford Handbook of Rational and Social Choice (eds. P. Anand, C. Puppe and P. Pattanaik). Oxford: Oxford University Press.
Lorenz E. (1995). Climate is what you expect. Prepared for publication by NCAR. Unpublished, 1-33.
Lloyd E. UN. (2010). Confirmation and robustness of climate models. Philosophy of Science 77, 971-984.
Lloyd E. UN. (2015). Model robustness as a confirmatory virtue: The case of climate science. Studies in History and Philosophy of Science 49, 58-68.
Lloyd E. UN. (2009). Varieties of Support and Confirmation of Climate Models. Proceedings of the Aristotelian Society Supplementary Volume LXXXIII, 217-236.
Lloyd E. A. and N. Oreskes. (2019). Climate Change Attribution: When Does it Make Sense to Add Methods? Epistemology & Philosophy of Science 56.1, 185-201.
Lusk, G. (2017). The Social Utility of Event Attribution: Liability, Adaptation, and Justice-Based Loss and Damage. Climatic Change 143, 201–12.
Mach K. J., M. D. Mastrandrea, P. T. Freeman and C. B. Field. (2017). Unleashing Expert Judgment in Assessment. Global Environmental Change 44, 1-14.
Mann M. E., R. S. Bradley and M.K. Hughes (1998). Global-scale temperature patterns and climate forcing over the past six centuries. Nature 392, 779-787.
Maslin M. and P. Austin. (2012). Climate models at their limit? Nature 486, 183-184.
Mastrandrea M. D., K. J. Mach, G.-K. Plattner, O. Edenhofer, T. F. Stocker, C. B. Field, K. L. Ebi and P. R. Matschoss. (2011). The IPCC AR5 guidance note on consistent treatment of uncertainties: a common approach across the working groups. Climatic Change 108, 675-691.
McGuffie K. and A. Henderson-Sellers. (2005). A Climate Modelling Primer, 217 pp. New Jersey: Wiley.
McIntyre S. and R. McKitrick. (2003). Corrections to the Mann et al. (1998) proxy data base and northern hemispheric average temperature series. Energy & Environment 14.6, 751-771.
Mongin P. (1995). Consistent Bayesian Aggregation. Journal of Economic Theory 66.2, 313-51.
Nordhaus W. D. (2007). A Review of the Stern Review on the Economics of Climate Change. Journal of Economic Literature 45.3, 686-702.
Nordhaus W. D. (2008). A Question of Balance, 366 pp. New Haven, CT: Yale University Press.
Oreskes N. and E. M. Conway. (2012). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming, 355 pp. New York: Bloomsbury Press.
Oreskes N. (2007). The Scientific Consensus on Climate Change: How Do We Know We’re Not Wrong? Climate Change: What It Means for Us, Our Children, and Our Grandchildren (eds. J. F. C. DiMento and P. Doughman), 65-99. Cambridge, MA: MIT Press.
Oreskes N., K. Shrader-Frechette and K. Belitz. (1994). Verification, validation, and confirmation of numerical models in the Earth Sciences. Science 263.5147, 641-646.
Parfit D. (1984). Reasons and Persons, 560 pp. Oxford: Clarendon Press.
Parker W. S. (2009). Confirmation and Adequacy for Purpose in Climate Modelling. Aristotelian Society Supplementary Volume 83.1, 233-249.
Parker W. S. (2010). Comparative Process Tracing and Climate Change Fingerprints. Philosophy of Science 77, 1083-1095.
Parker W. S. (2011). When Climate Models Agree: The Significance of Robust Model Predictions. Philosophy of Science 78.4, 579-600.
Parker W. S. (2013). Ensemble modeling, uncertainty and robust predictions. Wiley Interdisciplinary Reviews: Climate Change 4.3, 213-223.
Parker W. S. (2014). Values and Uncertainties in Climate Prediction, Revisited. Studies in History and Philosophy of Science Part A 46, 24-30.
Petersen A. C. (2012). Simulating Nature: A Philosophical Study of Computer-Simulation Uncertainties and Their Role in Climate Science and Policy Advice, 210 pp. Boca Raton, Florida: CRC Press.
Resnik M. (1987). Choices: An Introduction to Decision Theory, 221 pp. Minneapolis: University of Minnesota Press.
Savage L. J. (1954). The Foundations of Statistics, 310 pp. New York: John Wiley & Sons.
Sen A. (1982). Approaches to the choice of discount rate for social benefit–cost analysis. Discounting for Time and Risk in Energy Policy (ed. R. C. Lind), 325-353. Washington, DC: Resources for the Future.
Sen A. (1970). Collective Choice and Social Welfare. San Francisco: Holden-Day.
Sexton D. M. H., J. M. Murphy, M. Collins and M. J. Webb. (2012). Multivariate Probabilistic Projections Using Imperfect Climate Models. Part I: Outline of Methodology. Climate Dynamics 38, 2513-2542.
Sexton D. M. H., e J. M. Murphy. (2012). Multivariate Probabilistic Projections Using Imperfect Climate Models. Part II: Robustness of Methodological Choices and Consequences for Climate Sensitivity. Climate Dynamics 38, 2543-2558.
Shackley S., P. Young, S. Parkinson and B. Wynne. (1998). Uncertainty, Complexity and Concepts of Good Science in Climate Change Modelling: Are GCMs the Best Tools? Climatic Change 38, 159-205.
Smith L. A. and N. Stern. (2011). Uncertainty in science and its role in climate policy. Philosophical Transactions of the Royal Society A 369.1956, 4818-4841.
Spiegelhalter D. J. and H. Riesch. (2011). Don’t know, can’t know: embracing deeper uncertainties when analysing risks. Philosophical Transactions of the Royal Society A 369, 4730-4750.
Stainforth D. A., M. R. Allen, E. R. Tredger and L. A. Smith. (2007a). Confidence, Uncertainty and Decision-support Relevance in Climate Predictions. Philosophical Transactions of the Royal Society A 365, 2145-2161.
Stainforth D. A., T. E. Downing, R. Washington, A. Lopez and M. New. (2007b). Issues in the Interpretation of Climate Model Ensembles to Inform Decisions. Philosophical Transactions of the Royal Society A 365, 2163-2177.
Steele K. (2006). The precautionary principle: a new approach to public decision-making?. Law Probability and Risk 5, 19-31.
Steele K. and C. Werndl. (2013). Climate Models, Confirmation and Calibration. The British Journal for the Philosophy of Science 64, 609-635.
Steele K. and C. Werndl. (2015, forthcoming). The Need for a More Nuanced Picture on Use-Novelty and Double-Counting. Philosophy of Science.
Stern N. (2007). The Economics of Climate Change: The Stern Review, 692 pp. Cambridge: Pressa dell'Università di Cambridge.
Stern N. (2013). The Structure of Economic Modeling of the Potential Impacts of Climate Change: Grafting Gross Underestimation of Risk onto Already Narrow Scientific Models. Journal of Economic Literature 51.3, 838-859.
Thompson E., R. Frigg and C. Helgeson. (2016). Expert Judgment for Climate Change Adaptation. Philosophy of Science 83.5, 1110-1121.
von Neumann J. and O. Morgenstern. (1944). Theory of Games and Economic Behavior, 739 pp. Princeton: Princeton University Press.
Walley P. (1991). Statistical Reasoning with Imprecise Probabilities, 706 pp. New York: Chapman and Hall.
Weitzman M. l. (2009). On Modeling and Interpreting the Economics of Catastrophic Climate Change. The Review of Economics and Statistics 91.1, 1-19.
Werndl C. (2015). On defining climate and climate change. The British Journal for the Philosophy of Science, doi:10.1093/bjps/axu48.
Wilby R. l. and S. Dessai. (2010). Robust adaptation to climate change. Weather 65.7, 180-185.
Weisberg M. (2006). Robustness Analysis. Philosophy of Science 73, 730-742.
Winsberg E. (2012). Values and Uncertainties in the Predictions of Global Climate Models. Kennedy Institute of Ethics Journal 22, 111-127.
Winsberg E. (2018). Philosophy and Climate Science. Cambridge: Cambridge University Press.
Winsberg E. and W. M. Goodwin. (2016). The Adventures of Climate Science in the Sweet Land of Idle Arguments. Studies in History and Philosophy of Modern Physics 54, 9-17.
Worrall J. (2010). Error, Tests, and Theory Confirmation. Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science (eds. D. G. Mayo and A. Spanos), 125-154. Cambridge: Cambridge University Press.
Wüthrich, N. (2017). Conceptualizing Uncertainty: An Assessment of the Uncertainty Framework of the Intergovernmental Panel on Climate Change. In EPSA15 Selected Papers, 95-107. Cham: Springer.
About the Authors
Richard Bradley
London School of Economics and Political Science
UK
Roman Frigg
London School of Economics and Political Science
UK
Katie Steele
Australian National University
Australia
Erica Thompson
London School of Economics and Political Science
UK
Charlotte Werndl
University of Salzburg
Austria
and
London School of Economics and Political Science
UK