The knowledge that is used in IPCC assessments predominantly stems from a wide variety of academic disciplines. Given the high scientific and political profile of the IPCC, the production of knowledge in these disciplines is shaped by the existence and dynamics of the IPCC assessment process. In some cases, the dynamics between academic disciplines and the IPCC are characterised by positive feedback loops, in which the production of knowledge is structured and programmed by the IPCC; the resulting findings then receive a preeminent role in later IPCC assessments, and so the cycle continues. It is important to reflect critically on these dynamics, in order to determine whether visions of climate change’s past, present, and future – for example, pathways for the climate-change problem and its potential solutions, as far as they exist – have been unduly constrained by the IPCC process. The IPCC runs the risk of unreflexively foregrounding some scientific and policy approaches at the expense of others.
This chapter deals with the implications of uncertainty in the practice of climate modelling for communicating model-based findings to decision-makers, particularly high-resolution predictions intended to inform decision-making on adaptation to climate change. Our general claim is that methodological reflections on uncertainty in scientific practices should provide guidance on how their results can be used more responsibly in decision support. In the case of decisions that need to be made to adapt to climate change, societal actors, both public and private, are confronted with deep uncertainty. In fact, it has been argued that some of the questions these actors may ask ‘cannot be answered by science’. In this chapter, the notion of ‘reliability’ is examined critically, in particular the manner(s) in which the reliability of model-based high-resolution climate predictions is communicated. A broader discussion of these issues can be found in the chapter by Beck, in this volume.
Findings can be considered ‘reliable’ in many different ways. Often only a statistical notion of reliability is implied, but in this chapter we consider wider variations on the meaning of ‘reliability’, some more relevant to decision support than the mere uncertainty in a particular calculation.
Assessment of error and uncertainty is a vital component of both natural and social science. Empirical research involves dealing with all kinds of errors and uncertainties, yet there is significant variance in how such results are dealt with. Contributors to this volume present case studies of research practices across a wide spectrum of scientific fields, including experimental physics, econometrics, environmental science, climate science, engineering, measurement science and statistics. They compare methodologies and present the ingredients needed for an overarching framework applicable to all.
Policy decisions in many areas involving science, including the environment and public health, are both complex and contested. Typically there are no facts that entail a unique correct policy. Furthermore, political decisions on these problems need to be made before conclusive scientific evidence is available. Decision stakes are high: the impacts of wrong decisions based on the available limited knowledge can be huge, and actors disagree on the values that should guide the decision-making. The available knowledge bases are typically characterised by imperfect understanding (and imperfect reduction into models) of the complex systems involved. Models, scenarios and assumptions dominate assessment of these problems, and many (hidden) value loadings reside in problem frames, indicators chosen and assumptions made.
The evidence that is embodied in scientific policy advice under such post-normal (Funtowicz and Ravetz 1993) conditions requires quality assessment. Advice should be relevant to the policy issue, scientifically tenable and robust under societal scrutiny. Governmental and intergovernmental agencies that inform policy and the public about complex risks increasingly recognise that uncertainty and disagreement can no longer be suppressed or denied, but need to be dealt with in a transparent and effective manner. In response to emerging needs, several institutions that interface science and policy have adopted knowledge quality assessment approaches, where knowledge refers to any information that is accepted into a debate (UK Strategy Unit 2002; EPA 2003; MNP/UU 2003; IPCC 2005).