Detailed guidance

Appendix B: Glossary


Aggregation

Aggregation is the joining of more or less equivalent elements. Aggregation can take place across different scale dimensions, leading to different resolutions on these scales. The most relevant scale dimensions in environmental assessment are: temporal scale (e.g. diurnal; seasonal; annual; century), spatial scale (e.g. local; regional; continental; global), and systemic scale (e.g. individual plants; ecosystems; terrestrial biosphere).

Aggregation error

Aggregation error arises from the scaling up or scaling down of variables to meet a required aggregation level. The scaling-up or scaling-down relations are, especially for non-additive variables, to a certain degree arbitrary.
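
For a nonlinear (non-additive) relation, aggregating the inputs before applying the relation gives a different answer than applying the relation and then aggregating; the gap between the two is an aggregation error. A minimal sketch, with an invented quadratic response and made-up local values:

```python
import statistics

# Hypothetical nonlinear response (illustrative only), e.g. an impact
# rising with the square of local temperature.
def response(t):
    return t ** 2

local_temps = [10.0, 20.0, 30.0]            # fine-scale values

# Aggregate first, then apply the relation (coarse model):
coarse = response(statistics.mean(local_temps))           # 20**2 = 400

# Apply the relation first, then aggregate (fine model):
fine = statistics.mean(response(t) for t in local_temps)  # (100+400+900)/3

aggregation_error = fine - coarse
print(coarse, fine, aggregation_error)
```

For a convex relation like this one, the discrepancy (Jensen's inequality) grows with the spread of the fine-scale values, which is why scaling relations for non-additive variables are to some degree arbitrary.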


Assessment

Assessment is a process that connects knowledge and action (in both directions) regarding a problem. Assessment comprises the analysis and review of knowledge for the purpose of helping someone in a position of responsibility to evaluate possible actions or think about a problem. Assessment usually does not mean doing new research. Assessment means assembling, summarizing, organizing, interpreting, and possibly reconciling pieces of existing knowledge, and communicating them so that they are relevant and helpful to an intelligent but inexpert policy-maker or other actor(s) involved in the problem at hand.

Behavioural variability

One of the sources of variability distinguished in the PRIMA typology (Van Asselt, 2000). It refers to 'non-rational' behaviour, discrepancies between what people say and what they actually do (e.g. cognitive dissonance), or to deviations from 'standard' behavioural patterns (micro-level behaviour).


Bias

A constant or systematic deviation as opposed to a random error. It appears as a persistent over- or under-estimation of the quantity measured, calculated or estimated. See also the related concepts cognitive bias, disciplinary bias, motivational bias and value ladenness.

Cognitive bias

Experts and lay people alike are subject to a variety of potential mental errors or shortcomings caused by the human mind's simplified and partly subconscious information processing strategies. It is important to distinguish these so-called cognitive biases from other sources of bias, such as cultural bias, organizational bias, or bias resulting from one's own self-interest (from Psychology of Intelligence Analysis, R.J. Heuer, 1999). Some of the sources of cognitive bias are: overconfidence, anchoring, availability, representativeness, satisficing, unstated assumptions, and coherence. A fuller description of sources of cognitive bias in expert and lay elicitation processes is available in Dawes (1988).

Cognitive bias: Anchoring and adjustment

Assessments are often unduly weighted toward the conventional value, the first value given, or the findings of previous assessments. Thus, they are said to be 'anchored' and 'adjusted' to this value.

Cognitive bias: Availability

This bias refers to the tendency to give too much weight to readily available data or recent experience (which may not be representative of the required data) in making assessments.

Cognitive bias: Coherence

Events are considered more likely when many options/scenarios can be envisaged that lead to the event, or if some options/scenarios are particularly coherent. Conversely, events are considered unlikely when options/scenarios cannot be imagined. Thus, probabilities tend to be assigned more on the basis of one's ability to tell coherent stories than on the basis of the intrinsic probability of occurrence.

Cognitive bias: Overconfidence

Experts tend to overestimate their ability to make quantitative judgements. This often manifests itself in an estimate of a quantity whose uncertainty range does not even encompass the true value of the quantity. This is difficult for an individual to guard against, but a general awareness of the tendency can be important.

Cognitive bias: Representativeness

This relates to the tendency to place more confidence in a single piece of information that is considered representative of a process than in a larger body of more generalized information.

Cognitive bias: Satisficing

This refers to the tendency to search through a limited number of solution options and to pick from among them. Comprehensiveness is sacrificed for expediency in this case.

Cognitive bias: Unstated assumptions

A subject's responses are typically conditional on various unstated assumptions. The effect of these assumptions is often to constrain the degree of uncertainty reflected in the resulting estimate of a quantity. Stating assumptions explicitly can help reflect more of a subject's total uncertainty.

Conflicting evidence

One of the categories on the spectrum of uncertainty due to 'lack of knowledge' as distinguished in the PRIMA typology (Van Asselt, 2000). Conflicting evidence occurs if different data sets/observations are available but allow room for competing interpretations: 'We don't know what we know'.

Context validation

Context validity refers to the probability that an estimate has approximated the true but unknown range of (causally) relevant aspects and rival hypotheses present in a particular policy context. Context validation is thus the minimization of the probability that one overlooks something of relevance. It can be performed by a participatory bottom-up process eliciting from stakeholders those aspects considered relevant, as well as rival hypotheses on underlying causal relations and rival problem definitions and framings. See Dunn, 1998, 2000.

Cultural theory

Also known as 'grid-group cultural theory' or the theory of socio-cultural viability, cultural theory has been developed over the past thirty years by the British anthropologists Mary Douglas, Michael Thompson, and Steve Rayner, the American political scientists Aaron Wildavsky and Richard Ellis, and many others. The theoretical framework was originally designed by Mary Douglas, an author interested in rituals, symbols, witchcraft, and food and drinking habits, to deal with cultural diversity in remote places. Her aim was to show the relevance of anthropology for 'modern' societies, and her neo-Durkheimian approach has indeed emerged as a useful tool in many fields of social science. To date, the theory has been used most extensively in anthropology and political science, especially in policy analysis and in the interdisciplinary field of risk analysis (taken from the Grid-Group Cultural Theory website).

Cultural theory employs two axes (dimensions) for describing social formations and cultural diversity, 'group' and 'grid'; combining 'high' and 'low' values of these yields the types 'hierarchist', 'egalitarian', 'fatalist' and 'individualist'. Michael Thompson has added a fifth type, residing in the middle, called the 'hermit'. In recent applications the 'fatalist' has been eliminated from the scheme. Recently, Ravetz (2001) proposed a modification of the scheme using as dimensions of social variation style of action (isolated/collective) and location (insider/outsider), yielding the types 'Administrator', 'Business man', 'Campaigner', and 'Survivor' (ABCS).

Disciplinary bias

Science tends to be organized into different disciplines. Disciplines develop somewhat distinctive traditions over time, tending to develop their own characteristic manner of viewing problems, drawing problem boundaries, and selecting objects of inquiry. These differences in perspective translate into forms of bias in viewing problems.


Epistemology

The theory of knowledge.

Extended facts

Knowledge from sources other than science, including local knowledge, citizens' surveys, anecdotal information, and the results of investigative journalism. Inclusion of extended facts in environmental assessment is one of the key principles of Post-Normal Science (Funtowicz and Ravetz, 1993).

Extended peer communities

Participants in the quality assurance processes of knowledge production and assessment in Post-Normal Science, including all stakeholders engaged in the management of the problem at hand (Funtowicz and Ravetz, 1993).


Extrapolation

The inference of unknown data from known data, for instance future data from past data, by analyzing trends and making assumptions.
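
A minimal sketch of trend extrapolation: fit a straight line to past (invented) observations by least squares and project it beyond the observed range. The projection is only as good as the assumption, stated in the comment, that the trend continues:

```python
# Hypothetical observations, invented for illustration.
years = [2000, 2001, 2002, 2003, 2004]
values = [10.0, 12.1, 13.9, 16.2, 18.0]

n = len(years)
mx = sum(years) / n
my = sum(values) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(years, values))
         / sum((x - mx) ** 2 for x in years))
intercept = my - slope * mx

def extrapolate(year):
    # Assumes the past linear trend continues -- the key (and often
    # questionable) assumption behind any extrapolation.
    return intercept + slope * year

print(extrapolate(2010))
```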


Facilitator

A person whose role is to facilitate a structured group process (for instance participatory integrated assessment, i.e. integrated assessment in which public participation by stakeholders is an explicit and crucial part of the whole assessment process) in such a way that the aim of that group process is met.

Focus group

A well-established research technique applied since the 1940s in the social sciences, marketing, evaluation, and decision-making research. Generally, a group of 5 to 12 people is interviewed by a moderator on a specific, focused subject. With the focus group technique the researcher can obtain at the same time information from various individuals and the interactions amongst them. To a certain extent such artificial settings simulate real situations in which people communicate with each other.

Functional error

Functional error arises from uncertainty about the form and nature of the process represented by the model. Uncertainty about model structure frequently reflects disagreement between experts about the underlying causal mechanisms.


GIGO

Literally, Garbage In, Garbage Out, typically referring to the fact that outputs from models are, at best, only as good as the inputs. See e.g. Stirling, 2000. A variant formulation is 'Garbage In, Gospel Out', referring to a tendency to put faith in computer outputs regardless of the quality of the inputs.

Global sensitivity analysis

Global sensitivity analysis is a combination of sensitivity and uncertainty analysis in which "a neighborhood of alternative assumptions is selected and the corresponding interval of inferences is identified. Conclusions are judged to be sturdy only if the neighborhood of assumptions is wide enough to be credible and the corresponding interval of inferences is narrow enough to be useful". Leamer (1990) quoted in Saltelli (2002).
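
The quoted idea can be sketched as follows: run a model for every combination of assumptions in the chosen neighborhood and report the resulting interval of inferences. Model and assumption ranges below are hypothetical:

```python
import itertools

# Hypothetical model: an impact after ten years of compound growth,
# scaled by a damage factor. Purely illustrative.
def model(growth_rate, damage_factor):
    return 100 * (1 + growth_rate) ** 10 * damage_factor

# The "neighborhood of alternative assumptions":
growth_rates = [0.01, 0.02, 0.03]
damage_factors = [0.8, 1.0, 1.2]

# The "corresponding interval of inferences":
inferences = [model(g, d) for g, d in
              itertools.product(growth_rates, damage_factors)]
interval = (min(inferences), max(inferences))
print(interval)
```

In Leamer's terms, the conclusion is sturdy only if this assumption neighborhood is wide enough to be credible while the resulting interval stays narrow enough to be useful.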

Hardware error

Hardware errors in model outcomes arise from bugs in hardware. An obvious example is the bug in the early version of the Pentium processor for personal computers, which gave rise to numerical error in a broad range of floating-point calculations performed on that processor. The processor had already been widely used worldwide for quite some time when the bug was discovered. It cannot be ruled out that hardware used for environmental models contains undiscovered bugs that might affect the outcomes, although it is unlikely that they will have a significant influence on the models' performance. To secure against hardware error, one can test critical model output for reproducibility on a computer with a different processor before the critical output enters the policy debate.


Hedging

Hedging is a quantitative technique for the iterative handling of uncertainties in decision making. It is used, for instance, to deal with risks in finance and in corporate R&D decisions. For example, a given future scenario may be considered so probable that all decisions which are made assume that the forecast is correct. However, if these assumptions are wrong, there may be no flexibility to meet other outcomes. Thus, rather than solely developing a course of action for one particular future scenario, business strategic planners prefer to tailor a hedging strategy that will allow adaptation to a number of possible outcomes. Applied to climate change, it could for example be used by stakeholders from industry to reduce the risks of investing in energy technology, pending governmental measures on ecotax. Anticipating a range of measures from government to reduce greenhouse gas emissions, a branch of industry or a company could estimate the cost-effectiveness of investing or delaying investments in more advanced energy technology.


Ignorance

The deepest of the three sorts of uncertainty distinguished by Funtowicz and Ravetz (1990): inexactness, unreliability and border with ignorance, which refer to technical, methodological and epistemic aspects of uncertainty. In terms of the NUSAP notational system for describing uncertainty in information (data, model outcomes etc.), the technical uncertainty (inexactness) in our knowledge of the behavior of the 'data' is expressed by the spread (S), while the methodological uncertainty (unreliability) refers to our knowledge of the data-production process. This latter aspect is expressed by the assessment qualifier (A) in the NUSAP notation. Besides the technical and methodological uncertainty dimensions, there is still something more. No process in the field or laboratory is completely known. Even physical constants may vary unpredictably. This is the realm of our ignorance: it includes all the different sorts of gaps in our knowledge not encompassed in the previous sorts of uncertainty. This ignorance may merely be of what is considered insignificant, such as when anomalies in experiments are discounted or neglected, or it may be deeper, as is appreciated retrospectively when revolutionary new advances are made. Thus, space-time and matter-energy were both beyond the bounds of physical imagination, and hence of scientific knowledge, before they were discovered. Can we say anything useful about that of which we are ignorant? It would seem by the very definition of ignorance that we cannot, but the boundless sea of ignorance has shores, which we can stand on and map. The Pedigree qualifier (P) in the NUSAP system maps this border with ignorance in knowledge production. In this way it goes beyond what statistics has provided in its mathematical approach to the management of uncertainty.
In the PRIMA typology (Van Asselt, 2000) 'ignorance' is one of the categories on the continuum scale of uncertainty due to lack of knowledge. The PRIMA typology distinguishes between reducible ignorance and irreducible ignorance. Reducible ignorance refers to processes that we do not observe, or theoretically imagine, at this point in time, but probably can in the future: 'We don't know what we do not know'. Irreducible ignorance refers to processes and interactions between processes that cannot, or not unambiguously, be determined by human capacities and capabilities: 'We cannot know'.


Indeterminacy

Indeterminacy is a category of uncertainty which refers to the open-endedness (both social and natural) in coupled natural-social processes. It applies to processes where the outcome cannot (or only partly) be determined from the input. Indeterminacy introduces the idea that contingent social behavior also has to be included in the analytical and prescriptive framework. It acknowledges the fact that many knowledge claims are not fully determined by empirical observations but are based on a mixture of observation and interpretation. The latter implies that scientific knowledge depends not only on its degree of fit with nature (the observation part), but also on its correspondence with the social world (the interpretation part) and on its success in building and negotiating trust and credibility for the way science deals with the 'interpretive space'.

In the PRIMA typology (Van Asselt, 2000) indeterminacy is one of the categories on the continuum scale of uncertainty due to lack of knowledge. Indeterminacy occurs in the case of processes whose principles and laws we understand, but which can never be fully predicted or determined: 'We will never know'.


Inexactness

One of the three sorts of uncertainty distinguished by Funtowicz and Ravetz (1990): inexactness, unreliability and border with ignorance. Quantitative (numerical) inexactness is the simplest sort of uncertainty; it is usually expressed by significant digits and error bars. Every set of data has a spread, which may be considered in some contexts as a tolerance or a random error in a (calculated) measurement. It is the kind of uncertainty that relates most directly to the stated quantity, and is most familiar to students of physics and even the general public. Next to quantitative inexactness, one can also distinguish qualitative inexactness, which occurs when qualitative knowledge is not exact but comprises a range. In the PRIMA typology (Van Asselt, 2000) inexactness is one of the categories on the continuum scale of uncertainty due to lack of knowledge. Inexactness is also referred to as lack of precision, inaccuracy, metrical uncertainty, measurement error, or precise uncertainty: 'We roughly know'.

Institutional uncertainty

One of the seven types of uncertainty distinguished by De Marchi (1995) in her checklist for characterizing uncertainty in environmental emergencies: institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Institutional uncertainty is in some sense a subset of societal uncertainty, and refers more specifically to the role and actions of institutions and their members. Institutional uncertainty stems from the "diverse cultures and traditions, divergent missions and values, different structures, and work styles among personnel of different agencies" (De Marchi, 1995). High institutional uncertainty can hinder collaboration or understanding among agencies, and can make the actions of institutions difficult to predict.

Lack of observations/measurements

In the PRIMA typology (Van Asselt, 2000) 'lack of observations/measurements' is one of the categories on the continuum scale of uncertainty due to lack of knowledge. It refers to lacking data that could have been collected, but haven't been: 'We could have known'.

Legal uncertainty

One of the seven types of uncertainty distinguished by De Marchi (1995) in her checklist for characterizing uncertainty in environmental emergencies: institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Legal uncertainty is relevant "wherever agents must consider future contingencies of personal liability for their actions (or inactions)". High legal uncertainty can result in defensive responses in regard to both decision making and release of information. Legal uncertainty may also play a role where actions are conditioned on the transparency of a legal framework in allowing one to predict the consequences of particular actions.

Limited knowledge

One of the sources of uncertainty distinguished in the PRIMA typology (Van Asselt, 2000). Limited knowledge is a property of the analysts performing the study and/or of our state of knowledge. Also referred to as 'subjective uncertainty', 'incompleteness of the information', 'informative uncertainty', 'secondary uncertainty', or 'internal uncertainty'. Limited knowledge results partly from variability, but knowledge with regard to deterministic processes can also be incomplete and uncertain. A continuum can be described that ranges from unreliability to structural uncertainty.

Model-fix error

Model-fix errors are those errors that arise from the introduction of non-existent phenomena in the model. These phenomena are introduced in the model for a variety of reasons. They can be included to make the model computable with today's computer technology, to allow simplification, to allow modelling at a higher aggregation level, or to bridge the mismatch between model behaviour and observation and/or expectation. An example of the latter is the flux adjustment in many coupled Atmosphere Ocean General Circulation Models used for climate projection. The effect of such model fixes on the reliability of the model outcome will be bigger if the simulated state of the system is further removed from the (range of) state(s) to which the model was calibrated. It is useful to distinguish between (A) model fixes to account for well understood limitations of a model and (B) model fixes to account for a mismatch between model and observation that is not understood.

Monte Carlo Simulation

Monte Carlo simulation is a statistical technique for stochastic model calculations and analysis of error propagation in calculations. Its purpose is to trace out the structure of the distributions of model output. In its simplest form this distribution is mapped by calculating the deterministic results (realizations) for a large number of random draws from the individual distribution functions of input data and parameters of the model. To reduce the number of model runs needed to get sufficient information about the distribution of the outcome (mainly to save computation time), advanced sampling methods have been designed, such as Latin Hypercube sampling. The latter makes use of stratification in the sampling of individual parameters and of pre-existing information about correlations between input variables.
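
In its simplest form (plain random sampling, without the Latin Hypercube refinement), Monte Carlo error propagation might look like the sketch below; the model and input distributions are invented for illustration:

```python
import random
import statistics

random.seed(42)                      # reproducible realizations

# Hypothetical model: net flux as emission minus uptake.
def model(emission, uptake):
    return emission - uptake

realizations = []
for _ in range(10_000):
    # Random draws from the (assumed) input distributions:
    emission = random.gauss(8.0, 1.0)    # mean 8, sd 1
    uptake = random.gauss(3.0, 0.5)      # mean 3, sd 0.5
    realizations.append(model(emission, uptake))

# The collection of realizations traces out the output distribution:
print(statistics.mean(realizations))     # close to 8 - 3 = 5
print(statistics.stdev(realizations))    # close to sqrt(1 + 0.25)
```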

Moral uncertainty

One of the seven types of uncertainty distinguished by De Marchi (1995) in her checklist for characterizing uncertainty in environmental emergencies: institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Moral uncertainty stems from the underlying moral issues related to action and inaction in any given case. De Marchi notes that, though similar to legal responsibility, moral guilt may occur absent legal responsibility, for example when negative consequences might have been limited by the dissemination of prior information or more effective management. "Moral uncertainty is linked to the ethical tradition of a given country be it or not enacted in legislation (juridical and societal norms, shared moral values, mores), as well as the psychological characteristics of persons in charge, their social status and professional roles" (De Marchi, 1995). Moral uncertainty would typically be high when moral and ethical dimensions of an issue are central and participants have a range of understandings of the moral imperatives at stake.

Motivational bias

Motivational bias occurs when people have an incentive to reach a certain conclusion or see things a certain way. It is a pitfall in expert elicitation. Reasons for the occurrence of motivational bias include: a) a person may want to influence a decision to go a certain way; b) a person may perceive that he will be evaluated based on the outcome and might tend to be conservative in his estimates; c) a person may want to suppress uncertainty that he actually believes is present in order to appear knowledgeable or authoritative; and d) an expert may have taken a strong stand in the past and not want to appear to contradict himself by producing an estimate that lends credence to alternative views.

Multi-criteria decision analysis

A method of formalising issues for decision, using both 'hard' and 'soft' indicators, not intended to yield an optimum solution but rather to clarify positions and coalitions.

Natural randomness

One of the sources of variability distinguished in the PRIMA typology (Van Asselt, 2000). It refers to the non-linear, chaotic and unpredictable nature of natural processes.

Normal science

Normal science is a term which was originally coined by Thomas Kuhn (1962), and was later on further expanded, by Funtowicz and Ravetz (1990) who introduced the term 'post-normal science' to denote the kind of science which is needed to tackle the current complex, boundary-crossing problems which society faces, and where system uncertainties or decision stakes are high. In their words: "By 'normality' we mean two things. One is the picture of research science as 'normally' consisting of puzzle solving within an unquestioned and unquestionable 'paradigm', in the theory of T.S. Kuhn (Kuhn 1962). Another is the assumption that the policy environment is still 'normal', in that such routine puzzle solving by experts provides an adequate knowledge base for policy decisions. Of course researchers and experts must do routine work on small-scale problems; the question is how the framework is set, by whom, and with whose awareness of the process. In 'normality', either science or policy, the process is managed largely implicitly, and is accepted unwittingly by all who wish to join in."

Numerical error

Numerical error arises from approximations in numerical solution, rounding of numbers and numerical precision (number of digits) of the represented numbers. Complex models include a large number of linkages and feedbacks which enhances the chance that unnoticed numerical artifacts co-shape the model behaviour to a significant extent. The systematic search for artifacts in model behaviour which are caused by numerical error, requires a mathematical 'tour de force' for which no standard recipe can be given. It will depend on the model at hand how one should set up the analysis. To secure against potential serious error due to rounding of numbers, one can test the sensitivity of the results to the number of digits accounted for in floating-point operations in model calculations.
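
A tiny illustration of how rounding and finite floating-point precision can corrupt a result: naive summation loses small terms next to large ones, while a compensated summation (Python's `math.fsum`) returns the correctly rounded answer:

```python
import math

# Ten small terms plus a large cancelling pair. The true sum is 1.0.
values = [0.1] * 10 + [1e16, -1e16]

naive = sum(values)       # small terms are absorbed into 1e16 and lost
exact = math.fsum(values) # correctly rounded floating-point sum

print(naive, exact)       # 0.0 versus 1.0
```

In a large model the same effect is buried inside long chains of arithmetic, which is why checking the sensitivity of results to the number of digits carried is a useful safeguard.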


NUSAP

Acronym for Numeral, Unit, Spread, Assessment, Pedigree: a notational system developed by Silvio Funtowicz and Jerry Ravetz to better manage and communicate uncertainty in science for policy. In NUSAP, the increasing severity of uncertainty is marked by three categories: Spread for technical uncertainty (or error bar), Assessment for methodological uncertainty (or unreliability), and Pedigree for the border with ignorance (or the essential limitations of a particular sort of scientific practice). (Funtowicz and Ravetz, 1990)


Parameter

A quantity related to one or more variables in such a way that it remains constant for any specified set of values of the variable or variables.

Partisan Mutual Adjustment

Charles Lindblom (1965) described governance in pluralist democracies as a 'Science of Muddling Through' that relies on Disjointed Incrementalism as its strategy of decision and whose intelligence is produced through what he calls Partisan Mutual Adjustment. Both of these practices are primarily justified ex negativo - by comparison, that is, to the counterfactual ideal of hierarchical governance based on 'synoptic' analyses of all pertinent issues and affected interests. While the synoptic ideal is said to overtax the bounded rationality of real-world decision makers, the incrementalist strategy will disaggregate large and complex issues into series of small steps that reduce the risks of misinformation and miscalculation, and that can use rapid feedback to correct any errors. Similarly, instead of relying on the benevolence and omniscience of central decision makers, Partisan Mutual Adjustment will directly involve representatives of affected groups and specialized office holders that are able to utilize local information, and to fend for their own interests in pluralist bargaining processes in which the opposing and different views need to be heard. In short, compared to an impossible ideal, muddling through is not only feasible but likely to produce policy choices that are, at the same time, better informed and more sensitive to the affected interests. (Scharpf and Mohr, 1994)


Pedigree

Pedigree conveys an evaluative account of the production process of information (e.g. a number) on a quantity or phenomenon, and indicates different aspects of the underpinning of the numbers and the scientific status of the knowledge used (Funtowicz and Ravetz, 1990). Pedigree is expressed by means of a set of pedigree criteria that assess these different aspects. Examples of such criteria are empirical basis and degree of validation. These criteria are in fact yardsticks for strength. Many of them are hard to measure in an objective way; assessment of pedigree involves qualitative expert judgement. To minimise arbitrariness and subjectivity in measuring strength, a pedigree matrix is used to code qualitative expert judgements for each criterion into a discrete numeral scale from 0 (weak) to 4 (strong), with linguistic descriptions (modes) of each level on the scale. Note that these linguistic descriptions are mainly meant to provide guidance in attributing scores to each of the criteria. It is not possible to capture all aspects that an expert may consider in scoring a pedigree in a single phrase; therefore a pedigree matrix should be applied with some flexibility and creativity. Examples of pedigree matrices can be found in the Pedigree matrices section of the NUSAP-net website.
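
A minimal sketch of how coded pedigree scores might be summarized; the criteria names and judgements below are invented examples, and real pedigree matrices define the criteria and the linguistic modes for each level:

```python
# Hypothetical pedigree scores for one quantity, each criterion coded
# on the discrete 0 (weak) .. 4 (strong) scale by expert judgement.
scores = {
    "proxy": 3,            # how close is the measure to what we intend?
    "empirical basis": 2,  # quality of the underlying data
    "method": 4,           # methodological rigour
    "validation": 1,       # degree of independent validation
}

# One common summary: normalize the total score to a 0..1 strength.
strength = sum(scores.values()) / (4 * len(scores))
print(strength)
```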


Pitfall

A pitfall is a characteristic error that commonly occurs in assessing a problem. Such errors are typically associated with a lack of knowledge or experience, and thus may be reduced by experience, by consultation of others, or by following procedures designed to highlight and avoid pitfalls. In complex problems we sometimes say that pitfalls are 'dense', meaning that there is an unusual variety and number of pitfalls.

Post-Normal Science

Post-Normal Science is the methodology that is appropriate when "facts are uncertain, values in dispute, stakes high and decisions urgent". It is appropriate when either 'systems uncertainties' or 'decision stakes' are high.

Practically immeasurable

In the PRIMA typology (Van Asselt, 2000) 'practically immeasurable' is one of the categories on the continuum scale of uncertainty due to lack of knowledge. It refers to lacking data that can in principle be measured, but not in practice (too expensive, too lengthy, infeasible experiments): 'We know what we do not know'.

Precautionary principle

The principle is roughly that "when an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically" (Wingspread conference, Wisconsin, 1998). Note that this would apply to most environmental assessments, since cause-effect statements can rarely be fully established on any issue. If the burden of proof were set such that one must demonstrate a completely unequivocal cause-effect relationship before taking action, then it would not be possible to take action on any meaningful environmental issue. The precautionary principle thus relates to the setting of the burden of proof.

PRIMA approach

Acronym for Pluralistic fRamework of Integrated uncertainty Management and risk Analysis (Van Asselt, 2000). The guiding principle is that uncertainty legitimates different perspectives and that as a consequence uncertainty management should consider different perspectives. Central to the PRIMA approach is the issue of disentangling controversies on complex issues in terms of salient uncertainties. The salient uncertainties are then 'coloured' according to various perspectives. Starting from these perspective-based interpretations, various legitimate and consistent narratives are developed to serve as a basis for integrated analysis of autonomous and policy-driven developments in terms of risk.


Probabilistic

Based on the notion of probabilities.

Probability density function (PDF)

The probability density function of a continuous random variable represents the probability that the random variable takes its value in an infinitesimally small interval. Integrating the probability density function over a given interval yields the probability that the random variable takes a value in that interval.
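
A small check of this definition for the standard normal distribution: numerically integrating the density over [-1, 1] reproduces the probability obtained from the cumulative distribution function:

```python
import math

def pdf(x):
    # Standard normal probability density function.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cdf(x):
    # Standard normal cumulative distribution function, via erf.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

a, b, n = -1.0, 1.0, 10_000
h = (b - a) / n
# Trapezoidal integration of the density over [a, b]:
integral = h * (pdf(a) / 2
                + sum(pdf(a + i * h) for i in range(1, n))
                + pdf(b) / 2)

print(integral)             # P(-1 <= X <= 1), about 0.68
print(cdf(b) - cdf(a))      # same probability from the CDF
```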

Problem structuring

An approach to analysis and decision making which assumes that participants do not have clarity on their ends and means, and provides appropriate conceptual structures. It is a part of 'soft systems methodology'.

Process error

Process error arises from the fact that a model is by definition a simplification of the real system represented by the model. Examples of such simplifications are the use of constant values for entities that are non-constant in reality, or focusing on key processes that affect the modelled variables significantly whilst omitting processes that are considered to be not significant.

Proprietary uncertainty

One of the seven types of uncertainty distinguished by De Marchi (1995) in her checklist for characterizing uncertainty in environmental emergencies: institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Proprietary uncertainty occurs because information and knowledge about an issue are not uniformly shared among all those who could potentially use it. That is, some people or groups have information that others do not, and may assert ownership or control over it. "Proprietary uncertainty becomes most salient when it is necessary to reconcile the general needs for safety, health, and environment protection with more sectorial needs pertaining, for instance, to industrial production and process, or to licensing and control procedure" (De Marchi, 1995). De Marchi notes that 'whistle blowing' is another source of proprietary uncertainty, in that there is a need for protection of those who act in sharing information for the public good. Proprietary uncertainty would typically be high when knowledge plays a key role in assessment but is not widely shared among participants. An example would be the external safety of military nuclear production facilities.


Proxy

Sometimes it is not possible to represent the quantity or phenomenon of interest directly by a parameter, so some form of proxy measure is used. A proxy can be better or worse depending on how closely it relates to the actual quantity it is intended to represent. Examples include first-order approximations, oversimplifications, idealisations, gaps in aggregation levels, and differences in definitions.


Pseudo-imprecision

Pseudo-imprecision occurs when results are expressed so vaguely that they are effectively immune from refutation and criticism.

Pseudo-precision

Pseudo-precision is false precision that occurs when the precision suggested by the representation of a number or finding grossly exceeds the precision that is warranted by closer inspection of the underlying uncertainties.
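A small sketch of a guard against pseudo-precision: round a reported value to the precision implied by its uncertainty (one significant digit of uncertainty, a common rule of thumb). The helper `report` is hypothetical:

```python
import math

def report(value, uncertainty):
    """Round a value so its displayed precision matches its uncertainty
    (keeping one significant digit of the uncertainty -- a rule of thumb)."""
    exponent = math.floor(math.log10(abs(uncertainty)))
    rounded_unc = round(uncertainty, -exponent)
    rounded_val = round(value, -exponent)
    return f"{rounded_val} +/- {rounded_unc}"

# Pseudo-precise: "3.14159265 +/- 0.2"; warranted: "3.1 +/- 0.2"
print(report(3.14159265, 0.2))
```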

Reflexive Science

Reflexive science is to be understood in the sense of reflex (self-confrontation with the unanticipated or unintended consequences of the science) and reflection (self-criticism of value ladenness and assumptions in the science). Reflexive science does not simply report 'facts' or 'truths'; rather, the researcher transparently constructs interpretations of his or her experiences in the field and then questions how those interpretations came about.

Resolution error

Resolution error arises from the spatial and temporal resolution in measurement, datasets or models. The possible error introduced by the chosen spatial and temporal resolutions can be assessed by analyzing how sensitive results are to changes in the resolution. However, this is not as straightforward as it looks, since the change in spatial and temporal scales in a model might require significant changes in model structure or parameterizations. For instance, going from annual time steps to monthly time steps in a climate model requires the inclusion of the seasonal cycle of insolation. Another problem can be that data are not available at a higher resolution.
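A minimal illustration of resolution error is a nonlinear relation evaluated at different temporal resolutions: applying the relation to an annual-mean value gives a different answer than applying it monthly and then averaging. The monthly temperatures below are hypothetical; the quartic emission relation follows the Stefan-Boltzmann law:

```python
import math

# Hypothetical monthly mean temperatures (K) with a seasonal cycle
monthly_T = [273.0 + 15.0 * math.sin(2 * math.pi * m / 12) for m in range(12)]

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emission(T):
    """Blackbody emission, a nonlinear (quartic) function of temperature."""
    return SIGMA * T ** 4

# Coarse resolution: apply the nonlinear relation to the annual mean
annual_mean_T = sum(monthly_T) / 12
coarse = emission(annual_mean_T)

# Fine resolution: apply the relation each month, then average
fine = sum(emission(T) for T in monthly_T) / 12

resolution_error = fine - coarse  # nonzero because emission is nonlinear in T
```

Because the relation is convex, the coarse calculation systematically underestimates the fine one; the gap is the resolution error.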

Robust finding

A robust finding is "one that holds under a variety of approaches, methods, models, and assumptions and one that is expected to be relatively unaffected by uncertainties" (IPCC, 2001). Robust findings should be insensitive to most known uncertainties, but may break down in the presence of surprises.

Robust policy

A robust policy should be relatively insensitive to over- or under-estimates of risk. That is, should the problem turn out to be much better or much worse than expected, the policy would still provide a reasonable way to proceed.


Scenario

A plausible description of how the future may develop, based on a coherent and internally consistent set of assumptions about key relationships and driving forces (e.g. rate of technological change, prices). Note that "scenarios are neither predictions nor forecasts, since they depend on assumed changes in key boundary conditions (e.g. emissions), and neither are they fully projections of what is likely to happen because they have considered only a limited set of possible future boundary conditions (e.g., emissions scenarios). For the decision maker, scenarios provide an indication of possibilities, but not definitive probabilities." (MacCracken, 2001)

Scientific uncertainty

One of the seven types of uncertainty distinguished by De Marchi (1995) in their checklist for characterizing uncertainty in environmental emergencies: institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Scientific uncertainty refers to uncertainty which emanates from the scientific and technical dimensions of a problem, as opposed to the legal, moral, societal, institutional, proprietary, and situational dimensions outlined by De Marchi (1995). Scientific uncertainty is intrinsic to the processes of risk assessment and forecasting.

Sensitivity analysis

Sensitivity analysis is the study of how the uncertainty in the output of a model (numerical or otherwise) can be apportioned to different sources of uncertainty in the model input (Saltelli, 2001).

Situational uncertainty

One of the seven types of uncertainty distinguished by De Marchi (1995) in their checklist for characterizing uncertainty in environmental emergencies: institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Situational uncertainty relates to "the predicament of the person responsible for a crisis, either in the phase of preparation and planning, or of actual emergency. It refers to individual behaviours or personal interventions in crisis situations" (De Marchi, 1995) and as such represents a form of integration over the other six types of uncertainty. That is, it tends to combine the uncertainties one has to face in a given situation or on a particular issue. High situational uncertainty would be characterized by situations where individual decisions play a substantial role and there is uncertainty about the nature of those decisions.

Societal randomness

One of the sources of variability distinguished in the PRIMA typology (Van Asselt, 2000). It refers to social, economic and cultural dynamics, especially to the non-linear, chaotic and unpredictable nature of societal processes (macro-level behaviour).

Societal uncertainty

One of the seven types of uncertainty distinguished by De Marchi (1995) in their checklist for characterizing uncertainty in environmental emergencies: institutional, legal, moral, proprietary, scientific, situational, and societal uncertainty. Communities within society may differ in their sets of norms and values and in their manner of relating, which in turn can result in different approaches to decision making and assessment. Salient characteristics of these differences include divergent views on the role of consensus versus conflict, on locating responsibility between individuals and larger groups, on the legitimacy and role of social and private institutions, and on attitudes to authority and expertise (De Marchi, 1995). Societal uncertainty would typically be high when decisions involve substantial collaboration among groups characterized by divergent decision-making styles.

Software error

Software error arises from bugs in software, design errors in algorithms, typing errors in model source code, etc. Here we encounter the problem of code verification, defined as the examination of the implementation of the numerical model in the computer code to ascertain that there are no inherent implementation problems in obtaining a solution. If one realizes that some environmental models have hundreds of thousands of lines of source code, it is clear that errors cannot easily be excluded and that code verification is difficult to carry out in a systematic manner.


Stakeholders

Stakeholders are those actors who are directly or indirectly affected by an issue and who could affect the outcome of a decision-making process regarding that issue, or who are affected by it.


Stochastic model

In stochastic models (as opposed to deterministic models), the parameters and variables are represented by probability distribution functions. Consequently, the model behaviour, performance, or operation is probabilistic.
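The contrast with a deterministic model can be sketched as follows; the growth model and its parameter values are hypothetical. The deterministic run yields a single number, while the stochastic runs, with the growth rate drawn from a distribution, yield a distribution of outcomes:

```python
import random
import statistics

random.seed(42)

GROWTH_MEAN, GROWTH_SD = 0.03, 0.01  # hypothetical annual growth rate
YEARS, X0 = 10, 100.0

def deterministic_run():
    """Fixed parameter: always returns the same single value."""
    x = X0
    for _ in range(YEARS):
        x *= 1 + GROWTH_MEAN
    return x

def stochastic_run():
    """Growth rate drawn from a probability distribution each year."""
    x = X0
    for _ in range(YEARS):
        x *= 1 + random.gauss(GROWTH_MEAN, GROWTH_SD)
    return x

point_value = deterministic_run()                    # one number
samples = [stochastic_run() for _ in range(5_000)]   # a distribution
mean_out = statistics.mean(samples)
spread = statistics.stdev(samples)
```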

Structural uncertainty

Uncertainty about what the appropriate equations are to correctly represent a given causal relationship. In the PRIMA typology (Van Asselt, 2000) structural uncertainty refers to the lower half of the continuum scale of uncertainty due to lack of knowledge, and is also referred to as radical, or systematic uncertainty. It comprises conflicting evidence, reducible ignorance, indeterminacy, and irreducible ignorance.

Structured problems

Hisschemöller and Hoppe (1995) have defined structured problems as those for which there is a high level of agreement on the relevant knowledge base and a high level of consent on the norms and values associated with the problem. Such problems are thus typically of a more purely technical nature and fall within the category of 'normal' science.


Surprise

Surprise occurs when actual outcomes differ sharply from expected ones. Surprise is, however, a relative term: whether an event is surprising depends on the expectations, and hence the point of view, of the person considering it. Surprise is also inevitable if we accept that the world is complex and partially unpredictable, and that individuals, society, and institutions are limited in their cognitive capacities and possess limited tools and information.

Sustainable development

"Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs. It contains within it two key concepts: the concept of 'needs', in particular the essential needs of the world's poor, to which overriding priority should be given; and the idea of limitations imposed by the state of technology and social organization on the environment's ability to meet present and future needs." (Brundtland Commission, 1987)

Technological surprise

One of the sources of variability distinguished in the PRIMA typology (Van Asselt, 2000). It refers to unexpected developments or breakthroughs in technology or unexpected consequences of technologies.


Transparency

The degree to which a model is transparent. A model is said to be transparent if its pedigree is well documented and all key assumptions that underlie the model are accessible and understandable to its users.

Type I error

also: Error of the first kind. In hypothesis testing, this error consists of incorrectly rejecting the hypothesis when it is true. Any test risks being either too selective or too sensitive; the design of the test, especially the choice of confidence limits, aims at reducing the likelihood of one type of error at the price of increasing the likelihood of the other. In this sense, all such statistical tests are value laden.

Type II error

also: Error of the second kind. In hypothesis testing, this error consists of failing to reject the hypothesis when it is false.
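The trade-off between the two error types can be illustrated with a hypothetical one-sided z-test of H0: mu = 0 against H1: mu = 1 (n = 9 observations, known sigma = 1). Raising the rejection threshold reduces Type I errors at the price of more Type II errors, and vice versa:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

se = 1 / math.sqrt(9)  # standard error of the sample mean

def error_rates(threshold):
    """Error probabilities when H0 is rejected if the sample mean > threshold."""
    alpha = 1 - phi(threshold / se)       # Type I: reject H0 although it is true
    beta = phi((threshold - 1.0) / se)    # Type II: retain H0 although it is false
    return alpha, beta

# Moving the rejection threshold trades one error for the other
a_low, b_low = error_rates(0.3)
a_high, b_high = error_rates(0.7)
```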

Type III error

also: Error of the third kind. Assessing or solving the wrong problem by incorrectly accepting the false meta-hypothesis that there is no difference between the boundaries of a problem as defined by the analyst and the actual boundaries of that problem (Raiffa, 1968; redefined by Dunn, 1997, 2000).


Unreliability

One of the three sorts of uncertainty distinguished by Funtowicz and Ravetz (1990): inexactness, unreliability, and border with ignorance. Unreliability relates to the level of confidence to be placed in a quantitative statement, usually represented by a confidence level (at, say, 95% or 99%). In practice, such judgements are quite diverse; thus estimates of safety and reliability may be given as "conservative by a factor of n". In risk analyses and futures scenarios, estimates are qualified as 'optimistic' or 'pessimistic'. In laboratory practice, the systematic error in physical quantities, as distinct from the random error or spread, is estimated on a historic basis. Unreliability thus provides a kind of assessment (the A in the NUSAP acronym) to act as a qualifier on the number (the NU in the NUSAP acronym) together with its spread (the S in the NUSAP acronym), accounting for potential methodological limitations and bias/value ladenness in the process of producing the number and the spread. In the PRIMA typology (Van Asselt, 2000), unreliability refers to the upper half of the continuum of uncertainty due to lack of knowledge and comprises uncertainty due to inexactness, lack of observations/measurements, and practical immeasurability.

Unstructured problems

Hisschemöller and Hoppe (1995) have defined unstructured problems as those for which there is a low level of agreement on the relevant knowledge base and a low level of consent on the norms and values related to the problem. Compare with structured problems. Unstructured problems have characteristics similar to those of post-normal science problems.


Validation

Validation is the process of comparing model output with observations of the 'real world'. Validation cannot 'validate' a model as true or correct, but it can help establish confidence in a model's utility in cases where samples of model output and real-world samples are at least not inconsistent. For a fuller discussion of issues in validation, see Oreskes et al. (1994).

Value diversity

One of the sources of variability distinguished in the PRIMA typology (Van Asselt, 2000). It refers to the differences in people's belief systems, mental maps, world views, and norms and values, due to which problem perceptions and definitions differ.


Value ladenness

Value ladenness refers to the notion that the value orientations and biases of an analyst, an institute, a discipline, or a culture can co-shape the way scientific questions are framed, data are selected, interpreted, and rejected, methodologies are devised, explanations are formulated, and conclusions are drawn. Since theories are always underdetermined by observations, the analyst's biases will fill the epistemic gap, which makes any assessment value-laden to a certain degree.


Variability

In one meaning of the word, variability refers to the observable variations (e.g. noise) in a quantity that result from randomness in nature (as in 'natural variability of climate') and society. In a slightly different meaning, variability refers to heterogeneity across space, time, or the members of a population. Variability can be expressed in terms of the extent to which the scores in a distribution of a quantity differ from each other. Statistical measures of variability include the range, the mean deviation from the mean, the variance, and the standard deviation. In the PRIMA typology (Van Asselt, 2000), variability is one of the sources of uncertainty and refers to the fact that the system or process under consideration can behave in different ways or is valued differently. Variability is an attribute of reality. It is also referred to as 'objective uncertainty', 'stochastic uncertainty', 'primary uncertainty', 'external uncertainty' or 'random uncertainty'. The PRIMA typology distinguishes as sources of variability: natural randomness, value diversity, behavioural variability, societal randomness, and technological surprise.
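The statistical measures of variability listed above can be computed directly; the observations below are hypothetical:

```python
import statistics

# Hypothetical observations of a quantity
scores = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.3]

value_range = max(scores) - min(scores)                           # range
mean = statistics.mean(scores)
mean_abs_dev = sum(abs(x - mean) for x in scores) / len(scores)   # mean deviation from the mean
variance = statistics.pvariance(scores)                           # population variance
std_dev = statistics.pstdev(scores)                               # population standard deviation
```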