• Unfamiliarity of the expert with the wording and statistical terminology in elicitation questions (e.g. people are sometimes inclined to specify the mean when they are asked for the median, which is incorrect for asymmetric distributions). Clear and unambiguous questions are important, especially concerning issues such as variability-induced versus lack-of-knowledge-induced uncertainty, and correlations and dependencies. Using frequency formats when talking about probabilities connects better to the way people reason and experience chance (Hoffrage et al. 2000), and is therefore suggested as the preferred way of communicating about chances, even in an 'uncertainty as degree-of-belief' setting (Anderson, 1998a, 1998b). A brief training in the background and use of the statistical terminology of the elicitation is recommended.
  • The occurrence of groupthink or social bias in group settings during an elicitation process. It is recommended to use procedures or techniques to diminish this influence.
  • The 'validity' of the scores obtained in assessing the quality of the provided uncertainty information can be low, e.g. due to a lack of representativeness, scope, and accuracy of the seed variables in a performance assessment, or due to the inherent subjectivity and limitations of personal scope in the self-rating and pedigree analysis process. Therefore explicit attention should be paid to these aspects, and potential weak spots should be mentioned.
  • The outcomes of the probabilistic inversion depend on the model structure that is used. Ideally, some level of 'model-structure validation' is required to improve confidence in the obtained results.
  • In combining expert opinions one runs the risk of masking expert disagreement and throwing away important information concerning the problem, especially if the major differences between the expert opinions are not explicitly discussed and explained. Moreover, one should be cautious in interpreting a combined PDF: it by no means needs to represent a consensus view on uncertainty.
  • Bias: The major pitfall in expert elicitation is expert bias. Experts and lay people alike are subject to a variety of potential mental errors or shortcomings caused by the mind's simplified and partly subconscious information-processing strategies. It is important to distinguish these so-called cognitive biases from other sources of bias, such as cultural bias, organizational bias, or bias resulting from one's own self-interest (Heuer, Psychology of Intelligence Analysis, 1999). Some of the sources of cognitive bias are: overconfidence, anchoring, availability, representativeness, satisficing, unstated assumptions, and coherence. Experts should be informed of the existence of these biases during the elicitation process. A brief explanation of these sources is given below; see e.g. Dawes (1988) for more details.
    Anchoring Assessments are often unduly weighted toward a conventional value, the first value given, or the findings of previous assessments. They are then said to be 'anchored' to this value.
    Availability This bias refers to the tendency to give too much weight to readily available data or recent experience (which may not be representative of the required data) in making assessments.
    Coherence Events are considered more likely when many scenarios can be created that lead to the event, or if some scenarios are particularly coherent. Conversely, events are considered unlikely when scenarios cannot be imagined. Thus, probabilities tend to be assigned more on the basis of one's ability to tell coherent stories than on the basis of the intrinsic probability of occurrence.
    Overconfidence Experts tend to over-estimate their ability to make quantitative judgements. This often manifests itself in an estimate of a quantity with an uncertainty range that does not even encompass the true value of the quantity. This is difficult for an individual to guard against, but a general awareness of the tendency can be important.
    Representativeness This is the tendency to place more confidence in a single piece of information that is considered representative of a process than in a larger body of more generalized information.
    Satisficing This refers to the tendency to search through a limited number of solution options and to pick from among them. Comprehensiveness is sacrificed for expediency in this case.
    Motivational People may have incentives to reach a certain conclusion or see things a certain way. Reasons for occurrence of motivational bias include: a) a person may want to influence a decision to go a certain way; b) the person may perceive that he will be evaluated based on the outcome and might tend to be conservative in his estimates; c) the person may want to suppress uncertainty that he actually believes is present in order to appear knowledgeable or authoritative; and d) the expert has taken a strong stand in the past and does not want to appear to contradict himself by producing a distribution that lends credence to alternative views.
    Unstated assumptions A subject's responses are typically conditional on various unstated assumptions. The effect of these assumptions is often to constrain the degree of uncertainty reflected in the resulting estimate of a quantity. Stating assumptions explicitly can help reflect more of a subject's total uncertainty.
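To make the overconfidence bias concrete, the sketch below (with purely illustrative, hypothetical numbers) simulates an expert whose stated 90% uncertainty intervals are systematically half as wide as they should be; the actual coverage then drops to roughly 60%:

```python
import random

random.seed(42)

def central_90_interval(mu, sigma):
    # ~90% central interval of a normal distribution: mu +/- 1.645*sigma
    return mu - 1.645 * sigma, mu + 1.645 * sigma

n, sigma = 10_000, 1.0
hits = 0
for _ in range(n):
    truth = random.gauss(0.0, sigma)
    # Overconfident expert: states the interval using half the true sigma
    lo, hi = central_90_interval(0.0, 0.5 * sigma)
    if lo <= truth <= hi:
        hits += 1

coverage = hits / n
print(f"Stated confidence: 90%, actual coverage: {coverage:.0%}")
```

Calibration exercises with seed variables detect exactly this kind of mismatch between stated confidence and realized coverage.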

    Gigerenzer (1991, 1994) and Cosmides and Tooby (1996) argue that some of these biases are caused not so much by the limited cognitive abilities of the human mind as by the way in which information is presented or elicited. Thoughtful wording of the questions can help to avoid some of these biases. Performing dry-run exercises (try-outs) can provide important feedback on the suitability of the questions posed.
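The earlier point about interpreting a combined PDF can be illustrated with a linear opinion pool, i.e. a weighted mixture of the experts' densities, which is one common combination rule. With two hypothetical experts who sharply disagree, the pooled density is bimodal and assigns almost no probability to the values in between, a view held by neither expert and certainly not a consensus:

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of a normal distribution with mean mu and std sigma
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical experts who disagree about the location of a quantity
experts = [(0.0, 1.0), (10.0, 1.0)]   # (mean, std) per expert
weights = [0.5, 0.5]                  # equal weights, e.g. from performance scores

def pooled_pdf(x):
    # Linear opinion pool: weighted mixture of the expert densities
    return sum(w * normal_pdf(x, mu, s) for w, (mu, s) in zip(weights, experts))

# The pool peaks near each expert's view and is nearly zero at x = 5,
# a value that neither expert (nor the pool) considers plausible.
for x in (0.0, 5.0, 10.0):
    print(f"x = {x:4.1f}  pooled density = {pooled_pdf(x):.6f}")
```

The bimodal shape makes the disagreement visible; reporting only summary statistics of the pooled PDF (e.g. its mean, which lies exactly in the implausible middle) would mask it.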