occurrence may psychologically disassociate an event from its event set. As we saw earlier in Chapter 9, a similar phenomenon has been noted by Tversky and Kahneman 14 and the term ‘representativeness’ was coined to refer to the dominance of individuating information in intuitive prediction.

Clearly, judgmental forecasts should be monitored for additivity and incoherence should be resolved. However, a simple normalization may not be a quick and easy solution to incoherence. Lindley et al. 15 outlined a major problem:

Suppose that I assess the probabilities of a set of mutually exclusive and exhaustive events to be

0.001, 0.250, 0.200, 0.100, 0.279

It is then pointed out to me that these probabilities sum to 0.830 and hence that the assessment is incoherent. If we use the method ... with the probability metric, we have to adjust the probabilities by adding 0.034 to each (= (1/5)(1 − 0.830)) to give

0.035, 0.284, 0.234, 0.134, 0.313

The problem is with the first event which, originally regarded as very unlikely, has had its probability increased by a factor of 35! Though still small, it is no longer smaller than the others by two orders of magnitude.
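
The arithmetic of the quoted adjustment, and of one alternative way of allocating the shortfall, can be sketched briefly. The Python below is illustrative only; the function names are not taken from any particular source, and the proportional rescaling is just one of the 'other methods' alluded to next, not a method prescribed in the text.

def additive_adjustment(probs):
    # Spread the shortfall (1 - sum) equally across the assessments,
    # as in the probability-metric adjustment quoted above.
    shortfall = 1.0 - sum(probs)
    return [p + shortfall / len(probs) for p in probs]

def proportional_adjustment(probs):
    # Rescale so the total becomes 1 while preserving the ratios between assessments.
    total = sum(probs)
    return [p / total for p in probs]

assessed = [0.001, 0.250, 0.200, 0.100, 0.279]   # sums to 0.830

print([round(p, 3) for p in additive_adjustment(assessed)])
# [0.035, 0.284, 0.234, 0.134, 0.313] -- the 0.001 event is inflated 35-fold

print([round(p, 4) for p in proportional_adjustment(assessed)])
# [0.0012, 0.3012, 0.241, 0.1205, 0.3361] -- relative magnitudes are preserved

Proportional rescaling happens to preserve the relative magnitudes in this example, but it still imposes a mechanical correction on the decision maker, which is why the text recommends feeding the incoherence back for iterative resolution instead.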

Obviously, other methods of allocating probability shortfalls can be devised, but our view is that the best solution to such problems is for the decision analyst to show the decision maker his or her incoherence and so allow iterative resolution of departures from this (and any other) axiom of probability theory. Such iteration can involve the analyst plotting the responses on a graph (e.g. as a cumulative distribution function) and establishing whether the decision maker is happy that this is an accurate reflection of his or her judgments. Finally, the decision maker can be offered a series of pairs of bets. Each pair can be formulated so that the respondent would be indifferent between them if he or she is behaving consistently with the assessments made earlier.
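
As a minimal sketch of such a pair of bets (the payoff of 100 and the reference urn are assumptions introduced here for illustration, not taken from the text): if the decision maker earlier assessed an event at probability 0.2, the two bets below have equal expected value, so a coherent respondent should be indifferent between them.

def expected_value(prob_win, payoff):
    # Expected monetary value of a simple win/lose bet.
    return prob_win * payoff

p_assessed = 0.2                                # probability stated in the earlier assessment
ev_event = expected_value(p_assessed, 100)      # pays 100 if the event occurs
ev_urn = expected_value(20 / 100, 100)          # pays 100 on a red ball drawn from an
                                                # urn holding 20 red balls out of 100
print(ev_event, ev_urn)                         # both 20.0, so indifference is implied;
                                                # a strict preference signals inconsistency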

Assessment of the validity of probability forecasts

A major measure of the validity of subjective probability forecasts is known as calibration. By calibration we mean the extent to which the assessed probabilities agree with the relative frequencies with which the forecast events actually occur.
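
A minimal sketch of how calibration might be checked in practice, assuming a small set of hypothetical forecast records (the data below are invented for illustration):

from collections import defaultdict

# Hypothetical forecast records: (assessed probability, did the event occur?)
forecasts = [(0.7, True), (0.7, False), (0.7, True),
             (0.2, False), (0.2, False), (0.2, True)]

outcomes_by_p = defaultdict(list)
for p, occurred in forecasts:
    outcomes_by_p[p].append(occurred)

for p, outcomes in sorted(outcomes_by_p.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"assessed {p:.1f} -> observed relative frequency {observed:.2f}")
# Well-calibrated forecasts show observed frequencies close to the assessed values.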
