Assessment of the validity of probability forecasts

These conditions for learning calibration as a skill are perhaps most obvious in professional weather forecasting. However, as Einhorn [16] has argued, most judgmental forecasts are made without the benefit of such accurate feedback. Einhorn traces these difficulties to two main factors. The first is a lack of search for and use of disconfirming evidence, and the second is the use of unaided memory for coding, storing, and retrieving outcome information. In addition, predictions instigate actions to facilitate desired outcomes and, indeed, outcome feedback can be irrelevant for correcting poor judgment. Einhorn gives the following example:

Imagine that you are a waiter in a busy restaurant and because you cannot give good service to all the people at your station, you make a judgment regarding which people will leave good or poor tips. You then give good or bad service depending on your judgment. If the quality of service, in itself, has an effect on the size of the tip, outcome feedback will ‘confirm’ the predictions (‘they looked cheap and left no tip – just as I thought’). The extent of such self-fulfilling prophecies is much greater than we think and represents a considerable obstacle to learning from outcome feedback.

It is clear that such ‘treatment effects’, where actions in the world can determine subsequent outcomes, may be more prevalent in some types of forecasting situations than in others. The implications of this research for decision analysis practice are not clear-cut. Most forecasting tasks can be seen to be unlike weather forecasting in that there are actions that the forecaster can take to avoid or facilitate possible futures. As we shall see in Chapter 12, decisions once made are often ‘made’ to work. It follows that simply keeping a tally of your probability forecasts and the subsequent occurrence or non-occurrence of events may not be helpful in evaluating the calibration of your judgmental forecasting. However, such a tally is more likely to reveal poor validity than simply reflecting on the quality of your own judgment.
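To make the idea of such a tally concrete, the following is a minimal sketch in Python, assuming each forecast is written down at the moment it is made as a (stated probability, outcome) pair. The function name calibration_tally and the sample log are illustrative rather than anything prescribed here, and grouping forecasts by rounding the stated probability to one decimal place is a simplifying assumption:

    from collections import defaultdict

    def calibration_tally(forecasts):
        # Group each (stated probability, outcome) pair by the stated
        # probability, then compare the claimed probability with the
        # relative frequency of the events that actually occurred.
        bins = defaultdict(list)
        for p, occurred in forecasts:
            bins[round(p, 1)].append(occurred)
        for p in sorted(bins):
            outcomes = bins[p]
            hit_rate = sum(outcomes) / len(outcomes)
            print(f"stated {p:.1f}: {len(outcomes)} forecasts, "
                  f"observed relative frequency {hit_rate:.2f}")

    # Hypothetical forecast log; a well-calibrated judge would show an
    # observed relative frequency close to each stated probability.
    log = [(0.8, True), (0.8, True), (0.8, False), (0.8, True),
           (0.6, True), (0.6, False), (0.6, False),
           (0.9, True), (0.9, True)]
    calibration_tally(log)

Because the log is recorded at the time each forecast is made, the comparison does not depend on unaided memory for coding, storing, and retrieving outcome information, the very weakness Einhorn identifies, and it is therefore less vulnerable to the hindsight bias discussed next.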

This conclusion is founded on what has come to be termed the ‘hindsight bias’. Briefly, Fischhoff [17] has shown that remembered predictions of future events ‘move’ towards the event’s occurrence or non-occurrence. For example, a real prediction of 0.8 that an event will happen tends to be inflated, in recall, with the knowledge that the event did, in fact, occur. Such a hindsight bias limits the possibility of improving calibration in the light of experience and tends to artificially inflate our confidence in our own judgmental abilities, resulting in what Fischhoff has termed the ‘I-knew-it-all-along’ effect.
