
Biases in probability assessment

businesses that survive (he would argue that they would be able to make good estimates of this relative frequency) and use this as an estimate of the likelihood of an individual business surviving.

Comparison of studies of the calibration of probability assessments concerning unique individual events with those where assessments have been made for repetitive predictions of weather events reveals a general finding of relatively poor calibration in the former contrasted with good calibration in the latter. Bolger and Wright44 have argued that this differential forecasting performance is due, in part, to the existence of rapid and meaningful feedback to the weather forecasters in terms of both the relative frequency of probability predictions and the predicted event's occurrence. Such prediction-feedback frequency information may well be ideal for the achievement of frequency-based accuracy. However, such ideal conditions for probability assessment are not common in management situations, which tend instead to be characterized by the need to judge the likelihood of unique events.
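To make calibration measurable, a forecaster's stated probabilities can be grouped into bins and each bin's average stated probability compared with the observed relative frequency of the predicted events. The following is a minimal Python sketch of that bookkeeping; the function name calibration_table and the toy data are illustrative assumptions, not taken from the text.

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes, n_bins=10):
    """Compare stated probabilities with observed relative frequencies.

    forecasts: assessed probabilities in [0, 1]
    outcomes:  1 if the predicted event occurred, else 0
    Returns rows of (bin midpoint, mean stated prob, observed frequency, n).
    """
    bins = defaultdict(list)
    for p, hit in zip(forecasts, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the top bin
        bins[idx].append((p, hit))
    rows = []
    for idx in sorted(bins):
        pairs = bins[idx]
        stated = sum(p for p, _ in pairs) / len(pairs)
        observed = sum(hit for _, hit in pairs) / len(pairs)
        rows.append(((idx + 0.5) / n_bins, stated, observed, len(pairs)))
    return rows

# A forecaster who says 0.8 for events that occur only 60% of the time is
# overconfident: stated 0.80 against an observed frequency of 0.60.
for mid, stated, observed, n in calibration_table(
        [0.8, 0.8, 0.8, 0.8, 0.8], [1, 1, 1, 0, 0]):
    print(f"bin {mid:.2f}: stated {stated:.2f}, observed {observed:.2f}, n={n}")
```

A well-calibrated forecaster's rows show stated and observed values close together; the rapid feedback enjoyed by weather forecasters amounts to seeing this table refreshed daily.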

In summary, we advocate that, in assessing a subjective probability, you attempt to locate a reference class of previous forecasts that you have made which are similar to the event that you now need to forecast. If the event is, say, demand for a set number of batches of perishable food (page 96), attendance at a conference in a provincial town (page 102), or successful development of a new type of processor (page 145), then you should first consider whether or not you have made repetitive forecasts of such events in the past. If you have, and have received timely feedback on the accuracy of your forecasts, then the assessment task is likely to resemble weather forecasting, where good calibration is a general finding. If not, then you should consider whether there is a historic, relative-frequency reference class that you can use. For example, if you are considering the likelihood that a newly hired worker will stay for at least a year, then you should consider the number of workers who have been hired in your organization at that grade (i.e. identify the reference class) and then calculate how many of those hired in, say, the last five years have remained for at least one year.
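As a hedged sketch of that worker-retention calculation (the records, field names, and the function p_stays_one_year below are hypothetical illustrations, not from the text), the reference-class estimate is simply the number of comparable past hires who stayed at least a year divided by the number whose one-year outcome is already known:

```python
from datetime import date

# Hypothetical hire records: (grade, hire_date, months_stayed).
# months_stayed is None if the person is still employed.
hires = [
    ("analyst", date(2020, 3, 1), 26),
    ("analyst", date(2021, 7, 1), 8),
    ("analyst", date(2022, 1, 1), None),   # still employed
    ("manager", date(2021, 5, 1), 30),
]

def p_stays_one_year(records, grade, window_start):
    """Relative-frequency estimate that a new hire at `grade` stays >= 12 months.

    The reference class is all hires at the same grade since `window_start`
    whose one-year outcome is already observable.
    """
    stayed = total = 0
    for g, hired_on, months in records:
        if g != grade or hired_on < window_start:
            continue  # outside the reference class
        known = months is not None or (date.today() - hired_on).days >= 365
        if not known:
            continue  # outcome not yet observable; exclude rather than guess
        total += 1
        if months is None or months >= 12:
            stayed += 1
    return stayed / total if total else None  # None: no reference class found

print(p_stays_one_year(hires, "analyst", date(2019, 1, 1)))  # e.g. 2/3
```

Excluding hires whose one-year outcome cannot yet be observed keeps the relative frequency honest instead of guessing at unresolved cases.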

If a reference class of previous forecasts or historic frequencies is not obvious, then be aware that the only way to assess the likelihood of the event is to use judgmental heuristics, and that such heuristics can lead to bias, as we have documented in this chapter. Figure 9.1 summarizes this conclusion.
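The decision sequence described above can also be written out directly. The sketch below is purely illustrative and is not a transcription of Figure 9.1; the function name and return strings are invented for the example.

```python
def choose_assessment_route(repetitive_forecasts_with_feedback,
                            historic_frequency_available):
    """Illustrative rendering of this section's advice as a decision sequence
    (an invented sketch, not a copy of Figure 9.1)."""
    if repetitive_forecasts_with_feedback:
        # Like weather forecasting: good calibration is a general finding.
        return "assess the probability directly from your own judgment"
    if historic_frequency_available:
        # Identify the reference class and compute the relative frequency.
        return "estimate from the historic relative frequency"
    # Last resort: heuristics, with the biases documented in this chapter.
    return "use judgmental heuristics, guarding against bias"

print(choose_assessment_route(False, True))
```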
