
Statistical models of judgment 449

involves following valid principles but following them poorly, these valid principles will be abstracted by the model.

Goldberg 40 reported an intensive study of clinical judgment, pitting experienced and inexperienced clinicians against linear models and a variety of non-linear or configural models in the psychotic/neurotic prediction task. He was led to conclude that Meehl chose the wrong task for testing the clinicians’ purported ability to utilize complex configural relationships. The clinicians achieved a 62% hit rate, while the simple linear composite achieved 70%. A 50% hit rate could have been achieved by chance, as the criterion base rate was approximately 50% neurotic, 50% psychotic.

Dawes and Corrigan 41 have called the replacement of the decision maker by his model ‘bootstrapping’. Belief in the efficacy of bootstrapping is based on a comparison of the validity of the linear model of the judge with the validity of his or her holistic judgments. However, as Dawes and Corrigan point out, that is only one of two logically possible comparisons. The other is between the validity of the linear model of the judge and the validity of linear models in general. That is, to demonstrate that bootstrapping works because the linear model catches the essence of a judge’s expertise and at the same time eliminates unreliability, it is necessary to demonstrate that the weights obtained from an analysis of the judge’s behavior are superior to those that might be obtained in another way – for example, obtained randomly.
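In code, bootstrapping amounts to fitting a linear model to the judge's own holistic ratings and then letting that model judge in the judge's place. A minimal single-cue sketch, with the cue values and ratings invented purely for illustration:

```python
# Bootstrapping in miniature: replace the judge by a linear model of the judge.
# Hypothetical data: one cue per case and the judge's holistic rating of each.
cue    = [7.0, 3.0, 5.0, 9.0, 2.0, 6.0]
rating = [6.1, 4.2, 5.2, 6.8, 3.9, 5.5]

# Ordinary least squares for a single predictor: slope and intercept that
# best reproduce the judge's own ratings (not the criterion).
n = len(cue)
mx = sum(cue) / n
my = sum(rating) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(cue, rating))
         / sum((x - mx) ** 2 for x in cue))
intercept = my - slope * mx

# The model of the judge now substitutes for the judge on a new case.
def model(x):
    return intercept + slope * x

print(f"judge's model: rating ≈ {intercept:.2f} + {slope:.2f} * cue")
print(f"model's judgment of cue = 4: {model(4.0):.2f}")
```

With several cues the fit would be a multiple regression, but the principle is the same: the weights are derived from the judge's behavior, not from the criterion.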

Dawes and Corrigan constructed semi-random linear models to predict the criterion. The sign of each predictor variable was determined on an a priori basis so that it would have a positive relationship to the criterion.

On average, correlations between the criterion and the output predicted from the random models were higher than those obtained from the judges’ models. Dawes and Corrigan also investigated equal weighting and discovered that such weighting was even better than the models of the judges or the random linear models. In all cases, equal weighting was superior to the models based on judges’ behavior.
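The form of this comparison can be sketched with simulated data. Everything below is invented for illustration and is not Dawes and Corrigan's actual procedure: a synthetic criterion built as a positive-weighted sum of cues plus noise, a hypothetical judge who applies roughly the right weights inconsistently, and three linear models – one derived from the judge's behavior, one with random positive weights, one with equal weights:

```python
import random

random.seed(7)
N_CUES, N_CASES = 5, 300

# Synthetic criterion: a positive-weighted sum of cues plus noise (invented).
true_w = [0.9, 0.7, 0.5, 0.3, 0.2]
cases = [[random.gauss(0, 1) for _ in range(N_CUES)] for _ in range(N_CASES)]
criterion = [sum(w * x for w, x in zip(true_w, c)) + random.gauss(0, 1)
             for c in cases]

# A hypothetical judge: roughly the right weights, applied inconsistently.
judgments = [sum((w + random.gauss(0, 0.5)) * x for w, x in zip(true_w, c))
             for c in cases]

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def predict(weights):
    return [sum(w * x for w, x in zip(weights, c)) for c in cases]

# Model of the judge: per-cue correlations with his judgments stand in for
# regression weights (proportional to them here, since the cues are independent).
judge_w = [corr([c[i] for c in cases], judgments) for i in range(N_CUES)]

# Random positive weights: sign fixed a priori, magnitude random.
rand_w = [random.random() for _ in range(N_CUES)]

# Equal weights.
equal_w = [1.0] * N_CUES

for name, w in [("model of the judge", judge_w),
                ("random weights", rand_w),
                ("equal weights", equal_w)]:
    print(f"{name}: r with criterion = {corr(predict(w), criterion):.3f}")
```

Whether equal weights actually come out on top depends on the noise levels chosen for the simulation; the point is only the shape of the comparison – the model of the judge against linear models whose weights owe nothing to the judge.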

Dawes and Corrigan concluded that the human decision maker need specify the weights to be used in the decision with very little precision – at least in the context studied. What must be specified are the variables to be utilized in the linear additive model. It is precisely this knowledge of ‘what to look for’ in reaching a decision that is the province of the expert clinician.

The distinction between knowing what to look for and the ability to integrate information is illustrated in a study by Einhorn. 42 Expert
