
Revising judgments in the light of new information

the chapter we will show how the potential benefits of information can be evaluated, so that a decision can be made as to whether it is worth obtaining at all.

Bayes’ theorem

In Bayes’ theorem an initial probability estimate is known as a prior probability. Thus the marketing manager’s assessment that there was an 80% probability that sales of the calculator would reach break-even level was a prior probability. When Bayes’ theorem is used to modify a prior probability in the light of new information, the result is known as a posterior probability.

We will not put forward a mathematical proof of Bayes’ theorem here. Instead, we will attempt to develop the idea intuitively and then show how a probability tree can be used to revise prior probabilities. Let us imagine that you are facing the following problem.

A batch of 1000 electronic components was produced last week at your factory and it was found, after extensive and time-consuming tests, that 30% of them were defective and 70% ‘OK’. Unfortunately, the defective components were not separated from the others and all the components were subsequently mixed together in a large box. You now have to select a component from the box to meet an urgent order from a customer. What is the prior probability that the component you select is ‘OK’? Clearly, in the absence of other information, the only sensible estimate is 0.7.

You then remember that it is possible to perform a ‘quick and dirty’ test on the component, though this test is not perfectly reliable. If the component is ‘OK’ then there is only an 80% chance it will pass the test and a 20% chance that it will wrongly fail. On the other hand, if the component is defective then there is a 10% chance that the test will wrongly indicate that it is ‘OK’ and a 90% chance that it will fail the test. Figure 8.1 shows these possible outcomes in the form of a tree. Note that because the test is better at giving a correct indication when the component is defective, we say that it is biased.
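As a concrete illustration of the four routes through the tree (this sketch is not part of the original text, and the variable names are invented), we can count how many of the 1000 components would be expected to travel along each route, using only the probabilities stated above:

```python
# Sketch of the probability tree in Figure 8.1, applied to the batch
# of 1000 components. Variable names are illustrative only.

n = 1000
p_ok, p_defective = 0.7, 0.3       # prior proportions in the batch

# Reliability of the 'quick and dirty' test, given the true state
p_pass_given_ok = 0.8              # correct 'pass' for an OK component
p_fail_given_ok = 0.2              # wrong 'fail' for an OK component
p_pass_given_defective = 0.1       # wrong 'pass' for a defective one
p_fail_given_defective = 0.9       # correct 'fail' for a defective one

# Expected number of components following each of the four routes
ok_and_pass = round(n * p_ok * p_pass_given_ok)                    # 560
ok_and_fail = round(n * p_ok * p_fail_given_ok)                    # 140
defective_and_pass = round(n * p_defective * p_pass_given_defective)  # 30
defective_and_fail = round(n * p_defective * p_fail_given_defective)  # 270

print(ok_and_pass, ok_and_fail, defective_and_pass, defective_and_fail)
```

Note that the four counts sum to 1000, and that 410 components in total would fail the test — a figure that becomes important once we learn the test result.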

When you perform the quick test the component fails. How should you revise your prior probability in the light of this result? Consider Figure 8.1 again. Imagine each of the 1000 components we start off with traveling through one of the four routes of the tree. Seven hundred of them will follow the ‘OK’ route. When tested, 20% of these components
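The revision that the tree argument performs can also be sketched directly with Bayes’ theorem, using only the figures given above (this check is not part of the original text):

```python
# Bayes' theorem applied to the 'quick and dirty' test result,
# using only the probabilities stated in the text.

p_ok = 0.7                    # prior: component is 'OK'
p_defective = 0.3             # prior: component is defective
p_fail_given_ok = 0.2         # test wrongly fails an OK component
p_fail_given_defective = 0.9  # test correctly fails a defective one

# Total probability of a 'fail' result, summed over both routes
p_fail = p_ok * p_fail_given_ok + p_defective * p_fail_given_defective

# Posterior probability the component is 'OK', given that it failed
p_ok_given_fail = (p_ok * p_fail_given_ok) / p_fail

print(round(p_fail, 2))           # 0.41
print(round(p_ok_given_fail, 3))  # 0.341
```

In frequency terms this is 140 ‘OK’ failures out of the 410 components that fail overall, so the prior probability of 0.7 is revised down to roughly 0.34 once the test result is known.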
