
to make the correction. In short, it would be very unwise to dismiss the possibility of learning from data something new in a totally unanticipated direction, but one must discriminate the contexts in order to gain guidance for what further analysis, if any, might be required.

5. Concluding remarks

We have argued that error probabilities in frequentist tests may be used to evaluate the reliability or capacity with which the test discriminates whether or not the actual process giving rise to the data is in accordance with that described in H0. Knowledge of this probative capacity allows determination of whether there is strong evidence against H0 based on the frequentist principle we set out, FEV. What makes this kind of hypothetical reasoning relevant to the case at hand is not the long-run low error rates associated with using the tool (or test) in this manner; it is rather what those error rates reveal about the data-generating source or phenomenon. We have not attempted to address the relation between the frequentist and Bayesian analyses of what may appear to be very similar issues. A fundamental tenet of the conception of inductive learning most at home with the frequentist philosophy is that inductive inference requires building up incisive arguments and inferences by putting together several different piecemeal results; we have set out considerations to guide these pieces. Although the complexity of the issues makes it more difficult to set out neatly, as one could, for example, by imagining that a single algorithm encompasses the whole of inductive inference, the payoff is an account that approaches the kind of arguments that scientists build up in order to obtain reliable knowledge and understanding of a field.
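As a purely illustrative numerical sketch of the preceding point (an editorial addition, not part of the original text), consider a one-sided Normal test of H0: mu = 0 against mu > 0 with known sigma = 1. The level alpha = 0.05, the discrepancy mu_alt = 0.5, and the sample sizes below are arbitrary assumed choices; the point is only that the two error probabilities, the type I error rate and the power against a specified discrepancy, quantify the test's capacity to discriminate H0 from a given departure.

    # Illustrative sketch: error probabilities of an assumed one-sided z-test
    # of H0: mu = 0 vs mu > 0 with known sigma, as a measure of the test's
    # capacity to discriminate H0 from a discrepancy of size mu_alt.
    from scipy.stats import norm

    def error_probabilities(n, alpha=0.05, mu_alt=0.5, sigma=1.0):
        # Critical value c with P(Z >= c | H0) = alpha: the type I error rate.
        z_crit = norm.ppf(1 - alpha)
        # Power: probability the test rejects when the true mean is mu_alt,
        # i.e. the probability of detecting a discrepancy of that size.
        power = 1 - norm.cdf(z_crit - mu_alt * n ** 0.5 / sigma)
        return alpha, power

    for n in (10, 25, 100):
        a, p = error_probabilities(n)
        print(f"n = {n:3d}: type I error = {a:.2f}, power at mu = 0.5: {p:.2f}")

On these assumed numbers, with n = 100 the test would almost certainly detect a discrepancy of 0.5, so a non-rejection would, in the spirit of FEV, indicate that any discrepancy present is smaller than that; with n = 10 the same non-rejection would license no such conclusion. This is the sense in which the error rates are informative about the data-generating source rather than merely about long-run performance.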

