Research Methodology - Dr. Krishan K. Pandey
Sampling Fundamentals 163

(ii) Square each D and then obtain the sum of such squares, i.e., ΣD².
(iii) Find the A-statistic as under:

A = ΣD² / (ΣD)²

(iv) Read the table of A-statistic for (n – 1) degrees of freedom at a given level of significance (using one-tailed or two-tailed values depending upon Ha) to find the critical value of A.
(v) Finally, draw the inference as under: when the calculated value of A is equal to or less than the table value, reject H0 (i.e., accept Ha); but when the computed A is greater than its table value, accept H0.

The practical application of the A-statistic in the one-sample case can be seen from Illustration No. 5 of Chapter IX of this book.

CONCEPT OF STANDARD ERROR

The standard deviation of the sampling distribution of a statistic is known as its standard error (S.E.) and is considered the key to sampling theory. The utility of the concept of standard error in statistical induction arises on account of the following reasons:

1. The standard error helps in testing whether the difference between observed and expected frequencies could arise due to chance. The criterion usually adopted is that if the difference is less than 3 times the S.E., the difference is supposed to exist as a matter of chance, whereas if the difference is equal to or more than 3 times the S.E., chance fails to account for it and we conclude that the difference is significant. This criterion is based on the fact that within X̄ ± 3(S.E.) the normal curve covers an area of 99.73 per cent. Sometimes the criterion of 2 S.E. is used in place of 3 S.E. Thus the standard error is an important measure in significance tests and in examining hypotheses. If the estimated parameter differs from the calculated statistic by more than 1.96 times the S.E., the difference is taken as significant at the 5 per cent level of significance. This, in other words, means that the difference lies outside the limits, i.e., in the 5 per cent area (2.5 per cent on each side) outside the 95 per cent area of the sampling distribution. Hence we can say with 95 per cent confidence that the said difference is not due to fluctuations of sampling. In such a situation our hypothesis that there is no difference is rejected at the 5 per cent level of significance. But if the difference is less than 1.96 times the S.E., it is considered not significant at the 5 per cent level, and we can say with 95 per cent confidence that it is because of fluctuations of sampling; in such a situation we accept the null hypothesis. 1.96 is the critical value at the 5 per cent level. The product of the critical value at a certain level of significance and the S.E. is often described as the 'Sampling Error' at that particular level of significance. We can test the difference at other levels of significance as well, depending upon our requirement. The following table gives some idea about the criteria at various levels for judging the significance of the difference between observed and expected values:
164 Research Methodology

Table 8.1: Criteria for Judging Significance at Various Important Levels

Significance   Confidence   Critical   Sampling    Confidence   Difference         Difference
level          level        value      error       limits       significant if     insignificant if
5.0%           95.0%        1.96       1.96σ       ±1.96σ       > 1.96σ            < 1.96σ
1.0%           99.0%        2.5758     2.5758σ     ±2.5758σ     > 2.5758σ          < 2.5758σ
0.27%          99.73%       3          3σ          ±3σ          > 3σ               < 3σ
4.55%          95.45%       2          2σ          ±2σ          > 2σ               < 2σ

σ = Standard Error.

2. The standard error gives an idea about the reliability and precision of a sample. The smaller the S.E., the greater the uniformity of the sampling distribution and hence the greater the reliability of the sample. Conversely, the greater the S.E., the greater the difference between observed and expected frequencies, and in such a situation the sample is less reliable. The size of the S.E. depends to a great extent upon the sample size, varying inversely with the square root of the sample size. If double reliability is required, i.e., the S.E. is to be reduced to 1/2 of its existing magnitude, the sample size must be increased four-fold.

3. The standard error enables us to specify the limits within which the parameters of the population are expected to lie with a specified degree of confidence. Such an interval is usually known as a confidence interval. The following table gives the percentage of samples having their mean values within a range of the population mean, μ ± S.E.:

Table 8.2

Range              Per cent of values
μ ± 1 S.E.         68.27%
μ ± 2 S.E.         95.45%
μ ± 3 S.E.         99.73%
μ ± 1.96 S.E.      95.00%
μ ± 2.5758 S.E.    99.00%

Important formulae for computing the standard errors concerning various measures based on samples are as under:

(a) In case of sampling of attributes:
(i) Standard error of number of successes = √(n · p · q)
where n = number of events in each sample,
      p = probability of success in each event,
      q = probability of failure in each event.
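The significance criterion of Table 8.1 and formula (a)(i) can be combined in a short numerical sketch. The figures below (100 tosses of a fair coin, 60 heads observed) are illustrative assumptions, not data from the text:

```python
import math

def se_successes(n, p):
    """Standard error of the number of successes: sqrt(n * p * q), with q = 1 - p."""
    q = 1 - p
    return math.sqrt(n * p * q)

def is_significant(observed, expected, se, critical=1.96):
    """Table 8.1 criterion: the difference is significant if it exceeds critical * S.E."""
    return abs(observed - expected) > critical * se

# Assumed example: 100 tosses of a fair coin, 60 heads observed.
n, p = 100, 0.5
se = se_successes(n, p)      # sqrt(100 * 0.5 * 0.5) = 5.0
expected = n * p             # 50 successes expected under the null hypothesis

print(is_significant(60, expected, se))                   # 10 > 1.96 * 5 = 9.8
print(is_significant(60, expected, se, critical=2.5758))  # 10 < 2.5758 * 5 = 12.879
```

Here the observed count differs from the expected 50 by 10, which exceeds 1.96 × 5 = 9.8 but not 2.5758 × 5 ≈ 12.88, so the difference would be judged significant at the 5 per cent level yet not at the 1 per cent level.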