Research Methodology - Dr. Krishan K. Pandey
Sampling Fundamentals 157

certain level of significance is compared with the calculated value of t from the sample data, and if the latter equals or exceeds the former, we infer that the null hypothesis cannot be accepted.*

4. F distribution: If $\sigma_{s_1}^2$ and $\sigma_{s_2}^2$ are the variances of two independent samples of sizes $n_1$ and $n_2$ respectively, taken from two independent normal populations having the same variance, $\sigma_{p_1}^2 = \sigma_{p_2}^2$, then the ratio $F = \sigma_{s_1}^2 / \sigma_{s_2}^2$, where $\sigma_{s_1}^2 = \sum (X_{1i} - \bar{X}_1)^2/(n_1 - 1)$ and $\sigma_{s_2}^2 = \sum (X_{2i} - \bar{X}_2)^2/(n_2 - 1)$, has an F distribution with $(n_1 - 1)$ and $(n_2 - 1)$ degrees of freedom. The F ratio is computed in such a way that the larger variance is always in the numerator. Tables have been prepared for the F distribution that give critical values of F for various degrees of freedom of the larger and the smaller variance. The calculated value of F from the sample data is compared with the corresponding table value of F, and if the former equals or exceeds the latter, we infer that the null hypothesis of the variances being equal cannot be accepted. We shall make use of the F ratio in the context of hypothesis testing and also in the context of the ANOVA technique.

5. Chi-square ($\chi^2$) distribution: The chi-square distribution is encountered when we deal with collections of values that involve adding up squares. Variances of samples require us to add a collection of squared quantities and thus have distributions related to the chi-square distribution. If we take each one of a collection of sample variances, divide it by the known population variance and multiply the quotient by $(n - 1)$, where $n$ is the number of items in the sample, we obtain a chi-square distribution. Thus, $(\sigma_s^2/\sigma_p^2)(n - 1)$ has the same distribution as chi-square with $(n - 1)$ degrees of freedom. The chi-square distribution is not symmetrical and all its values are positive.
One must know the degrees of freedom for using the chi-square distribution. This distribution may also be used for judging the significance of the difference between observed and expected frequencies, and as a test of goodness of fit. The generalised shape of the $\chi^2$ distribution depends upon the degrees of freedom, and the $\chi^2$ value is worked out as under:

$$\chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}$$

Tables are available that give the value of $\chi^2$ for given degrees of freedom; the calculated value of $\chi^2$ for the relevant degrees of freedom is compared with the table value at the desired level of significance for testing hypotheses. We will take this up in detail in the chapter 'Chi-square Test'.

* This aspect has been dealt with in detail in the context of testing of hypotheses later in this book.

CENTRAL LIMIT THEOREM

When sampling is from a normal population, the means of samples drawn from such a population are themselves normally distributed. But when sampling is not from a normal population, the size of the
sample plays a critical role. When n is small, the shape of the distribution will depend largely on the shape of the parent population, but as n gets large (n > 30), the shape of the sampling distribution will become more and more like a normal distribution, irrespective of the shape of the parent population. The theorem which explains this relationship between the shape of the population distribution and the sampling distribution of the mean is known as the central limit theorem. This theorem is by far the most important theorem in statistical inference. It assures us that the sampling distribution of the mean approaches the normal distribution as the sample size increases. In formal terms, we may say that the central limit theorem states that "the distribution of means of random samples taken from a population having mean $\mu$ and finite variance $\sigma^2$ approaches the normal distribution with mean $\mu$ and variance $\sigma^2/n$ as n goes to infinity."1

"The significance of the central limit theorem lies in the fact that it permits us to use sample statistics to make inferences about population parameters without knowing anything about the shape of the frequency distribution of that population other than what we can get from the sample."2

SAMPLING THEORY

Sampling theory is a study of the relationships existing between a population and the samples drawn from it. Sampling theory is applicable only to random samples. For this purpose the population, or universe, may be defined as an aggregate of items possessing a common trait or traits. In other words, a universe is the complete group of items about which knowledge is sought. The universe may be finite or infinite. A finite universe is one which has a definite and certain number of items; when the number of items is uncertain and infinite, the universe is said to be an infinite universe. Similarly, the universe may be hypothetical or existent.
In the former case the universe does not in fact exist and we can only imagine the items constituting it. Tossing a coin or throwing a die are examples of a hypothetical universe. An existent universe is a universe of concrete objects, i.e., a universe where the items constituting it really exist. On the other hand, the term sample refers to that part of the universe which is selected for the purpose of investigation. The theory of sampling studies the relationships that exist between the universe and the sample or samples drawn from it.

The main problem of sampling theory is the problem of the relationship between a parameter and a statistic. The theory of sampling is concerned with estimating the properties of the population from those of the sample, and also with gauging the precision of the estimate. This movement from the particular (sample) towards the general (universe) is what is known as statistical induction or statistical inference. In clearer terms, "from the sample we attempt to draw inference concerning the universe. In order to be able to follow this inductive method, we first follow a deductive argument which is that we imagine a population or universe (finite or infinite) and investigate the behaviour of the samples drawn from this universe applying the laws of probability."3 The methodology dealing with all this is known as sampling theory.

Sampling theory is designed to attain one or more of the following objectives:

1 Donald L. Harnett and James L. Murphy, Introductory Statistical Analysis, p. 223.
2 Richard I. Levin, Statistics for Management, p. 199.
3 J.C. Chaturvedi, Mathematical Statistics, p. 136.
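As a numerical aside, the three quantities discussed in this section, namely the F ratio of two sample variances, the chi-square statistic for observed versus expected frequencies, and the behaviour described by the central limit theorem, can be sketched in a few lines of Python. This is only an illustrative sketch: the function names and all sample figures below are made up for the example, not taken from the text.

```python
# Illustrative sketch (standard library only) of three ideas from this section.
# All data below are made-up example numbers, not taken from the text.
import random
import statistics

def f_ratio(sample1, sample2):
    """F ratio of two sample variances, larger variance in the numerator."""
    v1 = statistics.variance(sample1)   # sum((x - mean)^2) / (n - 1)
    v2 = statistics.variance(sample2)
    return max(v1, v2) / min(v1, v2)

def chi_square(observed, expected):
    """chi^2 = sum over classes of (O_i - E_i)^2 / E_i."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(f_ratio([12, 15, 11, 14, 18], [10, 30, 20, 40, 25]))   # >= 1 by construction
print(chi_square([10, 20, 30], [20, 20, 20]))                # (100 + 0 + 100)/20 = 10.0

# Central limit theorem: means of samples of size n = 30 drawn from a
# decidedly non-normal (uniform) population are themselves approximately
# normal, centred on the population mean 0.5 with variance close to
# sigma^2 / n = (1/12) / 30.
random.seed(0)
n = 30
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(2000)]
print(round(statistics.fmean(means), 2))       # close to 0.5
print(round(statistics.variance(means), 4))    # close to (1/12)/30, about 0.0028
```

Note that `statistics.variance` divides by (n - 1), which matches the sample-variance definitions used in the F-distribution discussion above.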