Chapter 2. Minimum variance unbiased estimation


1. Minimum variance criterion

An unbiased estimator is defined by E(θ̂) = θ, with the important requirement that this holds for all possible values of the unknown parameter θ. Within this class of estimators, we look for the one with the minimum variance.

In searching for optimal estimators we need to adopt some optimality criterion. A natural one is the mean square error (MSE),

MSE(θ̂) = E[(θ̂ − θ)²],

which measures the average squared deviation of the estimator from the true value. Adoption of this natural criterion, however, leads to unrealizable estimators, ones that cannot be written solely as a function of the data. To understand the problem which arises, first rewrite the MSE as

MSE(θ̂) = var(θ̂) + [E(θ̂) − θ]² = var(θ̂) + b(θ)²,

where b(θ) = E(θ̂) − θ is the bias. This shows that the MSE is composed of errors due to the variance of the estimator as well as the bias. Generally, any criterion which depends on the bias will lead to an unrealizable estimator, although on occasion realizable minimum MSE estimators can be found.

From a practical viewpoint the minimum MSE estimator therefore needs to be abandoned. An alternative approach is to constrain the bias to be zero and find the estimator which minimizes the variance. Such an estimator is called the minimum variance unbiased (MVU) estimator.

Note: the MSE of an unbiased estimator is just its variance.

Minimizing the variance of an unbiased estimator also has the effect of concentrating the PDF (probability density function) of the estimation error, θ̂ − θ, about zero. The estimation error is therefore less likely to be large.
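To make the bias-variance decomposition of the MSE concrete, the following Monte Carlo sketch (an illustration added here, not part of the original chapter) estimates a DC level A from N noisy samples; the values of A, the noise standard deviation, N, and the 0.8 scale factor of the deliberately biased estimator are arbitrary assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example values: true DC level A observed in white Gaussian noise.
A, sigma, N, trials = 1.0, 1.0, 10, 200_000

# Each row is one realization of N data samples.
x = A + sigma * rng.standard_normal((trials, N))

sample_mean = x.mean(axis=1)       # unbiased estimator of A
scaled_mean = 0.8 * sample_mean    # deliberately biased estimator with smaller variance

for name, est in [("sample mean", sample_mean), ("0.8 * sample mean", scaled_mean)]:
    bias = est.mean() - A
    var = est.var()
    mse = np.mean((est - A) ** 2)
    # Monte Carlo check of the decomposition MSE = var + bias^2.
    print(f"{name:18s} bias={bias:+.4f}  var={var:.4f}  mse={mse:.4f}  var+bias^2={var + bias**2:.4f}")
```

The scaled estimator has smaller variance but nonzero bias; the scale factor that would actually minimize the MSE turns out to depend on the unknown A itself, which is precisely the "unrealizable estimator" difficulty described above.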


3. Finding the minimum variance unbiased estimator

Even if an MVU estimator exists, we may not be able to find it. There is no known "turn-the-crank" procedure which will always produce the estimator. There are, instead, a variety of techniques that can sometimes be applied to find it:

1. Compute the Cramer-Rao Lower Bound (CRLB) and check whether the condition for equality is satisfied.
2. Apply the Rao-Blackwell-Lehmann-Scheffe (RBLS) theorem.
3. Restrict the class of estimators to be not only unbiased but also linear, and find the minimum variance estimator within this restricted class.

Approaches 1 and 2 may produce the MVU estimator, while approach 3 will yield it only if the MVU estimator is linear in the data.

The CRLB states that the variance of any unbiased estimator must be greater than or equal to a given value, as shown in Figure 3.

Figure 3. Cramer-Rao lower bound on the variance of unbiased estimators.

If an estimator exists whose variance equals the CRLB for every value of θ, then it must be the MVU estimator, and in this case the theory of the CRLB immediately yields the estimator. It may happen that no estimator exists whose variance equals the bound, yet an MVU estimator may still exist, as for instance in the case of θ1 in Figure 3. Then we must resort to the Rao-Blackwell-Lehmann-Scheffe theorem. This procedure first finds a sufficient statistic, one which uses all the data efficiently, and then finds a function of the sufficient statistic which is an unbiased estimator of θ. With a slight restriction on the PDF of the data, this procedure is then guaranteed to produce the MVU estimator. The third approach requires the estimator to be linear, a sometimes severe restriction, and chooses the best linear estimator. Of course, only for particular data sets can this approach produce the MVU estimator.
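As a small illustration of approach 1 (added here, not in the original), consider the standard problem of estimating a DC level A in white Gaussian noise of known variance σ², for which the CRLB is σ²/N and the sample mean is known to attain it; the numerical values below are assumptions made only for the simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example values: DC level A in white Gaussian noise of known variance sigma^2.
A, sigma, N, trials = 1.0, 1.0, 10, 200_000

crlb = sigma**2 / N   # CRLB for this model: any unbiased estimator of A has variance >= sigma^2 / N

x = A + sigma * rng.standard_normal((trials, N))
mean_est = x.mean(axis=1)    # sample mean: unbiased and attains the CRLB here, hence MVU
first_sample = x[:, 0]       # a single sample: also unbiased, but far above the bound

print(f"CRLB              : {crlb:.4f}")
print(f"var(sample mean)  : {mean_est.var():.4f}")
print(f"var(first sample) : {first_sample.var():.4f}")
```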


4. Extension to a vector parameter

If θ = [θ1 θ2 ... θp]^T is a vector of unknown parameters, then an estimator θ̂ = [θ̂1 θ̂2 ... θ̂p]^T is said to be unbiased if

E(θ̂i) = θi for ai < θi < bi,   (*)

for i = 1, 2, ..., p. By defining

E(θ̂) = [E(θ̂1) E(θ̂2) ... E(θ̂p)]^T,

we can equivalently define an unbiased estimator to have the property E(θ̂) = θ for every θ contained within the space defined in (*). An MVU estimator has the additional property that each var(θ̂i), for i = 1, 2, ..., p, is minimum among all unbiased estimators.
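The following sketch (an added illustration, with assumed parameter values) checks component-wise unbiasedness for a two-element parameter vector θ = [A, σ²] of Gaussian data, using the sample mean and the 1/(N−1) sample variance as the component estimators.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed example values: vector parameter theta = [A, sigma^2] of Gaussian data.
A, sigma2, N, trials = 1.0, 2.0, 10, 200_000
theta = np.array([A, sigma2])

x = A + np.sqrt(sigma2) * rng.standard_normal((trials, N))

# Component-wise estimators: sample mean and the unbiased (1/(N-1)) sample variance.
theta_hat = np.column_stack([x.mean(axis=1), x.var(axis=1, ddof=1)])

# Unbiasedness must hold for each component: E(theta_hat_i) = theta_i.
print("E(theta_hat)    :", theta_hat.mean(axis=0))
print("theta           :", theta)
print("var(theta_hat_i):", theta_hat.var(axis=0))
```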


5. Conclusions

In statistics, the mean square error (MSE) of an estimator is one of many ways to quantify the difference between an estimator and the true value of the quantity being estimated. The MSE is a risk function, corresponding to the expected value of the squared error (quadratic) loss: it measures the average of the square of the error, the amount by which the estimator differs from the quantity to be estimated. Note that the MSE is not equivalent to the expected value of the absolute error.

Since the MSE is an expectation, it is a scalar, not a random variable. It may be a function of the unknown parameter θ, but it does not depend on any random quantities. However, when the MSE is computed for a particular estimator of θ whose true value is not known, it will be subject to estimation error.

The MVU estimator is an unbiased estimator that has lower variance than any other unbiased estimator for all possible values of the parameter. An efficient estimator need not exist, but if it does, it is the MVU estimator. Since the mean square error of an estimator is

MSE(θ̂) = var(θ̂) + bias(θ̂)²,

the MVU estimator minimizes the MSE among unbiased estimators. For an unbiased estimator, the MSE is simply the variance. Like the variance, the MSE has the same unit of measurement as the square of the quantity being estimated. In some cases biased estimators have lower MSE because they have a smaller variance than any unbiased estimator.

Determining the MVU estimator, if one exists, for a particular problem is important for practical statistics, since less-than-optimal procedures would naturally be avoided, other things being equal. This has led to substantial development of statistical theory related to the problem of optimal estimation.
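To illustrate the last remark about biased estimators, the sketch below (an addition, with assumed values) compares the unbiased sample variance, which divides the sum of squared deviations by N−1, against the deliberately biased version that divides by N+1; for Gaussian data the latter is known to achieve a smaller MSE despite its bias.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed example values: estimating the variance sigma^2 of Gaussian data.
sigma2, N, trials = 2.0, 10, 200_000

x = np.sqrt(sigma2) * rng.standard_normal((trials, N))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)   # sum of squared deviations

for name, est in [("divide by N-1 (unbiased)", ss / (N - 1)),
                  ("divide by N+1 (biased)  ", ss / (N + 1))]:
    bias = est.mean() - sigma2
    mse = np.mean((est - sigma2) ** 2)
    print(f"{name}: bias={bias:+.4f}  mse={mse:.4f}")
```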


