Notes on Poisson Regression and Some Extensions

Moments.

$$E(Y) = \mu \quad \text{(the mean rate)}$$

$$\operatorname{var}(Y) = \mu \quad \text{(the variance-mean identity)}$$

How well does the mean/variance equivalence hold in the example data above? Obtain the observed proportions of counts and apply the usual formula for the mean and variance of a random variable. We find, as before, $E(Y) = 0.899$ and $\operatorname{var}(Y) = 0.860$, which are close enough for most government work.

Models and Estimation. We usually let $\mu$ depend on data through a loglinear model, so that

$$\log(\mu_i) = x_i'\beta.$$

The log likelihood is written as

$$\log L = \sum_{i=1}^{n} \left\{ y_i\, x_i'\beta - \exp(x_i'\beta) - \log \Gamma(y_i + 1) \right\}.$$

We can ignore the last term in this expression, as it does not involve the parameters. In other words, the $\log L$ evaluated without this term differs from the $\log L$ above by an additive constant that is independent of the model parameters.

Exercise 1: Suppose we are interested in finding the MLE of $\mu$ (i.e., without covariates). Show that the log likelihood is simply

$$\log L = \sum_{i=1}^{n} y_i \log \mu \;-\; n\mu,$$

and that the MLE is $\hat\mu = \sum y_i / n$, by solving the first-order conditions for an MLE.

Exercise 2: Use the second-order conditions for a maximum to show that the variance of $\hat\mu$ is $\hat\mu^2 / \sum y_i$.

Exercise 3: Develop a least-squares estimator for $\mu$ by (1) devising an appropriate loss function (i.e., a sum of squared deviations around the expected value), and (2) solving the normal equation for $\mu$.

Numerical Methods. (Please skip this if you are not interested in the gory details underlying the black box of most statistical programs.) The score vector and Hessian matrix are

$$u = X'(y - m)$$

and

$$H = -X'MX,$$

where $m$ is the vector with elements $m_i = \exp(x_i'\hat\beta)$ and $M = \operatorname{diag}(m)$. We can apply Newton-Raphson or IRLS to this problem. The NR iteration steps are

$$\hat\beta^{(t)} = \hat\beta^{(t-1)} - \left[ H^{(t-1)} \right]^{-1} u^{(t-1)}.$$
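Note that because $M = \operatorname{diag}(m)$ depends on $\beta$ but not on $y$, the Hessian here equals its expected value, so Newton-Raphson and Fisher scoring (and hence IRLS) take identical steps for this model.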
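To make the iteration concrete, here is a minimal NumPy sketch of the fit. The function names and defaults (`poisson_newton_raphson`, the tolerance, the iteration cap) are illustrative choices, not part of the original notes; each step solves $(X'MX)\,\delta = X'(y - m)$, which is the update above written with $-H = X'MX$.

```python
import numpy as np

def poisson_loglik(beta, X, y):
    """Log likelihood up to the additive constant -sum(log Gamma(y_i + 1))."""
    eta = X @ beta
    return np.sum(y * eta - np.exp(eta))

def poisson_newton_raphson(X, y, tol=1e-8, max_iter=25):
    """Fit log(mu_i) = x_i' beta by Newton-Raphson.

    Score:   u = X'(y - m)
    Hessian: H = -X'MX, with m_i = exp(x_i' beta) and M = diag(m),
    so each step solves (X'MX) delta = X'(y - m) and sets beta += delta.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        m = np.exp(X @ beta)             # fitted means m_i
        u = X.T @ (y - m)                # score vector
        info = X.T @ (m[:, None] * X)    # X'MX = -H
        delta = np.linalg.solve(info, u)
        beta += delta
        if np.max(np.abs(delta)) < tol:  # stop when the step is negligible
            break
    return beta
```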
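A quick check on simulated data (the design matrix and coefficients below are invented purely for illustration):

```python
rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.poisson(np.exp(X @ np.array([0.5, -0.3])))

beta_hat = poisson_newton_raphson(X, y)
print(beta_hat)                         # close to [0.5, -0.3]
print(poisson_loglik(beta_hat, X, y))   # maximized log L, up to a constant

# Intercept-only fit recovers the closed form of Exercise 1:
# exp(beta_hat_0) equals the sample mean ybar.
b0 = poisson_newton_raphson(np.ones((n, 1)), y)
print(np.exp(b0), y.mean())
```

The intercept-only call also verifies Exercise 1 numerically: with no covariates, the fitted mean $\exp(\hat\beta_0)$ agrees with $\bar y = \sum y_i / n$.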
