The Northern Yellowstone Elk: Density Dependence and Climatic ...

Taper and Gogan, J. Wildl. Manage. 66(1):2002

(Coughenour and Singer 1996, Lemke et al. 1998). These authors estimated that between 1980 and 1985 (between our early and late periods), the extent of the winter range increased 41%. In analyses that relate to population densities, we adjust the estimated population numbers during the late period by dividing by 1.41. This expresses all estimates as densities in units of animals per pre-1980 winter range.

Testing for Density Dependence

To legitimately test for density dependence in time series of population estimates, one must have a true a priori hypothesis, that is, null and alternative models that are specified before inspecting the data. Following Dennis and Taper (1994), we used a null model of exponential growth and an alternative model of Ricker density-dependent growth. We used a parametric bootstrap likelihood ratio (PBLR) test (Dennis and Taper 1994). This procedure regresses population growth rate as a response variable against a predictor variable of population size. The distinction between the PBLR test and a standard regression is that the PBLR develops the distribution of the test statistic under the null hypothesis by simulation. Thus, it gives a true size to the test of density dependence. The test also is robust against measurement error (Dennis and Taper 1994).
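To make the PBLR procedure concrete, the sketch below shows how such a test might be coded under the simplest Dennis and Taper (1994) formulation: under the null model the log growth rate ln(N_{t+1}/N_t) has a constant mean a, and under the Ricker alternative it equals a + b*N_t, with Gaussian process error in both cases. The function names, the least-squares fitting shortcut, and the bootstrap settings are illustrative assumptions, not the authors' code.

```python
# A minimal sketch of a parametric bootstrap likelihood ratio (PBLR) test
# for density dependence, in the spirit of Dennis and Taper (1994).
# With X_t = ln(N_t):
#   null (exponential growth):   X_{t+1} = X_t + a + sigma*Z_t
#   alternative (Ricker growth): X_{t+1} = X_t + a + b*N_t + sigma*Z_t
# All names and settings here are illustrative, not from the paper.

import numpy as np


def fit_models(counts):
    """Fit the null (b = 0) and Ricker (b free) models to a count series.

    The growth rate r_t = ln(N_{t+1}/N_t) is regressed on N_t for the
    alternative model. Returns (a0, s0, a1, b1, s1), the ML estimates.
    """
    n = np.asarray(counts, dtype=float)
    r = np.diff(np.log(n))          # observed log growth rates
    x = n[:-1]                      # predictor: population size

    a0 = r.mean()                   # null: constant mean growth rate
    s0 = r.std(ddof=0)

    b1, a1 = np.polyfit(x, r, 1)    # Ricker: r_t = a + b*N_t (least squares)
    s1 = (r - (a1 + b1 * x)).std(ddof=0)
    return a0, s0, a1, b1, s1


def log_lik(resid_sd, m):
    """Maximized Gaussian log-likelihood of m residuals with ML variance."""
    return -0.5 * m * (np.log(2 * np.pi * resid_sd ** 2) + 1)


def pblr_test(counts, n_boot=2000, seed=0):
    """Return the observed LR statistic and its parametric bootstrap P-value."""
    rng = np.random.default_rng(seed)
    n = np.asarray(counts, dtype=float)
    m = len(n) - 1

    a0, s0, a1, b1, s1 = fit_models(n)
    lr_obs = 2 * (log_lik(s1, m) - log_lik(s0, m))

    # Simulate series under the fitted NULL model, refit both models,
    # and build the null distribution of the LR statistic.
    lr_boot = np.empty(n_boot)
    for i in range(n_boot):
        x = np.log(n[0]) + np.cumsum(a0 + s0 * rng.standard_normal(m))
        sim = np.concatenate(([n[0]], np.exp(x)))
        _, ss0, _, _, ss1 = fit_models(sim)
        lr_boot[i] = 2 * (log_lik(ss1, m) - log_lik(ss0, m))

    p_value = np.mean(lr_boot >= lr_obs)
    return lr_obs, p_value
```

In use, the post-1980 counts would first be divided by 1.41 (as described above) so that the predictor is expressed as a density per pre-1980 winter range; a small bootstrap P-value would then indicate density dependence.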
Identifying the Form of Population Dynamics

Identifying Population Dynamic Process with Schwarz's Information Criterion.

Because legitimate hypothesis testing requires a single pair of null and alternative hypotheses, it often is quite constraining. Model mis-specification is a major source of error in data analysis (Chatfield 1995, Buckland et al. 1997, Burnham and Anderson 1998). Model identification is an approach that minimizes the risk of model mis-specification by comparing the goodness-of-fit of a whole suite of models. A number of possible criteria for comparing models exist. We prefer the information criterion methods because of their strong statistical foundations (Sakamoto et al. 1986, Burnham and Anderson 1998) and because the validity of their application to the identification of population dynamic models in time-series analysis has been investigated through simulation (Hooten 1995).

Although model identification formally selects a single model from a suite of models, as opposed to testing a single null against a single alternative, the process can still be used to compare classes of models. All the population dynamic models we investigated were classified as density-independent or density-dependent models. Thus, if the model identified as the best model is a member of the set of density-independent models, then the population dynamics are considered to be density-independent, while, if the identified model is a member of the density-dependent set, the population dynamics are classified as density-dependent (Hooten 1995, Zeng 1996, Zeng et al. 1998). Because of the multiple models involved and because explicit P-values are not calculated, such a classification by model identification does not per se constitute a hypothesis test. Nonetheless, if the compared models have been included on an a priori basis, then the comparison retains much of the epistemological standing of a hypothesis test (Burnham and Anderson 1998).

There are a variety of possible information criteria to choose from (Hooten 1995). All information criteria are constructed as -2*(log-likelihood) + (an adjustment). The adjustment is a function of the number of parameters, or of the number of parameters and the number of observations. The adjustment for Akaike's Information Criterion (AIC; Akaike 1973) is 2*p (p = the number of parameters, including the error variance), while the adjustment for Schwarz's Information Criterion (SIC; Schwarz 1978) contains a sample-size correction, so that the adjustment is the number of parameters times the natural logarithm of the number of observations. The model with the lowest value for the information criterion used is selected as the best-supported model. However, this selection should not be considered absolute. If the difference in information criterion (IC) values of 2 models is slight, then it is best to consider the models more or less equally supported and retain both for consideration. Differences in information criterion values (ΔIC) of >2 are generally considered to indicate that models are statistically distinguishable (Sakamoto et al. 1986).

It has been shown both theoretically (Stone 1979) and through simulation (Hooten 1995) that the SIC estimates the true order of the underlying model more accurately than the AIC. Furthermore, the SIC tends to choose models of lower order than the AIC (when the selections differ). In contrasting between density-independent and density-dependent classes of models, this feature will make the identification of density dependence conservative, as the density-dependent models contain more parameters than the density-independent models (Zeng et al. 1998).
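As an illustration of how the two criteria and the ΔIC > 2 rule of thumb might be applied, the short sketch below scores a hypothetical two-model suite (density-independent exponential versus density-dependent Ricker). The maximized log-likelihoods, parameter counts, and sample size are invented for the example; only the AIC and SIC formulas follow the definitions given above.

```python
import numpy as np


def aic(log_lik, p):
    """Akaike's Information Criterion: -2*(log-likelihood) + 2*p."""
    return -2.0 * log_lik + 2.0 * p


def sic(log_lik, p, n_obs):
    """Schwarz's Information Criterion: -2*(log-likelihood) + p*ln(n_obs)."""
    return -2.0 * log_lik + p * np.log(n_obs)


# Hypothetical maximized log-likelihoods and parameter counts for a
# two-model suite fitted to n_obs population transitions (invented values).
n_obs = 30
fits = {
    "exponential (density-independent)": (-14.0, 2),  # a, sigma^2
    "Ricker (density-dependent)":        (-9.8, 3),   # a, b, sigma^2
}

scores = {name: sic(ll, p, n_obs) for name, (ll, p) in fits.items()}
best = min(scores.values())

for name, (ll, p) in fits.items():
    print(f"{name:35s} AIC={aic(ll, p):6.2f}  "
          f"SIC={scores[name]:6.2f}  delta-SIC={scores[name] - best:5.2f}")
```

With these invented numbers the Ricker model is favored by about 5 SIC units, so the dynamics would be classified as density-dependent; had the difference been smaller than about 2, both models would be retained as more or less equally supported.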
