
Using the SAS® System to Fit the Burr XII Distribution to Lifetime Data

A J Watkins, University of Wales Swansea

Introduction

This paper is concerned with various practical and theoretical aspects of using the Burr XII distribution to model lifetime data, and thus indicates some possible additional features and refinements to the SAS/STAT® procedure LIFEREG. Despite its lengthy pedigree, having been introduced by Burr (1942), the Burr XII distribution has remained rather neglected as an option in the analysis of lifetime data. However, there are recent indications that this distribution possesses sufficient flexibility to make it a possible model for various types of data; for instance, it has been used to model business failure data (Lomax, 1954), the efficacy of analgesics in clinical trials (Wingo, 1983), and the times to failure of electronic components (Wang, Keats and Zimmer, 1996).

The basic two-parameter Burr XII distribution has parameters c, k, with cumulative distribution function

$$F(x; c, k) = 1 - \left(1 + x^c\right)^{-k}$$

for x > 0, and probability density function

$$f(x; c, k) = c\,k\,x^{c-1}\left(1 + x^c\right)^{-(k+1)}$$

for x > 0; both c and k are, in standard statistical terminology, "shape" parameters of the distribution, and Tadikamalla (1980) notes that there are several ways to introduce a scale parameter into f, F.
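Since F is available in closed form, its inverse follows directly; the quantile function below is an addition here (it is not displayed in the paper, but is immediate from the form of F above), and is useful later for simulating Burr XII data:

$$F^{-1}(p; c, k) = \left\{(1-p)^{-1/k} - 1\right\}^{1/c}, \qquad 0 < p < 1.$$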

The structure of our discussion is as follows. We first focus on the practical aspects of fitting this two-parameter distribution to complete samples; we present a profile log-likelihood which reduces the number of parameters under active consideration to one. This profile log-likelihood has to be maximised numerically, and, in the absence of any code within existing SAS procedures, we then give the derivatives required by Newton's iterative method, and some SAS/IML® code to implement this procedure. We also illustrate the use of this code on a published data-set.

The use of likelihood-based techniques then raises the possibility of using asymptotic results to assess the precision in parameter estimates; in particular, we can use the inverse of the Fisher information matrix as an approximate variance-covariance matrix of the maximum likelihood estimator. We thus return to the full log-likelihood, and consider the expectations appearing in elements of the expected Fisher information matrix. The evaluation of these expectations may be conveniently undertaken by exploiting various recurrence relations; we outline these, and give suitable formulae for both the expectations and the elements of the Fisher information matrix. We then present a summary of a simulation experiment to investigate the agreement between these asymptotically valid formulae and the results obtained in practice with samples of small to moderate size.

Full and Profile Log-Likelihoods

We assume that the data for analysis is X_1, X_2, ..., X_n, comprising the times to failure of n items, so that the log-likelihood is

$$l = \sum_{i=1}^{n} \log f(X_i; c, k) = n\log(ck) + (c-1)\sum_{i=1}^{n}\log X_i - (k+1)\sum_{i=1}^{n}\log\left(1+X_i^c\right);$$

here, as throughout the paper, we use natural (or base e) logarithms. The partial derivative of l with respect to c requires

$$\frac{d}{dc}\left[\sum_{i=1}^{n}\log\left(1+X_i^c\right)\right] = \sum_{i=1}^{n}\frac{X_i^c\log X_i}{1+X_i^c},$$

so that we have

$$\frac{\partial l}{\partial c} = nc^{-1} + \sum_{i=1}^{n}\log X_i - (k+1)\sum_{i=1}^{n}\frac{X_i^c\log X_i}{1+X_i^c},$$

together with

$$\frac{\partial l}{\partial k} = nk^{-1} - \sum_{i=1}^{n}\log\left(1+X_i^c\right).$$

Solving $\partial l/\partial k = 0$ for k in terms of c yields

$$\tilde{k} = \frac{n}{\sum_{i=1}^{n}\log\left(1+X_i^c\right)},$$

so that we may derive a profile log-likelihood l* by substituting $\tilde{k}$ in l, and then simplifying (additive terms not involving c are discarded, since they cannot affect the maximisation). We obtain

$$l^* = n\log c + (c-1)\sum_{i=1}^{n}\log X_i - \sum_{i=1}^{n}\log\left(1+X_i^c\right) - n\log\!\left[\sum_{i=1}^{n}\log\left(1+X_i^c\right)\right]$$
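To spell out the substitution (a step the paper compresses), write $S_1 = \sum_{i=1}^{n}\log X_i$ and $S_2 = \sum_{i=1}^{n}\log\left(1+X_i^c\right)$; then

$$l(c,\tilde{k}) = n\log c + n\log\frac{n}{S_2} + (c-1)S_1 - \left(\frac{n}{S_2}+1\right)S_2 = \left[n\log c + (c-1)S_1 - S_2 - n\log S_2\right] + n\log n - n,$$

and the trailing $n\log n - n$ is the discarded constant.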

We can now maximise l* with respect to c. This remains a numerical exercise, and options for finding the maximising value of c include those based on the first and second derivatives of l*. From the above discussion, we have

$$\frac{dl^*}{dc} = nc^{-1} + \sum_{i=1}^{n}\log X_i - \sum_{i=1}^{n}\frac{X_i^c\log X_i}{1+X_i^c} - n\left[\frac{\displaystyle\sum_{i=1}^{n}\frac{X_i^c\log X_i}{1+X_i^c}}{\displaystyle\sum_{i=1}^{n}\log\left(1+X_i^c\right)}\right].$$

We next note that

$$\frac{d}{dc}\left[\sum_{i=1}^{n}\frac{X_i^c\log X_i}{1+X_i^c}\right] = \sum_{i=1}^{n}\frac{X_i^c\left(\log X_i\right)^2}{\left(1+X_i^c\right)^2},$$


so that the second derivative of l* is

$$\frac{d^2l^*}{dc^2} = -nc^{-2} - \sum_{i=1}^{n}\frac{X_i^c\left(\log X_i\right)^2}{\left(1+X_i^c\right)^2} - n\left[\frac{\left\{\sum_{i=1}^{n}\log\left(1+X_i^c\right)\right\}\left\{\sum_{i=1}^{n}\dfrac{X_i^c\left(\log X_i\right)^2}{\left(1+X_i^c\right)^2}\right\} - \left\{\sum_{i=1}^{n}\dfrac{X_i^c\log X_i}{1+X_i^c}\right\}^2}{\left\{\sum_{i=1}^{n}\log\left(1+X_i^c\right)\right\}^2}\right].$$

The following SAS/IML code, loosely based on that given in Example 2 in the available documentation (SAS Institute, 1989), implements the standard Newton iterative method for finding the maximising value of c; the corresponding value of k is then found by calculating $\tilde{k}$ at this maximising value of c.

proc iml;
/* structure based on example 2 in SAS/IML manual pp113-114 */
start newton;
   run fun;
   do iter = 1 to 10 while(max(abs(plc)) > 0.0001);
      c = c - solve(plcc, plc);   /* Newton step: c <- c - plc/plcc */
      run fun;
   end;
finish newton;

/* fun gives profile log-likelihood and first two derivatives */
start fun;
   dc = exp(c*c1);                      /* X##c, since c1 = log(X) */
   s2 = sum(log(dc+1));                 /* sum of log(1 + X##c) */
   s3 = sum(dc#c1/(dc+1));              /* sum of X##c log(X)/(1 + X##c) */
   s4 = sum(dc#c1#c1/((dc+1)#(dc+1)));  /* sum of X##c log(X)##2/(1 + X##c)##2 */
   pl   = n*log(c) + (c-1)*s1 - s2 - n*log(s2);        /* profile log-likelihood l* */
   plc  = n/c + s1 - s3 - n*s3/s2;                     /* first derivative of l* */
   plcc = -n/(c*c) - s4 - n*(s2*s4 - s3*s3)/(s2*s2);   /* second derivative of l* */
finish fun;

/* data from Wingo (1983) included explicitly */
do;
   data = {0.70, 0.84, 0.58, 0.50, 0.55, 0.82, 0.59, 0.71, 0.72, 0.61,
           0.62, 0.49, 0.54, 0.72, 0.36, 0.71, 0.35, 0.64, 0.85, 0.55,
           0.59, 0.29, 0.75, 0.53, 0.46, 0.60, 0.60, 0.36, 0.52, 0.68,
           0.80, 0.55, 0.84, 0.70, 0.34, 0.70, 0.49, 0.56, 0.71, 0.61,
           0.57, 0.73, 0.75, 0.58, 0.44, 0.81, 0.80, 0.87, 0.29, 0.50};
   c = 1.0;                 /* starting value for Newton's method */
   n = nrow(data);
   c1 = log(data);
   s1 = sum(c1);            /* sum of log(X) */
   run newton;
   k = n/s2;                /* k-tilde evaluated at the maximising c */
   print c k plc plcc;
end;

For illustration, the example code contains an explicit listing of data given in Wingo (1983); in practice, the data may be read from an external file; in simulation experiments, the data may be generated by further lines of code (a sketch of the latter is given below). The above output gives the maximising value of c as 5.0006407, with a corresponding value for k of 8.2680792; these values are consistent with those reported both by Wingo (1983) and Wang, Keats and Zimmer (1996).
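To illustrate that last remark, here is a minimal sketch (not part of the original paper) of how such data might be generated within PROC IML by applying the inverse of the cumulative distribution function given earlier to uniform variates; it assumes a SAS release providing the RANDSEED and RANDGEN subroutines, and the parameter values are illustrative only:

proc iml;
/* sketch: simulate n lifetimes from the Burr XII(c, k) distribution by inversion */
call randseed(12345);                 /* fix the seed for reproducibility */
n = 100;  c = 5;  k = 8;              /* assumed parameter values */
u = j(n, 1, .);
call randgen(u, "Uniform");           /* u ~ U(0,1) */
data = ((1-u)##(-1/k) - 1)##(1/c);    /* X = {(1-U)**(-1/k) - 1}**(1/c) */
quit;

The resulting column vector data may then be passed to the fitting code above in place of the Wingo (1983) values.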

The Observed Fisher Information

The first order partial derivatives of the log-likelihood are given above; we now give the second order counterparts. These are the deterministic quantity

$$\frac{\partial^2 l}{\partial k^2} = -\frac{n}{k^2},$$

together with

$$\frac{\partial^2 l}{\partial k\,\partial c} = \frac{\partial^2 l}{\partial c\,\partial k} = -\sum_{i=1}^{n}\frac{X_i^c\log X_i}{1+X_i^c},$$

and

$$\frac{\partial^2 l}{\partial c^2} = -\frac{n}{c^2} - (k+1)\sum_{i=1}^{n}\frac{X_i^c\left(\log X_i\right)^2}{\left(1+X_i^c\right)^2}.$$
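As a hedged sketch of how these quantities might be used in practice (an addition, not from the paper; it continues in the same PROC IML session as the fitting code above, reusing the global names n, c, k, s3 and s4 left in scope by that code), the observed information matrix can be assembled and inverted in a few further statements:

/* observed Fisher information at the fitted (c, k), from the second derivatives above */
lkk = -n/(k*k);                             /* d2l/dk2 */
lkc = -s3;                                  /* d2l/dkdc = -sum X##c log(X)/(1+X##c) */
lcc = -n/(c*c) - (k+1)*s4;                  /* d2l/dc2 */
obsinfo = -((lcc || lkc) // (lkc || lkk));  /* 2 x 2 observed information matrix */
vcov = inv(obsinfo);                        /* approximate variance-covariance matrix */
se = sqrt(vecdiag(vcov));                   /* approximate standard errors for (c, k) */
print vcov se;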

Expectations for Regularity

It is of some interest to establish regularity of the log-likelihood for complete samples from the Burr XII distribution by taking expectations of the first and second order partial derivatives of the full log-likelihood. Reference to these derivatives shows that we need the expectations of

$$\log X, \qquad \log\left(1+X^c\right), \qquad \frac{X^c\log X}{1+X^c}, \qquad \frac{X^c\left(\log X\right)^2}{\left(1+X^c\right)^2}.$$

There are two basic results: the first is that Y = 1 + X^c follows a Pareto distribution with probability density function $ky^{-(k+1)}$ for y > 1, so that log Y has a negative exponential distribution with mean $k^{-1}$. Thus, we immediately have

$$E\left[\log\left(1+X^c\right)\right] = k^{-1}.$$

The second basic result is

$$E\left[X^r\right] = \frac{\Gamma\!\left(\dfrac{r}{c}+1\right)\Gamma\!\left(k-\dfrac{r}{c}\right)}{\Gamma(k)},$$

where Γ is the usual gamma function; a short verification of this result is sketched next.
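As an added check (the paper states the result without proof), substitute u = x^c in the defining integral and use the standard Beta integral:

$$E\left[X^r\right] = \int_0^\infty x^r\,ckx^{c-1}\left(1+x^c\right)^{-(k+1)}dx = k\int_0^\infty u^{r/c}\left(1+u\right)^{-(k+1)}du = k\,B\!\left(\frac{r}{c}+1,\;k-\frac{r}{c}\right),$$

which reduces to the stated ratio of gamma functions on using $\Gamma(k+1) = k\,\Gamma(k)$; the integral converges for $-c < r < ck$.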

Differentiating the expression for $E[X^r]$ with respect to r, we obtain

$$E\left[X^r\log X\right] = \frac{\Gamma^{(1)}\!\left(\dfrac{r}{c}+1\right)\Gamma\!\left(k-\dfrac{r}{c}\right) - \Gamma\!\left(\dfrac{r}{c}+1\right)\Gamma^{(1)}\!\left(k-\dfrac{r}{c}\right)}{c\,\Gamma(k)},$$

where $\Gamma^{(1)}$ denotes the first derivative of the gamma function. Evaluating this expectation at r = 0 gives

$$E\left[\log X\right] = \frac{\Gamma^{(1)}(1)\,\Gamma(k) - \Gamma(1)\,\Gamma^{(1)}(k)}{c\,\Gamma(k)} = -\left[\frac{\gamma + \Psi(k)}{c}\right],$$

where $\Psi = \Gamma^{(1)}/\Gamma$ is the digamma or psi function, and γ is Euler's constant. This gives the second of the required expectations. However, it should also be noted that we can obtain the third expectation by an appropriate manipulation of the last result. Writing expectation as $E_k$ to emphasise the role of the parameter k in the Burr XII distribution, the third expectation is

$$E_k\!\left[\frac{X^c\log X}{1+X^c}\right] = E_k\!\left[\log X\right] - E_k\!\left[\frac{\log X}{X^c+1}\right] = E_k\!\left[\log X\right] - \frac{k}{k+1}\,E_{k+1}\!\left[\log X\right],$$


on exploiting the form of the probability density function of the Burr XII distribution. This gives us an expression for the third expectation in terms of Ψ at k and k+1; further simplification is possible on recalling the recurrence relation

$$\Psi(k+1) = \Psi(k) + k^{-1},$$

and we thus obtain

$$E_k\!\left[\frac{X^c\log X}{1+X^c}\right] = \frac{1-\gamma-\Psi(k)}{(k+1)\,c}.$$

A similar procedure may also be used to obtain the final expectation from a suitable expression for $E\left[\left(\log X\right)^2\right]$. Differentiating $E\left[X^r\log X\right]$ with respect to r leads to an expression for $E\left[X^r\left(\log X\right)^2\right]$, which, evaluated at r = 0, yields $E\left[\left(\log X\right)^2\right]$ as

$$E\left[\left(\log X\right)^2\right] = \frac{\Gamma^{(2)}(1)\,\Gamma(k) - 2\Gamma^{(1)}(1)\,\Gamma^{(1)}(k) + \Gamma(1)\,\Gamma^{(2)}(k)}{c^2\,\Gamma(k)} = \frac{\dfrac{\pi^2}{6} + \gamma^2 + 2\gamma\Psi(k) + \left(\Psi(k)\right)^2 + \Psi^{(1)}(k)}{c^2},$$

where $\Gamma^{(2)}$ is the second derivative of the gamma function, $\Psi^{(1)}$ is the derivative of the digamma function (and is also known as the trigamma function), and $\Gamma^{(2)} = \Gamma \times \left(\Psi^2 + \Psi^{(1)}\right)$.
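Evaluating these relations at 1 (an added check, using the standard values $\Psi(1) = -\gamma$ and $\Psi^{(1)}(1) = \pi^2/6$) recovers the constants appearing in the display above:

$$\Gamma^{(1)}(1) = \Gamma(1)\,\Psi(1) = -\gamma, \qquad \Gamma^{(2)}(1) = \Gamma(1)\left\{\Psi(1)^2 + \Psi^{(1)}(1)\right\} = \gamma^2 + \frac{\pi^2}{6}.$$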


We then write the final expectation as

$$E_k\!\left[\frac{X^c\left(\log X\right)^2}{\left(1+X^c\right)^2}\right] = E_k\!\left[\frac{\left(\log X\right)^2}{1+X^c}\right] - E_k\!\left[\frac{\left(\log X\right)^2}{\left(1+X^c\right)^2}\right],$$

which in turn becomes

$$\frac{k}{k+1}\,E_{k+1}\!\left[\left(\log X\right)^2\right] - \frac{k}{k+2}\,E_{k+2}\!\left[\left(\log X\right)^2\right],$$

again by exploiting the form of the probability density function of the Burr XII distribution, thus yielding an expression for the expectation in terms of Ψ and $\Psi^{(1)}$ at k+1 and k+2; further simplification is again possible, since we now also have

$$\Psi^{(1)}(k+2) = \Psi^{(1)}(k+1) - (k+1)^{-2}.$$

We obtain the required expectation as

$$\frac{k\left[\dfrac{\pi^2}{6} + \gamma^2 - 2\gamma + 2(\gamma-1)\Psi(k+1) + \left\{\Psi(k+1)\right\}^2 + \Psi^{(1)}(k+1)\right]}{(k+1)(k+2)\,c^2}.$$

Regularity and the Expected Fisher Information

We therefore have

$$E\!\left[\frac{\partial l}{\partial k}\right] = \frac{n}{k} - n\,E\!\left[\log\left(1+X^c\right)\right] = \frac{n}{k} - \frac{n}{k} = 0$$

and

$$E\!\left[\frac{\partial l}{\partial c}\right] = \frac{n}{c} + n\,E\!\left[\log X\right] - n(k+1)\,E\!\left[\frac{X^c\log X}{1+X^c}\right] = \frac{n}{c} - \frac{n\left\{\gamma+\Psi(k)\right\}}{c} - \frac{n(k+1)\left\{1-\gamma-\Psi(k)\right\}}{(k+1)\,c} = 0,$$

after some simplification; these results are as we require. For the expected Fisher information, we have

$$E\!\left[\frac{\partial^2 l}{\partial k^2}\right] = -\frac{n}{k^2},$$

$$E\!\left[\frac{\partial^2 l}{\partial k\,\partial c}\right] = -n\,E\!\left[\frac{X^c\log X}{1+X^c}\right] = -\frac{n\left\{1-\gamma-\Psi(k)\right\}}{(k+1)\,c}$$

and

$$E\!\left[\frac{\partial^2 l}{\partial c^2}\right] = -\frac{n}{c^2} - n(k+1)\,E\!\left[\frac{X^c\left(\log X\right)^2}{\left(1+X^c\right)^2}\right] = -\frac{n}{c^2}\left[1 + \frac{k}{k+2}\left\{\frac{\pi^2}{6} + \gamma^2 - 2\gamma + 2(\gamma-1)\Psi(k+1) + \left[\Psi(k+1)\right]^2 + \Psi^{(1)}(k+1)\right\}\right].$$

The elements of the inverse of the expected Fisher information matrix can now be written down; however, there is little or no simplification, and so they are omitted here. We note that, despite their rather inelegant form, these elements are readily calculated, although we need to use numerical procedures to calculate Ψ and $\Psi^{(1)}$; the former is directly available within SAS via the DIGAMMA function, while the latter can be evaluated by using an approach based on that outlined by Amos (1983).
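As a minimal sketch of such a calculation (an addition to the paper; it assumes a more recent SAS release in which the TRIGAMMA function is available for $\Psi^{(1)}$, so that a hand-coded routine after Amos (1983) is not needed):

proc iml;
/* expected Fisher information for the Burr XII, per the formulae above */
start expinfo(c, k, n);
   g    = 0.57721566490153;     /* Euler's constant */
   pi   = constant('pi');
   psi1 = digamma(k+1);         /* Psi(k+1) */
   tri1 = trigamma(k+1);        /* Psi'(k+1), the trigamma function */
   ikk  = n/(k*k);                                   /* -E[d2l/dk2]  */
   ikc  = n*(1 - g - digamma(k))/((k+1)*c);          /* -E[d2l/dkdc] */
   icc  = (n/(c*c))*(1 + (k/(k+2))*(pi*pi/6 + g*g - 2*g
          + 2*(g-1)*psi1 + psi1*psi1 + tri1));       /* -E[d2l/dc2]  */
   return( (icc || ikc) // (ikc || ikk) );
finish;

info = expinfo(5.0006407, 8.2680792, 50);   /* at the fitted values for Wingo's data */
vcov = inv(info);                           /* asymptotic variance-covariance matrix */
print vcov;
quit;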

Simulation Experiments

We now summarise a simulation experiment investigating the agreement between asymptotically valid formulae and results obtained in practice with finite sample sizes.

sample    mean of   mean of   s.d. of c-hat       s.d. of k-hat       corr(c-hat, k-hat)
size n    c-hat     k-hat     obs.      theory    obs.      theory    obs.       theory

c=k=1
  25      1.0654    1.0220    0.2165    0.1841    0.2367    0.2202    -0.3991    -0.4181
  50      1.0308    1.0099    0.1411    0.1302    0.1607    0.1557    -0.4129    -0.4181
 100      1.0142    1.0057    0.0949    0.0921    0.1114    0.1101    -0.4086    -0.4181
 250      1.0056    1.0021    0.0592    0.0582    0.0699    0.0696    -0.4248    -0.4181
 500      1.0031    1.0015    0.0416    0.0412    0.0493    0.0492    -0.4202    -0.4181
1000      1.0017    1.0004    0.0290    0.0291    0.0352    0.0348    -0.4143    -0.4181

c=k=2
  25      2.1044    2.1012    0.3450    0.3119    0.4751    0.4000     0.0542     0.0
  50      2.0499    2.0482    0.2312    0.2205    0.3062    0.2828     0.0203     0.0
 100      2.0235    2.0256    0.1595    0.1559    0.2078    0.2000     0.0278     0.0
 250      2.0092    2.0098    0.1002    0.0986    0.1285    0.1265     0.0125     0.0
 500      2.0060    2.0045    0.0709    0.0697    0.0901    0.0894    -0.0005     0.0
1000      2.0020    2.0023    0.0493    0.0493    0.0636    0.0632    -0.0011     0.0

c=k=3
  25      3.1436    3.2367    0.4887    0.4431    0.8031    0.6226     0.3525     0.2669
  50      3.0724    3.1001    0.3267    0.3133    0.4922    0.4402     0.3020     0.2669
 100      3.0348    3.0474    0.2270    0.2216    0.3246    0.3113     0.2723     0.2669
 250      3.0143    3.0193    0.1422    0.1401    0.2017    0.1969     0.2693     0.2669
 500      3.0079    3.0090    0.0996    0.0991    0.1401    0.1392     0.2668     0.2669
1000      3.0036    3.0046    0.0702    0.0701    0.0979    0.0984     0.2646     0.2669

c=k=4
  25      4.2084    4.4269    0.6520    0.5780    1.2496    0.8880     0.5057     0.4340
  50      4.0919    4.1744    0.4305    0.4087    0.7314    0.6279     0.4707     0.4340
 100      4.0461    4.0901    0.2989    0.2890    0.4806    0.4440     0.4474     0.4340
 250      4.0197    4.0360    0.1841    0.1828    0.2893    0.2808     0.4342     0.4340
 500      4.0092    4.0180    0.1300    0.1293    0.2020    0.1986     0.4380     0.4340
1000      4.0046    4.0081    0.0920    0.0914    0.1411    0.1404     0.4338     0.4340

c=k=5
  25      5.2509    5.6189    0.8027    0.7158    1.7435    1.1906     0.5923     0.5428
  50      5.1175    5.2694    0.5316    0.5062    0.9815    0.8419     0.5617     0.5428
 100      5.0581    5.1333    0.3690    0.3579    0.6508    0.5953     0.5604     0.5428
 250      5.0221    5.0478    0.2290    0.2264    0.3898    0.3765     0.5435     0.5428
 500      5.0115    5.0259    0.1606    0.1601    0.2715    0.2662     0.5480     0.5428
1000      5.0054    5.0133    0.1144    0.1132    0.1896    0.1883     0.5507     0.5428

In the table above, we give, for various combinations of n and c=k, some basic statistical summaries for the maximum likelihood estimators of these parameters, together with their theoretical counterparts (the columns headed "theory"), based on the formulae derived above; each entry in the table is based on 20000 replications of data.


We see that the agreement between theory and practice improves as n increases, as intuition would suggest, with reasonable agreement for n=100 and higher. However, the relatively poor agreement observed for smaller values of n indicates that there is scope for further work on assessing the extent to which the precision in maximum likelihood estimators for small samples can be gauged by the application of asymptotic results; the results of this investigation will be presented elsewhere.

We note that there is also scope for a more extended simulation experiment, in which the parameters of the Burr XII distribution take distinct values, and in which data is subjected to various censoring regimes; the interested reader will appreciate that the theoretical calculations for the latter case also require further algebraical considerations.

References

Amos, D.E., Algorithm 610: A Portable FORTRAN Subroutine for Derivatives of the Psi Function, ACM Transactions on Mathematical Software, 9, 494-502, 1983.

Burr, I.W., Cumulative Frequency Functions, Annals of Mathematical Statistics, 13, 215-232, 1942.

Lomax, K.S., Business Failures: Another Example of the Analysis of Failure Data, Journal of the American Statistical Association, 49, 847-852, 1954.

SAS Institute, SAS/IML Software: Usage and Reference, Version 6, First Edition, Cary, NC: SAS Institute Inc., 1989.

Tadikamalla, P.R., A Look at the Burr and Related Distributions, International Statistical Review, 48, 337-344, 1980.

Wang, F.K., Keats, J.B. and Zimmer, W.J., Maximum Likelihood Estimation of the Burr XII Parameters with Censored and Uncensored Data, Microelectronics and Reliability, 36, 359-362, 1996.

Wingo, D.R., Maximum Likelihood Methods for Fitting the Burr Type XII Distribution to Life Test Data, Biometrical Journal, 25, 77-84, 1983.

Correspondence

A J Watkins, EBMS, Singleton Park, Swansea, SA2 8PP, United Kingdom
E-mail: a.watkins@swansea.ac.uk

Registered Trademarks

SAS, SAS/IML and SAS/STAT are registered trademarks of SAS Institute Inc., Cary, NC, USA.
