4.5.1 Typical behavior: average gain and goodput

We here provide expressions for the average pruning gain as well as for the goodput, which jointly considers gain and reliability. This is followed by several clarifying examples.

In terms of the average pruning gain, it is straightforward that this takes the form $G := E_{v,w} G(v) = \bigl( \sum_{f=1}^{\rho} p_f \epsilon_f \bigr)^{-1}$, and similarly the average relative gain takes the form $E_{v,w}\, r(v) = \sum_{f=1}^{\rho} p_f (1-\epsilon_f)$. We recall that reliability is given by $\epsilon_1$.

In combining the above average gain measure with reliability, we consider the (average) goodput, denoted by $U$, which for the sake of simplicity is here offered in the concise form of a weighted product of reliability and gain,

    U := \epsilon_1^{\gamma_1} G^{\gamma_2},                                   (4.23)

for some chosen positive $\gamma_1, \gamma_2$ that describe the importance paid to reliability and to pruning gain, respectively. We proceed with clarifying examples.

Example 11 (Average gain with error uniformity) In the uniform error setting, where the probability of erroneous categorization of subjects is assumed to be equal to $\epsilon$ for all categories, i.e., where $\epsilon_f = \epsilon = \frac{1-\epsilon_1}{\rho-1}, \; \forall f = 2,\cdots,\rho$, it is the case that

    G = \bigl( p_1 + \epsilon - p_1 \epsilon \rho \bigr)^{-1}.                  (4.24)

This was already illustrated in Fig. 4.2. We quickly note that for $p_1 = 1/\rho$, the gain remains equal to $1/p_1$, irrespective of $\epsilon$ and irrespective of the rest of the population statistics $p_f$, $f \geq 2$.

Example 12 (Average gain with uniform error scaling) Now consider the case where the uniform error increases with $\rho$ as $\epsilon = \frac{\max(\rho-\beta,\,0)}{\rho}\lambda$, $\beta \geq 1$. Then, for any set of population statistics, it is the case that

    G(\lambda) = \Bigl( p_1 \bigl[ 1 + (\rho-\beta)\lambda \bigr] + \frac{\rho-\beta}{\rho}\lambda \Bigr)^{-1},    (4.25)

which approaches $G(\lambda) = \bigl( p_1 [1+(\rho-\beta)\lambda] + \lambda \bigr)^{-1}$ as $\rho$ increases. We briefly note that, as expected, in the regime of very high reliability ($\lambda \to 0$), and irrespective of $\{p_f\}_{f=2}^{\rho}$, the pruning gain approaches $\frac{1}{p_1}$. In the other extreme of low reliability ($\lambda \to 1$), the gain approaches $\epsilon_1^{-1}$.

We proceed with an example on the average goodput.

Example 13 (Average goodput with error uniformity) Under error uniformity, where erroneous categorization happens with probability $\epsilon$, and for $\gamma_1 = \gamma_2 = 1$, the goodput takes the form

    U(\epsilon) = \frac{\epsilon + (1-\epsilon\rho)}{\epsilon + p_1 (1-\epsilon\rho)}.                             (4.26)

To offer insight, we note that the goodput starts at a maximum of $U = \frac{1}{p_1}$ for a near-zero value of $\epsilon$, and then decreases with a slope of

    \frac{\delta U}{\delta \epsilon} = \frac{p_1 - 1}{[\epsilon + p_1 (1-\rho\epsilon)]^2},                        (4.27)

which, as expected,$^{4}$ is negative for all $p_1 < 1$. We here see that $\frac{\delta}{\delta p_1} \frac{\delta U}{\delta \epsilon} \big|_{\epsilon \to 0} \to \frac{2-p_1}{p_1^3}$, which is positive and decreasing in $p_1$. Within the context of the example, the intuition that we can draw is that, for the same increase in $\epsilon$,$^{5}$ a search for a rare-looking subject ($p_1$ small) can be much more sensitive, in terms of goodput, to outside perturbations (fluctuations in $\epsilon$) than searches for more common-looking individuals ($p_1$ large).

4. We note that asking for $|S| \geq 1$ implies that $\epsilon + p_1(1-\rho\epsilon) > \frac{1}{n}$ (see (4.24)), which guarantees that $\frac{\delta U}{\delta \epsilon}$ is finite.
5. An example of such a deterioration that causes an increase in $\epsilon$ can be a reduction in the luminosity around the subjects.
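As a quick numerical illustration of Examples 11 and 13 above, the short sketch below evaluates the average gain of (4.24) and the goodput of (4.26) under error uniformity with $\gamma_1 = \gamma_2 = 1$. It is not part of the thesis material: the function names and the values of $\rho$, $p_1$ and $\epsilon$ are illustrative assumptions, chosen only to reproduce the qualitative behavior discussed above (e.g., that $G$ stays at $1/p_1$ when $p_1 = 1/\rho$, and that the goodput of a rare-looking subject drops faster as $\epsilon$ grows).

```python
# Illustrative sketch (not from the thesis): average pruning gain (4.24) and
# goodput (4.26) under error uniformity, with gamma_1 = gamma_2 = 1.
# rho, p1 and eps below are arbitrary example values.

def average_gain(p1: float, eps: float, rho: int) -> float:
    """G = (p_1 + eps - p_1*eps*rho)^(-1), cf. (4.24)."""
    return 1.0 / (p1 + eps - p1 * eps * rho)

def goodput(p1: float, eps: float, rho: int) -> float:
    """U = (eps + (1 - eps*rho)) / (eps + p_1*(1 - eps*rho)), cf. (4.26)."""
    return (eps + (1.0 - eps * rho)) / (eps + p1 * (1.0 - eps * rho))

if __name__ == "__main__":
    rho = 10  # number of categories (example value)
    for p1 in (0.02, 0.10, 0.30):        # rare vs. common-looking subject
        for eps in (0.001, 0.01, 0.05):  # per-category error probability
            print(f"p1={p1:.2f}  eps={eps:.3f}  "
                  f"G={average_gain(p1, eps, rho):7.2f}  "
                  f"U={goodput(p1, eps, rho):7.2f}")
```

Consistent with (4.27), the printed values show that near $\epsilon = 0$ the drop in $U$ per unit increase in $\epsilon$ scales roughly like $(p_1-1)/p_1^2$, i.e., it is far steeper for rare-looking subjects than for common-looking ones.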
