4. SEARCH PRUNING IN VIDEO SURVEILLANCE SYSTEMS

Example 8 (How often will the gain be very small?) We recall the discussion in Section 4.2, where a pruning (surveillance) system can identify ρ = 3 categories, operates with reliability ε_1 = 0.8 over a population with statistics p_1 = 0.4, p_2 = 0.25, p_3 = 0.35, and has confusability parameters ε_2 = 0.2, ε_3 = 0.3. In the context of the above theorems, we note that the result already shown in Figure 4.3 applies by substituting τ with τ/ρ = τ/3. Consequently, from Figure 4.3 we recall the following. The size of the (after-pruning) set S is typically 47.5% of the original size n. The probability that pruning removes less than 1 − 0.72 = 28% of the original set is approximately e^{−ρn} = e^{−3n}, because, as Figure 4.3 shows, J(0.72) ≈ 1 (recall ρ = 3). Similarly, the same figure tells us that the probability that pruning removes less than 1 − 0.62 = 38% of the original set is approximately e^{−ρn/2} = e^{−3n/2}, because J(0.62) ≈ 1/2.

In the following example we are interested in understanding the behavior of search pruning in the case of rare authentication groups.

Example 9 (Which groups cause specific problems?) Consider the case where a (soft-biometrics based) pruning system has ρ = 2 identifiable categories, population probabilities p = [p, 1 − p], and confusion probabilities ε = [1 − ε, ε] (this means that the probability that the first category is confused for the second is equal to the probability of making the reverse error). We want to understand what types of authentication groups will cause our pruning system to prune out only, for example, a fourth of the population (|S| ≈ 3n/4).
The answer will turn out to be that the typical groups that cause such reduced pruning have 43% of the subjects in the first category, and the rest in the other category.

To see this, we recall (see Theorem 2) that |S| ≈ τn/2. Setting |S| = 3n/4 implies that τ = 3/2. For α denoting the fraction of the subjects (in v) that belong to the first category, and after some algebra that we omit here, it can be shown that α = (3 − τ)/(5 − τ), which yields α = 3/7 ≈ 43%.

A further clarifying example focuses on the case of a statistically symmetric, i.e., maximally diverse, population.

Example 10 (Male or female?) Consider a city with 50% male and 50% female population (ρ = 2, p_1 = p_2 = 0.5). Let the confusion probabilities, as before, be equal, in the sense that ε = [1 − ε, ε]. We are interested in the following questions. For a system that searches for a male (first category), how often will the system prune out only a third of the population (as opposed to the expected one half)? How often will we run across an authentication group with α = 20% males, and then have the system prune out only 45% of the overall size (as opposed to the expected 80%)? As it turns out, the first answer reveals a probability of about e^{−n/4}, and the second answer reveals a probability of about e^{−n/5}. For n ≈ 50, the two probabilities are about five in a million and forty-five in a million, respectively.

To see this, first note that α_{1,1} = ατ. Then, defining

I(α, α_{1,1}, τ) := −lim_{n→∞} (2/n) log P(|S| = nτ/2, α, α_{1,1}),

we have that

inf_{α_{1,1}} I(α, α_{1,1}, τ) = I(α, α_{1,1} = ατ, τ) = 2α log 2α + 2(1 − α) log 2(1 − α) + τ log τ + (2 − τ) log(2 − τ).

To see the above, just calculate the derivative of I with respect to α_{1,1}.
For the behavior of τ we see that I(τ) = inf_α inf_{α_{1,1}} I(α, α_{1,1}, τ) = I(α = p_1, α_{1,1} = p_1 τ, τ) = τ log τ + (2 − τ) log(2 − τ), which can be seen by calculating the derivative of inf_{α_{1,1}} I(α, α_{1,1}, τ) with respect to α.
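The arithmetic behind Examples 8–10 can be checked numerically. The sketch below (Python, not part of the original text) computes the typical post-pruning fraction ∑_f p_f ε_f from Example 8, the τ and α of Example 9, and the decay exponent for the second question of Example 10, assuming the scaling P ≈ e^{−(n/2) I} for the rate function I given above.

```python
import math

# Example 8: the typical relative size of the pruned set S is sum_f p_f * eps_f.
p   = [0.4, 0.25, 0.35]   # population statistics p_1, p_2, p_3
eps = [0.8, 0.2, 0.3]     # eps_1 is the reliability; eps_2, eps_3 the confusability
typical_fraction = sum(pf * ef for pf, ef in zip(p, eps))
print(typical_fraction)   # ~0.475, i.e. |S| is typically 47.5% of n

# Example 9: |S| = 3n/4 with |S| ~ tau*n/2 gives tau = 3/2,
# and alpha = (3 - tau)/(5 - tau).
tau = 2 * (3 / 4)
alpha = (3 - tau) / (5 - tau)
print(alpha)              # 3/7 ~ 0.4286

# Example 10, second question: alpha = 0.2 (20% males) and |S| = 0.55*n,
# so tau = 1.1; the exponent is I(alpha, alpha_{1,1} = alpha*tau, tau) / 2.
def I(a, t):
    """Rate function inf over alpha_{1,1}, as derived in Example 10."""
    return (2 * a * math.log(2 * a) + 2 * (1 - a) * math.log(2 * (1 - a))
            + t * math.log(t) + (2 - t) * math.log(2 - t))

rate = I(0.2, 1.1) / 2
print(rate)               # ~0.198 ~ 1/5, consistent with P ~ e^(-n/5)
```

For n = 50 this exponent gives a probability of order e^{−10}, i.e., a few tens in a million, matching the figure quoted in Example 10.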
4.5.1 Typical behavior: average gain and goodput

We here provide expressions for the average pruning gain as well as the goodput, which jointly considers gain and reliability. This is followed by several clarifying examples.

In terms of the average pruning gain, it is straightforward that this takes the form

G := E_{v,w} G(v) = (∑_{f=1}^{ρ} p_f ε_f)^{−1},

and similarly the average relative gain takes the form E_{v,w} r(v) = ∑_{f=1}^{ρ} p_f (1 − ε_f). We recall that reliability is given by ε_1.

In combining the above average gain measure with reliability, we consider the (average) goodput, denoted as U, which for the sake of simplicity is here offered in the concise form of a weighted product between reliability and gain,

U := ε_1^{γ_1} G^{γ_2},   (4.23)

for some chosen positive γ_1, γ_2 that describe the importance paid to reliability and to pruning gain, respectively.

We proceed with clarifying examples.

Example 11 (Average gain with error uniformity) In the uniform error setting, where the probability of erroneous categorization of subjects is assumed to be equal to ε for all categories, i.e., where ε_f = ε = (1 − ε_1)/(ρ − 1), ∀f = 2, ..., ρ, it is the case that

G = (p_1 + ε − p_1 ε ρ)^{−1}.   (4.24)

This was already illustrated in Fig. 4.2. We quickly note that for p_1 = 1/ρ, the gain remains equal to 1/p_1, irrespective of ε and irrespective of the rest of the population statistics p_f, f ≥ 2.

Example 12 (Average gain with uniform error scaling) Now consider the case where the uniform error increases with ρ as ε = (max(ρ − β, 0)/ρ) λ, β ≥ 1. Then for any set of population statistics, it is the case that

G(λ) = (p_1 [1 + (ρ − β)λ] + ((ρ − β)/ρ) λ)^{−1},   (4.25)

which approaches G(λ) = (p_1 [1 + (ρ − β)λ] + λ)^{−1} as ρ increases.
We briefly note that, as expected, in the regime of very high reliability (λ → 0), and irrespective of {p_f}_{f=2}^{ρ}, the pruning gain approaches 1/p_1. In the other extreme of low reliability (λ → 1), the gain approaches ε_1^{−1}.

We proceed with an example on the average goodput.

Example 13 (Average goodput with error uniformity) Under error uniformity, where erroneous categorization happens with probability ε, and for γ_1 = γ_2 = 1, the goodput takes the form

U(ε) = (ε + (1 − ερ)) / (ε + p_1 (1 − ερ)).   (4.26)

To offer insight, we note that the goodput starts at a maximum of U = 1/p_1 for a near-zero value of ε, and then decreases with a slope of

δU/δε = (p_1 − 1) / [ε + p_1 (1 − ρε)]^2,   (4.27)

which, as expected⁴, is negative for all p_1 < 1. We here see that (δ/δp_1)(δU/δε)|_{ε→0} → (2 − p_1)/p_1^3, which is positive and decreasing in p_1. Within the context of the example, the intuition that we can draw is that, for the same increase in ε⁵, a search for a rare-looking subject (p_1 small) can be much more sensitive, in terms of goodput, to outside perturbations (fluctuations in ε) than searches for more common-looking individuals (p_1 large).

4. We note that asking for |S| ≥ 1 implies that ε + p_1(1 − ρε) > 1/n (see (4.24)), which guarantees that δU/δε is finite.
5. An example of such a deterioration that causes an increase in ε can be a reduction in the luminosity around the subjects.
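The closed forms (4.24), (4.26) and (4.27) can be cross-checked numerically. The following sketch (Python; the parameter values p_1 = 0.3 and ρ = 4 are illustrative choices, not from the text) verifies Example 11's remark that p_1 = 1/ρ makes the gain insensitive to ε, and checks the slope (4.27) against a finite difference of (4.26).

```python
# Numerical cross-check (illustrative, not from the original text) of the
# closed forms in Examples 11 and 13, under error uniformity.

def gain(p1, eps, rho):
    # (4.24): G = (p1 + eps - p1*eps*rho)^(-1)
    return 1.0 / (p1 + eps - p1 * eps * rho)

def goodput(p1, eps, rho):
    # (4.26): U = eps_1 * G = (eps + (1 - eps*rho)) / (eps + p1*(1 - eps*rho))
    return (eps + (1 - eps * rho)) / (eps + p1 * (1 - eps * rho))

rho = 4

# Example 11's remark: for p1 = 1/rho the gain equals 1/p1 irrespective of eps.
assert all(abs(gain(0.25, e, rho) - 4.0) < 1e-12 for e in (0.0, 0.1, 0.2))

# Example 13: the goodput starts at 1/p1 as eps -> 0 ...
p1 = 0.3
print(goodput(p1, 0.0, rho))         # 1/p1 ~ 3.33

# ... and decreases with the slope (4.27), checked against a central difference.
eps, h = 0.05, 1e-6
slope = (p1 - 1) / (eps + p1 * (1 - rho * eps)) ** 2
numeric = (goodput(p1, eps + h, rho) - goodput(p1, eps - h, rho)) / (2 * h)
print(abs(slope - numeric) < 1e-4)   # True: closed-form slope matches (4.26)
```

The slope is negative (p_1 < 1), in line with the observation following (4.27) that goodput decays as the categorization error grows.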