FACIAL SOFT BIOMETRICS - Library of Ph.D. Theses | EURASIP
5. FRONTAL-TO-SIDE PERSON RE-IDENTIFICATION

related as well, r_EstimationError(HairColor, SkinColor) = 0.22, which shows a tendency of joint occurrence of classification errors for hair and skin color. On a different note, we point out that each further trait, with its classification error, contributes negatively to the overall categorization error; thus the overall error probability increases with an increasing number of traits and categories ρ. On the other hand, with each further trait the collision probability decreases. We proceed with the description and inclusion of two further properties of the employed patches, namely texture and intensity difference.

5.2.4.2 Patch texture

We formalize a texture descriptor \vec{x} comprising the following four characteristics, computed on the gray-level image of each patch.

Contrast: a measure of the intensity contrast between a pixel and its neighbor over the whole image. The contrast of an image is related to its variance and inertia and is given by

x_1 = \sum_{i,j} |i - j|^2 \, p(i,j),  (5.1)

where i and j denote the gray-scale intensities of two pixels and p refers to the gray-level co-occurrence matrix, which describes the co-occurrence of gray-scale intensities between two image areas. Each element (i, j) of this matrix specifies the number of times that a pixel with value i occurred horizontally adjacent to a pixel with value j.

Correlation: a measure of the correlation of neighboring pixels, given by

x_2 = \sum_{i,j} \frac{(i - \mu_i)(j - \mu_j) \, p(i,j)}{\sigma_i \sigma_j},  (5.2)

where \mu_i and \mu_j stand for the mean values of the two areas around i and j, and \sigma_i and \sigma_j represent the related standard deviations.

Energy: the sum of squared elements, also known as the angular second moment.
Energy equal to one corresponds to a uniform-color image:

x_3 = \sum_{i,j} p(i,j)^2.  (5.3)

Homogeneity: a measure of the closeness of the distribution of elements:

x_4 = \sum_{i,j} \frac{p(i,j)}{1 + |i - j|}.  (5.4)

5.2.4.3 Patch histogram distance

Along with the color information, we integrate into our classifier a simple relational measure of the divergence between the intensity probability density functions (pdfs) of the patches belonging to one subject. In other words, we express the three intensity relationships within a subject: hair–skin, skin–clothes and hair–clothes. As an example, we expect a higher distance measure for a person with brown hair and light skin than for a person with blond hair and light skin. For the computation we convert the patches to gray-level intensities and assess the
L1-distance three times per person, covering all relations between the patches. For two discrete distributions r and s, the measure is given as

D = \| r - s \|_1 = \sum_{k=1}^{255} |r(k) - s(k)|,  (5.5)

where k indexes the 255 intensity bins of a gray-scale image.

[Figure 5.4: Overall-classifier obtained by boosting color, texture and intensity differences. The plot shows the error probability P_err (0 to 1) versus the number of subjects N (2 to 20).]

5.2.5 Combined overall-classifier

The combined overall-classifier, which boosts all described traits (color, texture and intensity differences), achieves a lower error probability and thus, as expected, outperforms the color classifier shown in Figure 5.3. Still, the achieved error probability of 0.1 in an authentication group of 4 subjects is not sufficient for a robust re-identification system. This limited improvement is due to the strong illumination dependence of color and, furthermore, to correlations between traits, e.g. hair color–skin color or skin color–skin texture, see Section 3.4.1. We note here that the FERET database was captured under controlled lighting conditions, so with a different test database we expect the performance to decrease further. To increase performance, the set of sub-classifiers can be extended, with emphasis placed on classifiers not based on color information. The system in its current constellation can be used as a pruning stage for more robust systems, or as an additional component in multi-trait biometric systems.

5.3 Summary

Motivated by realistic surveillance scenarios, we addressed in this chapter the problem of frontal-to-side facial recognition, providing re-identification algorithms/classifiers specifically suited to this setting. Emphasis was placed on classifiers that belong to the class of soft biometric traits, specifically color-, texture- and intensity-based traits taken from patches of