FACIAL SOFT BIOMETRICS


related as well, with r_{EstimationError}(HairColor, SkinColor) = 0.22, which indicates a tendency for the classification errors of hair color and skin color to occur jointly. On a different note, we point out that each further trait contributes its own classification error to the overall categorization error, and thus the overall error probability increases with an increasing number of traits and categories ρ. On the other hand, with each further trait the collision probability decreases. We proceed with the description and inclusion of two further properties of the employed patches, namely texture and intensity difference.

5.2.4.2 Patch texture

We formalize a descriptor for texture, \vec{x} = (x_1, x_2, x_3, x_4), comprising the following four characteristics, and compute them on the gray-level images of each patch.

Contrast: a measure of the intensity contrast between a pixel and its neighbor over the whole image. The contrast in an image is related to its variance and inertia and is given by

x_1 = \sum_{i,j} |i - j|^2 \, p(i,j),    (5.1)

where i and j denote the gray-scale intensities of two pixels and p refers to the gray-level co-occurrence matrix, which describes the co-occurrence of gray-scale intensities between two image areas. Each element (i,j) of a gray-level co-occurrence matrix specifies the number of times that a pixel with value i occurs horizontally adjacent to a pixel with value j.

Correlation: a measure of the correlation of neighboring pixels, denoted as

x_2 = \sum_{i,j} \frac{(i - \mu_i)(j - \mu_j)\, p(i,j)}{\sigma_i \sigma_j},    (5.2)

where \mu_i and \mu_j stand for the mean values of the two areas around i and j, and \sigma_i and \sigma_j represent the related standard deviations.

Energy: the sum of squared elements, also called the angular second moment. An energy equal to one corresponds to a uniform color image.

x_3 = \sum_{i,j} p(i,j)^2    (5.3)

Homogeneity: a measure of the closeness of the distribution of elements.

x_4 = \sum_{i,j} \frac{p(i,j)}{1 + |i - j|}    (5.4)
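These four quantities are the standard gray-level co-occurrence matrix (GLCM) statistics. Below is a minimal sketch, not the implementation used in this work, of how the descriptor \vec{x} of Eqs. (5.1)-(5.4) could be computed for one patch; Python with scikit-image, 8-bit patches, a symmetric normalized GLCM and a horizontal offset of one pixel are all assumptions.

```python
# Sketch of the patch texture descriptor x = (x1, x2, x3, x4), Eqs. (5.1)-(5.4).
# Assumptions: uint8 gray-level patch, horizontally adjacent pixel pairs
# (distance 1, angle 0), symmetric and normalized co-occurrence matrix.
import numpy as np
from skimage.feature import graycomatrix  # scikit-image >= 0.19

def texture_descriptor(patch_gray: np.ndarray) -> np.ndarray:
    # Normalized gray-level co-occurrence matrix p(i, j).
    p = graycomatrix(patch_gray, distances=[1], angles=[0],
                     levels=256, symmetric=True, normed=True)[:, :, 0, 0]
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sigma_i = np.sqrt((((i - mu_i) ** 2) * p).sum())
    sigma_j = np.sqrt((((j - mu_j) ** 2) * p).sum())
    x1 = ((np.abs(i - j) ** 2) * p).sum()                             # contrast, Eq. (5.1)
    x2 = (((i - mu_i) * (j - mu_j)) * p).sum() / (sigma_i * sigma_j)  # correlation, Eq. (5.2)
    x3 = (p ** 2).sum()                                               # energy, Eq. (5.3)
    x4 = (p / (1.0 + np.abs(i - j))).sum()                            # homogeneity, Eq. (5.4)
    return np.array([x1, x2, x3, x4])
```

The same routine would be applied to each of the considered patches, yielding one texture vector per patch.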

5.2.4.3 Patch histogram distance

Along with the color information, we integrate into our classifier a simple relation measure for the divergence between the intensity probability density functions (pdfs) of the patches belonging to one subject. In other words, we express the three relationships between intensities within a subject: hair–skin, skin–clothes and hair–clothes. As an example, we expect a higher distance measure for a person with brown hair and light skin than for a person with blond hair and light skin. For the computation we convert the patches to gray-level intensities and assess the L1-distance three times per person, once for each relation between the patches. For two discrete distributions r and s the measure is given as

D = \|r - s\|_1 = \sum_{k=1}^{255} |r(k) - s(k)|,    (5.5)

where k represents a bin of the 255 intensity bins of a gray-scale image.
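As an illustration of Eq. (5.5), the sketch below computes the three within-subject distances, assuming the intensity pdfs are obtained as normalized 255-bin gray-level histograms of the hair, skin and clothes patches; the function names and binning are illustrative assumptions, not the exact implementation of this work.

```python
# Sketch of the within-subject intensity-histogram distances of Eq. (5.5).
# Assumption: 255-bin normalized histograms over the gray-level range [0, 255].
import numpy as np

def intensity_pdf(patch_gray: np.ndarray, bins: int = 255) -> np.ndarray:
    """Normalized gray-level histogram (empirical pdf) of a patch."""
    hist, _ = np.histogram(patch_gray, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def l1_distance(r: np.ndarray, s: np.ndarray) -> float:
    """D = ||r - s||_1, Eq. (5.5)."""
    return float(np.abs(r - s).sum())

def histogram_distances(hair: np.ndarray, skin: np.ndarray, clothes: np.ndarray) -> dict:
    """The three relations used per subject: hair-skin, skin-clothes, hair-clothes."""
    p_hair, p_skin, p_clothes = (intensity_pdf(x) for x in (hair, skin, clothes))
    return {"hair-skin": l1_distance(p_hair, p_skin),
            "skin-clothes": l1_distance(p_skin, p_clothes),
            "hair-clothes": l1_distance(p_hair, p_clothes)}
```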

5.2.5 Combined overall-classifier

The combined overall-classifier, which boosts all described traits (color, texture and intensity differences), performs with a decreased error probability and thus, as expected, outperforms the color classifier shown in Figure 5.3. Still, the achieved error probability of 0.1 in an authentication group of 4 subjects is not sufficient for a robust re-identification system. This limited performance gain is due to the strong illumination dependence of color and, furthermore, due to correlations between traits, e.g. hair color–skin color or skin color–skin texture, see Section 3.4.1. We note here that the FERET database was captured under controlled lighting conditions, so with a different testing database we expect the performance to decrease further. Towards increasing the performance, the set of sub-classifiers can be extended, whereby emphasis should be placed on classifiers not based on color information. The system in its current constellation can be used as a pruning stage for more robust systems or as an additional module for multi-trait biometric systems.

[Figure 5.4: Overall-classifier obtained by boosting color, texture and intensity differences; error probability P_err plotted over the number of subjects N, N = 2, ..., 20.]
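This excerpt does not spell out the boosting scheme itself, so the following is only a hedged sketch of one plausible realization: the color, texture and intensity-difference measures are concatenated into a single feature vector per gallery/probe comparison, and an off-the-shelf AdaBoost ensemble of decision stumps decides whether the comparison corresponds to the same subject. The function names, the pair-wise match/non-match formulation and the parameters are assumptions, not the method of the thesis.

```python
# Hedged sketch of an overall-classifier combining ("boosting") the color,
# texture and intensity-difference cues. Illustrative only; the actual
# sub-classifiers and boosting scheme of the thesis may differ.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def overall_feature_vector(color_feats, texture_feats, hist_dists):
    """Concatenate the cues of one gallery/probe comparison into one vector.

    color_feats   : hair/skin/clothes color measures
    texture_feats : x = (x1, x2, x3, x4) per patch, Eqs. (5.1)-(5.4)
    hist_dists    : the three L1 distances of Eq. (5.5)
    """
    return np.concatenate([np.ravel(color_feats),
                           np.ravel(texture_feats),
                           np.ravel(hist_dists)])

def train_overall_classifier(X, y):
    """X: one row per comparison; y: 1 = same subject, 0 = different subject."""
    # AdaBoost over decision stumps (the scikit-learn default base learner).
    return AdaBoostClassifier(n_estimators=50).fit(X, y)
```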
5.3 Summary

Motivated by realistic surveillance scenarios, we addressed in this chapter the problem of frontal-to-side facial recognition, providing re-identification algorithms/classifiers that are specifically suited for this setting. Emphasis was placed on classifiers that belong to the class of soft biometric traits, specifically color-, texture- and intensity-based traits taken from patches of hair, skin and clothes.