FACIAL SOFT BIOMETRICS - Library of Ph.D. Theses | EURASIP
5. FRONTAL-TO-SIDE PERSON RE-IDENTIFICATION

We here take a rather different and more direct approach, in the sense that the proposed method does not require mapping or a-priori learning. In applying this approach, we provide a preliminary study on the role of a specific set of simple features in frontal-to-side face recognition. The selected features are based on soft biometrics, as such features are highly applicable in video surveillance scenarios. To clarify, we refer to re-identification as correctly relating data to an already explicitly identified subject (see [MSN03]). Our traits of hair, skin and clothes color and texture also belong to this category of soft biometric traits, and are thus of special interest for video surveillance applications.

5.2 System model, simulations and analysis

In what follows we empirically study the error probability of a system for the extraction and classification of properties related to the above described color soft biometric patches. We first define the operational setting and clarify what we consider to be system reliability, addressing different reliability-related factors whose impact we examine. The clarifying experiments are then followed by a preliminary analytical exposition of the error probability of a combined classifier consisting of the above described traits.

5.2.1 Operational scenario

The setting of interest corresponds to the re-identification of a randomly chosen subject (target subject) out of a random authentication group, where subjects in gallery and probe images have approximately a 90-degree pose difference.

Patch retrieval: We automatically retrieve patches corresponding to the regions of hair, skin and clothes (see Figure 5.1), based on the coordinates of the face and eyes. The size of each patch is determined empirically as one half and one third of the center-to-center eye distance.
The frontal placement of the patches is as follows, horizontally / vertically respectively:
– hair patch: from the nose towards the left side / top of the head,
– skin patch: centered around the left eye / centered between mouth and eyes,
– clothes patch: from the left side of the head towards the left / mouth-(mouth-top of head distance).
The horizontal side-face placements of the patches are: nose-(nose-chin distance), eye towards the left, head length-chin. These coordinates were provided along with the images of the FERET database, but are also easily obtainable by state-of-the-art face detectors, such as the OpenCV implementation of the Viola-Jones algorithm [VJ01a].

Following the extraction of the patches we retrieve a set of feature vectors comprising the color and texture information. In this operational setting, the reliability of our system captures the probability of false identification of a randomly chosen person out of a random set of n subjects. This reliability relates to the following parameters:
– Factor 1. The number of categories that the system can identify, which, as in the previous Chapters 3 and 4, we refer to as ρ.
– Factor 2. The degree to which these features / categories represent the chosen set of subjects over which identification will take place.
– Factor 3. The robustness with which these categories can be detected.
Finally, reliability is related (empirically and analytically) to n, where a higher n corresponds to identifying a person among an increasingly large set of possibly similar-looking people.

We will proceed with the discussion and illustration of the above three pertinent factors.
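The eye-distance-based patch sizing described above can be sketched as follows. This is a minimal illustration, not the thesis' actual implementation: the helper name, the coordinate convention (x grows right, y grows down), and the assumed mouth position roughly one eye-distance below the eye line are our own assumptions; only the rule that patch sides are one half and one third of the center-to-center eye distance comes from the text.

```python
# Illustrative sketch of the frontal patch placement described above.
# Only the 1/2 and 1/3 eye-distance sizing rule is taken from the text;
# the exact offsets used here are hypothetical.

def frontal_patches(left_eye, right_eye):
    """Return hypothetical (x, y, w, h) ROIs for the skin and hair patches.

    left_eye, right_eye: (x, y) eye-center coordinates in image pixels,
    with x growing right and y growing down.
    """
    lx, ly = left_eye
    rx, ry = right_eye
    d = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5  # center-to-center eye distance
    w, h = d / 2.0, d / 3.0                       # empirical patch side lengths

    # Skin patch: centered horizontally on the left eye, vertically between
    # the eye line and the mouth (mouth assumed ~d below the eyes).
    skin = (lx - w / 2.0, ly + d / 2.0 - h / 2.0, w, h)

    # Hair patch: above the eye line, towards the top of the head.
    hair = (lx - w / 2.0, ly - d - h, w, h)

    return {"skin": skin, "hair": hair}


rois = frontal_patches((100, 120), (160, 120))
print(rois["skin"])  # (85.0, 140.0, 30.0, 20.0)
```

In practice the eye coordinates would come either from the FERET ground truth or from a detector such as the OpenCV Viola-Jones cascade, as noted above.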
Figure 5.1: Frontal / gallery and profile / probe image of a subject. Corresponding ROIs for hair, skin and clothes color.

5.2.2 Factor 1. Number of categories that the system can identify

Our system employs soft biometric traits, where each trait is subdivided into trait instances, which directly determine the number of overall categories ρ. We note that in the scientific work [SGN08] the terminology trait and semantic term was used instead. For this specific scenario, our system is endowed with the traits hair, skin and shirt color, as well as texture and intensity correlation. As already shown in Chapters 3 and 4, the more categories a system can classify subjects into, the more reliable the system is in terms of distinctiveness. We proceed with the description of the population of categories.

5.2.3 Factor 2. Category representation of the chosen set of subjects

The distribution of subjects over the set of ρ categories naturally has an impact on the specific importance of the different traits, and thus also affects the behavior of the system for a given population. Table 5.1 and Table 5.2 give example distributions, with respect to the skin and hair color traits, based on 265 subjects from the color FERET database. For the clarifying experiments we attributed this subset to three subcategories of skin color and eight subcategories of hair color. It is evident that the trait hair color has a higher distinctiveness, and thus greater importance, than skin color, since it has more instances into which the subjects are subdivided.
This observation is illustrated in Figure 5.2, and we follow with the explanation.

Categories   1        2        3
Skin Color   62.64%   28.66%   8.7%
Table 5.1: Distribution of FERET subjects in the three skin color categories.

Categories   1      2       3       4     5     6      7      8
Hair Color   4.2%   26.8%   46.4%   3%    3%    3.4%   1.5%   11.7%
Table 5.2: Distribution of FERET subjects in the eight hair color categories.

In the case of (re-)identification the only way to unambiguously recognize a person is an
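The relative distinctiveness of the two traits in Tables 5.1 and 5.2 can be quantified, for instance, by the probability that two randomly drawn subjects fall into the same category. The sketch below uses the collision probability Σᵢ pᵢ² over the tabulated frequencies; this particular measure is our illustrative choice, not necessarily the analysis developed later in the chapter.

```python
# Collision probability sum(p_i^2) for the category distributions of
# Tables 5.1 and 5.2: the chance that two independently drawn subjects
# share the same trait category. A lower value means a more distinctive
# trait, which is why hair color (8 categories) outperforms skin color
# (3 categories) despite its skew towards category 3.

skin = [0.6264, 0.2866, 0.087]                                 # Table 5.1
hair = [0.042, 0.268, 0.464, 0.03, 0.03, 0.034, 0.015, 0.117]  # Table 5.2

def collision_probability(dist):
    """Probability that two independent draws land in the same category."""
    return sum(p * p for p in dist)

print(round(collision_probability(skin), 3))  # 0.482
print(round(collision_probability(hair), 3))  # 0.306
```

Under this measure, two random FERET subjects share a skin color category almost half the time, but a hair color category less than a third of the time, consistent with the claim that hair color carries the greater importance.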