FACIAL SOFT BIOMETRICS - Library of Ph.D. Theses | EURASIP


4. SEARCH PRUNING IN VIDEO SURVEILLANCE SYSTEMS

Figure 4.4: Pruning process: categorization and elimination of categories.

4.4 General setting

For this chapter, as mentioned above, we consider the setting where there is a search for a subject of interest v', from within a larger authentication group of n subjects, v. The subject of interest v' is randomly placed inside v, and in turn v is randomly drawn from a larger population. Each member of v belongs to one of ρ categories C_f ⊂ v, f = 1, ..., ρ, with probability equal to

    p_f := E_v \frac{|C_f|}{n}, \qquad f = 1, \cdots, \rho,    (4.4)

where E is used to denote the statistical expectation. Such a category can, for example, be labeled 'blue eyed, with moustache and with glasses'. The soft biometric system goes through the elements v ∈ v and provides an estimate Ĉ(v) ∈ [1, ρ] of the category that v belongs to. For C' denoting the actual category of v', where this category is considered to be known to the system, each element v is pruned out if and only if Ĉ(v) ≠ C'. Specifically, the SBS produces a set

    S = \{v \in v : \hat{C}(v) = C'\} \subset v

of subjects that were not pruned out. The pruning gain comes from the fact that S is generally smaller than v.

Pruning that results in a generally smaller S is associated with a higher gain, but also with a higher risk of erroneously pruning out the target subject v', thus reducing the reliability of the SBS. Both reliability and pruning gain are naturally affected by different parameters, such as
– the category distribution of the authentication group v,
– the distinctiveness of the category to which v' belongs,
– the system design: a conservatively tuned system will prune only at a low risk of pruning out v', allowing for a high false acceptance rate (FAR); an aggressive system, on the other hand, will prune more strongly at the cost of a higher false rejection rate (FRR).
Furthermore, the gain is clearly a function of v. Consequently, any meaningful analysis of an SBS will have to be statistical in nature.
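The pruning rule above can be sketched in a few lines of Python. This is a hypothetical illustration, not the thesis implementation: the `prune` helper, the toy group layout, and the error-free categorizer are all invented for the example. Every subject whose estimated category differs from the target's known category C' is pruned out, and the survivors form the set S.

```python
import random

def prune(group, estimate, target_category):
    """Return S = {v in group : estimate(v) == target_category}."""
    return [v for v in group if estimate(v) == target_category]

# Toy example: 1000 subjects over rho = 3 categories, an error-free
# categorizer, and a target whose true category C' is 1.
group = [(i, random.randrange(3)) for i in range(1000)]  # (id, true category)
perfect = lambda v: v[1]  # reads off the true category, no confusion

S = prune(group, perfect, target_category=1)

# With an error-free categorizer, S is exactly the subset of v in C_1,
# so the pruning gain n / |S| is roughly rho = 3 in this toy setting.
print(len(S), "of", len(group), "subjects survive pruning")
```

With a noisy categorizer in place of `perfect`, S would shrink or grow depending on the confusion probabilities, and could erroneously exclude the target, which is exactly the gain-reliability trade-off discussed above.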
We here consider the average behavior of such systems. In such a case we will see that two aspects prove to be crucial in defining the average-case behavior of the system. The first aspect is the population statistics, and the second is the error behavior of the different categorization algorithms. Specifically, we here consider the vector

    p := [p_1, p_2, \cdots, p_\rho]^T    (4.5)

which defines the entire population statistics. In terms of error behavior, we define

    \epsilon_{ij} := P(\hat{C}(v) = C_j \mid v \in C_i)    (4.6)

to be the probability that the algorithms will categorize into the jth category C_j an element which actually belongs to the ith category C_i (see Figure 4.5 for a graphical illustration). Simply, ε_ij, i, j ∈ [1, ρ], is the element of the ith row and jth column of what is known as the ρ × ρ confusion matrix, which we denote here as E:

    E := \begin{bmatrix}
        \epsilon_{11} & \epsilon_{12} & \cdots & \epsilon_{1\rho} \\
        \epsilon_{21} & \epsilon_{22} & \cdots & \epsilon_{2\rho} \\
        \vdots        &               & \ddots & \vdots           \\
        \epsilon_{\rho 1} & \epsilon_{\rho 2} & \cdots & \epsilon_{\rho\rho}
    \end{bmatrix}.    (4.7)

Related to these parameters we also define

    \epsilon_f := \sum_{i=1, i \neq f}^{\rho} \epsilon_{fi}    (4.8)

to denote the probability that a member of category C_f is wrongly categorized. Finally, we use the notation

    e := [\epsilon_1, \epsilon_2, \cdots, \epsilon_\rho].    (4.9)

Figure 4.5: Confusion parameters {ε_f} (real categories vs. estimated categories, illustrating system reliability).

4.5 Statistical analysis using the method of types and information divergence

Let us consider a scenario where a search for a subject v' turned out to be extremely ineffective and fell below expectations, due to a very unfortunate matching of the subject with its surroundings v. This unfortunate scenario motivates the natural question of how often a system that was designed to achieve a certain average gain-reliability behavior will fall short of expectations, providing an atypically small pruning gain and leaving its users with an atypically large and unmanageable S. It consequently brings up related questions, such as how this probability would be altered if we changed the hardware and algorithmic resources of the system (changed the ε_f and ρ), or changed the setting in which the system operates (changed the p_i).
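The confusion-matrix quantities of equations (4.6)-(4.9) can be sketched as follows. This is an illustrative example, not thesis code: the 3-category matrix `E` is invented, `E[i][j]` plays the role of ε_ij = P(Ĉ(v) = C_j | v ∈ C_i), ε_f is the off-diagonal row sum of (4.8), and e collects the per-category error rates of (4.9).

```python
import random

# Invented 3-category confusion matrix; each row sums to 1.
E = [[0.90, 0.05, 0.05],
     [0.10, 0.80, 0.10],
     [0.00, 0.30, 0.70]]

def category_error_rates(E):
    """e = [eps_1, ..., eps_rho], with eps_f = sum over i != f of eps_fi."""
    return [sum(row) - row[f] for f, row in enumerate(E)]

def noisy_estimate(true_cat, E):
    """Draw an estimated category for a subject truly in category true_cat,
    sampling category j with probability E[true_cat][j]."""
    return random.choices(range(len(E)), weights=E[true_cat])[0]

e = category_error_rates(E)
print([round(x, 2) for x in e])  # approximately [0.1, 0.2, 0.3]

# Sanity check: the empirical miscategorization rate of category 0
# approaches eps_0 = 0.10 as the number of trials grows.
trials = 20000
errors = sum(noisy_estimate(0, E) != 0 for _ in range(trials))
print(round(errors / trials, 2))
```

Simulating a categorizer this way is one simple path to the atypical-event questions raised in Section 4.5: repeating the draw over many random groups gives an empirical view of how often the surviving set S is atypically large.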

