FACIAL SOFT BIOMETRICS

4. SEARCH PRUNING IN VIDEO SURVEILLANCE SYSTEMS

further elimination of categories, which limits large databases of subjects to a fraction of the initial database; see Figure 4.1. In the context of this chapter, the elimination or filtering of the employed categories is based on the soft biometric characteristics of the subjects. The pruned database can be subsequently processed by humans or by a biometric system such as face recognition.

The approach of pruning the search using SBSs can apply to several re-identification scenarios, including the following:

– A theft in a crowded mall is observed by different people who give partial information about the thief's appearance. Based on this information, a first-pass search applies SBS methods to cut down on the long surveillance video recordings from several cameras.
– A mother has lost her child and can describe traits like the clothes color and height of the child. Video surveillance material can be pruned, and the resulting suggestions can be displayed to the mother.

The above cases support the applicability of SBSs, but also reveal that together with the benefits of such systems come considerable risks, such as that of erroneously pruning out the target of the search. This brings to the fore the need to jointly analyze the gains and risks of such systems.

In the setting of human identification, we consider the scenario where we search for a specific subject of interest, denoted as v′, belonging to a large and randomly drawn authentication group v of n subjects, where each subject belongs to one of ρ categories. The elements of the set (authentication group) v are drawn randomly from a larger population, which adheres to a set of population statistics. A category corresponds to subjects who adhere to a specific combination of soft biometric characteristics; for example, one may consider a category consisting of blond, tall females. We note the analogy to the scenario from Chapter 3, but proceed to elaborate on the different goal of the current chapter.

With n being potentially large, we seek to simplify the search for subject v′ within v by algorithmic pruning based on categorization, i.e., by first identifying the subjects that potentially belong to the same category as v′, and by then pruning out all other subjects that have not been estimated to share the same traits as v′. Pruning is then expected to be followed by a careful search of the remaining unpruned set. Such categorization-based pruning allows for a search speedup through a reduction in the search space, from v to some smaller and easier to handle set S, which is the subset of v that remains after pruning; see Figure 4.1 and Figure 4.4. This reduction, though, happens in the presence of a set of categorization error probabilities {ε_f}, called confusion probabilities, that essentially describe how easy it is for categories to be confused, hence also describing the probability that the estimation algorithm erroneously prunes out the subject of interest by falsely categorizing it. This confusion set, together with the set of population statistics {p_f} (f = 1, ..., ρ), which describes how common a certain category is inside the large population, jointly define the statistical performance of the search pruning, which we will explore.
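To make the categorization-and-pruning setting concrete, here is a minimal simulation sketch of it. It is not taken from the thesis: the function name and the number of categories, population statistics p_f and confusion probabilities ε_f used below are illustrative assumptions; only the general model (a subject v′ in a known category, n − 1 bystanders drawn according to {p_f}, and each subject kept with the confusion probability of its estimated-as-searched category) follows the description above.

```python
import random

def prune_once(n, p, eps, target_cat, rng):
    """One pruning pass over a random authentication group of n subjects.

    p[f]   -- population statistic: probability that a random subject
              belongs to category f
    eps[f] -- confusion probability: probability that a category-f subject
              is *estimated* to be in the searched category
              (eps[target_cat] is then the chance of keeping v')
    Returns (size of the pruned set S, True if v' survived the pruning).
    """
    cats = list(range(len(p)))
    # Subject 0 is the subject of interest v'; the remaining n - 1 subjects
    # are drawn at random according to the population statistics.
    group = [target_cat] + rng.choices(cats, weights=p, k=n - 1)
    kept = [rng.random() < eps[c] for c in group]
    return sum(kept), kept[0]

# Made-up example: 4 categories, v' belongs to category 0.
rng = random.Random(0)
p = [0.10, 0.30, 0.40, 0.20]      # how common each category is
eps = [0.95, 0.05, 0.02, 0.01]    # chance of being kept as "category 0"
runs = [prune_once(1000, p, eps, 0, rng) for _ in range(2000)]
sizes, hits = zip(*runs)
print("average |S| :", sum(sizes) / len(runs))   # size of pruned search space
print("reliability :", sum(hits) / len(runs))    # fraction of runs keeping v'
```

Running many such passes gives an empirical estimate of the average size of the pruned set S and of how often v′ survives the pruning, i.e., the two quantities whose interplay is explored in this chapter.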
These aspects will be described precisely later on.

Example 7. An example of a sufficiently large population includes the inhabitants of a certain city, and an example of a randomly chosen authentication group (n-tuple) v includes the set of people captured by a video surveillance system in the aforementioned city between 11:00 and 11:05 yesterday. An example SBS could be able to classify 5 instances of hair color, 6 instances of height and 2 of gender, thus being able to differentiate between ρ = 5·6·2 = 60 distinct categories. An example search could seek a subject described as belonging to the first category of, say, blond and tall females. The subject and the rest of the authentication group of n = 1000 people were captured by a video surveillance system at approximately the same time and place somewhere in the city. In this city, the SBS-based categories appear with probabilities p_1, ..., p_60, and each of the other categories can be confused for the first category with probability ε_2, ..., ε_60, respectively.

Figure 4.1: System overview.

The SBS makes an error whenever v′ is pruned out; it thus allows for a reliability of ε_1. To clarify, having p_1 = 0.1 implies that approximately one in ten city inhabitants is a blond, tall female, and having ε_2 = 0.05 means that the system (its feature estimation algorithms) tends to confuse the second category for the first category with probability equal to 0.05.

What becomes apparent, though, is that a more aggressive pruning of subjects in v results in a smaller S and a higher pruning gain; but as categorization entails estimation errors, such a gain could come at the risk of erroneously pruning out the subject v′ that we are searching for, thus reducing the system reliability.

Reliability and pruning gain are naturally affected by, among other things, the distinctiveness and differentiability of the subject v′ from the rest of the people in the specific authentication group v over which pruning will take place in that particular instance. In several scenarios, though, this distinctiveness changes randomly because v itself changes randomly. This introduces a stochastic environment. In this case, depending on the instance in which v′ and its surroundings v − v′ were captured by the system, some instances would have v consist of bystanders who look similar to the subject of interest v′, and other instances would have v consist of people who look sufficiently different from the subject. Naturally, the first case is generally expected to allow for a lower pruning gain than the second case.

The pruning gain and reliability behavior can also be affected by the system design. At one extreme we find a very conservative system that prunes out a member of v only if it is highly confident about its estimation and categorization, in which case the system yields maximal reliability (near-zero error probability) but with a much reduced pruning gain. At the other extreme, we find an effective but unreliable system which aggressively prunes out subjects in v, resulting in a potentially much reduced search space (|S| ≪ n), but at the cost of reduced reliability.
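As a rough back-of-the-envelope illustration of this tradeoff, one simple consequence of the model above is that the expected size of the pruned set is E[|S|] = ε_1 + (n − 1)·Σ_f p_f ε_f: the subject of interest survives with probability ε_1, and each of the other n − 1 subjects falls in category f with probability p_f and is kept with probability ε_f. The snippet below plugs in the numbers of Example 7; note that only p_1 = 0.1, ε_2 = 0.05 and n = 1000 are given there, so the assumed reliability ε_1 = 0.95, the uniform split of the remaining categories and the assumption that every other ε_f equals ε_2 are made-up values used purely for illustration.

```python
# Back-of-the-envelope numbers for Example 7. Only p_1 = 0.1, eps_2 = 0.05
# and n = 1000 come from the example; the remaining values are assumptions.
n = 1000
rho = 5 * 6 * 2                  # 60 categories: hair color x height x gender
p1, eps1 = 0.10, 0.95            # eps1: assumed probability of keeping v'
p_other = (1 - p1) / (rho - 1)   # assume the other 59 categories are equally likely
eps_other = 0.05                 # assume each is confused for category 1 like eps_2

keep_prob = p1 * eps1 + (rho - 1) * p_other * eps_other   # Σ_f p_f · eps_f
expected_S = eps1 + (n - 1) * keep_prob                   # expected size of S
print(f"E[|S|] ~ {expected_S:.0f}, pruning gain ~ {n / expected_S:.1f}x, "
      f"reliability ~ {eps1}")
```

Under these assumed numbers the pruned set S contains roughly 141 of the 1000 captured subjects, a reduction of the search space by about a factor of seven, while v′ survives the pruning about 95% of the time. A more aggressive system would shrink the confusion probabilities ε_f (and typically also ε_1), trading reliability for a higher pruning gain, exactly the tension described above.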
