Parallel Biometrics Computing Using Mobile Agents

J. You, D. Zhang, J. Cao
Department of Computing
The Hong Kong Polytechnic University
Kowloon, Hong Kong
{csyjia, csdzhang, csjcao}@comp.polyu.edu.hk

Minyi Guo
Department of Computer Software
The University of Aizu
Aizu-Wakamatsu City, Japan
minyi@u-aizu.ac.jp

Abstract

This paper presents an efficient and effective approach to personal identification by parallel biometrics computing using mobile agents. To overcome the limitations of the existing password-based authentication services on the Internet, we integrate multiple personal features, including fingerprints, palmprints, hand geometry and face, into a hierarchical structure for fast and reliable personal identification and verification. To increase the speed and flexibility of the process, we use mobile agents as a navigational tool for parallel implementation in a distributed environment, which includes hierarchical biometric feature extraction, multiple feature integration, dynamic biometric data indexing and guided search. To solve the problems associated with bottlenecks and platform dependence, we apply a four-layered structural model and a three-dimensional operational model to achieve high performance. Instead of applying predefined task-scheduling schemes to allocate computing resources, we introduce a new on-line competitive algorithm that guides the dynamic allocation of mobile agents with greater flexibility. The experimental results demonstrate the feasibility and the potential of the proposed method.

1. Introduction

A wide range of e-commerce applications require high levels of security, with reliable identification and verification of the users who access on-line services.
However, traditional security measures such as passwords, PINs (Personal Identification Numbers) and ID cards can barely satisfy strict security requirements because they are inherently insecure: they can be lost, stolen, forged or forgotten. More importantly, password-based methods become ineffective on computer networks, especially the Internet, where attackers can monitor network traffic and intercept passwords or PINs. There is an urgent need to authenticate individuals in the various domains of today's automated, geographically mobile and increasingly electronically wired information society [5]. Biometric technology provides a new and effective solution to authentication: it changes conventional security and access-control systems by recognising individuals based on their unique, reliable and stable biological or behavioral characteristics [10]. These characteristics include fingerprints, palmprints, iris patterns, facial features, speech patterns and handwriting styles. This new security technique overcomes many of the limitations of traditional automatic personal identification technologies [11].

The ultimate goal of developing an identity verification system is to achieve the best possible performance in terms of accuracy, efficiency and cost. In general, the design of such an automated biometric system involves biometric data acquisition, data representation, feature extraction, matching, classification and evaluation [1]. Recent studies show that the fusion of multiple sources of evidence can improve performance and increase the robustness of verification [14]. Consequently, it is desirable to utilize and integrate multiple biometric features to improve system accuracy and efficiency.
However, a comprehensive, integrated and fully automatic biometric verification and identification system has not yet been fully developed.

To meet the challenge, and the immediate need for a high-performance Internet authentication service that can secure Internet-based e-commerce applications and overcome the limitations of the existing password-based authentication services, we apply biometrics computing technology to achieve fast and reliable personal identification. Considering the reliability and convenience of biometric data collection from users, four biometric features (fingerprints, palmprints, hand geometry and face) are used in our proposed system. We adopt a dynamic feature selection scheme for application-oriented authentication tasks. In other words, a user can determine the level of authentication: either based on a single feature (using an individual's fingerprints, palmprints, hand geometry or face), or on multiple features, by integrating the individual features into a hierarchical structure for coarse-to-fine matching.

Proceedings of the 2003 International Conference on Parallel Processing (ICPP'03), 0190-3918/03 $17.00 © 2003 IEEE

It is very important to be able to access and retrieve an individual's biometrics information from large data collections that are distributed over large networks. However, it is difficult to build a uniform search engine that suits the various needs. In this paper, we use mobile agents as a navigational tool for a flexible approach to indexing and searching distributed biometrics databases, which can 1) simultaneously extract useful biometrics information from different data collection sources on the network, 2) categorize images using an index-on-demand scheme that allows users to set up different index structures for fast search, and 3) support a flexible search scheme that allows users to choose effective methods to retrieve image samples.

Although biometrics databases are distributed, most current research on biometrics computing has focused on single-machine systems. The methods developed for such systems cannot simply be extended to accommodate a distributed system. In order to effectively index and search for images with specific features among distributed image collections, it is essential to have a sort of "agent" that can be launched to create an index based on a specific image feature, or to search for specific images with a given content. In this paper, we use mobile agents as a tool to achieve network-transparent biometrics indexing and searching.
In addition, we introduce a new system structure for the dynamic allocation of mobile agents using on-line task scheduling, to address the limitations of current approaches and to achieve greater flexibility. The proposed multi-agent system structure is enhanced by push-based technology [2]. The mobile agents are created and cloned dynamically, initialized with service units, and pushed from remote sites to local sites that are more convenient for local clients to access, thus speeding up the system's response.

The rest of this paper is organized as follows. Section 2 highlights the hierarchical scheme and the dynamic algorithms used in biometrics computing for personal identification and verification. The use of platform-independent mobile agents for parallel biometrics computing is briefly described in Section 3, with the focus on a proposed four-layered structural model and a three-dimensional operational model. An on-line task-scheduling algorithm for dynamically allocating mobile agents is introduced in Section 4. The experimental results are reported in Section 5. Finally, conclusions are given in Section 6.

2. Fundamentals of Biometrics Computing

In general, a biometric system can operate in two modes: verification and identification. The question of how to represent an image by its biometric features is the first key issue in biometric-based identity authentication. The performance of a biometric system is judged by its accuracy and efficiency. For a verification task, the system deals with a one-to-one comparison. Thus, the focus of multimodal biometrics is to improve the accuracy of the system, either by integrating multiple snapshots of a single biometric or by integrating several different biometrics.
For an identification task, the system deals with one-to-many comparisons to find a match. An appropriate integration scheme is required to reduce the computational complexity while achieving comparisons with reliable accuracy. It is essential to index biometrics information using multiple feature integration, so as to facilitate feature matching for biometrics information retrieval. Feature extraction, the index scheme and the search strategy are the three primary issues to be solved. This section describes a hierarchical scheme for biometrics computing based on wavelet transforms, which includes multiple feature extraction, dynamic feature indexing and guided search.

2.1. Wavelet-Based Multiple Feature Extraction

In contrast to existing approaches, which extract each biometric feature individually, we introduce a hierarchical scheme for multiple biometrics feature representation and integration. Initially, we categorize the biometric features into three classes based on their nature (i.e., texture features, shape features and frequency features). For example, the texture feature class includes fingerprints, palmprints, iris patterns, etc., while the shape feature class contains features such as facial features, retina, hand geometry and handwriting. Some features, such as speech and texture, can also be analyzed in the frequency domain and belong to the frequency feature class. We then use a wavelet-based scheme to combine the different feature classes based on their wavelet coefficients. To capture features at different scales and orientations, we use a wavelet filter bank to decompose the sample data into decorrelated sub-bands for feature measurement. The details of the extraction of these features are described in [12].
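As a rough illustration of the sub-band feature measurement described above (not the authors' implementation, which uses a multi-layer wavelet filter bank [12]), the sketch below applies one level of a 2-D Haar decomposition to a grayscale image and takes the mean of each sub-band as a feature value. All function names are ours.

```python
# Illustrative sketch of wavelet-based feature measurement: one level of
# a 2-D Haar transform splits an even-sized grayscale image into an
# approximation band (LL) and three detail bands (LH, HL, HH); the mean
# coefficient of each band serves as one feature value.

def haar_level(img):
    """One 2-D Haar decomposition level; img is a list of equal-length rows."""
    cols = len(img[0])
    # Row transform: low-pass (pairwise averages) and high-pass (differences).
    lo = [[(r[2 * j] + r[2 * j + 1]) / 2 for j in range(cols // 2)] for r in img]
    hi = [[(r[2 * j] - r[2 * j + 1]) / 2 for j in range(cols // 2)] for r in img]

    def cols_split(m):
        # Column transform of a half-image into its low and high parts.
        h, w = len(m) // 2, len(m[0])
        low = [[(m[2 * i][j] + m[2 * i + 1][j]) / 2 for j in range(w)] for i in range(h)]
        high = [[(m[2 * i][j] - m[2 * i + 1][j]) / 2 for j in range(w)] for i in range(h)]
        return low, high

    ll, lh = cols_split(lo)
    hl, hh = cols_split(hi)
    return ll, lh, hl, hh

def band_means(img):
    """Mean coefficient of each sub-band, as a 4-value feature vector."""
    return [sum(sum(row) for row in b) / (len(b) * len(b[0]))
            for b in haar_level(img)]
```

For a constant image the three detail means vanish and only the approximation band carries a non-zero value. Iterating such a decomposition three times on the approximation band yields 3 × 3 detail bands plus one approximation band, matching the three layers of 10 sub-images used for indexing in Section 2.2.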
2.2. Dynamic Biometrics Feature Indexing

Biometrics indexing plays a key role in personal identification. Indexing tabular data for exact matching or range searches in traditional databases is a well-understood problem, and structures like B-trees provide efficient access mechanisms. In the context of similarity matching for biometrics images, however, traditional indexing methods may not be appropriate. Consequently, data structures for fast access to the high-dimensional features of the spatial relationships have to be developed. In this paper, we propose a wavelet-based biometrics image hierarchy and a
multiple feature integration scheme to facilitate dynamic biometrics indexing associated with data summarization. Our approach is characterized as follows: 1) apply wavelet transforms to decompose a given biometric image into three layers of 10 sub-images; 2) use the means of the wavelet coefficients in the three layers as global feature measurements with respect to texture and shape, and index them as tabular data in a global feature summary table; 3) calculate the means of the wavelet coefficients of the sub-band images (horizontal, vertical and diagonal) in the different layers as local biometrics information, and index them as tabular data in a local biometrics summary table; 4) detect the interesting points of the objects in the original image and store them in a table for fine matching. To achieve dynamic indexing and flexible similarity measurement, a statistically-based feature-selection scheme is adopted for multiple feature integration. Such a scheme also coordinates data summarization to search for the best match among biometrics similarities.

2.3. Guided Search

The third key issue in biometrics-based verification and identification is feature matching, which is concerned with verifying and identifying the biometrics features that best match a query sample provided by a user. In contrast to current approaches, which often use fixed matching criteria to select candidate images, we propose using selective matching criteria associated with a user's query, for a more flexible search. Our system supports two types of queries: a) a query using a sample image, and b) a query using a simple sketch.

In the case of a query using a sample image, the search follows the process of multiple feature extraction and image similarity measurement described in the previous sections.
Based on the nature of the query image, the user can add component weights during the process of combining image features for image similarity measurement. In the case of a query using a simple sketch provided by a user, we apply a B-spline-based curve matching scheme to identify the most suitable candidates from the image database. The goal here is to match and recognize the shape curves selected in the previous stage. The candidate curves are modeled as B-splines, and the matching is based on comparing their control points (such as the ordered corner points obtained from boundary tracing at the initial stage). Such a process involves the following steps: 1) projective-invariant curve models (uniform cubic B-splines); 2) iterative B-spline parameter estimation; and 3) invariant matching of the curves. In the case of a query using a sample image, we use an image component code in terms of texture and shape to guide the search for the most appropriate candidates from a database at a coarse level, and then apply image matching at a fine level for the final output.

3. Parallel Biometrics Computing Using Mobile Agents

Parallel computation has been used successfully in many areas of computer science to speed up the computation required to solve a problem. It is especially appropriate in the field of image processing and computer vision, since the biological model for vision appears to be a parallel one. In contrast to conventional parallel implementations, where either dedicated hardware or software is required, the parallel implementation of our biometrics computing is carried out using mobile agents in a distributed computing environment. This section briefly describes the basic system structure and the implementation strategy.
3.1. Mobile Agents and Multi-agent System Structure

Compared with earlier paradigms in distributed computing, such as process migration or remote evaluation, the mobile agent model is becoming popular for network-centric programming. As an autonomous software entity with predefined functionality and a certain intelligence, a mobile agent is capable of migrating autonomously from one host to another, making its requests to a server directly and performing tasks on behalf of its master. Among the advantages of this model are better network bandwidth usage, more reliable network connections and reduced work in software application design. A detailed discussion of current commercial and research agent systems can be found in [6].

To achieve flexibility and efficiency, we propose a multi-agent system that includes two major modules:

- A remote system module working as a remote agent server that hosts three kinds of agents: stationary agents, mobile agent controllers and mobile service agents.
- A local system module working as a local agent server that hosts two kinds of agents: mobile service agents and client agents.

Fig. 1 shows the general structure of the proposed system. The system is characterized by four kinds of services that support its major functionality: service registration, service preparation, service consumption and service completion. The system also includes two interfaces: a JDBC (Java Database Connectivity) interface resides at the remote
server to retrieve service units from a back-end database, and a user interface resides at a local server to accept incoming requests, which are grouped together according to the number of requests that each client makes.

Fig. 1. Multi-agent system structure

The proposed multi-agent system can be viewed in two ways: one is the system's structural model, and the other is its operational model. The structural model consists of four layers: 1) a point-to-point end-user layer, a P2P (peer-to-peer) communication channel between an end user and an application server; 2) a line-like central controller layer, which controls all decision-making schemes; 3) a star-like external layer for feature extraction and representation; and 4) an oval-like remote application layer, the outer application component that integrates a number of remote legacy systems. From an operational point of view, we propose a three-dimensional model consisting of three blocks: 1) a central-agent host block implementing an intelligent detection scheme on a single PC; 2) a neighboring-agent host block implementing a feature extraction scheme in a parallel computing environment; and 3) a remote-agent host block implementing a remote decision-making scheme in a distributed environment.

The design of the proposed system is straightforward, but a problem arises when a remote-agent controller prepares service units: it needs to decide how many mobile service units (agents) to create, clone and dispatch. If insufficient service units are prepared, the agent controller has to clone more and supply them later, which causes a delay in the system's response. If too many service units are prepared, the unclaimed service agents can be disposed of explicitly, but this increases system overheads. In the proposed system, both lower overheads and faster service performance need to be taken into account. Section 4 presents an on-line algorithm to address these problems, which we call the on-line task-scheduling problem (OTSP).

3.2. Implementation Strategy

To facilitate the implementation of each component of the system, we adopt a hybrid agent computing paradigm. There are two classes of agents: global agents and local agents. The global agents handle inter-image coordination, query processing and reasoning. Each global agent may consist of a few sub-agents. The following list describes the five global agents, and their associated sub-agents, that are proposed in our system:

- Coordinator Agent: coordinates the other global agents and the image agents.
- Query Agent: processes users' complex queries using three sub-agents:
  - query understanding: categorizing a query (texture-based, shape-based, or a combination);
  - query reasoning: extracting text-based keywords for image grouping;
  - query feature formation: selecting appropriate features for object generation and manipulation.
- Wavelet Agent: generates wavelet coefficients for multiple feature representation and integration using three sub-agents:
  - wavelet transform: decomposition of an image into a series of sub-band images in an image hierarchy;
  - feature representation: representation of an individual feature vector in terms of the wavelet coefficients;
  - feature integration: combining multiple feature vectors with adjustment weights.
- Verification/Identification Agent: performs hierarchical feature matching for identity verification and identification using two sub-agents:
  - matching criterion selection: selection of similarity measures;
  - feature matching: hierarchical image matching.
- User Interface Agent: manages all user interactions.
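The global-agent/sub-agent organization above can be sketched as follows. This is an illustrative outline in Python rather than the system's actual agent code, and all class names, step names and the sample pipeline are ours.

```python
# Illustrative sketch (not the authors' code) of a global agent whose
# sub-agents run as an ordered pipeline invoked by a coordinator.

class GlobalAgent:
    def __init__(self, name, sub_agents):
        self.name = name
        self.sub_agents = sub_agents  # ordered {step name: callable}

    def run(self, data):
        # Pass the work item through each sub-agent in turn.
        for step, fn in self.sub_agents.items():
            data = fn(data)
        return data

# Hypothetical query-agent pipeline mirroring the three sub-agents
# listed in the text: understanding -> reasoning -> feature formation.
query_agent = GlobalAgent("Query Agent", {
    "query understanding": lambda q: {**q, "category": "texture-based"},
    "query reasoning": lambda q: {**q, "keywords": q["text"].split()},
    "query feature formation": lambda q: {**q, "features": ["texture"]},
})

result = query_agent.run({"text": "left palmprint"})
```

In a mobile-agent deployment each sub-agent would be a migratable service unit rather than an in-process callable, but the coordination pattern is the same.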


The local agents are referred to as image agents, and are responsible for performing the relevant biometrics computing tasks on each individual image.

4. On-line Task Scheduling Using a Competitive Algorithm

4.1. Background

On-line problems arise in many research areas, such as data structuring, task scheduling and resource allocation [4]. In general, on-line problems are characterized by the need to satisfy requests or make decisions without foreknowledge of future requests [7]. This differs from traditional system analysis approaches, where algorithms are designed on the assumption that the complete sequence of requests is known.

In this paper, we develop a competitive task-scheduling algorithm to allocate mobile agents to client service requests. Our aim is to minimize the total cost of service in an on-line environment. We measure each service in terms of its computing cost; for example, we use Cost(creation) to denote the average cost of the creation service. Five types of costs are involved in our proposed system: Cost(creation), Cost(cloning), Cost(dispatching), Cost(disposal) and Cost(messaging).

We examine three scenarios in which a client with x requests consumes d ready-made service units:

1) x = d: the client happens to consume all of the ready-made service units. In this scenario, each service agent costs c = Cost(creation) + Cost(cloning) + Cost(dispatching), and the total cost is cd, or equivalently cx.

2) x < d: the ready-made service units exceed the client's requests, so the surplus (d − x) units are disposed of, each incurring an additional disposal cost. The total cost is cd + Cost(disposal)(d − x), or cx + c1(d − x), where c1 = c + Cost(disposal).

3) x > d: the ready-made service units are insufficient to meet the client's requests, so another (x − d) units are cloned and dispatched. In this scenario, we have c2 = Cost(messaging) + Cost(cloning) + Cost(dispatching), and the total cost is cd + c2(x − d), or cx + (c2 − c)(x − d).

If the actual request sequence is denoted by σ = (x1, x2, ..., xn), where xi is the actual number of requests in the i-th period, we can obtain the optimal off-line cost of the problem (for a single client):

C_OPT(σ) = c Σ_{i=1}^{n} x_i

For the same request sequence, if d_i denotes the number of service units prepared by the on-line decision-maker for the i-th period, we can obtain the on-line cost of a competitive algorithm A as follows:

C_A(σ) = c Σ_{i=1}^{n} x_i + (c2 − c) Σ_{i: x_i > d_i} (x_i − d_i) + c1 Σ_{i: x_i < d_i} (d_i − x_i)
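The cost model above can be checked numerically. The sketch below (with made-up unit costs; the function names are ours) computes the per-period on-line cost for the three scenarios and compares the total against the off-line optimum.

```python
# Sketch of the OTSP cost model. c is the cost of a prepared unit
# (creation + cloning + dispatching); c2 is the per-unit cost of
# supplying a shortfall on demand (messaging + cloning + dispatching);
# c1 is the per-unit cost of a prepared-but-unclaimed unit
# (c plus the disposal cost).

def period_cost(x, d, c, c1, c2):
    """On-line cost of one period with x requests and d prepared units."""
    if x > d:
        return c * x + (c2 - c) * (x - d)  # shortfall cloned and dispatched
    if x < d:
        return c * x + c1 * (d - x)        # surplus disposed of
    return c * x                           # all prepared units consumed

def online_cost(xs, ds, c, c1, c2):
    """Total on-line cost of algorithm A preparing ds against requests xs."""
    return sum(period_cost(x, d, c, c1, c2) for x, d in zip(xs, ds))

def offline_opt(xs, c):
    """Off-line optimum: prepare exactly x_i units in each period."""
    return c * sum(xs)

# Illustrative request sequence and on-line preparation decisions.
total_online = online_cost([3, 5, 2], [3, 4, 4], c=3, c1=4, c2=5)
total_offline = offline_opt([3, 5, 2], c=3)
```

With these illustrative numbers the on-line schedule pays 40 against an off-line optimum of 30; a competitive analysis bounds this gap over all request sequences.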


to 50. A special electronic sensor was used to obtain digitized samples on-line. Fig. 2 illustrates samples of the digitized hand boundary, fingers and palmprints of one hand. A series of experiments were carried out to verify the high performance of the proposed algorithms.

Fig. 2. An example of multiple feature integration for hand representation (hand boundary, fingers, palmprint)

5.1. Multiple Biometrics Feature Extraction Test

The shape of a hand boundary can be used as a global biometrics feature for coarse matching. This data can be represented by a data cube, where each dimension corresponds to a particular feature entity. The following list describes the relevant features as dimensions for hand shape classification: (1) a list of boundary feature points, sorted in anti-clockwise order along the hand boundary; (2) parameters that control the active contour along the hand boundary; (3) measures of invariant moments, sorted in descending order; (4) a list of the coefficients of a B-spline curve; (5) the time when an image was taken; and (6) a list of the individual's identity details.

The above multi-dimensional structure for the data cube offers the flexibility to manipulate the data and view it from different perspectives. Such a structure also allows quick data summarization at different levels. For a collection of 200 hand image samples, 80% of the candidates are excluded after the coarse-level selection.

The dynamic selection of image features is further demonstrated by multi-level palmprint feature extraction for personal identification and verification (refer to [5] for a survey; see also our preliminary work on palmprint verification [13]). Our experiment is carried out in two stages. In stage one, the global palmprint features are extracted at a coarse level and candidate samples are selected for further processing. In stage two, the regional palmprint features are detected and hierarchical image matching is performed for the final retrieval output. Fig. 3 illustrates the multi-level extraction of the palmprint features: Fig. 3(a) shows a palmprint sample, Fig. 3(b) shows the delineation of the boundary of a palm as a global boundary feature, Fig. 3(c) shows the detection of the global principal lines, and Fig. 3(d) shows the details of the regional palmprint texture features for local feature representation at a fine level. The average accuracy rate for classification is 97%.

Fig. 3. Hierarchical feature extraction: (a) palm, (b) palm boundary, (c) palm lines, (d) palm region

5.2. Hierarchical Feature Matching Test

The performance of the proposed coarse-to-fine curve-matching approach is further demonstrated in the second test: face recognition for personal identification. At a coarse level, a fractional discrimination function is used to identify the region of interest in an individual's face. At a fine curve-matching level, the active contour tracing algorithm is applied to detect the boundaries of interest in the facial regions for the final matching. Fig. 4 illustrates the tracing of facial curves for face recognition: Fig. 4(a) is an original image, Fig. 4(b) shows the boundaries of a region on the face, and Fig. 4(c) presents the curve segments for hierarchical face recognition by curve matching.

Fig. 4. Face curve extraction: original image, boundary detection, face curves

To verify the effectiveness of our approach, a series of tests were carried out using a database of 200 facial images collected from different individuals under various conditions, such as uneven lighting, moderate facial tilting and partial occlusion. Table 1 lists the correct recognition rate of the coarse-level detection.


Table 1: Performance of face detection at the coarse level

Face Condition            Correct Detection Rate
uneven lighting           98%
multiple faces            95%
moderate tilt of faces    97%
partial sheltering        85%

To show the robustness of the proposed face detection algorithm, which is invariant to perspective view, partial distortion and occlusion, fine-level curve matching is applied to facial images with different orientations and expressions. Fig. 5 illustrates sample images of the same person under different conditions, such as varying facial expression, partial occlusion and distortion. Tables 2 and 3 summarize the test results for 100 cases.

Fig. 5. The face samples (at different orientations; under different conditions)

Table 2: Performance of face recognition at different orientations

Viewing Perspective       Correct Classification Rate
−20° (vertical)           84%
−10° (vertical)           86%
+10° (vertical)           86%
+20° (vertical)           83%
−20° (horizontal)         85%
−10° (horizontal)         87%
+10° (horizontal)         87%
+20° (horizontal)         84%

Table 3: Performance of face classification under different conditions

Face Condition            Correct Classification Rate
partial occlusion         77%
various expressions       81%
wearing glasses           82%

5.3. Evaluation of System Efficiency

To test the increased efficiency of the proposed agent-based approach, a group of external assistant agents were employed. The central agent controller is responsible for dissecting the task and assembling the final result. According to the number of available worker agents, different strategies are adopted to partition and distribute the sub-tasks.

- Five-worker pattern. For facial feature extraction, if all facial features (the chin, left eye, right eye, mouth and nose) form a set X = {c, l, r, m, n}, this strategy simply dissects the whole task into five parts and distributes them to five worker agents.
The total processing time is max{t1(c), t2(l), t3(r), t4(m), t5(n)} + latencies, where the latencies include network latency and the data preparation latency for packaging and unpacking.

- Four-worker pattern. From experiments, it can be noted that, with the same CPU speed, t(l + r)
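The five-worker timing model above can be sketched as follows, with made-up timings. The second call illustrates a four-worker variant in which one worker handles both eyes sequentially, so its sub-task takes t(l) + t(r); all numbers here are ours, for illustration only.

```python
# Sketch of the worker-pattern timing model: sub-tasks run in parallel,
# so the elapsed time is the slowest worker plus the network and data
# preparation (packaging/unpacking) latencies. Timings are made up.

def parallel_time(subtask_times, network_latency, packaging_latency):
    """Completion time when every sub-task runs on its own worker."""
    return max(subtask_times) + network_latency + packaging_latency

# Five workers: chin, left eye, right eye, mouth, nose.
t = {"chin": 1.2, "left_eye": 0.8, "right_eye": 0.8, "mouth": 0.9, "nose": 0.7}
five = parallel_time(t.values(), network_latency=0.3, packaging_latency=0.1)

# Four workers: one worker processes both eyes sequentially (t(l) + t(r)).
four = parallel_time(
    [t["chin"], t["left_eye"] + t["right_eye"], t["mouth"], t["nose"]],
    network_latency=0.3, packaging_latency=0.1)
```

With these numbers the merged-eyes worker becomes the bottleneck, so the four-worker pattern finishes later than the five-worker pattern.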
