These parameters are used as input to classifiers based on Bayesian Belief Networks (BBNs), neural networks and other techniques to recognize upper facial action units and all their possible combinations (a minimal sketch of one such classifier is given at the end of this section). The core of the expression recognition engine is a BBN model that also captures the temporal behavior of the visual features. On a natural dataset with many head movements, pose changes and occlusions (the Cohn-Kanade AU-coded facial expression database), the new BBN-based probabilistic framework achieved a recognition accuracy of 68%. Other experiments employed the Learning Vector Quantization (LVQ) method, Probabilistic Neural Networks and Back-Propagation Neural Networks. The results are presented in the Experiments section of the report.

Some items in the design of the project have not been fully covered yet. The most important among them is the inclusion of temporal parameters in the recognition process. At the time the experiments were run, no data were available on the dynamic behavior of the model parameters. The dynamic aspects of the parameters therefore remain a subject for further research in the field of facial expression recognition.