FACIAL SOFT BIOMETRICS - Library of Ph.D. Theses | EURASIP
6. SOFT BIOMETRICS FOR QUANTIFYING AND PREDICTING FACIAL AESTHETICS

already attractive subjects use make-up more heavily. Table C.3 suggests a low correlation between the facial proportions (representing beauty) and eye make-up, which validates the strong role of make-up in raising the MOS.

6.5 Model for facial aesthetics

We choose a linear metric due to its simplicity and the linear character of the traits with increasing MOS. We perform multiple regression with the multivariate data and obtain a MOS estimation metric of the following form:

$$\widehat{MOS} = \sum_{i=1}^{37} \gamma_i x_i. \qquad (6.2)$$

The resulting weights $\gamma_i$ corresponding to each trait are denoted in Table 6.1. We note here that the weights of the model are not normalized and hence do not convey the importance of each characteristic. In other words, we did not normalize for the sake of reproducibility: $\widehat{MOS}$ can be computed with features labeled as in Table C.1 and Table C.2 in Appendix C and the related weights from Table 6.1. The importance of the characteristics is conveyed by Pearson's correlation coefficients $r_{X_i, MOS}$.

6.5.1 Validation of the obtained metric

To validate our model we compute the following three parameters.

– Pearson's correlation coefficient. As described above, it is computed to be

$$r_{\widehat{MOS}, MOS} = 0.7690. \qquad (6.3)$$

– Spearman's rank correlation coefficient, which is a measure of how well the relation between two variables can be described by a monotonic function. The coefficient ranges between -1 and 1, with the two extreme values being obtained when the variables are purely monotonic functions of each other.
This coefficient takes the form

$$r_S = 1 - \frac{6\sum_i d_i^2}{n(n^2 - 1)}, \qquad (6.4)$$

where $d_i = \mathrm{rank}(x_i) - \mathrm{rank}(y_i)$ is the difference between the ranks of the $i$-th observation of the two variables, and $n$ denotes the number of observations. The coefficient, which is often used due to its robustness to outliers, was calculated here to be

$$r_{S,\widehat{MOS}, MOS} = 0.7645. \qquad (6.5)$$

– Mean squared error of the difference between the estimated objective $\widehat{MOS}$ and the actual subjective MOS:

$$MSE = 0.7398. \qquad (6.6)$$

These results clearly outperform the outcomes of Eigenfaces, with $r_{\widehat{MOS}, MOS} = 0.18$, as well as of neural networks, with $r_{\widehat{MOS}, MOS} = 0.458$ (see [GKYG10]); the comparison is not entirely adequate, however, as it sets manual extraction of facial aesthetics against automatic extraction. Nevertheless the potential of our approach is evident, and we proceed with a robust validation of the facial aesthetics metric. For this purpose we annotated the 37 traits beyond the training set, on an extra testing set of 65 images. Once more we excluded outliers (3 images) and computed the metric verification measures for the estimated $\widehat{MOS}$ and the corresponding actual MOS.
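As a rough illustration of the procedure above (not the thesis pipeline itself), the following Python sketch fits the linear metric of Eq. (6.2) by least squares and computes the three validation measures, with Spearman's coefficient implemented directly from Eq. (6.4). The synthetic trait matrix, the noise level, and all variable names are assumptions for the sake of a runnable example; they do not reproduce the annotated traits, MOS scores, or weights of Table 6.1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 37 trait values per face and a subjective MOS
# per face (illustrative only; the thesis uses manually annotated traits).
n_faces, n_traits = 100, 37
X = rng.normal(size=(n_faces, n_traits))
true_gamma = rng.normal(size=n_traits)
mos = X @ true_gamma + rng.normal(scale=0.5, size=n_faces)

# Multiple regression for Eq. (6.2): MOS_hat = sum_i gamma_i * x_i,
# fitted by least squares without an intercept, matching the metric's form.
gamma, *_ = np.linalg.lstsq(X, mos, rcond=None)
mos_hat = X @ gamma

# Pearson's correlation coefficient between MOS_hat and MOS, as in Eq. (6.3).
r_pearson = np.corrcoef(mos_hat, mos)[0, 1]

def ranks(v):
    # Rank positions 1..n of the entries of v (no tie handling, which
    # suffices for continuous synthetic data).
    order = np.argsort(v)
    r = np.empty_like(order)
    r[order] = np.arange(1, len(v) + 1)
    return r

# Spearman's rank correlation via Eq. (6.4):
# r_S = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference.
d = ranks(mos_hat) - ranks(mos)
n = len(mos)
r_spearman = 1.0 - 6.0 * np.sum(d.astype(float) ** 2) / (n * (n**2 - 1))

# Mean squared error between estimated and subjective MOS, as in Eq. (6.6).
mse = np.mean((mos_hat - mos) ** 2)
```

On such synthetic data both correlation coefficients come out close to 1 because the generating model is itself linear; the thesis values of roughly 0.77 reflect the harder task of predicting real subjective scores.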