Table 3.2: AVERAGE PERFORMANCE OF SEVERAL CLASSIFIERS FOR TEXT/GRAPHICS SEPARATION

Classifier                   Text recall  Graphics recall  Text precision  Graphics precision  Accuracy          Comments
LogitBoost                   93.08        93.03            92.43           92.38               92.73%            500 weak learners
Decision Tree (gini-index)   95.33        88.19            90.47           94.14               92.94% +/- 0.66
GentleBoost                  93.32        90.63            90.92           93.12               91.97%
GiniBoost                    93.24        90.01            90.34           93.01               91.62%
AutoMLP                      95.32        80.54            90.70           89.64               90.38% +/- 0.45   51 neurons
Linear C-SVC                 98.21        57.06            81.99           94.12               84.45% +/- 0.33   SV: 6470, C = 1.0
Decision Tree (accuracy)     96.65        60.69            83.03           90.10               84.62% +/- 2.61
LDA                          98.99        46.86            78.76           95.89               81.56% +/- 0.48
Naive Bayes                  98.67        43.26            77.58           94.22               80.14% +/- 0.77
Linear C-SVC                 99.81        37.68            76.12           99.00               79.03% +/- 0.31   SV: 7942, C = 0.0
Decision Tree (gain-ratio)   99.82        15.43            70.12           97.68               71.57% +/- 6.34

Based on our evaluation, the LogitBoost classifier is selected.

3.5 LogitBoost for classification

Boosting is a supervised machine learning technique. It iteratively combines the outputs of many weak classifiers to produce a final strong classifier. A weak classifier is only required to perform better than chance; the combination of many of them, however, results in a strong classifier that often outperforms strong classifiers such as neural networks and SVMs [37].

We use boosted decision trees, which are popular weak classifiers in the boosting scheme. Our boosted model is trained on a large number of examples (x_i, y_i) with x_i ∈ R^K and y_i ∈ {−1, +1}, where x_i is a vector with K elements (K = 15 is the number of our features) and the desired two-class output is encoded as −1 and +1.

Various boosting methods are known, such as AdaBoost [35], LogitBoost and GentleBoost [36]. Their overall structure is similar. Initially, every instance in the training dataset is assigned the same weight; a weight indicates the importance of a data point for the classification. A weak classifier is then trained on the weighted training data. In each iteration the weights of misclassified samples are increased and the weights of correctly classified samples are decreased, so that the next classifier focuses on the examples that eluded correct classification in previous iterations.

LogitBoost is very similar to the original AdaBoost algorithm: if one views AdaBoost as a generalized additive model and applies the cost functional of logistic regression, the LogitBoost algorithm can be derived. The implementation of LogitBoost comes from the OpenCV machine learning library.

The LogitBoost algorithm is as follows:

1. Given N instances (x_i, y_i) with x_i ∈ R^K and y_i ∈ {−1, +1}.

2. Start with weights w_i = 1/N, i = 1, ..., N, F(x) = 0 and probability estimates p(x_i) = 1/2.

3. Repeat for m = 1, 2, ..., M:

   (a) Compute the working response and weights
       z_i = (y_i* − p(x_i)) / (p(x_i)(1 − p(x_i))), where y_i* = (y_i + 1)/2 ∈ {0, 1},
       w_i = p(x_i)(1 − p(x_i)).

   (b) Fit the classifier f_m(x) ∈ {−1, +1} by a weighted least-squares regression of z_i to x_i using weights w_i.

   (c) Update F(x) ← F(x) + (1/2) f_m(x) and p(x) ← e^{F(x)} / (e^{F(x)} + e^{−F(x)}).

4. Output the classifier sign[F(x)] = sign[∑_{m=1}^{M} f_m(x)].
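To make the update loop above concrete, the following is a minimal sketch of two-class LogitBoost in Python. The choice of weak learner (a depth-limited regression tree), the number of rounds M and the clipping constant are illustrative assumptions; the thesis itself relies on the OpenCV implementation rather than this code.

```python
# Minimal two-class LogitBoost sketch following the listing above.
# Assumptions: scikit-learn regression trees as weak learners, M rounds,
# and a small clipping constant for numerical stability.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def logitboost_fit(X, y, M=500, max_depth=2, eps=1e-10):
    """X: (N, K) feature matrix; y: labels in {-1, +1}."""
    N = X.shape[0]
    y_star = (y + 1) / 2.0                  # y* in {0, 1}
    F = np.zeros(N)                         # additive model F(x_i)
    p = np.full(N, 0.5)                     # probability estimates p(x_i)
    learners = []
    for m in range(M):
        w = np.clip(p * (1.0 - p), eps, None)          # working weights w_i
        z = (y_star - p) / w                            # working response z_i
        f_m = DecisionTreeRegressor(max_depth=max_depth)
        f_m.fit(X, z, sample_weight=w)                  # weighted least squares
        F += 0.5 * np.sign(f_m.predict(X))              # f_m(x) in {-1, +1}
        p = 1.0 / (1.0 + np.exp(-2.0 * F))              # = e^F / (e^F + e^-F)
        learners.append(f_m)
    return learners

def logitboost_predict(learners, X):
    F = sum(0.5 * np.sign(f.predict(X)) for f in learners)
    return np.sign(F)                                   # sign[F(x)]
```

For instance, logitboost_fit(X_train, y_train, M=500) mirrors the 500-weak-learner setting reported for LogitBoost in Table 3.2.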

3.6 Post processing

Based on our experiments, whenever a component is classified as graphics and contains many other components, one of two scenarios applies: either the component is a large graphic that contains many broken parts, or it is a frame or table that holds text characters. Because tables and frames occupy a large area while their actual black pixels cover far less than that area, they can easily be identified by their solidity feature. We choose a small solidity value as the threshold that separates tables and frames from graphics. If a component has a solidity larger than this threshold, it is identified as graphics and all of its children are also assigned the graphics label, regardless of the labels assigned to them by the LogitBoost classifier. Otherwise the children retain their originally assigned labels. (A code sketch of this rule is given after Table 3.3 below.)

3.7 Results

The results of our text/graphics separation come in two parts. In the first part we evaluate the performance of our classifier on two datasets: the dataset of the ICDAR2009 page segmentation competition and the dataset of the ICDAR2011 historical document layout analysis competition. Table 3.3 summarizes these results. Cross-dataset evaluation results are also provided in the table: in cross-dataset evaluation the system is trained on one dataset but tested on the other, which contains a totally different type of graphic structure. The indicated values are component-based and measure the precision and recall of component classification with respect to its class label.

Table 3.3: EVALUATION OF LOGITBOOST ON TWO DATASETS FOR TEXT/GRAPHICS SEPARATION BEFORE POST-PROCESSING

Trained on   Tested on    Text recall  Graphics recall  Text precision  Graphics precision  Text Accuracy
ICDAR2009    ICDAR2009    99.88%       62.66%           98.91%          94.02%              98.82%
ICDAR2009    ICDAR2011    81.31%       57.2%            96.23%          18.53%              79.65%
ICDAR2011    ICDAR2011    99.15%       57.69%           96.92%          83.6%               96.29%
ICDAR2011    ICDAR2009    98.40%       45.928%          98.4%           41.98%              96.65%
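As a companion to the rule in Section 3.6, here is a minimal sketch of the solidity-based relabelling. The Component structure, the solidity definition (foreground pixels divided by convex-hull area) and the threshold value are assumptions made for illustration, not the exact data structures used in the thesis.

```python
# Sketch of the Section 3.6 relabelling rule for graphics components that
# contain children.  Component fields, the solidity definition and the
# threshold value are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    label: str                      # "text" or "graphics", from LogitBoost
    n_pixels: int                   # number of foreground (black) pixels
    hull_area: float                # area of the convex hull
    children: List["Component"] = field(default_factory=list)

    @property
    def solidity(self) -> float:
        # Assumed definition: how densely the hull is filled with black pixels.
        return self.n_pixels / self.hull_area

def post_process(comp: Component, solidity_threshold: float = 0.1) -> None:
    if comp.label == "graphics" and comp.children:
        if comp.solidity > solidity_threshold:
            # A genuinely graphical region: its broken sub-parts are graphics too.
            for child in comp.children:
                child.label = "graphics"
        # Otherwise the component behaves like a frame or table, and the
        # children keep the labels assigned by the LogitBoost classifier.
    for child in comp.children:
        post_process(child, solidity_threshold)
```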

