Figure 4.16: Evolution of the training error rate with 5 different learning rates. All other parameters remain the same.

Each experiment plots the error obtained in each iteration. All other training parameters remain the same unless stated otherwise.

Learning rate

Learning rate refers to the learning rate of the voted perceptron training algorithm. It controls the fraction of the deltas computed in each training iteration that contributes to the weights of the feature functions. In these experiments the number of ICM cycles is 10, the block overlap ratio is 0.25, and the width and height of each block are half of the mean width and height, respectively, of all the text characters on the page.

Figure 4.16 shows the evolution of the training error with 5 different learning rates. It suggests that a high learning rate forces the training into a cycle. On the other hand, the training process is smooth with a small learning rate (0.1 in this case), but it converges to a higher training error. In this particular experiment, a learning rate of 0.25 can be considered good, because the number of misclassified sites decreases reasonably fast without the training going into cycles. Unfortunately, there is no rule of thumb for determining the best learning rate without considering the number of blocks as well as the number of feature functions and their characteristics.
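To make the role of the learning rate concrete, the sketch below shows a structured-perceptron-style weight update in Python. It is a minimal illustration, not the thesis's implementation: the infer and features callables (standing in for ICM inference and global feature counting) are hypothetical, and the voting/averaging step of the voted perceptron is omitted.

```python
import numpy as np

def perceptron_epoch(weights, pages, infer, features, learning_rate=0.25):
    """One epoch of a structured-perceptron-style update.

    weights       -- np.ndarray with one weight per feature function
    pages         -- iterable of (observations, gold_labels) pairs
    infer         -- callable(weights, observations) -> predicted labels
                     (e.g. ICM inference); hypothetical stand-in
    features      -- callable(observations, labels) -> np.ndarray of
                     global feature counts; hypothetical stand-in
    learning_rate -- fraction of each computed delta applied to the weights
    """
    misclassified = 0
    for observations, gold in pages:
        predicted = infer(weights, observations)
        misclassified += sum(p != g for p, g in zip(predicted, gold))
        if list(predicted) != list(gold):
            # Delta between the feature counts of the ground truth and of
            # the prediction; the learning rate controls what fraction of
            # this delta contributes to the weights in this iteration.
            delta = features(observations, gold) - features(observations, predicted)
            weights = weights + learning_rate * delta
    return weights, misclassified
```

Seen this way, the trade-off in Figure 4.16 is intuitive: a large learning rate applies most of each delta and can make the weights oscillate between states (a cycle), while a small rate yields smooth but slow progress that may settle at a higher training error.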
Figure 4.17: Evolution of the training error rate with 3 different overlapping ratios.

Overlapping ratio

Overlapping ratio refers to the amount by which two consecutive blocks overlap. For this experiment we consider three different overlapping ratios: no overlap, 25% overlap, and 50% overlap. Everything else remains the same except the total number of sites.

Figure 4.17 displays the percentage of misclassified sites during the training process with the 3 different overlapping ratios. In conclusion, a smaller overlapping ratio results in fewer sites, a much faster training process, and fewer misclassified sites. However, despite the overall decrease in misclassified sites, the number of misclassified sites located between columns of text increases. For this reason, it is in our best interest to use a value higher than 0.25 for the overlapping ratio. Values higher than 0.5 result in a huge number of sites and substantially increase memory consumption and training time.

Maximum number of ICM cycles

Maximum number of ICM cycles refers to the maximum number of cycles allowed for the iterated conditional modes inference algorithm before it is stopped. The ICM algorithm is supposed to converge to a fixed state of the system after several cycles, but convergence is not guaranteed. In the latter case, the maximum number of cycles terminates the inference algorithm prematurely.

Figure 4.18 shows the progress of the training algorithm with 5 different maximum numbers of ICM cycles. The results indicate that, except for the first experiment with only one cycle, all other experiments behave in the same way with only slight differences. In conclusion, 5 cycles can be considered a good value.
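To illustrate where this parameter enters, here is a minimal ICM sketch in Python with a cap on the number of full sweeps. The local_energy callback and the label set are hypothetical placeholders, not the actual energy function or labels used in the thesis.

```python
def icm(labels, sites, local_energy, max_cycles=5,
        label_set=("text", "non-text")):  # illustrative label set
    """Iterated conditional modes with a cap on the number of cycles.

    labels       -- dict mapping each site to its current label
    sites        -- iterable of site identifiers
    local_energy -- callable(site, label, labels) giving the energy of
                    assigning `label` to `site` given its neighbours'
                    current labels; hypothetical stand-in
    """
    for _ in range(max_cycles):
        changed = False
        for site in sites:
            # Greedily pick the label with the lowest local energy.
            best = min(label_set, key=lambda lab: local_energy(site, lab, labels))
            if best != labels[site]:
                labels[site] = best
                changed = True
        if not changed:
            # Fixed point reached: no site changed during a full sweep,
            # so ICM has converged before hitting the cycle cap.
            break
    return labels
```

Since every sweep after the first typically flips only a few labels, capping the loop at 5 cycles costs little, which is consistent with Figure 4.18, where all runs with more than one cycle behave almost identically.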