probability p(y|x) is maximized according to the trained CRF model. Finding the optimal label configuration in a linear-chain conditional random field, or in graphical models with a tree-like structure, is a straightforward process using the Viterbi algorithm. However, in two-dimensional CRFs, no exact method exists for label decoding. Nearly all methods that perform decoding in two-dimensional CRFs are approximation methods that iteratively perform inference and change the label configuration until they reach a predefined number of iterations or until they converge. Examples of such methods are Monte Carlo methods [3], Loopy Belief Propagation [64] and variational methods [43]. While the success of these methods has been demonstrated in some applications, such as Turbo decoding [67] and computer vision [31], the precise conditions under which they converge are still not well understood. Furthermore, since we work on high-resolution document images, typically at 300 dpi, methods that demand a high number of iterations to converge are impractical.

Another simple-to-implement solution is Iterated Conditional Modes (ICM). ICM is an iterative method proposed by Besag in 1986 [9]. Instead of maximizing the probability as a whole, the method tries to maximize the conditional probability of each site by considering only its neighbors. At each iteration, the algorithm visits the sites from left to right and top to bottom and estimates the probability of every possible label for the site in question; it then picks the label that maximizes this local probability. Finding the global maximum is not guaranteed, but the method converges to a local maximum. Moreover, if instead of using all the neighbors of a site we restrict the label of the site to depend merely on the sites already visited in the current sweep, then ICM converges in a single iteration. Therefore, ICM is the method that we choose for label decoding.
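As an illustration, the following is a minimal sketch of this one-pass ICM variant, assuming sites are visited in raster order and that `unary` and `pairwise` are placeholder scoring functions built from the weighted feature functions; the names are illustrative and are not taken from the thesis implementation.

```python
# Minimal sketch of one-pass ICM decoding on a 2D grid of sites.
# `unary` and `pairwise` are hypothetical scoring functions standing in for
# the weighted CRF feature functions.

LABELS = ("text", "non-text")

def local_score(y, site, labels, unary, pairwise, visited_neighbors):
    """Score of assigning label y to `site`, given already-labeled neighbors."""
    score = unary(site, y)
    for n in visited_neighbors:
        score += pairwise(site, n, y, labels[n])
    return score

def icm_decode(sites, neighbors, unary, pairwise):
    """Single raster-scan pass: each site only looks at neighbors that have
    already been visited, so one left-to-right, top-to-bottom sweep suffices."""
    labels = {}
    for site in sites:  # sites ordered left-to-right, top-to-bottom
        visited = [n for n in neighbors[site] if n in labels]
        labels[site] = max(
            LABELS,
            key=lambda y: local_score(y, site, labels, unary, pairwise, visited),
        )
    return labels
```

Because each site conditions only on neighbors that already carry a label, a single sweep over the image is enough, which is what makes the method practical at 300 dpi.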
4.6 Training (parameter/weight estimation)

In this section, we discuss how to estimate the parameters λ_k of a conditional random field. In general, conditional random fields may be trained with latent variables or for structure learning. However, we are provided with fully labeled data, which is the simplest case for training.

Maximum likelihood is the foundation for training CRFs. The weights are chosen such that the training data has the highest probability under the model. The conditional log-likelihood of a set of training sites (x_s, y_s), with λ as the parameters, is given by:

\[
l(\lambda) = \sum_{s \in S} \left( \sum_{k=1}^{F} \lambda_k\, f_k\big(y_{i,\, i \in N(s)},\, y_s,\, x_{i,\, i \in N(s)},\, x_s\big) \;-\; \log Z(x_s, \lambda) \right),
\]

where S denotes the sites in the training dataset and F is the total number of feature functions. Differentiating the log-likelihood function with respect to the parameter λ_k gives:

\[
\begin{aligned}
\frac{\partial l(\lambda)}{\partial \lambda_k}
&= \sum_{s \in S} \left( f_k(y_s, x_s) \;-\; \sum_{y \in \mathcal{Y}} f_k(y, x_s)\, \frac{\exp\!\big(\sum_{i=1}^{F} \lambda_i f_i(y, x_s)\big)}{Z(x_s, \lambda)} \right) \\
&= \sum_{s \in S} \left( f_k(y_s, x_s) \;-\; \sum_{y \in \mathcal{Y}} f_k(y, x_s)\, P(y \mid x_s) \right) \\
&= \sum_{s \in S} \Big( f_k(y_s, x_s) \;-\; \mathbb{E}_{P(y \mid x_s)}\big[f_k(y, x_s)\big] \Big)
\end{aligned}
\]

where \(\mathcal{Y} = \{\text{text}, \text{non-text}\}\) and \(\mathbb{E}_{P(\cdot)}[\cdot]\) is the expected value under the conditional probability distribution of the model. At the maximum-likelihood solution this gradient equals zero, and therefore the expectation of the feature f_k with respect to the model distribution must be equal to the expected value of f_k with respect to the empirical distribution. However, calculating the model expectation requires enumerating all label configurations y. In linear-chain models, inference techniques based on a variation of the forward-backward algorithm can compute this expectation efficiently. In two-dimensional CRFs, however, approximation techniques are needed to simplify the computations. One solution is to use a voted perceptron method.

4.6.1 Collins' voted perceptron method

Perceptrons [81] use an approximation of the gradient of the unregularized conditional log-likelihood. Perceptron-based training methods consider one misclassified instance at a time, along with its contribution to the gradient. The expectation of the features is further approximated by a point estimate of the feature function vector at the best possible labeling. The approximation for the i-th instance and the k-th feature function can be written as:

\[
\nabla_k\, l(\lambda) \approx f_k(y_i, x_i) - f_k(\hat{y}_i, x_i),
\qquad \text{where } \hat{y}_i = \arg\max_{y} \sum_{k} \lambda_k f_k(y, x_i).
\]

Using this approximate gradient, the following first-order update rule can be used for maximization:

\[
\lambda_k^{t+1} = \lambda_k^{t} + \alpha \big( f_k(y_i, x_i) - f_k(\hat{y}_i, x_i) \big),
\]

where α is the learning rate. This update step is applied once for each misclassified instance x_i in the training set, and multiple passes are made over the training dataset. However, it has been noted that the weights obtained in this way suffer from over-fitting. As a solution, Collins [26] suggests a voting scheme in which the weight vectors produced after the updates of each training pass are collected, and the final model is obtained by letting these vectors vote (or, in practice, by averaging them) rather than keeping only the last one.
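The following is a minimal sketch of this training loop, assuming a global feature-vector function `features(x, y)` that returns a NumPy array of length F and a `decode(x, weights)` routine (for instance the ICM pass sketched earlier) that returns the best labeling under the current weights; the averaging at the end is the usual practical stand-in for Collins' voting scheme. All names are illustrative rather than the thesis implementation.

```python
# Sketch of perceptron-style weight updates with averaging (hypothetical API).
import numpy as np

def train_perceptron(data, num_features, decode, features, passes=5, alpha=1.0):
    """data: iterable of (x, y_true) pairs; returns averaged weight vector."""
    weights = np.zeros(num_features)
    weight_sum = np.zeros(num_features)   # accumulates weights for averaging
    steps = 0
    for _ in range(passes):
        for x, y_true in data:
            y_hat = decode(x, weights)
            if y_hat != y_true:            # update only on misclassified instances
                weights += alpha * (features(x, y_true) - features(x, y_hat))
            weight_sum += weights
            steps += 1
    # Averaged weights reduce the over-fitting of the final iterate.
    return weight_sum / steps
```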