probability p(y|x) is maximized according to the trained CRF model. Finding the optimal label configuration in a linear-chain conditional random field, or in graphical models with a tree-like structure, is straightforward using the Viterbi algorithm. In two-dimensional CRFs, however, no exact method exists for label decoding. Nearly all decoding methods for two-dimensional CRFs are approximate: they iteratively perform inference and update the label configuration until a predefined number of iterations is reached or until they converge. Examples of such methods are Monte Carlo methods [3], Loopy Belief Propagation [64] and variational methods [43]. While these methods have proven successful in applications such as Turbo decoding [67] and in the domain of computer vision [31], the precise conditions under which they converge are still not well understood. Furthermore, since we work on high-resolution document images, typically at 300 dpi, methods that require a large number of iterations to converge are impractical.

Another solution that is simple to implement is Iterated Conditional Modes (ICM), an iterative method proposed by Besag in 1986 [9]. Instead of maximizing the probability as a whole, the method maximizes the conditional probability of each site given only its neighbors. At each iteration, the algorithm visits the sites from left to right and top to bottom, estimates the probability of every possible label for the site in question, and picks the label that maximizes this local probability. Finding the global maximum is not guaranteed, but the method converges to a local maximum. Moreover, if instead of using all the neighbors of a site we restrict the label of each site to depend only on the sites already visited in the current sweep, ICM converges in a single iteration. Therefore, ICM is the method we choose for label decoding.
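As an illustration, the single-sweep variant of ICM described above can be sketched as follows. This is only an outline under simplifying assumptions, not the implementation used in this work; the helper local_score, which evaluates the weighted sum of feature functions for one candidate label at one site given its already-visited neighbors, is an assumed placeholder.

import numpy as np

def icm_decode(local_score, height, width, labels=("text", "non-text")):
    """Single-sweep ICM decoding over a grid of sites.

    local_score(y, r, c, current) is an assumed helper that returns the
    weighted sum of the feature functions for assigning label y to the
    site at row r, column c, given the labels of its already-visited
    (upper and left) neighbors stored in `current`.  The trained weights
    enter the computation only through this helper.
    """
    current = np.empty((height, width), dtype=object)
    # Visit the sites from left to right and top to bottom, as described above.
    for r in range(height):
        for c in range(width):
            # Score every candidate label locally and keep the best one.
            current[r, c] = max(labels, key=lambda y: local_score(y, r, c, current))
    # Each site depends only on neighbors already visited in this sweep,
    # so a single pass over the grid suffices (no further iterations).
    return current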
4.6 Training (parameter/weight estimation)

In this section, we discuss how to estimate the parameters λ_k of a conditional random field. In general, conditional random fields may be trained with latent variables or for structure learning; here, however, we are provided with fully labeled data, which is the simplest case for training.

Maximum likelihood is the foundation for training CRFs: the weights are chosen such that the training data has the highest probability under the model. The conditional log-likelihood of a set of training sites (x_s, y_s), with λ as the parameters, is given by

l(\lambda) = \sum_{s \in S} \left( \sum_{k=1}^{F} \lambda_k f_k(y_{i, i \in N(s)}, y_s, x_{i, i \in N(s)}, x_s) - \log Z(x_s, \lambda) \right),

where the sum runs over the set S of sites in the training dataset and F is the total number of feature functions.
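Since the label set in our case contains only two classes (text and non-text, as noted below), the per-site partition function Z(x_s, λ) can be evaluated by direct enumeration. The following sketch shows how the conditional log-likelihood could be computed under that simplification; the helper site_features, which returns the vector of feature-function values (f_1, ..., f_F) for a given label of a site (with the neighborhood terms folded in), is an assumed placeholder rather than part of the thesis.

import numpy as np

LABELS = ("text", "non-text")   # the per-site label set Y used in this chapter

def conditional_log_likelihood(sites, lam, site_features):
    """Conditional log-likelihood l(lambda) of a fully labeled training set.

    `sites` is a list of (x_s, y_s) pairs, `lam` the weight vector of length F,
    and site_features(y, x_s) an assumed helper returning the feature vector
    (f_1, ..., f_F); neighborhood terms are folded into this helper here.
    """
    total = 0.0
    for x_s, y_s in sites:
        # Weighted feature sum for the observed label of this site.
        score = float(lam @ site_features(y_s, x_s))
        # log Z(x_s, lambda): enumerate the two possible labels.
        log_z = np.log(sum(np.exp(float(lam @ site_features(y, x_s))) for y in LABELS))
        total += score - log_z
    return total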
Differentiating the log-likelihood with respect to the parameter λ_k gives

\frac{\partial l(\lambda)}{\partial \lambda_k}
= \sum_{s \in S} \left( f_k(y_s, x_s) - \sum_{y \in Y} f_k(y, x_s) \, \frac{\exp\left( \sum_{i=1}^{F} \lambda_i f_i(y, x_s) \right)}{Z(x_s, \lambda)} \right)
= \sum_{s \in S} \left( f_k(y_s, x_s) - \sum_{y \in Y} f_k(y, x_s) \, P(y \mid x_s) \right)
= \sum_{s \in S} \left( f_k(y_s, x_s) - E_{P(y \mid x_s)}\left[ f_k(y, x_s) \right] \right),

where Y = {text, non-text} and E_{P(\cdot)}[\cdot] denotes the expected value under the model's conditional probability distribution. At the maximum likelihood solution this derivative equals zero, and therefore the expectation of the feature f_k with respect to the model distribution must equal the expected value of f_k with respect to the empirical distribution. However, calculating this expectation requires enumerating all the label assignments y. In linear-chain models, inference techniques based on variants of the forward-backward algorithm can compute this expectation efficiently. In two-dimensional CRFs, however, approximation techniques are needed to simplify the computations. One solution is to use the voted perceptron method.

4.6.1 Collins' voted perceptron method

Perceptrons [81] use an approximation of the gradient of the unregularized conditional log-likelihood. Perceptron-based training methods consider one misclassified instance at a time, along with its contribution to the gradient. The expectation of the features is further approximated by a point estimate of the feature-function vector at the best possible labeling. The approximation for the i-th instance and the k-th feature function can be written as

\nabla_k l(\lambda) \approx f_k(y_i, x_i) - f_k(\hat{y}_i, x_i), \qquad \text{where } \hat{y}_i = \arg\max_{y} \sum_{k} \lambda_k f_k(y, x_i).

Using this approximate gradient, the following first-order update rule can be used for maximization:

\lambda_k^{t+1} = \lambda_k^{t} + \alpha \left( f_k(y_i, x_i) - f_k(\hat{y}_i, x_i) \right),

where α is the learning rate. This update step is applied once for each misclassified instance x_i in the training set, and multiple passes are made over the training dataset. However, it has been noted that the weights obtained in this way suffer from over-fitting. As a solution, Collins [26] suggests a voting scheme where, in a particular pass over the training data, all the updates are collected,
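The text above breaks off before describing how the collected updates are combined; in the usual formulation of Collins' method the intermediate weight vectors are averaged (or used to vote on predictions). The sketch below illustrates the per-instance perceptron update together with such an averaging step, under the same assumed site_features helper as before; it is an illustrative simplification, not the exact training procedure of the thesis.

import numpy as np

LABELS = ("text", "non-text")

def voted_perceptron(train, n_features, site_features, epochs=5, alpha=1.0):
    """Perceptron-style weight estimation with weight averaging.

    `train` is a list of (x_i, y_i) pairs and site_features(y, x) the same
    assumed feature helper as before.  Averaging the intermediate weight
    vectors is one common realization of Collins' voting idea.
    """
    lam = np.zeros(n_features)
    lam_sum = np.zeros(n_features)       # accumulates weights for averaging
    n_steps = 0
    for _ in range(epochs):              # multiple passes over the training data
        for x_i, y_i in train:
            # Best labeling under the current weights (the arg max above).
            y_hat = max(LABELS, key=lambda y: lam @ site_features(y, x_i))
            if y_hat != y_i:             # update only on misclassified instances
                lam = lam + alpha * (site_features(y_i, x_i) - site_features(y_hat, x_i))
            lam_sum += lam
            n_steps += 1
    return lam_sum / n_steps             # averaged ("voted") weights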

