Segmentation of heterogeneous document images : an ... - Tel

tel.archives.ouvertes.fr
tel-00912566, version 1 - 2 Dec 2013

Figure 4.2: This figure shows our two-dimensional conditional random field model. Blue lines represent the boundaries between sites. We divide the document image into rectangular blocks of equal height and equal width. The label of each site depends on the labels of the sites in its vicinity and on the observations defined for that site. Sites depend on observations by means of feature functions. The ground-truth label for each site is also available for training and evaluation. Note that the area shown in the image is part of a document containing text lines from the main body and part of a side note. The width and height of the sites in this image are not to scale and are shown for visualization purposes only: documents come in different sizes and resolutions, so the size of each site must be normalized, and the width must be small enough to pass between the side notes and the main body.

Each site may take one of two labels: "text" or "non-text". However, the label of each site depends on the labels of the sites in its vicinity, namely the sites to its left, right, top and bottom. Furthermore, the label of each site may depend on observations from the document image. In general, conditional random fields place no restriction on where these observations come from; here, we restrict the observations to the results of several filtering operations on the document image under the current site (Figure 4.2).

Let $x_s = \{x_1, \ldots, x_N\}$ be the vector of observations available to site $s$ and $y_s$ be one of the labels $\{\text{text}, \text{non-text}\}$. For each site $s$, the conditional probability of a label $y_s$ given the observations $x_s$ is defined as:

$$
p_s(y_s \mid x_s) \propto \exp\left( \sum_{k=1}^{F_e} \lambda_k f_k^e(y_{i,\, i \in N(s)}, y_s, x_{i,\, i \in N(s)}, x_s) + \sum_{k=1}^{F_n} \mu_k f_k^n(y_s, x_s) \right)
$$

where $f^n$ and $f^e$ are the node and edge feature functions, respectively, which are discussed later in Section 4.2.
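As a concrete, hypothetical illustration of this per-site score, the sketch below exponentiates the weighted node and edge scores and normalizes over the two labels. The `ink_density` observation, the feature functions and the weights are made up for the example; they are not the feature functions of this thesis.

```python
import math

LABELS = ("text", "non-text")

def node_feature(y_s, x_s):
    # Hypothetical node feature: high ink density supports the "text" label.
    return x_s["ink_density"] if y_s == "text" else 1.0 - x_s["ink_density"]

def edge_feature(y_n, y_s, x_n, x_s):
    # Hypothetical edge feature: reward adjacent sites that agree on a label.
    return 1.0 if y_n == y_s else 0.0

def site_probability(y_s, x_s, neighbors, lam=1.0, mu=2.0):
    """p_s(y_s | x_s) with the proportionality resolved by normalizing
    over the two labels. `neighbors` lists (y_n, x_n) pairs for N(s)."""
    def score(label):
        s = mu * node_feature(label, x_s)
        s += sum(lam * edge_feature(y_n, label, x_n, x_s)
                 for y_n, x_n in neighbors)
        return s
    z = sum(math.exp(score(lbl)) for lbl in LABELS)  # per-site normalizer
    return math.exp(score(y_s)) / z

# Example: a dark site whose left and top neighbors are labeled "text".
x_s = {"ink_density": 0.8}
nbrs = [("text", {"ink_density": 0.7}), ("text", {"ink_density": 0.9})]
p_text = site_probability("text", x_s, nbrs)
```

With these made-up weights, the agreement bonus from the two "text" neighbors pushes the site strongly toward the "text" label.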
$F_e$ is the total number of edge feature functions and $F_n$ is the total number of node feature functions. $N(s)$ is the set of neighbors of site $s$ and is often called the Markov blanket of $s$. $\lambda$ and $\mu$ are the weights assigned to the edge and node feature functions, respectively. A node feature function $f^n$ judges how likely it is that a label $y_s$ is assigned to site $s$ given the observations $x_s$. An edge feature function $f^e$ indicates how likely it is that two adjacent sites take a particular label configuration given the observations at both sites. Feature functions and their weights can take any real value. With a slight abuse of notation, it is usual to write this probability in the compact form:

$$
p_s(y_s \mid x_s) \propto \exp\left( \sum_{k=1}^{F} \lambda_k f_k(y_{i,\, i \in N(s)}, y_s, x_{i,\, i \in N(s)}, x_s) \right)
$$

where $F = F_e + F_n$ is the total number of feature functions.

According to the Hammersley-Clifford theorem (1971), the conditional probability of a set of sites given a set of observations is proportional to the product of potential functions over the cliques of the graph. If we take the cliques to be pairs of adjacent sites, then:

$$
p(y \mid x) = \frac{1}{Z} \prod_{s \in S} \exp\left( \sum_{k=1}^{F} \lambda_k f_k(y_{i,\, i \in N(s)}, y_s, x_{i,\, i \in N(s)}, x_s) \right)
$$

The scalar $Z$ is the normalization factor, or partition function, which ensures that the probability is valid. It is defined as the sum of the exponential terms over all possible label configurations. Computing this partition function is intractable for most applications because of its computational cost. As a consequence, by assuming conditional independence of the $y$'s given the $x$'s, the conditional probability of the CRF model for the whole image can be defined as:

$$
p(y \mid x) = \prod_{s \in S} \frac{1}{Z_s} \exp\left( \sum_{k=1}^{F} \lambda_k f_k(y_{i,\, i \in N(s)}, y_s, x_{i,\, i \in N(s)}, x_s) \right)
$$

Then $Z$ for each site becomes:

$$
Z_s = \sum_{(y_1, y_2) \in Y^2} \exp\left( \sum_{i=1}^{F} \lambda_i f_i(y_1, y_2, x_s, x_{N(s)}) \right)
$$

where $Y = \{\text{text}, \text{non-text}\}$. Notice that, without this factorization, the summation in the partition function would still be composed of $512 = 2^9$ terms even for a $3 \times 3$ grid of sites.

4.1.1 The three basic problems for CRFs

Given the CRF model of the previous section, there are three basic problems of interest that must be solved for the model to be useful in real-world applications.
These problems are the following:

• Problem 1: Given all the observations $x$, the label configuration $y$ and a model $\psi = (f, \lambda)$, how do we efficiently compute $p(y \mid x, \psi)$, the conditional probability of the label configuration given the model? (Marginal inference)
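For intuition about Problem 1 and about the cost of the exact partition function, the sketch below computes $p(y \mid x)$ by brute force on a toy $3 \times 3$ grid, enumerating all $2^9 = 512$ label configurations. The observations and potentials are illustrative placeholders, not the feature functions of this thesis.

```python
import itertools
import math

ROWS = COLS = 3
LABELS = ("text", "non-text")

# Illustrative observation per site: a fake "ink density" in [0, 1].
obs = [[0.9, 0.8, 0.1],
       [0.7, 0.9, 0.2],
       [0.1, 0.2, 0.1]]

def log_potential(y, mu=2.0, lam=1.0):
    """Sum of node scores plus agreement bonuses on horizontal and
    vertical edges. `y` is a flat tuple of 9 labels in row-major order."""
    total = 0.0
    for r in range(ROWS):
        for c in range(COLS):
            i = r * COLS + c
            d = obs[r][c]
            total += mu * (d if y[i] == "text" else 1.0 - d)  # node score
            if r + 1 < ROWS and y[i] == y[i + COLS]:          # vertical edge
                total += lam
            if c + 1 < COLS and y[i] == y[i + 1]:             # horizontal edge
                total += lam
    return total

# Exact partition function: 2**9 = 512 configurations for a 3x3 grid.
all_configs = list(itertools.product(LABELS, repeat=ROWS * COLS))
Z = sum(math.exp(log_potential(y)) for y in all_configs)

# Exact p(y | x) for one configuration (threshold the observations at 0.5).
y_guess = tuple("text" if obs[i // COLS][i % COLS] > 0.5 else "non-text"
                for i in range(ROWS * COLS))
p = math.exp(log_potential(y_guess)) / Z
```

Even at this toy size the normalizer sums 512 terms; for a page with thousands of sites the exact $Z$ is intractable, which is what motivates the per-site factorization discussed earlier in this section.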

