Segmentation of heterogeneous document images : an ... - Tel
tel-00912566, version 1 - 2 Dec 2013

Figure 4.2: This figure shows our two-dimensional conditional random field model. Blue lines represent the boundaries between sites. We divide the document image into rectangular blocks of equal height and equal width. The label of each site depends on the labels of the sites in its vicinity and on the observations defined for that site. Sites depend on observations by means of feature functions. The ground-truth label of each site is also available for training and evaluation. Note that the area shown in the image is part of a document containing text lines from the main text and part of a side note. The widths and heights of the sites in this image are not to scale and are shown for visualization purposes only: documents come in different sizes and resolutions, so the size of each site must be normalized, and the width should be small enough to pass between the side notes and the main body.

Each site may take one of two labels: "text" or "non-text". However, the label of each site depends on the labels of the sites in its vicinity; this includes the sites to its left, right, top and bottom. Furthermore, the label of each site may depend on observations from the document image. In general, conditional random fields place no restriction on where these observations come from. Here, however, we restrict the observations to the results of several filtering operations applied to the part of the document image under the current site (see Figure 4.2).

Let x_s = {x_1, ..., x_N} be the vector of observations available to site s and y_s be one of the labels {text, non-text}. For each site s, the conditional probability of the label y_s given the observations x_s is defined as:

\[
p_s(y_s \mid x_s) \propto \exp\left( \sum_{k=1}^{F_e} \lambda_k f_k^{e}(y_{i,\,i\in N(s)}, y_s, x_{i,\,i\in N(s)}, x_s) + \sum_{k=1}^{F_n} \mu_k f_k^{n}(y_s, x_s) \right)
\]

where f^n and f^e are the node and edge feature functions, respectively, which are discussed later in Section 4.2.
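As a concrete illustration, the site-wise probability can be sketched in a few lines of code. This is a minimal sketch, not the implementation used in this work: the feature functions, weights (λ, µ) and observation values below are all hypothetical.

```python
import math

def site_potential(y_s, y_neighbors, x_s, x_neighbors,
                   edge_features, edge_weights, node_features, node_weights):
    """Unnormalized potential exp(sum_k lam_k*f_e_k + sum_k mu_k*f_n_k) for site s."""
    score = 0.0
    for f_e, lam in zip(edge_features, edge_weights):
        score += lam * f_e(y_neighbors, y_s, x_neighbors, x_s)
    for f_n, mu in zip(node_features, node_weights):
        score += mu * f_n(y_s, x_s)
    return math.exp(score)

# Toy features: a node feature that fires when the site's mean darkness
# suggests ink, and an edge feature counting neighbors with the same label.
node_features = [lambda y, x: 1.0 if (y == "text") == (x > 0.5) else 0.0]
edge_features = [lambda yn, y, xn, x: sum(1.0 for v in yn if v == y)]

pot_text = site_potential("text", ["text", "non-text"], 0.8, [0.7, 0.1],
                          edge_features, [0.5], node_features, [1.0])
pot_non = site_potential("non-text", ["text", "non-text"], 0.8, [0.7, 0.1],
                         edge_features, [0.5], node_features, [1.0])

# Normalizing over the two labels turns the potentials into p_s(y_s | x_s).
p_text = pot_text / (pot_text + pot_non)
```

Because the potentials are exponentials of weighted feature sums, normalizing over the two labels reduces to a logistic function of the score difference, which here favors "text".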
F_e is the total number of edge feature functions and F_n the total number of node feature functions. N(s) is the set of neighbors of site s and is often called the Markov blanket of s. λ and µ are the weights assigned to the edge and node feature functions, respectively. A node feature function f^n judges how likely the label y_s is to be assigned to site s given the observations x_s. An edge feature function f^e indicates how likely two adjacent sites are to have a particular label configuration given the observations at both sites. Feature functions and their weights can take any arbitrary real value. With a slight abuse of notation, it is usual to write this probability in the compact form:

\[
p_s(y_s \mid x_s) \propto \exp\left( \sum_{k=1}^{F} \lambda_k f_k(y_{i,\,i\in N(s)}, y_s, x_{i,\,i\in N(s)}, x_s) \right)
\]

where F = F_e + F_n is the total number of feature functions.

According to the Hammersley-Clifford theorem (1971), the conditional probability of a set of sites given a set of observations is proportional to the product of potential functions over the cliques of the graph. If we take the cliques to be pairs of adjacent sites, then:

\[
p(y \mid x) = \frac{1}{Z} \prod_{s\in S} \exp\left( \sum_{k=1}^{F} \lambda_k f_k(y_{i,\,i\in N(s)}, y_s, x_{i,\,i\in N(s)}, x_s) \right)
\]

The scalar Z is the normalization factor, or partition function, which ensures that the probability is valid. It is defined as the sum of the exponential terms over all possible label configurations. Computing this partition function is intractable for most applications because of its computational cost. As a consequence, by assuming conditional independence of the y's given the x's, the conditional probability of the CRF model for the whole image can be defined as:

\[
p(y \mid x) = \prod_{s\in S} \frac{1}{Z_s} \exp\left( \sum_{k=1}^{F} \lambda_k f_k(y_{i,\,i\in N(s)}, y_s, x_{i,\,i\in N(s)}, x_s) \right)
\]

Then Z_s for each site becomes:

\[
Z_s = \sum_{(y_1, y_2)\in Y^2} \exp\left( \sum_{i=1}^{F} \lambda_i f_i(y_1, y_2, x_s, x_{N(s)}) \right)
\]

where Y = {text, non-text}. Notice that the summation in the full partition function is still composed of 2^9 = 512 terms for a mere 3 × 3 grid of sites.

4.1.1 The three basic problems for CRFs

Given the CRF model of the previous section, there are three basic problems of interest that must be solved for the model to be useful in real-world applications.
These problems are the following:

• Problem 1: Given all the observations x, the label configuration y and a model ψ = (f, λ), how do we efficiently compute p(y|x, ψ), the conditional probability of the label configuration given the model? (Marginal inference)
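The quantities involved in Problem 1 can be made concrete by brute force on a tiny grid. The sketch below is illustrative only, with hypothetical feature functions and weights: it enumerates every label configuration of a 2×2 grid to compute the global partition function Z and the conditional probability of one configuration, and it shows why such enumeration (already 512 terms for 3×3) is infeasible for real documents.

```python
import itertools
import math

Y = ["text", "non-text"]
SITES = [(0, 0), (0, 1), (1, 0), (1, 1)]

def neighbors(s):
    r, c = s
    cand = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [n for n in cand if n in SITES]

def score(labels):
    # Toy weighted feature sum: 0.4 for each pair of agreeing adjacent
    # sites (each edge counted once via the n > s tuple comparison).
    total = 0.0
    for s in SITES:
        for n in neighbors(s):
            if n > s and labels[s] == labels[n]:
                total += 0.4
    return total

# Global partition function Z by exhaustive enumeration: |Y|^4 = 16 terms
# here, but already |Y|^9 = 512 for a 3x3 grid -- exponential in grid size.
configs = [dict(zip(SITES, ys)) for ys in itertools.product(Y, repeat=4)]
Z = sum(math.exp(score(c)) for c in configs)

# Problem 1: the conditional probability of one given label configuration.
y_all_text = {s: "text" for s in SITES}
p = math.exp(score(y_all_text)) / Z

# The site-wise approximation instead sums only over Y^2 per site.
z_site = sum(math.exp(0.4 if y1 == y2 else 0.0)
             for y1, y2 in itertools.product(Y, repeat=2))
```

The contrast between Z (a sum over all 2^|S| configurations) and z_site (a sum over only |Y|^2 = 4 label pairs) is exactly what motivates the site-wise factorization above.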