Figure 4.1: Long-distance communication between image sites in CRFs.

The next advantage of using CRFs is that they provide a foundation that easily incorporates global, local and regional knowledge of the document into the process. Because of the conditional nature of the model, any interaction between the fields' labels in a Markov network can be learned from the available knowledge. In this chapter we first introduce two-dimensional CRFs. Next, we describe feature functions, which are the most important building blocks of CRFs. We then describe the different types of observations that we extract from a document image; these observations are used inside our feature functions. This chapter also covers methods for decoding the optimal label configuration of two-dimensional CRFs and methods for training the parameters of the model. Finally, we report the results.

4.1 Conditional random fields (CRFs)

Conditional random fields first appeared in the domain of natural language processing, where they were proposed by Lafferty et al. [50] as a framework for building probabilistic models to segment and label sequence data. Examples of such sequence data can be found in a wide variety of problems in text and speech processing, such as part-of-speech (POS) tagging. Among probabilistic models that perform the same task, we can name Hidden Markov Models (HMMs) [78], which are well understood and widely used throughout the literature. HMMs identify the most likely label sequence for a given observation sequence. They assign a joint probability p(x, y) to pairs of observation (x) and label (y) sequences. To define a joint probability of this type, a model must enumerate all possible observation sequences, which is intractable for most domains unless the observation elements are independent of each other within the observation sequence. Although this assumption is appropriate for simple toy examples, most practical observations are best represented in terms of multiple features with long-range dependencies. CRFs address this issue by modelling the conditional probability p(y|x) of the label sequence given the observation sequence, rather than a joint distribution over both label and observation sequences.

CRFs first appeared in the form of chain conditional random fields: several fields are connected sequentially, and the label of each field depends on the label of the field to its left and on the whole observation sequence. This model is best suited to applications in signal and natural language processing, where the data appear naturally as a sequence. In our application we deal with images, which are naturally expressed in two dimensions; we are therefore interested in two-dimensional conditional random fields. To obtain our two-dimensional random fields, we first divide the document image into rectangular blocks with equal heights and widths, and we call each block a site. Contrary to other CRF approaches that use sites of fixed size for all document images, we choose the height and width of the sites to be half the mean height and half the mean width, respectively, of the text characters on the document.
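As an illustration of the site construction just described, the following minimal Python sketch divides a binarised page into sites whose height and width are half the mean character height and width. It assumes characters can be approximated by connected components (here extracted with OpenCV's connectedComponentsWithStats); the thesis does not prescribe a particular implementation, so the function name and the use of connected components are illustrative assumptions only.

import cv2
import numpy as np

def build_site_grid(binary_image):
    """Divide a binarised document image (uint8, text pixels = 255) into
    rectangular sites whose height and width are half the mean height and
    width of the text characters.

    Connected components are used here as a stand-in for character
    extraction; a real system would typically filter out non-character
    components (noise, rules, large graphics) first.
    """
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(
        binary_image, connectivity=8)
    if num_labels <= 1:
        raise ValueError("no foreground components found on the page")

    # Skip row 0 of the statistics, which describes the background component.
    heights = stats[1:, cv2.CC_STAT_HEIGHT]
    widths = stats[1:, cv2.CC_STAT_WIDTH]

    # Site dimensions: half the mean character height and width.
    site_h = max(1, int(round(heights.mean() / 2)))
    site_w = max(1, int(round(widths.mean() / 2)))

    # Tile the page; each (top, left, height, width) tuple is one CRF site.
    img_h, img_w = binary_image.shape[:2]
    sites = [(y, x, site_h, site_w)
             for y in range(0, img_h, site_h)
             for x in range(0, img_w, site_w)]
    return sites, (site_h, site_w)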
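It is also useful to recall the conditional distribution defined by a chain CRF, in the standard form given by Lafferty et al. [50]; the sketch below uses that chain form, where the f_k are feature functions, the lambda_k are their learned weights and Z(x) is the partition function, and the two-dimensional model developed in this chapter replaces the chain neighbourhood of each site with a lattice neighbourhood:

\[
p(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z(\mathbf{x})}
\exp\!\left( \sum_{t} \sum_{k} \lambda_k \, f_k(y_{t-1}, y_t, \mathbf{x}, t) \right),
\qquad
Z(\mathbf{x}) = \sum_{\mathbf{y}'} \exp\!\left( \sum_{t} \sum_{k} \lambda_k \, f_k(y'_{t-1}, y'_t, \mathbf{x}, t) \right).
\]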

