Unni Cathrine Eiken February 2005



4 Classification

In order to use the structures in the EPAS list as an aid in anaphora resolution, they have to be processed. The pre-processing in section 3.6.4 has shown that interesting distributions exist in the data set and indicates that certain groups of arguments display distributions particular to the domain. As a step towards exploring whether these distributions can be used to represent selectional restrictions and thus function as real-world knowledge for the domain, the words in the EPAS list must be classified. This procedure uses the context patterns that a word occurs in to classify the word, for example allowing an argument to be classified according to the predicates it co-occurs with. A classification of this type gives information about which word to expect in a given context pattern, and the results can therefore be used in the process of choosing the most likely antecedent for an anaphor. In this respect, the most likely antecedent must be interpreted as the most likely antecedent given a particular contextual pattern.

In the following, the EPAS list will first be classified to see if the context patterns represented by the EPAS contain enough information to suggest the correct antecedent for anaphoric expressions from the text collection. Then an association of concepts will be performed, creating bundles of those arguments which occur in similar contexts/with similar predicates. These concepts will then be applied in combination with the classification method to see if they improve the process of suggesting the correct antecedent for the anaphors.

For the purposes of classification and testing, the EPAS list was divided into training and test sets. The test set consists of all structures containing pronouns, while the training set consists of the remaining EPAS. For the test set, the correct antecedent for each pronoun was identified manually and added to the test file.
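The train/test split described above can be sketched as follows. The pair encoding of an EPAS entry and the pronoun inventory are assumptions made purely for illustration; the actual EPAS structures in the thesis are richer than this.

```python
# Sketch of the train/test split: EPAS entries whose argument slot holds a
# pronoun form the test set, all remaining entries form the training set.
# The (predicate, argument) pair format and the pronoun list are
# hypothetical simplifications for illustration only.

PRONOUNS = {"it", "he", "she", "they"}  # assumed pronoun inventory

epas = [
    ("open", "door"),
    ("open", "it"),
    ("start", "engine"),
    ("start", "it"),
]

train = [(pred, arg) for pred, arg in epas if arg not in PRONOUNS]
test = [(pred, arg) for pred, arg in epas if arg in PRONOUNS]
```

For the real test set, each pronoun entry would additionally carry the manually identified antecedent, so that classifier output can be checked against it.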
When testing with the test instances, the classifier assigns an antecedent based on the patterns it has seen in the training set. In this way, the correct antecedent in each test case functions as a means of measuring the success rate of the classification. The test set thus provides a good way of evaluating the product of the classification and gives a measure of whether the correct antecedent can be assigned based on training on occurrences of EPAS/context patterns.
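The evaluation just described amounts to comparing the classifier's suggestion against the manually annotated antecedent and computing the success rate. The `classify` stand-in below is a placeholder, not the actual trained classifier.

```python
# Sketch of the evaluation: accuracy is the share of test cases where the
# classifier's suggested antecedent matches the manually identified one.

def classify(predicate):
    # Stand-in for the trained classifier: a fixed lookup for illustration.
    return {"open": "door", "start": "engine"}.get(predicate)

# Each test case pairs a context pattern with its gold antecedent.
test_cases = [("open", "door"), ("start", "engine"), ("lock", "door")]

correct = sum(1 for pred, gold in test_cases if classify(pred) == gold)
accuracy = correct / len(test_cases)
```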

The process of classifying the constituents of the EPAS is most useful if the aim of the classification is held clearly in mind. Classifying arguments relative to the predicates and the other arguments they co-occur with can give information about two things:

• Is the data set generalisable enough to allow inference of the single correct antecedent in each test case?
• Is the data set generalisable enough to allow inference of words within the semantic concept group that the correct antecedent belongs to?

In this thesis, it is of interest to identify all the words which occur in specific environments. Accordingly, we are interested in finding all the members which can co-occur in a specific pattern – and not necessarily only the single correct antecedent.

The classification phase in the present work has three steps: firstly, classification through a memory-based learning algorithm; secondly, association of semantic classes from the text material by looking at contextual environments; and thirdly, classification through application of the concept groups gathered in step two. In the following, the classification method will be described in more detail.

4.1 Step I: Classification with TiMBL

TiMBL (Tilburg Memory Based Learner) (Daelemans et al. 2003) is a memory-based learning (MBL) tool developed by the ILK research group at the University of Tilburg (ILK 2004). TiMBL has been developed specifically with the domain of NLP in mind and provides an implementation of several MBL algorithms.

Within MBL, or lazy learning (Daelemans et al. 1999), training instances are simply stored in memory. Upon encountering a new instance, classification is performed by comparing it to the stored experiences and estimating its similarity to them. The stored example most similar to the new instance determines its classification. This approach stands in opposition to rule-induction based methods, also called greedy algorithms.
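The lazy-learning behaviour described above can be illustrated with a minimal 1-nearest-neighbour sketch using the overlap metric (the count of matching feature values). This is a simplification in the spirit of TiMBL's IB1 algorithm, not TiMBL's actual implementation; the feature encoding is likewise an assumption for illustration.

```python
# Minimal memory-based ("lazy") classifier: training instances are simply
# stored, and a new instance receives the class of the stored instance that
# shares the most feature values with it (the overlap metric). Here the
# features are an assumed (predicate, relation) context and the class is
# the argument filling the slot.

def overlap(a, b):
    """Number of positions where the two feature vectors agree."""
    return sum(1 for x, y in zip(a, b) if x == y)

def classify(memory, instance):
    """Return the class of the stored instance most similar to `instance`."""
    features, label = max(memory, key=lambda ex: overlap(ex[0], instance))
    return label

memory = [
    (("open", "obj"), "door"),
    (("start", "obj"), "engine"),
    (("open", "subj"), "guard"),
]

print(classify(memory, ("open", "obj")))  # -> door
```

Note that no model is built at training time; all comparison work is deferred to classification, which is what distinguishes lazy learning from the greedy, rule-inducing approach discussed next.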
In greedy learning algorithms, the learning material is used to create a model with expected characteristics for each category to be learned. Daelemans et al. (1999) show that

