Large-Scale Semi-Supervised Learning for Natural Language ...


web-scale systems. This work was published in the proceedings of IJCAI-09 [Bergsma et al., 2009b]. This same method can also be used to determine whether a pronoun in text refers to a preceding noun phrase or is instead non-referential. This is the first system for non-referential pronoun detection where all the key information is derived from unlabeled data. The performance of the system exceeds that of (previously dominant) rule-based approaches. The work on non-referential it detection was first published in the proceedings of ACL-08: HLT [Bergsma et al., 2008b].

Chapter 4 improves on the lexical disambiguation classifiers of Chapter 3 by using a simple technique for learning better support vector machines (SVMs) using fewer training examples. Rather than using the standard SVM regularization, we regularize toward low weight-variance. Our new SVM objective remains a convex quadratic function of the weights, and is therefore computationally no harder to optimize than a standard SVM. Variance regularization is shown to enable dramatic improvements in the learning rates of SVMs on the three lexical disambiguation tasks that we also tackle in Chapter 3. A version of this chapter was published in the proceedings of CoNLL 2010 [Bergsma et al., 2010b].

Chapter 5 looks at the effect of combining web-scale N-gram features with standard, lexicalized features in supervised classifiers. It extends the work in Chapter 3 both by tackling new problems and by simultaneously evaluating these two very different feature classes. We show that including N-gram count features can advance the state-of-the-art accuracy on standard data sets for adjective ordering, spelling correction, noun compound bracketing, and verb part-of-speech disambiguation. More importantly, when operating on new domains, or when labeled training data is not plentiful, we show that using web-scale N-gram features is essential for achieving robust performance.
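As a purely illustrative sketch of this feature combination, the function below mixes log-scaled N-gram count features with standard lexical indicator features for a confusable-word decision (their/there spelling correction). The lookup table, toy counts, and feature templates here are invented for illustration; they are not the actual templates or data used in the dissertation.

```python
from math import log

def ngram_count(ngram):
    """Stand-in for a lookup into a web-scale N-gram table.
    The counts below are toy values, not real corpus counts."""
    toy_counts = {
        "depends on their": 1200,
        "depends on there": 15,
        "on their own": 9000,
        "on there own": 40,
    }
    return toy_counts.get(ngram, 0)

def features(left, right, candidate):
    """Build a feature dict combining web-scale count features
    with standard lexical (indicator) features for one candidate
    filling the gap between `left` and `right` context words."""
    feats = {}
    # Count features: log-counts of the N-grams formed by filling
    # the candidate into overlapping context windows.
    for ngram in (
        " ".join(left + [candidate]),               # candidate at the right edge
        " ".join(left[-1:] + [candidate] + right),  # candidate in the middle
    ):
        feats["logcount:" + ngram] = log(ngram_count(ngram) + 1)
    # Lexical features: indicators on the surrounding words, as in a
    # conventional supervised classifier.
    for i, w in enumerate(left + right):
        feats["word@%d=%s" % (i, w)] = 1.0
    return feats
```

On this toy example the count features alone already prefer the correct candidate: `features(["depends", "on"], ["own"], "their")` gets larger log-counts than the same call with `"there"`, while the shared lexical features let a trained classifier fall back on the context words when counts are sparse.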
A version of this chapter was published in the proceedings of ACL 2010 [Bergsma et al., 2010c].

Using Unlabeled Statistics to Generate Training Examples

In the second part of the dissertation, rather than using the unlabeled statistics solely as features, we use them to generate labeled examples. By automatically labeling a large number of examples, we can train powerful discriminative models, leveraging fine-grained features of input words.

Chapter 6 shows how this technique can be used to learn selectional preferences. Models of selectional preference are essential for resolving syntactic, word-sense, and reference ambiguity, and such models have received a lot of attention in the NLP community. We turn selectional preference into a supervised classification problem by asking our classifier to predict which predicate-argument pairs should have high association in text. Positive examples are taken from observed predicate-argument pairs, while negatives are constructed from unobserved combinations. We train a classifier to distinguish the positive from the negative instances. Features are constructed from the distribution of the argument in text. We show how to partition the examples for efficient training with 57 thousand features and 6.5 million training instances. The model outperforms other recent approaches, achieving excellent correlation with human plausibility judgments. Compared to mutual information, our method identifies 66% more verb-object pairs in unseen text, and resolves 37% more pronouns correctly in a pronoun resolution experiment. This work was originally published in EMNLP 2008 [Bergsma et al., 2008a].

In Chapter 7, we apply this technique to learning a model of string similarity. A character-based measure of similarity is an important component of many natural language processing systems, including approaches to transliteration, coreference, word alignment, spelling correction, and the identification of cognates in related vocabularies. We turn string similarity into a classification problem by asking our classifier to predict which bilingual word pairs are translations. Positive pairs are generated automatically from words with a high association in an aligned bitext, or mined from dictionary translations. Negatives are constructed from pairs with a high amount of character overlap, but which are not translations. We gather features from substring pairs consistent with a character-based alignment of the two strings. The main objective of this work was to demonstrate a better model of string similarity, not necessarily to demonstrate our method for generating training examples; however, the overall framework of this work fits in nicely with this dissertation. Our model achieves exceptional performance: on nine separate cognate identification experiments using six language pairs, we more than double the average precision of traditional orthographic measures like longest common subsequence ratio and Dice’s coefficient. We also show strong improvements over other recent discriminative and heuristic similarity functions. This work was originally published in the proceedings of ACL 2007 [Bergsma and Kondrak, 2007a].

1.6 Summary of Main Contributions

The main contribution of Chapter 3 is to show that we need not restrict ourselves to very limited contextual information simply because we are working with web-scale volumes of text. In particular, by using web-scale N-gram data (as opposed to, for example, search engine data), we can:

• combine information from multiple, overlapping sequences of context of varying lengths, rather than using a single context pattern (Chapter 3), and
• apply either discriminative techniques or simple unsupervised algorithms to integrate information from these overlapping contexts (Chapter 3).

We also make useful contributions by showing how to:

• detect non-referential pronouns by looking at the distribution of fillers that occur in pronominal context patterns (Section 3.7),
• modify the SVM learning algorithm to be biased toward a solution that is a priori known to be effective, whenever features are based on counts (Chapter 4),
• operate on new domains with far greater robustness than approaches that simply use standard lexical features (Chapter 5), and
• exploit preprocessing of web-scale N-gram data, either via part-of-speech tags added to the source corpus (Chapter 5), or by truncating/stemming the N-grams themselves (Section 3.7).

The technique of automatically generating training examples has also been used previously in NLP. Our main contributions are showing that:

• very clean pseudo-examples can be generated from aggregate statistics rather than individual words or sentences in text, and

