Large-Scale Semi-Supervised Learning for Natural Language ...
web-scale systems. This work was published in the proceedings of IJCAI-09 [Bergsma et al., 2009b]. This same method can also be used to determine whether a pronoun in text refers to a preceding noun phrase or is instead non-referential. This is the first system for non-referential pronoun detection where all the key information is derived from unlabeled data. The performance of the system exceeds that of (previously dominant) rule-based approaches. The work on non-referential it detection was first published in the proceedings of ACL-08: HLT [Bergsma et al., 2008b].

Chapter 4 improves on the lexical disambiguation classifiers of Chapter 3 by using a simple technique for learning better support vector machines (SVMs) using fewer training examples. Rather than using the standard SVM regularization, we regularize toward low weight-variance. Our new SVM objective remains a convex quadratic function of the weights, and is therefore computationally no harder to optimize than a standard SVM. Variance regularization is shown to enable dramatic improvements in the learning rates of SVMs on the three lexical disambiguation tasks that we also tackle in Chapter 3. A version of this chapter was published in the proceedings of CoNLL 2010 [Bergsma et al., 2010b].

Chapter 5 looks at the effect of combining web-scale N-gram features with standard, lexicalized features in supervised classifiers. It extends the work in Chapter 3 both by tackling new problems and by simultaneously evaluating these two very different feature classes. We show that including N-gram count features can advance the state-of-the-art accuracy on standard data sets for adjective ordering, spelling correction, noun compound bracketing, and verb part-of-speech disambiguation. More importantly, when operating on new domains, or when labeled training data is not plentiful, we show that using web-scale N-gram features is essential for achieving robust performance.
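To make the idea of N-gram count features concrete, the sketch below shows one plausible way such features could be assembled for a candidate word: fill the target position, enumerate the overlapping context patterns of varying lengths that span it, and record a log-count for each. The `get_count` lookup, the pattern lengths, and the feature layout are all hypothetical illustrations, not the exact design used in Chapter 5.

```python
from math import log

def ngram_count_features(context, candidates, get_count):
    """Build log-count features for each candidate filler.

    `context` is a token list with the target position marked by None,
    e.g. ["the", "system", None, "on", "new", "domains"].
    `get_count` is an assumed interface that returns a pattern's
    frequency in a web-scale N-gram collection (hypothetical).
    Returns {candidate: [log-counts, one per overlapping pattern]}.
    """
    hole = context.index(None)
    features = {}
    for cand in candidates:
        filled = context[:hole] + [cand] + context[hole + 1:]
        feats = []
        # Every N-gram of length 2..5 that overlaps the filled position
        for n in range(2, 6):
            for start in range(hole - n + 1, hole + 1):
                if 0 <= start and start + n <= len(filled):
                    pattern = tuple(filled[start:start + n])
                    # add-one inside the log so unseen patterns give 0.0
                    feats.append(log(get_count(pattern) + 1))
        features[cand] = feats
    return features
```

A supervised classifier would then be trained on these count features, optionally concatenated with standard lexical (word-identity) features; the point of the chapter is that the count features remain informative even when the lexical features do not transfer to a new domain.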
A version of this chapter was published in the proceedings of ACL 2010 [Bergsma et al., 2010c].

Using Unlabeled Statistics to Generate Training Examples

In the second part of the dissertation, rather than using the unlabeled statistics solely as features, we use them to generate labeled examples. By automatically labeling a large number of examples, we can train powerful discriminative models, leveraging fine-grained features of input words.

Chapter 6 shows how this technique can be used to learn selectional preferences. Models of selectional preference are essential for resolving syntactic, word-sense, and reference ambiguity, and such models have received considerable attention in the NLP community. We turn selectional preference into a supervised classification problem by asking our classifier to predict which predicate-argument pairs should have high association in text. Positive examples are taken from observed predicate-argument pairs, while negatives are constructed from unobserved combinations. We train a classifier to distinguish the positive from the negative instances. Features are constructed from the distribution of the argument in text. We show how to partition the examples for efficient training with 57 thousand features and 6.5 million training instances. The model outperforms other recent approaches, achieving excellent correlation with human plausibility judgments. Compared to mutual information, our method identifies 66% more verb-object pairs in unseen text, and resolves 37% more pronouns correctly in a pronoun resolution experiment. This work was originally published in EMNLP 2008 [Bergsma et al., 2008a].

In Chapter 7, we apply this technique to learning a model of string similarity. A character-based measure of similarity is an important component of many natural language processing systems, including approaches to transliteration, coreference, word alignment,
spelling correction, and the identification of cognates in related vocabularies. We turn string similarity into a classification problem by asking our classifier to predict which bilingual word pairs are translations. Positive pairs are generated automatically from words with a high association in an aligned bitext, or mined from dictionary translations. Negatives are constructed from pairs with a high amount of character overlap, but which are not translations. We gather features from substring pairs consistent with a character-based alignment of the two strings. The main objective of this work was to demonstrate a better model of string similarity, not necessarily to demonstrate our method for generating training examples; however, the overall framework of this work fits in nicely with this dissertation. Our model achieves exceptional performance; on nine separate cognate identification experiments using six language pairs, we more than double the average precision of traditional orthographic measures like longest common subsequence ratio and Dice’s coefficient. We also show strong improvements over other recent discriminative and heuristic similarity functions. This work was originally published in the proceedings of ACL 2007 [Bergsma and Kondrak, 2007a].

1.6 Summary of Main Contributions

The main contribution of Chapter 3 is to show that we need not restrict ourselves to very limited contextual information simply because we are working with web-scale volumes of text.
In particular, by using web-scale N-gram data (as opposed to, for example, search engine data), we can:

• combine information from multiple, overlapping sequences of context of varying lengths, rather than using a single context pattern (Chapter 3), and

• apply either discriminative techniques or simple unsupervised algorithms to integrate information from these overlapping contexts (Chapter 3).

We also make useful contributions by showing how to:

• detect non-referential pronouns by looking at the distribution of fillers that occur in pronominal context patterns (Section 3.7),

• modify the SVM learning algorithm to be biased toward a solution that is a priori known to be effective, whenever features are based on counts (Chapter 4),

• operate on new domains with far greater robustness than approaches that simply use standard lexical features (Chapter 5), and

• exploit preprocessing of web-scale N-gram data, either via part-of-speech tags added to the source corpus (Chapter 5), or by truncating/stemming the N-grams themselves (Section 3.7).

The technique of automatically generating training examples has also been used previously in NLP. Our main contributions are showing:

• very clean pseudo-examples can be generated from aggregate statistics rather than individual words or sentences in text, and