Large-Scale Semi-Supervised Learning for Natural Language ...
automatically-labeled examples are used to train the classifier. Unfortunately, automatically-labeled examples are often incorrect. The classifier works hard to classify these examples correctly, and subsequently gets similar examples wrong when it encounters them at test time. If we have enough manually-labeled examples, it seems that we want the ultimate mediator of the value of our features to be performance on these labeled examples, not performance on any pseudo-examples. This mediation is, of course, exactly what supervised learning does. If we instead create features from unlabeled data, rather than using unlabeled data to create new examples, standard supervised learning can be used.

How can we include information from unlabeled data as new features in a supervised learner? Section 2.2 described a typical feature representation: each feature is a binary indicator of whether a word is present or not in a document to be classified. When we extract features from unlabeled data, we add new dimensions to the feature representation. These new dimensions are for features that represent what we might call second-order interactions: co-occurrences of words with each other in unlabeled text.

In very recent papers, both Huang and Yates [2009] and Turian et al. [2010] provide comparisons of different ways to extract new features from unlabeled data; both evaluate performance on a range of tasks.

Features Directly From a Word's Distribution in Unlabeled Text

Returning to our sports example, we could have a feature for whether a word in a given document occurs elsewhere, in unlabeled data, with the word score. A classifier could learn that this feature is associated with the sports class, because words like hockey, baseball, inning, win, etc.
tend to occur with score, and some of these likely occur in the training set. So, although we may never see the word curling during training, it does occur in unlabeled text with many of the same words that occur with other sports terms, like the word score. A document that contains curling will therefore have the second-order score feature, and thus curling, through features created from its distribution, is still an indicator of sports.

Directly having a feature for each item that co-occurs in a word's distribution is perhaps the simplest way to leverage unlabeled data in the feature representation. Huang and Yates [2009] essentially use this as their multinomial representation. They find it performs worse on sequence-labeling tasks than distributional representations based on HMMs and latent semantic analysis (two other effective approaches for creating features from unlabeled data). One issue with using the distribution directly is that although sparsity is potentially alleviated at the word level (we can handle words even if we haven't seen them in training data), we increase sparsity at the feature level: there are more features to train but the same amount of training data. This might explain why Huang and Yates [2009] see improved performance on rare words but similar performance overall. We return to this issue in Chapter 5, where we present a distributional representation for verb part-of-speech tag disambiguation that may also suffer from these drawbacks (Section 5.6).

Features from Similar Words or Distributional Clusters

There are many other ways to create features from unlabeled data. One popular approach is to summarize the distribution of words (in unlabeled data) using similar words. Wang et al. [2005] use similar words to aid generalization in dependency parsing. Marton et al. [2009] use similar phrases to improve the handling of out-of-vocabulary terms in a machine translation system. Another recent trend is to create features from automatically-generated word clusters. Several researchers have used the hierarchical clustering algorithm of Brown et al. [1992], and then created features for cluster membership at different levels of the hierarchy [Miller et al., 2004; Koo et al., 2008]. Rather than clustering single words, Lin and Wu [2009] use phrasal clusters, and provide features for cluster membership when different numbers of clusters are used in the clustering.

Features for the Output of Auxiliary Classifiers

Another way to create features from unlabeled data is to create features for the output of predictions on auxiliary problems that can be trained solely with unlabeled data [Ando and Zhang, 2005]. For example, we could create a prediction for whether the word arena occurs in a document. We can take all the documents where arena does and does not occur, and build a classifier using all the other words in the document. This classifier may predict that arena does occur if the words hockey, curling, fans, etc. occur. When the predictions are used as features, if they are useful, they will receive high weight at training time. At test time, if we see a word like curling, for example, even though it was never seen in our labeled set, it may cause the predictor for arena to return a high score, and thus also cause the document to be recognized as sports.

Note that since these examples can be created automatically, this problem (and the other auxiliary problems in the Ando and Zhang approach) falls into the category of those with Natural Automatic Examples, as discussed above. One possible direction for future work is to construct auxiliary problems with pseudo-negative examples. For example, we could include the predictions of various configurations of our selectional-preference classifier (Chapter 6) as features in a discriminatively-trained language model. We took a similar approach in our work on gender [Bergsma et al., 2009a].
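The auxiliary-problem idea above can be made concrete with a small sketch. The code below trains a crude predictor for whether arena occurs in a document, using only unlabeled text, and exposes its score so it could be added as one extra feature in a supervised classifier. The corpus, the scoring rule, and all names here are hypothetical toy illustrations, not the actual system used in this work:

```python
# Sketch of an auxiliary problem trained purely on unlabeled data:
# predict whether the target word ("arena") occurs in a document.
# Corpus and scoring rule are hypothetical toy examples.

from collections import Counter

TARGET = "arena"

unlabeled_docs = [
    "hockey fans filled the arena for the final",
    "the arena hosted a curling match for local fans",
    "parliament debated the new budget measure",
    "the election results surprised many voters",
]

def train_arena_predictor(docs):
    """Count how often each word co-occurs with the target word."""
    with_target, without_target = Counter(), Counter()
    for doc in docs:
        words = set(doc.split())
        bucket = with_target if TARGET in words else without_target
        bucket.update(words - {TARGET})

    def score(doc):
        # Crude log-odds-style score: positive when the document's
        # words co-occur with "arena" more often than not.
        return sum(
            with_target[w] - without_target[w] for w in set(doc.split())
        )

    return score

arena_score = train_arena_predictor(unlabeled_docs)

# At test time the score becomes one extra feature, even for words
# never seen in the labeled set (e.g. "curling").
print(arena_score("a curling match drew many fans"))  # positive
print(arena_score("voters debated the budget"))       # negative
```

A real implementation would use a discriminative classifier rather than raw count differences, but the pipeline is the same: the auxiliary prediction is appended to the supervised feature vector and weighted at training time.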
We trained a classifier on automatically-created examples, but used the output of this classifier as another feature in a classifier trained on a small amount of supervised data. This resulted in a substantial gain in performance over using the original prediction on its own: 95.5% versus 92.6% (note, however, that other features were combined with the prediction of the auxiliary classifier).

Features used in this Dissertation

In this dissertation, we create features from unlabeled data in several chapters and in several different ways. In Chapter 6, to assess whether a noun is compatible with a verb, we create features for the noun's distribution only with other verbs. Thus we characterize a noun by its verb contexts, rather than its full distribution, using fewer features than a naive representation based on the noun's full distributional profile. Chapters 3 and 5 also selectively use features from parts of the total distribution of a word, phrase, or pair of words (to characterize the relation between words, for noun compound bracketing and verb tag disambiguation in Chapter 5). In Chapter 3, we characterize contexts by using selected types from the distribution of other words that occur in the context. For the adjective-ordering work in Chapter 5, we choose an order based on the distribution of the adjectives individually and combined in a phrase. Our approaches are simple, but effective. Perhaps most importantly, by leveraging the counts in a web-scale N-gram corpus, they scale to make use of all the text data on the web. On the other hand, scaling most other semi-supervised techniques to even moderately-large collections of unlabeled text remains "future work" for a large number of published approaches in the machine learning and NLP literature.