automatically-labeled examples are used to train the classifier. Unfortunately, automatically-labeled examples are often incorrect. The classifier works hard to classify these examples correctly, and subsequently gets similar examples wrong when it encounters them at test time. If we have enough manually-labeled examples, it seems that we want the ultimate mediator of the value of our features to be performance on these labeled examples, not performance on any pseudo-examples. This mediation is, of course, exactly what supervised learning does. If we instead create features from unlabeled data, rather than using unlabeled data to create new examples, standard supervised learning can be used.

How can we include information from unlabeled data as new features in a supervised learner? Section 2.2 described a typical feature representation: each feature is a binary indicator of whether a word is present or not in a document to be classified. When we extract features from unlabeled data, we add new dimensions to the feature representation. These new dimensions are for features that represent what we might call second-order interactions: co-occurrences of words with each other in unlabeled text.

In very recent papers, both Huang and Yates [2009] and Turian et al. [2010] provide comparisons of different ways to extract new features from unlabeled data; they both evaluate performance on a range of tasks.

Features Directly From a Word's Distribution in Unlabeled Text

Returning to our sports example, we could have a feature for whether a word in a given document occurs elsewhere, in unlabeled data, with the word score. A classifier could learn that this feature is associated with the sports class, because words like hockey, baseball, inning, win, etc.
tend to occur with score, and some of these likely occur in the training set. So, although we may never see the word curling during training, it does occur in unlabeled text with many of the same words that occur with other sports terms, like the word score. A document that contains curling will therefore have the second-order score feature, and thus curling, through features created from its distribution, is still an indicator of sports. Directly having a feature for each item that co-occurs in a word's distribution is perhaps the simplest way to leverage unlabeled data in the feature representation. Huang and Yates [2009] essentially use this as their multinomial representation. They find it performs worse on sequence-labeling tasks than distributional representations based on HMMs and latent semantic analysis (two other effective approaches for creating features from unlabeled data). One issue with using the distribution directly is that although sparsity is potentially alleviated at the word level (we can handle words even if we haven't seen them in training data), we increase sparsity at the feature level: there are more features to train but the same amount of training data. This might explain why Huang and Yates [2009] see improved performance on rare words but similar performance overall. We return to this issue in Chapter 5, where we present a distributional representation for verb part-of-speech tag disambiguation that may also suffer from these drawbacks (Section 5.6).

Features from Similar Words or Distributional Clusters

There are many other ways to create features from unlabeled data. One popular approach is to summarize the distribution of words (in unlabeled data) using similar words. Wang et al. [2005] use similar words to help generalization in dependency parsing.
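The direct use of a word's distribution can be sketched as follows. This is a minimal illustration, not the dissertation's implementation; the function names, the tiny corpus, and the top-k cutoff are all assumptions made for the example.

```python
# Sketch: augment a document's bag-of-words features with "second-order"
# features drawn from each word's co-occurrence distribution in unlabeled text.
from collections import Counter, defaultdict

def cooccurrence_counts(unlabeled_docs):
    """For each word, count the other words it appears with in unlabeled documents."""
    dist = defaultdict(Counter)
    for doc in unlabeled_docs:
        words = set(doc.split())
        for w in words:
            for v in words:
                if v != w:
                    dist[w][v] += 1
    return dist

def featurize(doc, dist, top_k=3):
    """First-order features: the document's own words.
    Second-order features: the most frequent co-occurring words of each word."""
    feats = set()
    for w in doc.split():
        feats.add("word=" + w)
        for v, _ in dist[w].most_common(top_k):
            feats.add("cooc=" + v)
    return feats

unlabeled = ["hockey score win", "baseball score inning", "curling score win"]
dist = cooccurrence_counts(unlabeled)
# "curling" never appears in labeled data, but its unlabeled-text distribution
# contains "score", so a test document mentioning curling still fires the
# second-order feature that the classifier learned to associate with sports.
print("cooc=score" in featurize("curling fans cheered", dist))  # True
```

Note that `featurize` adds up to top_k new dimensions per word, which is exactly the feature-level sparsity trade-off discussed above: more features to train, but the same amount of training data.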
Marton et al. [2009] use similar phrases to improve the handling of out-of-vocabulary terms in a machine translation system. Another recent trend is to create features from automatically-generated word clusters. Several researchers have used the hierarchical Brown et al. [1992] clustering algorithm, and then created features for cluster membership at different levels of the hierarchy [Miller et al., 2004; Koo et al., 2008]. Rather than clustering single words, Lin and Wu [2009] use phrasal clusters, and provide features for cluster membership when different numbers of clusters are used in the clustering.

Features for the Output of Auxiliary Classifiers

Another way to create features from unlabeled data is to create features for the output of predictions on auxiliary problems that can be trained solely with unlabeled data [Ando and Zhang, 2005]. For example, we could create a prediction for whether the word arena occurs in a document. We can take all the documents where arena does and does not occur, and build a classifier using all the other words in the document. This classifier may predict that arena does occur if the words hockey, curling, fans, etc. occur. When the predictions are used as features, if they are useful, they will receive high weight at training time. At test time, if we see a word like curling, for example, even though it was never seen in our labeled set, it may cause the predictor for arena to return a high score, and thus also cause the document to be recognized as sports.

Note that since these examples can be created automatically, this problem (and other auxiliary problems in the Ando and Zhang approach) falls into the category of those with Natural Automatic Examples, as discussed above. One possible direction for future work is to construct auxiliary problems with pseudo-negative examples. For example, we could include the predictions of various configurations of our selectional-preference classifier (Chapter 6) as a feature in a discriminatively-trained language model.
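A toy sketch of such an auxiliary problem, in the spirit of the arena example: the training signal is created automatically from unlabeled text, and the predictor's score can then be added as a real-valued feature in the supervised classifier. The simple smoothed log-odds scorer below is an illustrative stand-in for a trained classifier, not the Ando and Zhang method itself.

```python
# Auxiliary problem: predict whether a target word (here "arena") occurs in a
# document, using only the document's other words. Positive and negative
# examples come for free from unlabeled text.
import math
from collections import Counter

def train_auxiliary(unlabeled_docs, target="arena"):
    """Learn per-word log-odds of co-occurring with the target word."""
    pos, neg = Counter(), Counter()
    for doc in unlabeled_docs:
        words = set(doc.split())
        bucket = pos if target in words else neg
        for w in words - {target}:
            bucket[w] += 1
    def score(doc):
        # Sum of add-0.5-smoothed log-odds over the document's word types.
        return sum(math.log((pos[w] + 0.5) / (neg[w] + 0.5)) for w in set(doc.split()))
    return score

unlabeled = [
    "hockey fans filled the arena",
    "curling fans cheered in the arena",
    "the senate passed the bill",
    "stocks fell as the market closed",
]
arena_score = train_auxiliary(unlabeled)
# "curling" was seen near "arena" in unlabeled text, so the auxiliary predictor
# fires on an unseen sports document even though "curling" never appears in the
# labeled training set.
print(arena_score("curling match tonight") > arena_score("senate vote tonight"))  # True
```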
We took a similar approach in our work on gender [Bergsma et al., 2009a]. We trained a classifier on automatically-created examples, but used the output of this classifier as another feature in a classifier trained on a small amount of supervised data. This resulted in a substantial gain in performance over using the original prediction on its own: 95.5% versus 92.6% (but note that other features were combined with the prediction of the auxiliary classifier).

Features Used in this Dissertation

In this dissertation, we create features from unlabeled data in several chapters and in several different ways. In Chapter 6, to assess whether a noun is compatible with a verb, we create features for the noun's distribution only with other verbs. Thus we characterize a noun by its verb contexts, rather than its full distribution, using fewer features than a naive representation based on the noun's full distributional profile. Chapters 3 and 5 also selectively use features from parts of the total distribution of a word, phrase, or pair of words (to characterize the relation between words, for noun compound bracketing and verb tag disambiguation in Chapter 5). In Chapter 3, we characterize contexts by using selected types from the distribution of other words that occur in the context. For the adjective-ordering work in Chapter 5, we choose an order based on the distribution of the adjectives individually and combined in a phrase. Our approaches are simple, but effective. Perhaps most importantly, by leveraging the counts in a web-scale N-gram corpus, they scale to make use of all the text data on the web. On the other hand, scaling most other semi-supervised techniques to even moderately-large collections of unlabeled text remains "future work" for a large number of published approaches in the machine learning and NLP literature.
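The two-stage setup used in the gender work can be sketched as follows: a first classifier is trained on automatically-created examples, and its output becomes one feature, alongside ordinary features, in a second classifier trained on a small manually-labeled set. The perceptron learner, the feature names, and the tiny data here are all illustrative assumptions, not the actual gender features or classifier.

```python
def train_perceptron(examples, epochs=20):
    """examples: list of (feature_dict, label) pairs with label in {-1, +1}."""
    w = {}
    for _ in range(epochs):
        for feats, y in examples:
            score = sum(w.get(f, 0.0) * v for f, v in feats.items())
            if y * score <= 0:  # mistake-driven update
                for f, v in feats.items():
                    w[f] = w.get(f, 0.0) + y * v
    return w

def predict(w, feats):
    return sum(w.get(f, 0.0) * v for f, v in feats.items())

# Stage 1: auxiliary classifier trained on (possibly noisy) automatic examples.
auto_examples = [({"ends_in_a": 1.0}, +1), ({"ends_in_o": 1.0}, -1)]
aux_w = train_perceptron(auto_examples)

# Stage 2: supervised classifier whose features include the auxiliary score,
# so the small labeled set mediates how much the auxiliary prediction is trusted.
def stacked_feats(raw_feats):
    return dict(raw_feats, aux_score=predict(aux_w, raw_feats))

labeled = [
    ({"ends_in_a": 1.0, "title_mrs": 1.0}, +1),
    ({"ends_in_o": 1.0, "title_mr": 1.0}, -1),
]
final_w = train_perceptron([(stacked_feats(f), y) for f, y in labeled])
print(predict(final_w, stacked_feats({"ends_in_a": 1.0})) > 0)  # True
```

The key design point is that the auxiliary prediction enters the final model as just another weighted feature: if it is unreliable on the labeled data, supervised training can simply assign it low weight.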