automatically-labeled examples are used to train the classifier. Unfortunately, automatically-labeled examples are often incorrect. The classifier works hard to classify these examples correctly, and subsequently gets similar examples wrong that it encounters at testing. If we have enough manually-labeled examples, it seems that we want the ultimate mediator of the value of our features to be performance on these labeled examples, not performance on any pseudo-examples. This mediation is, of course, exactly what supervised learning does. If we instead create features from unlabeled data, rather than using unlabeled data to create new examples, standard supervised learning can be used.

How can we include information from unlabeled data as new features in a supervised learner? Section 2.2 described a typical feature representation: each feature is a binary indicator of whether a word is present or not in a document to be classified. When we extract features from unlabeled data, we add new dimensions to the feature representation. These new dimensions are for features that represent what we might call second-order interactions: co-occurrences of words with each other in unlabeled text.

In very recent papers, both Huang and Yates [2009] and Turian et al. [2010] provide comparisons of different ways to extract new features from unlabeled data; they both evaluate performance on a range of tasks.

Features Directly From a Word's Distribution in Unlabeled Text

Returning to our sports example, we could have a feature for whether a word in a given document occurs elsewhere, in unlabeled data, with the word score. A classifier could learn that this feature is associated with the sports class, because words like hockey, baseball, inning, win, etc. tend to occur with score, and some of these likely occur in the training set. So, although we may never see the word curling during training, it does occur in unlabeled text with many of the same words that occur with other sports terms, like the word score. So a document that contains curling will have the second-order score feature, and thus curling, through features created from its distribution, is still an indicator of sports.

Directly having a feature for each item that co-occurs in a word's distribution is perhaps the simplest way to leverage unlabeled data in the feature representation. Huang and Yates [2009] essentially use this as their multinomial representation. They find it performs worse on sequence-labeling tasks than distributional representations based on HMMs and latent semantic analysis (two other effective approaches for creating features from unlabeled data). One issue with using the distribution directly is that although sparsity is potentially alleviated at the word level (we can handle words even if we haven't seen them in training data), we increase sparsity at the feature level: there are more features to train but the same amount of training data. This might explain why Huang and Yates [2009] see improved performance on rare words but similar performance overall. We return to this issue in Chapter 5 when we present a distributional representation for verb part-of-speech tag disambiguation that may also suffer from these drawbacks (Section 5.6).
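As a concrete illustration, the following is a minimal sketch of this direct, second-order representation. It assumes a toy unlabeled corpus and whitespace tokenization; a real system would use web-scale text and the sparse feature machinery of Section 2.2, and the documents and feature names here are invented for illustration:

    from collections import defaultdict

    # Toy unlabeled corpus (in practice, web-scale text).
    unlabeled_docs = [
        "the final score of the hockey game",
        "a baseball inning with a tied score",
        "the curling team tied the score late",
    ]

    # Step 1: record which words co-occur with each word in unlabeled data.
    cooccurs = defaultdict(set)
    for doc in unlabeled_docs:
        words = set(doc.split())
        for w in words:
            cooccurs[w].update(words - {w})

    def features(document):
        """Binary bag-of-words features plus second-order features:
        fire 'co:x' if some word in the document co-occurs with x
        in the unlabeled corpus."""
        feats = set()
        for w in document.split():
            feats.add("word:" + w)
            for c in cooccurs.get(w, ()):
                feats.add("co:" + c)
        return feats

    # 'curling' never appears in the labeled training data, but it still
    # fires 'co:score', a feature the supervised learner can have linked
    # to the sports class from other training documents.
    print(sorted(features("a curling match")))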
Features from Similar Words or Distributional Clusters

There are many other ways to create features from unlabeled data. One popular approach is to summarize the distribution of words (in unlabeled data) using similar words. Wang et al. [2005] use similar words to help generalization in dependency parsing. Marton et al. [2009] use similar phrases to improve the handling of out-of-vocabulary terms in a machine translation system. Another recent trend is to create features from automatically-generated word clusters. Several researchers have used the hierarchical Brown et al. [1992] clustering algorithm, and then created features for cluster membership at different levels of the hierarchy [Miller et al., 2004; Koo et al., 2008]. Rather than clustering single words, Lin and Wu [2009] use phrasal clusters, and provide features for cluster membership when different numbers of clusters are used in the clustering.
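The cluster-membership features are easiest to see in code. The sketch below assumes a hypothetical, precomputed mapping from words to Brown-cluster bit strings (the paths from the root of the induced hierarchy); taking prefixes of different lengths then gives cluster membership at different levels, in the spirit of the features used by Miller et al. [2004] and Koo et al. [2008]. The paths shown are invented for illustration:

    # Hypothetical output of a Brown clustering run: each word is assigned
    # a bit string giving its path from the root of the cluster hierarchy.
    # (Real paths would be induced from unlabeled text.)
    brown_paths = {
        "hockey":  "001011",
        "curling": "001010",
        "score":   "110100",
    }

    def cluster_features(word, prefix_lengths=(2, 4, 6)):
        """Features for cluster membership at several levels of the
        hierarchy: a short prefix is a coarse cluster, a long prefix
        a fine-grained one."""
        path = brown_paths.get(word)
        if path is None:
            return []
        return ["brown%d:%s" % (n, path[:n]) for n in prefix_lengths if len(path) >= n]

    # 'hockey' and 'curling' share the coarse clusters brown2:00 and
    # brown4:0010, so weight learned for one transfers to the other.
    print(cluster_features("curling"))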

Features for the Output of Auxiliary Classifiers

Another way to create features from unlabeled data is to create features for the output of predictions on auxiliary problems that can be trained solely with unlabeled data [Ando and Zhang, 2005]. For example, we could create a prediction for whether the word arena occurs in a document. We can take all the documents where arena does and does not occur, and build a classifier using all the other words in the document. This classifier may predict that arena does occur if the words hockey, curling, fans, etc. occur. When the predictions are used as features, if they are useful, they will receive high weight at training time. At test time, if we see a word like curling, for example, even though it was never seen in our labeled set, it may cause the predictor for arena to return a high score, and thus also cause the document to be recognized as sports.

Note that since these examples can be created automatically, this problem (and other auxiliary problems in the Ando and Zhang approach) falls into the category of those with Natural Automatic Examples, as discussed above. One possible direction for future work is to construct auxiliary problems with pseudo-negative examples. For example, we could include the predictions of various configurations of our selectional-preference classifier (Chapter 6) as a feature in a discriminatively-trained language model. We took a similar approach in our work on gender [Bergsma et al., 2009a]. We trained a classifier on automatically-created examples, but used the output of this classifier as another feature in a classifier trained on a small amount of supervised data. This resulted in a substantial gain in performance over using the original prediction on its own: 95.5% versus 92.6% (but note that other features were combined with the prediction of the auxiliary classifier).
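A minimal sketch of the arena example above, using scikit-learn as one possible toolkit: the auxiliary labels are read off the unlabeled documents themselves, and the predictor's score becomes one extra feature for the supervised learner. All documents here are invented for illustration:

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Unlabeled documents: labels for the auxiliary problem come for free.
    unlabeled = [
        "the hockey fans filled the arena",
        "curling fans cheered in the arena",
        "the senate passed the budget bill",
        "the committee debated the new law",
    ]
    aux_y = np.array([1 if "arena" in d.split() else 0 for d in unlabeled])

    # Train the auxiliary predictor on all words *except* the target word.
    vec = CountVectorizer(binary=True)
    X = vec.fit_transform(d.replace("arena", "") for d in unlabeled)
    aux_clf = LogisticRegression().fit(X, aux_y)

    def aux_feature(document):
        """Score of the 'does arena occur?' predictor, to be appended
        to the document's feature vector in the supervised learner."""
        return aux_clf.predict_proba(vec.transform([document]))[0, 1]

    # 'curling' was seen near 'arena' in unlabeled text, so this document
    # gets a high arena score even though 'arena' itself is absent.
    print(aux_feature("the curling match drew many fans"))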
Features used in this Dissertation

In this dissertation, we create features from unlabeled data in several chapters and in several different ways. In Chapter 6, to assess whether a noun is compatible with a verb, we create features for the noun's distribution only with other verbs (see the sketch below). Thus we characterize a noun by its verb contexts, rather than its full distribution, using fewer features than a naive representation based on the noun's full distributional profile. Chapters 3 and 5 also selectively use features from parts of the total distribution of a word, phrase, or pair of words (to characterize the relation between words, for noun compound bracketing and verb tag disambiguation in Chapter 5). In Chapter 3, we characterize contexts by using selected types from the distribution of other words that occur in the context. For the adjective-ordering work in Chapter 5, we choose an order based on the distribution of the adjectives individually and combined in a phrase. Our approaches are simple, but effective. Perhaps most importantly, by leveraging the counts in a web-scale N-gram corpus, they scale to make use of all the text data on the web. On the other hand, scaling most other semi-supervised techniques to even moderately-large collections of unlabeled text remains "future work" for a large number of published approaches in the machine learning and NLP literature.
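To make the Chapter 6 representation concrete, the following hypothetical sketch characterizes a noun by its distribution over verb contexts alone. The dictionary and its counts stand in for what would actually be looked up in a web-scale N-gram corpus; the names and numbers are invented for illustration:

    # Hypothetical counts drawn from a web-scale N-gram corpus: how often
    # each noun is seen as the object of each verb.
    verb_context_counts = {
        "pizza": {"eat": 9200, "bake": 3100, "drive": 3},
        "sedan": {"drive": 8700, "park": 2900, "eat": 1},
    }

    def verb_context_features(noun):
        """Characterize a noun by its (normalized) distribution over verb
        contexts only, rather than its full distributional profile."""
        counts = verb_context_counts.get(noun, {})
        total = sum(counts.values()) or 1
        return {("obj-of:" + v): c / total for v, c in counts.items()}

    print(verb_context_features("pizza"))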
