automatic examples heuristically. We turn to this in the following subsection.

Learning with Pseudo-Negative Examples

While the previous section described problems where there were natural positive and negative examples (e.g., the correct accent marker is positive, while others, including no accent, are negative), there is a large class of problems in NLP where we only have positive examples, and thus it's not clear how to use a discriminative classifier to evaluate new potential examples. This is the situation with seed data: you are presented with a list of only positive seeds, and there's nothing obvious to discriminate these from.

In these situations, researchers have devised various ways to automatically create negative examples. For example, let us return to the example of hypernyms. Although Hearst [1992] started her algorithm with only a few examples, this was an unnecessary handicap. Thousands of examples of hypernym pairs can be extracted automatically from the lexical database WordNet [Miller et al., 1990]. Furthermore, WordNet has good coverage of the relations involving nouns that are actually in WordNet (as opposed to, obviously, no coverage of relations involving words that aren't mentioned in WordNet at all). Thus, pairs of words in WordNet that are not linked in a hypernym structure can potentially be taken as reliable examples of words that are not hypernyms (since both words are in WordNet, if they were hypernyms, the relation would generally be labeled). These could form our negative examples for discrimination.

Recognizing this, Snow et al. [2005] use WordNet to generate a huge set of both positive and negative hypernym pairs: exactly what we need as training data for a large-scale discriminative classifier. With this resource, we need not iteratively discover contexts that are useful for hypernymy: Snow et al. simply include, as features in the classifier, all the syntactic paths connecting the pair of words in a large parsed corpus. That is, they have features for how often a pair of words occurs in constructions like "Xs and other Ys," "Ys such as Xs," "Ys including Xs," etc. Discriminative training, not heuristic weighting, will decide the importance of these patterns in hypernymy. To classify any new example pair (i.e., for nouns that are not in WordNet), we can simply construct their feature vector of syntactic paths and apply the classifier. Snow et al. [2005] achieve very good performance using this approach.

This approach could scale to make use of features derived from web-scale data. For any pair of words, we can efficiently extract all the N-grams in which both words occur. This is exactly what we proposed for discriminating object and subject relations for Bears won and trophy won in our example in Chapter 1, Section 1.3. We can create features from these N-grams, and apply training and classification.

We recently used a similar technique for classifying the natural gender of English nouns [Bergsma et al., 2009a]. Rather than using WordNet to label examples, however, we used co-occurrence statistics in a large corpus to reliably identify the most likely gender of thousands of noun phrases. We then used this list to automatically label examples in raw text, and then proceeded to learn from these automatically-labeled examples. This paper could have served as another chapter in this dissertation, but the dissertation already seemed sufficiently long without it.
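To make the WordNet labeling strategy above concrete, here is a minimal sketch in Python using NLTK's WordNet interface. The function names are ours, the sense handling is deliberately naive (any linked sense-pair counts as positive), and a real system like that of Snow et al. [2005] would enumerate pairs at scale rather than query them one at a time.

```python
# Sketch: labeling hypernym pairs with WordNet, assuming NLTK and its
# WordNet data are installed (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def hypernym_lemmas(synset):
    """All lemma names reachable from synset via transitive hypernym links."""
    return {lemma.name()
            for s in synset.closure(lambda s: s.hypernyms())
            for lemma in s.lemmas()}

def label_pair(noun_x, noun_y):
    """1 if some sense of noun_y is a hypernym of some sense of noun_x;
    0 if both nouns are in WordNet but no senses are linked (a reliable
    pseudo-negative); None if either noun is missing from WordNet."""
    x_senses = wn.synsets(noun_x, pos=wn.NOUN)
    if not x_senses or not wn.synsets(noun_y, pos=wn.NOUN):
        return None  # no WordNet coverage: cannot label automatically
    return int(any(noun_y in hypernym_lemmas(s) for s in x_senses))

print(label_pair('dog', 'animal'))  # 1: positive training example
print(label_pair('dog', 'banana'))  # 0: pseudo-negative training example
```

Iterating such checks over the noun pairs in WordNet yields the thousands of labeled examples mentioned above.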
Several other recent uses of this approach are also worth mentioning. Okanohara and Tsujii [2007] created examples automatically in order to train a discriminative whole-sentence language model. Language models are designed to tell us whether a sequence of words is valid language (or likely, fluent, good English). We can automatically gather positive examples from any collection of well-formed sentences: they are all valid sentences by definition. But how do we create negative examples? The innovation of Okanohara and Tsujii is to create negative examples from sentences generated by an N-gram language model. N-grams are the standard Markovized approximation to English, and their success in language modeling is one of the reasons for the statistical revolution in NLP discussed in Section 2.1 above. However, they often produce ill-formed sentences, and a classifier that can distinguish between valid English sentences and N-gram-model-generated sentences could help us select better output sentences from our speech recognizers, machine translators, curse-word restoration systems, etc.
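A minimal sketch of this negative-example recipe, with a toy corpus and an unsmoothed bigram sampler (the corpus, function names, and length cap are invented for illustration; Okanohara and Tsujii's actual models were far larger):

```python
# Sketch: pseudo-negative sentences sampled from a bigram model. The real
# corpus sentences serve as positives; the model's samples as negatives.
import random
from collections import defaultdict

def train_bigram(sentences):
    """Map each token to the list of tokens that follow it in the corpus."""
    followers = defaultdict(list)
    for sent in sentences:
        tokens = ['<s>'] + sent.split() + ['</s>']
        for prev, cur in zip(tokens, tokens[1:]):
            followers[prev].append(cur)
    return followers

def sample_negative(followers, max_len=20):
    """Walk the bigram chain to generate one (often ill-formed) sentence."""
    word, out = '<s>', []
    while len(out) < max_len:
        word = random.choice(followers[word])
        if word == '</s>':
            break
        out.append(word)
    return ' '.join(out)

corpus = ['the dog barked at the mailman',
          'the mailman ran from the dog',
          'a cat watched the dog run']
model = train_bigram(corpus)
print(sample_negative(model))  # e.g., "a cat watched the dog barked at the dog run"
```

Each sample is locally plausible at the bigram level yet frequently ungrammatical as a whole, which is precisely what makes it a useful negative example for a whole-sentence classifier.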

The results of Okanohara and Tsujii's classifier were promising: about 74% of sentences could be classified correctly. However, they report that a native English speaker was able to achieve 99% accuracy on a 100-sentence sample, indicating that there is much room to improve. It is rare that humans can outperform computers on a task where we have essentially unlimited amounts of training data. Indeed, learning curves in this work indicate that performance is continuously improving up to 500,000 training examples. The main limitation seems to be only computational complexity.

Smith and Eisner [2005] also automatically generate negative examples. They perturb their input sequence (e.g., the sentence word order) to create a neighborhood of implicit negative evidence. Structures over the observed sentence should have higher likelihood than structures over the perturbed sequences.

Chapter 6 describes an approach that creates both positive and negative examples of selectional preference from corpus-wide statistics of predicate-argument pairs (rather than only using a local sentence to generate negatives, as in [Smith and Eisner, 2005]). Since the individual training instances encapsulate information from potentially thousands or millions of sentences, this approach can scale better than some of the other semi-supervised approaches described in this chapter. In Chapter 7, we create examples by computing statistics over an aligned bitext, and generate negative examples to be those that have a high string overlap with the positives, but which are not likely to be translations. We use automatically-created examples to mine richer features and demonstrate better models than previous work.

However, note that there is a danger in solving problems on automatically-labeled examples: it is not always clear that the classifier you learn will transfer well to actual tasks, since you're no longer learning a discriminator on manually-labeled examples. In the following section, we describe semi-supervised approaches that train over manually-labeled data, and discuss how perhaps we can have the best of both worlds by including the output of our pseudo-discriminators as features in a supervised model.

2.5.5 Creating Features from Unlabeled Data

We have saved perhaps the simplest form of semi-supervised learning for last: an approach where we simply create features from our unlabeled data and use these features in our supervised learners. Simplicity is good.⁷
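As a toy illustration of this last idea, the sketch below turns unlabeled-corpus statistics into two features for the Bears won / trophy won distinction from Chapter 1. The pattern templates and counts are fabricated for illustration; in practice the counts would come from a web-scale N-gram collection.

```python
# Sketch: features for a supervised learner derived purely from counts
# over unlabeled text. The counts below are invented for illustration.
import math

NGRAM_COUNTS = {
    ('Bears', 'won', 'the'): 1200,   # noun acting like a subject of "won"
    ('the', 'Bears', 'won'): 150,
    ('trophy', 'won', 'the'): 3,
    ('the', 'trophy', 'won'): 40,    # noun acting like an object of "won"
}

def count_features(noun, verb='won'):
    """Two log-count features: noun-as-subject vs. noun-as-object patterns."""
    subj = NGRAM_COUNTS.get((noun, verb, 'the'), 0)
    obj = NGRAM_COUNTS.get(('the', noun, verb), 0)
    return [math.log(subj + 1), math.log(obj + 1)]

# These vectors are appended to whatever supervised features already exist:
print(count_features('Bears'))   # subject-like count profile
print(count_features('trophy'))  # object-like count profile
```

Because the features are computed once from unlabeled data, the supervised learner itself is unchanged, which is what makes this approach so simple to deploy.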
The main problem with essentially all of the above approaches is that at some point,

⁷ In the words of Mann and McCallum [2007]: "Research in semi-supervised learning has yielded many publications over the past ten years, but there are surprisingly fewer cases of its use in application-oriented research, where the emphasis is on solving a task, not on exploring a new semi-supervised method. This may be partially due to the natural time it takes for new machine learning ideas to propagate to practitioners. We believe it is also due in large part to the complexity and unreliability of many existing semi-supervised methods."
