which used some manual intervention, but later approaches have essentially differed quite little from her original proposal. Google co-founder Sergey Brin [1998] used a similar technique to extract relations such as (author, title) from the web. Similar work was also presented in [Riloff and Jones, 1999] and [Agichtein and Gravano, 2000]. Pantel and Pennacchiotti [2006] used this approach to extract general semantic relations (such as part-of, succession, production, etc.), while Paşca et al. [2006] present extraction results on a web-scale corpus. Another famous variation of this method is Ravichandran and Hovy's system for finding patterns for answering questions [Ravichandran and Hovy, 2002]. They begin with seeds such as (Mozart, 1756) and use these to find patterns that contain the answers to questions such as When was X born?

Note the contrast with the traditional supervised machine-learning framework, where we would have annotators mark up text with examples of hypernyms, relations, question-answer pairs, etc., and then learn a predictor from these labeled examples using supervised learning. In bootstrapping from seeds, we do not label segments of text, but rather pairs of words (labeling only one view of the problem). When we find instances of these pairs in text, we essentially label more data automatically, and then infer a context-based predictor from this labeled set. This context-based predictor can then be used to find more examples of the relation of interest (hypernyms, authors of books, question-answer pairs, etc.). Notice, however, that in contrast to standard supervised learning, we do not label any negative examples, only positive instances. Thus, when building a context-based predictor, there is no obvious way to exploit our powerful machinery for feature-based discriminative learning and classification. Very simple methods are instead used to keep track of the best context-based patterns for identifying new examples in text.

In iterative bootstrapping, although the first round of training often produces reasonable results, things often go wrong in later iterations. The first round will inevitably produce some noise: some wrong pairs extracted by the predictor. The contexts extracted from these false predictions will lead to more false pairs being extracted, and so on. In all published research on this topic that we are aware of, the precision of the extractions decreases in each stage.
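To make the loop concrete, the following is a minimal sketch of seed-based pattern bootstrapping, assuming a corpus of raw sentences and seed pairs such as (author, title). It is an illustrative toy rather than the implementation of any of the systems cited above; the whole-sentence pattern templates, the frequency-based scoring, and all function names are assumptions.

# Minimal sketch of seed-based pattern bootstrapping (illustrative only; not
# the implementation of any cited system). "corpus" is a list of raw sentences
# and "seeds" is a list of (x, y) string pairs, e.g. (author, title).
import re
from collections import Counter

def find_patterns(corpus, pairs):
    # Collect the contexts in which known pairs co-occur, abstracting the pair
    # away, e.g. "X , author of Y". For simplicity the whole sentence is the context.
    patterns = Counter()
    for sentence in corpus:
        for x, y in pairs:
            if x in sentence and y in sentence:
                patterns[sentence.replace(x, "X").replace(y, "Y")] += 1
    return patterns

def match_pattern(template, sentence):
    # Turn a template back into a regex and extract one candidate pair.
    # Arguments are single tokens here; a real system would handle spans
    # and argument order far more carefully.
    regex = re.escape(template).replace("X", r"(\w+)", 1).replace("Y", r"(\w+)", 1)
    m = re.search(regex, sentence)
    return (m.group(1), m.group(2)) if m else None

def bootstrap(corpus, seeds, iterations=3, top_k=10):
    pairs = set(seeds)
    for _ in range(iterations):
        # 1) Keep the most frequent contexts over the current pair set
        #    (a very simple scoring method, as discussed above).
        best = [p for p, _ in find_patterns(corpus, pairs).most_common(top_k)]
        # 2) Apply those contexts to extract new candidate pairs; in practice
        #    precision degrades as noisy pairs feed back into later rounds.
        for sentence in corpus:
            for template in best:
                candidate = match_pattern(template, sentence)
                if candidate:
                    pairs.add(candidate)
    return pairs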
2.5.4 Learning with Heuristically-Labeled Examples

In the above discussion of bootstrapping, we outlined a number of approaches that extend an existing set of classifications (or seeds) by iteratively classifying and learning from new examples. Another interesting, non-iterative scenario is the situation where, rather than having a few seed examples, we begin with many positive examples of a class or relation, and attempt to classify new relations in this context. With a relatively comprehensive set of seeds, there is little value in iterating to obtain more.6 Having a lot of seeds can also provide a way to generate the negative examples we need for discriminative learning. In this section we look at two flavours: special cases where the examples can be created automatically, and cases where we have only positive seeds, and so create pseudo-negative examples through some heuristic means.

6 There are also non-iterative approaches that start with limited seed data. Haghighi and Klein [2006] create a generative, unsupervised sequence prediction model, but add features to indicate whether a word to be classified is distributionally similar to a seed word. Like the approaches presented in our discussion of bootstrapping with seeds, this system achieves impressive results starting with very little manually-provided information.
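As a concrete illustration of the second flavour, one common heuristic (assumed here for illustration, not a claim about how any particular cited system works) is to manufacture pseudo-negative pairs by recombining the elements of the positive seed pairs, on the assumption that the relation is sparse enough that unobserved recombinations are almost never true.

# Minimal sketch of generating pseudo-negative examples from positive seed
# pairs by recombination (an assumed heuristic, for illustration only).
import random

def make_pseudo_negatives(positive_pairs, ratio=1, seed=0):
    positives = set(positive_pairs)
    xs = sorted({x for x, _ in positives})
    ys = sorted({y for _, y in positives})
    # Every recombination that was never observed as a positive pair.
    candidates = [(x, y) for x in xs for y in ys if (x, y) not in positives]
    random.Random(seed).shuffle(candidates)
    return candidates[: ratio * len(positives)]

# Usage: labeled data for an ordinary discriminative learner.
seeds = [("Mozart", "1756"), ("Einstein", "1879"), ("Newton", "1643")]
labeled = [(p, 1) for p in seeds] + [(p, 0) for p in make_pseudo_negatives(seeds)]

Taking unobserved recombinations as negatives will occasionally mislabel a true pair, but with a comprehensive seed set the resulting noise rate is usually tolerable.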

Learning with Natural Automatic Examples

Some of the lowest-hanging fruit in the history of NLP arose when researchers realized that some important problems in NLP could be solved by generating labeled training examples automatically from raw text.

Consider the task of diacritic or accent restoration. In languages such as French or Spanish, accents are often omitted in informal correspondence, in all-capitalized text such as headlines, and in lower-bit text encodings. Missing accents adversely affect both syntactic and semantic analysis. It would be nice to train a discriminative classifier to restore these accents, but do we need someone to label the accents in unaccented text to provide us with labeled data? Yarowsky [1994] showed that we can simply take (readily-available) accented text, take the accents off and use them as labels, and then train predictors using features for everything except the accents. We can essentially generate as many labeled examples as we like this way. The true accent and the text provide the positive example. The unaccented or alternatively-accented text provides negative examples.

We call these Natural Automatic Examples since they naturally provide the positive and negative examples needed to solve the problem. We contrast these with problems in the following section where, although one may have plentiful positive examples, one must use some creativity to produce the negative examples.

This approach also works for context-sensitive spelling correction. Here we try to determine, for example, whether someone who typed whether actually meant weather. We take well-edited text and, each time one of the words is used, we create a training example, with the word-actually-used as the label. We then see if we can predict these words from their confusable alternatives, using the surrounding context for features [Golding and Roth, 1999]. So the word-actually-used is the positive example (e.g. "whether or not"), while the alternative, unused words provide the negatives (e.g. "weather or not"). Banko and Brill [2001] generate a lot of training data this way to produce their famous results on the relative importance of the learning algorithm versus the amount of training data (the amount of training data is much, much more important). In Chapter 3, we use this approach to generate data for both preposition selection and context-sensitive spelling correction.
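For example, training data for the {whether, weather} confusion set can be harvested from raw text in a few lines. The sketch below is an illustration only; the window size, feature encoding, and function names are assumptions rather than the exact setup used in the cited work or in Chapter 3. The same trick applies to accent restoration: strip the accents from accented text and predict them back from the surrounding context.

# Minimal sketch of harvesting natural automatic examples for
# context-sensitive spelling correction (illustrative; the confusion set,
# window size, and feature encoding are assumptions).

CONFUSION_SET = {"whether", "weather"}

def examples_from_text(tokens, window=2):
    # Each occurrence of a confusable word becomes one labeled example:
    # the word actually used is the label, the surrounding words are features.
    for i, token in enumerate(tokens):
        if token.lower() in CONFUSION_SET:
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            features = {f"L{j}={w}" for j, w in enumerate(reversed(left), 1)} | \
                       {f"R{j}={w}" for j, w in enumerate(right, 1)}
            yield features, token.lower()

# Usage: any well-edited text supplies as many examples as we care to extract.
tokens = "I wonder whether or not the weather will hold".split()
for features, label in examples_from_text(tokens):
    print(label, sorted(features))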
A similar approach could be used for training systems to segment text into paragraphs, to restore capitalization or punctuation, to do sentence-boundary detection (one must find an assiduous typist, like me, who consistently puts two spaces after periods, but only one after abbreviations...), to convert curse-word symbols like %*#@ back into the original curse, etc. (of course, some of these examples may benefit from a channel model rather than exclusively a source/language model). The only limitation is the amount of training data your algorithm can handle. In fact, by summarizing the training examples with N-gram-based features as in Section 2.5.5 (rather than learning from each instance separately), there really is no limitation on the amount of data you might learn from.

There are a fairly limited number of problems in NLP where we can just create examples automatically this way. This is because in NLP, we are usually interested in generating structures over the data that are not surface apparent in naturally-occurring text. We return to this when we discuss analysis and generation problems in Chapter 3. Natural automatic examples abound in many other fields. You can build a discriminative classifier for whether a stock goes up or for whether someone defaults on their loan purely based on previous examples. A search engine can easily predict whether someone will click on a search result using the history of clicks from other users for the same query [Joachims, 2002]. However, despite not having natural automatic examples for some problems, we can sometimes create labeled examples through heuristic means.
