positive examples from any collection of well-formed sentences: they are all valid sentences by definition. But how do we create negative examples? The innovation of Okanohara and Tsujii is to create negative examples from sentences generated by an N-gram language model. N-grams are the standard Markovized approximation to English, and their success in language modeling is one of the reasons for the statistical revolution in NLP discussed in Section 2.1 above. However, they often produce ill-formed sentences, and a classifier that can distinguish between valid English sentences and N-gram-model-generated sentences could help us select better output sentences from our speech recognizers, machine translators, curse-word restoration systems, etc. (A small illustrative sketch of this pseudo-negative setup is given at the end of this subsection.)

The results of Okanohara and Tsujii's classifier were promising: about 74% of sentences could be classified correctly. However, they report that a native English speaker was able to achieve 99% accuracy on a 100-sentence sample, indicating that there is much room for improvement. It is rare that humans can outperform computers on a task where we have essentially unlimited amounts of training data. Indeed, learning curves in this work indicate that performance is continuously improving up to 500,000 training examples. The main limitation appears to be computational complexity alone.

Smith and Eisner [2005] also automatically generate negative examples. They perturb their input sequence (e.g., the sentence word order) to create a neighborhood of implicit negative evidence. Structures over the observed sentence should have higher likelihood than structures over the perturbed sequences.

Chapter 6 describes an approach that creates both positive and negative examples of selectional preference from corpus-wide statistics of predicate-argument pairs (rather than only using a local sentence to generate negatives, as in [Smith and Eisner, 2005]). Since the individual training instances encapsulate information from potentially thousands or millions of sentences, this approach can scale better than some of the other semi-supervised approaches described in this chapter. In Chapter 7, we create examples by computing statistics over an aligned bitext, and generate negative examples to be those that have a high string overlap with the positives, but which are not likely to be translations. We use automatically-created examples to mine richer features and demonstrate better models than previous work.

However, note that there is a danger in solving problems on automatically-labeled examples: it is not always clear that the classifier you learn will transfer well to actual tasks, since you're no longer learning a discriminator on manually-labeled examples. In the following section, we describe semi-supervised approaches that train over manually-labeled data, and discuss how perhaps we can have the best of both worlds by including the output of our pseudo-discriminators as features in a supervised model.
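To make this concrete, here is a minimal sketch of the pseudo-negative setup described in the discussion of Okanohara and Tsujii above. The toy corpus, the maximum-likelihood bigram model, and the scikit-learn logistic-regression classifier are all illustrative placeholders, not the authors' actual data, language model, or learner.

```python
# Minimal illustrative sketch (not Okanohara and Tsujii's actual system): train a
# simple bigram language model on a toy corpus, sample "pseudo-negative" sentences
# from it, and train a binary classifier to separate real sentences from samples.
# The corpus, the bigram model, and the scikit-learn classifier are all stand-ins.
import random
from collections import defaultdict

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

random.seed(0)

real_sentences = [
    "the cat sat on the mat",
    "she read the book quickly",
    "the dog chased the ball",
    "he wrote a long letter",
]

# Maximum-likelihood bigram model: for each word, store its observed continuations.
bigram_continuations = defaultdict(list)
for sent in real_sentences:
    tokens = ["<s>"] + sent.split() + ["</s>"]
    for prev, cur in zip(tokens, tokens[1:]):
        bigram_continuations[prev].append(cur)

def sample_pseudo_negative(max_len=12):
    """Generate a sentence by walking the bigram model from the start symbol."""
    tokens, prev = [], "<s>"
    while len(tokens) < max_len:
        cur = random.choice(bigram_continuations[prev])
        if cur == "</s>":
            break
        tokens.append(cur)
        prev = cur
    return " ".join(tokens)

pseudo_negatives = [sample_pseudo_negative() for _ in range(len(real_sentences))]

# Binary classifier over unigram and bigram count features:
# label 1 = real sentence, label 0 = model-generated sentence.
texts = real_sentences + pseudo_negatives
labels = [1] * len(real_sentences) + [0] * len(pseudo_negatives)
vectorizer = CountVectorizer(ngram_range=(1, 2))
features = vectorizer.fit_transform(texts)
classifier = LogisticRegression().fit(features, labels)

print(classifier.predict(vectorizer.transform(["the cat chased the ball"])))
```

In the actual setting the positives come from an essentially unlimited supply of well-formed corpus sentences and the classifier uses much richer features, but the shape of the training setup is the same: well-formed text supplies the positives, and the language model supplies the negatives.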
2.5.5 Creating Features from Unlabeled Data

We have saved perhaps the simplest form of semi-supervised learning for last: an approach where we simply create features from our unlabeled data and use these features in our supervised learners. Simplicity is good.7

The main problem with essentially all of the above approaches is that at some point,

7 In the words of Mann and McCallum [2007]: "Research in semi-supervised learning has yielded many publications over the past ten years, but there are surprisingly fewer cases of its use in application-oriented research, where the emphasis is on solving a task, not on exploring a new semi-supervised method. This may be partially due to the natural time it takes for new machine learning ideas to propagate to practitioners. We believe it is also due in large part to the complexity and unreliability of many existing semi-supervised methods."
