Chapter 4

Improved Natural Language Learning via Variance-Regularization Support Vector Machines

A version of this chapter has been published as [Bergsma et al., 2010b].

[Figure: XKCD comic "Ninja Turtles", http://xkcd.com/197/. The beauty of this comic is that it was also constructed using co-occurrence counts from the Google search engine. That is, the artist counted the number of pages for Leonardo and turtle vs. the number of pages for Leonardo and artist.]

The previous chapter presented SUPERLM, a supervised classifier that uses web-scale N-gram counts as features. The classifier was trained as a multi-class SVM. In this chapter, we present a simple technique for learning better SVMs using fewer training examples. Rather than using the standard SVM regularization, we regularize toward low weight variance. Our new SVM objective remains a convex quadratic function of the weights, and is therefore computationally no harder to optimize than a standard SVM. Variance regularization is shown to enable improvements in the learning rates of the SVMs on the three lexical disambiguation tasks studied in the previous chapter.

4.1 Introduction

Discriminative training is commonly used in NLP and speech to scale the contribution of different models or systems in a combined predictor. For example, discriminative training can be used to scale the contribution of the language model and the translation model in machine translation [Och and Ney, 2002]. Without training data, it is often reasonable to weight the different models equally. We propose a simple technique that exploits this intuition for better learning with fewer training examples. We regularize the feature weights in a support vector machine [Cortes and Vapnik, 1995] toward a low-variance solution. Since the new SVM quadratic program is convex, it is no harder to optimize than the standard SVM objective.

When training data is generated through human effort, faster learning saves time and money. When examples are labeled automatically, through user feedback [Joachims, 2002] or from textual pseudo-examples [Smith and Eisner, 2005; Okanohara and Tsujii, 2007], faster learning can reduce the lag before a new system is useful.

We demonstrate faster learning on the same lexical disambiguation tasks evaluated in the previous chapter. Recall that in a lexical disambiguation task, a system predicts a label for a word in text, based on the word's context. Possible labels include part-of-speech tags, named-entity types, and word senses. A number of disambiguation systems make predictions with the help of N-gram counts from a web-scale auxiliary corpus, typically acquiring these counts via a search engine or an N-gram corpus (Section 3.2.1).

Ultimately, when discriminative training is used to set weights on various counts in order to make good classifications, many of the learned feature weights have similar values. Good weights have low variance.

For example, consider the task of preposition selection. A system selects the most likely preposition given the context, and flags a possible error if it disagrees with the user's choice:

• I worked in Russia from 1997 to 2001.
• I worked in Russia *during 1997 to 2001.

Chapter 3 presented SUPERLM, which uses a variety of web counts to predict the correct preposition. SUPERLM has features for COUNT(in Russia from), COUNT(Russia from 1997), COUNT(from 1997 to), etc. If these are high, from is predicted. Similarly, there are features for COUNT(in Russia during), COUNT(Russia during 1997), COUNT(during 1997 to); these features predict during. All counts are in the log domain. The task has thirty-four different prepositions to choose from. A 34-way classifier is trained on examples of correct preposition usage; it learns which context positions and sizes are most reliable and assigns feature weights accordingly.

In Chapter 3, however, we saw that a very strong unsupervised baseline is to simply weight all the count features equally. In fact, the supervised approach required over 30,000 training examples before it outperformed this baseline. In contrast, we show here that variance regularization lets the supervised classifier match and then surpass this baseline with far fewer training examples.
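The precise objective is developed later in the chapter. As a rough sketch of the idea, and assuming the variance penalty simply replaces the usual norm penalty (the notation below is illustrative, not taken verbatim from the thesis), the two regularizers can be contrasted as follows, with the usual margin and slack constraints left implicit:

    \begin{align*}
    \text{standard SVM:} \quad
      & \min_{\mathbf{w},\,\boldsymbol{\xi}} \;
        \frac{1}{2}\sum_{j=1}^{F} w_j^2 \;+\; C \sum_{i=1}^{N} \xi_i \\[4pt]
    \text{variance-regularized SVM:} \quad
      & \min_{\mathbf{w},\,\boldsymbol{\xi}} \;
        \frac{1}{2}\sum_{j=1}^{F} \bigl(w_j - \bar{w}\bigr)^2 \;+\; C \sum_{i=1}^{N} \xi_i,
      \qquad \bar{w} = \frac{1}{F}\sum_{j=1}^{F} w_j
    \end{align*}

Expanding the squared deviation shows that the second objective is still a convex quadratic in the weights, so it can be handed to the same quadratic-programming machinery as the standard SVM.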

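To make the feature setup of the preposition-selection example above concrete, here is a schematic sketch (not code from the thesis; the candidate list, the set of context patterns, and the add-one smoothing are simplifying assumptions) of scoring log-domain N-gram count features, first with the equal-weight unsupervised baseline and then with per-feature learned weights in the style of SUPERLM:

    import math
    from collections import defaultdict

    # Illustrative candidate set; the real task chooses among 34 prepositions.
    PREPOSITIONS = ["from", "during", "in", "at", "on"]

    def context_patterns(left, prep, right):
        # N-gram patterns pairing the candidate preposition with its context,
        # e.g. ("in", "Russia", "from"), ("Russia", "from", "1997"), ("from", "1997", "to").
        return [
            (left[-2], left[-1], prep),
            (left[-1], prep, right[0]),
            (prep, right[0], right[1]),
        ]

    def log_count(ngram, counts):
        # Log-domain count, with add-one smoothing so zero counts stay defined.
        return math.log(counts.get(ngram, 0) + 1)

    def predict_equal_weights(left, right, counts):
        # Unsupervised baseline: every count feature receives the same weight.
        scores = {p: sum(log_count(f, counts) for f in context_patterns(left, p, right))
                  for p in PREPOSITIONS}
        return max(scores, key=scores.get)

    def predict_learned(left, right, counts, weights):
        # SUPERLM-style scoring: each (candidate, pattern-position) feature
        # has its own weight, set by discriminative training.
        scores = defaultdict(float)
        for p in PREPOSITIONS:
            for i, f in enumerate(context_patterns(left, p, right)):
                scores[p] += weights.get((p, i), 0.0) * log_count(f, counts)
        return max(scores, key=scores.get)

    # Toy usage for "I worked in Russia ___ 1997 to 2001":
    counts = {("in", "Russia", "from"): 1200, ("Russia", "from", "1997"): 300,
              ("from", "1997", "to"): 90000, ("in", "Russia", "during"): 40}
    print(predict_equal_weights(["worked", "in", "Russia"], ["1997", "to"], counts))  # -> "from"

When every weight in the learned scorer is identical, it reduces exactly to the equal-weight baseline; this is the intuition the variance regularizer exploits.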