Chapter 4

Improved Natural Language Learning via Variance-Regularization Support Vector Machines

[Figure: XKCD comic "Ninja Turtles", http://xkcd.com/197/. The beauty of this comic is that it was also constructed using co-occurrence counts from the Google search engine. That is, the artist counted the number of pages for Leonardo and turtle vs. the number of pages for Leonardo and artist.]

(A version of this chapter has been published as [Bergsma et al., 2010b].)

The previous chapter presented SUPERLM, a supervised classifier that uses web-scale N-gram counts as features. The classifier was trained as a multi-class SVM. In this chapter, we present a simple technique for learning better SVMs using fewer training examples. Rather than using the standard SVM regularization, we regularize toward low weight variance. Our new SVM objective remains a convex quadratic function of the weights, and is therefore computationally no harder to optimize than a standard SVM. Variance regularization is shown to enable improvements in the learning rates of the SVMs on the three lexical disambiguation tasks studied in the previous chapter.

4.1 Introduction

Discriminative training is commonly used in NLP and speech to scale the contribution of different models or systems in a combined predictor. For example, discriminative training can be used to scale the contribution of the language model and translation model in machine translation [Och and Ney, 2002]. Without training data, it is often reasonable to weight the different models equally. We propose a simple technique that exploits this intuition for better learning with fewer training examples. We regularize the feature weights in a support vector machine [Cortes and Vapnik, 1995] toward a low-variance solution. Since the new SVM quadratic program is convex, it is no harder to optimize than the standard SVM objective.
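To make the objective concrete, the following is a minimal sketch of one natural formalization, assuming d feature weights and the usual soft-margin slack variables; the exact normalization and the grouping of multi-class weights in this chapter's objective may differ:

\[
\min_{\mathbf{w},\,\boldsymbol{\xi} \ge 0} \;\; \frac{1}{2} \sum_{j=1}^{d} \bigl(w_j - \bar{w}\bigr)^{2} \;+\; C \sum_{i=1}^{n} \xi_i ,
\qquad \text{where} \;\; \bar{w} = \frac{1}{d} \sum_{j=1}^{d} w_j ,
\]

subject to the standard margin constraints. The regularizer is the (unnormalized) variance of the weights, and it is itself a convex quadratic, since

\[
\sum_{j=1}^{d} \bigl(w_j - \bar{w}\bigr)^{2} \;=\; \mathbf{w}^{\top} \Bigl( I - \tfrac{1}{d}\,\mathbf{1}\mathbf{1}^{\top} \Bigr) \mathbf{w}
\]

and the centering matrix \(I - \tfrac{1}{d}\mathbf{1}\mathbf{1}^{\top}\) is positive semi-definite. Minimizing it pulls the weights toward their common mean rather than toward zero, encoding the equal-weight intuition above.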
When training data is generated through human effort, faster learning saves time and money. When examples are labeled automatically, through user feedback [Joachims, 2002] or from textual pseudo-examples [Smith and Eisner, 2005; Okanohara and Tsujii, 2007], faster learning can reduce the lag before a new system is useful.

We demonstrate faster learning on the same lexical disambiguation tasks evaluated in the previous chapter. Recall that in a lexical disambiguation task, a system predicts a label for a word in text, based on the word's context. Possible labels include part-of-speech tags, named-entity types, and word senses. A number of disambiguation systems make predictions with the help of N-gram counts from a web-scale auxiliary corpus, typically acquiring these counts via a search engine or an N-gram corpus (Section 3.2.1).

Ultimately, when discriminative training is used to set weights on various counts in order to make good classifications, many of the learned feature weights have similar values. Good weights have low variance.

For example, consider the task of preposition selection. A system selects the most likely preposition given the context, and flags a possible error if it disagrees with the user's choice:

• I worked in Russia from 1997 to 2001.
• I worked in Russia *during 1997 to 2001.

Chapter 3 presented SUPERLM, which uses a variety of web counts to predict the correct preposition. SUPERLM has features for COUNT(in Russia from), COUNT(Russia from 1997), COUNT(from 1997 to), etc. If these are high, from is predicted. Similarly, there are features for COUNT(in Russia during), COUNT(Russia during 1997), COUNT(during 1997 to). These features predict during. All counts are in the log domain. The task has thirty-four different prepositions to choose from. A 34-way classifier is trained on examples of correct preposition usage; it learns which context positions and sizes are most reliable and assigns feature weights accordingly.

In Chapter 3, however, we saw that a very strong unsupervised baseline is simply to weight all the count features equally. In fact, the supervised approach required over 30,000 training examples before it outperformed this baseline. In contrast, we show here that biasing the learner toward this equal-weight solution lets it surpass the baseline with far fewer training examples.
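To make the count features and the equal-weight baseline concrete, here is a minimal sketch in Python. The toy NGRAM_COUNTS table, the log_count helper, and the three context patterns are hypothetical stand-ins for the web-scale counts and the full feature set of Chapter 3, not the thesis's actual implementation:

    import math

    # Hypothetical N-gram count lookup; in the thesis, counts come from a
    # web-scale N-gram corpus (Section 3.2.1). Here, a toy dictionary.
    NGRAM_COUNTS = {
        "in Russia from": 120, "Russia from 1997": 45, "from 1997 to": 980000,
        "in Russia during": 15, "Russia during 1997": 2, "during 1997 to": 3,
    }

    PREPOSITIONS = ["from", "during"]  # the full task has 34 candidates

    def log_count(ngram):
        """Log-domain count, with add-one smoothing for unseen N-grams."""
        return math.log(1 + NGRAM_COUNTS.get(ngram, 0))

    def features(context, prep):
        """Fill the candidate preposition into each context pattern.

        `context` holds the words around the preposition slot, e.g.
        ("in Russia", "1997 to") for 'I worked in Russia ___ 1997 to 2001.'
        """
        left, right = context
        patterns = [
            f"{left} {prep}",                                  # COUNT(in Russia from)
            f"{left.split()[-1]} {prep} {right.split()[0]}",   # COUNT(Russia from 1997)
            f"{prep} {right}",                                 # COUNT(from 1997 to)
        ]
        return [log_count(p) for p in patterns]

    def equal_weight_score(context, prep):
        """Unsupervised baseline: weight every count feature equally."""
        return sum(features(context, prep))

    context = ("in Russia", "1997 to")
    best = max(PREPOSITIONS, key=lambda p: equal_weight_score(context, p))
    print(best)  # -> "from"

A trained SUPERLM replaces the uniform weights in equal_weight_score with learned per-feature weights; the variance regularizer of this chapter pulls those learned weights back toward the uniform solution that this baseline hard-codes.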