Large-Scale Semi-Supervised Learning for Natural Language ...
Here, we simply provide some intuitions on what kinds of weights will be learned. To be clear, note that W̄_r, the rth row of the weight matrix W, corresponds to the weights for predicting candidate c_r. Recall that in generation tasks, the set C and the set F may be identical. Some of the weights in W̄_r will therefore correspond to features for patterns filled with filler f_r. Intuitively, these weights will be positive. That is, we will predict the class among when there are high counts for the patterns filled with the filler among (c_r = f_r = among). On the other hand, we will choose not to pick among if the counts on patterns filled with between are high. These tendencies are all learned by the learning algorithm. The learning algorithm can also place higher absolute weights on the more predictive context positions and sizes. For example, for many tasks, patterns that begin with a filler are more predictive than patterns that end with a filler. The learning algorithm attends to these differences in predictive power as it maximizes prediction accuracy on the training data.

We now note some special features used by our classifier. If a pattern spans outside the current sentence (when v_0 is close to the start or end), we use zero for the corresponding feature value, but fire an indicator feature to flag that the pattern crosses a boundary. This feature provides a kind of smoothing. Other features are possible: for generation tasks, we could also include synonyms of the output candidates as fillers. Features could also be created for counts of patterns processed in some way (e.g.
converting one or more context tokens to wildcards, POS-tags, lower-case, etc.), provided the same processing can be done to the N-gram corpus (we do such processing for the non-referential pronoun detection features described in Section 3.7).

We call this approach SUPERLM because it is SUPERvised, and because, like an interpolated language model (LM), it mixes N-gram statistics of different orders to produce an overall score for each filled context sequence. SUPERLM's features differ from previous lexical disambiguation feature sets. In previous systems, attribute-value features flag the presence or absence of a particular word, part-of-speech, or N-gram in the vicinity of the target [Roth, 1998]. Hundreds of thousands of features are used, and pruning and scaling can be key issues [Carlson et al., 2001]. Performance scales logarithmically with the number of examples, even up to one billion training examples [Banko and Brill, 2001]. In contrast, SUPERLM's features are all aggregate counts of events in an external (web) corpus, not specific attributes of the current example. It has only 14|F|K parameters, for the weights assigned to the different counts. Much less training data is needed to achieve peak performance. Chapter 5 contrasts the performance of classifiers with N-gram features and traditional features on a range of tasks.

3.3.2 SUMLM

We create an unsupervised version of SUPERLM. We produce a score for each filler by summing the (unweighted) log-counts of all context patterns filled with that filler. For example, the score for among could be the sum over all 14 context patterns filled with among. For generation tasks, the filler with the highest score is taken as the label. For analysis tasks, we compare the scores of different fillers to arrive at a decision; Section 3.7.2 explains how this is done for non-referential pronoun detection. We refer to this approach in our experiments as SUMLM.

For generation problems where F = C, SUMLM is similar to a naive Bayes classifier, but without counts for the class prior.³ Naive Bayes has a long history in disambiguation problems [Manning and Schütze, 1999], so it is not entirely surprising that our SUMLM system, with a similar form to naive Bayes, is also effective.

3.3.3 TRIGRAM

Previous web-scale approaches are also unsupervised. Most use one context pattern for each filler: the trigram with the filler in the middle, {v_-1, f, v_1}. |F| counts are needed for each example, and the filler with the most counts is taken as the label [Lapata and Keller, 2005; Liu and Curran, 2006; Felice and Pulman, 2007]. Using only one count for each label is usually all that is feasible when the counts are gathered using an Internet search engine, which limits the number of queries that can be retrieved. With limited context, and somewhat arbitrary search-engine page counts, performance is limited. Web-based systems are regarded as "baselines" compared to standard approaches [Lapata and Keller, 2005], or, worse, as scientifically unsound [Kilgarriff, 2007]. Rather than using search engines, higher accuracy and reliability can be obtained using a large corpus of automatically downloaded web documents [Liu and Curran, 2006]. We evaluate the trigram pattern approach, with counts from the Google 5-gram Corpus, and refer to it as TRIGRAM in our experiments.

3.3.4 RATIOLM

Carlson et al. [2008] proposed an unsupervised method for spelling correction that also uses counts for various pattern fillers from the Google 5-gram Corpus. For every context pattern spanning the target word, the algorithm calculates the ratio between the highest and second-highest filler counts. The position with the highest ratio is taken as the "most discriminating," and the filler with the higher count in this position is chosen as the label. The algorithm starts with 5-grams and backs off to lower orders if no 5-gram counts are available.

³ In this case, we can think of the features, x_i, as being the context patterns, and the classes, y, as being the fillers. In a naive Bayes classifier, we select the class, y, that has the highest score under:

  H(x̄) = argmax_{r=1..K} Pr(y_r | x̄)
        = argmax_{r=1..K} Pr(y_r) Pr(x̄ | y_r)                         (Bayes decision rule)
        = argmax_{r=1..K} Pr(y_r) ∏_i Pr(x_i | y_r)                    (naive Bayes assumption)
        = argmax_{r=1..K} log(Pr(y_r)) + ∑_i log(Pr(x_i | y_r))
        = argmax_{r=1..K} log(Pr(y_r)) + ∑_i [log cnt(x_i, y_r) − log cnt(y_r)]
        = argmax_{r=1..K} g(y_r) + ∑_i log cnt(x_i, f_r)               (using y_r = f_r)

where we collect all the terms that depend solely on the class into g(y_r). Our SUMLM system is exactly the same as this naive Bayes classifier if we drop the g(y_r) term. We tried various ways to model the class priors using N-gram counts and to incorporate them into our equations, but nothing performed as well as simply dropping them altogether. Another option we have not explored is having a single class-bias parameter for each class, λ_r = g(y_r), to be added to the filler counts. We would tune the λ_r's by hand for each task where SUMLM is applied. However, this would make the model require some labeled data to tune, whereas our current SUMLM is parameter-free and entirely unsupervised.
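The four scoring rules above can be made concrete with a short sketch. This is illustrative only: the function names and toy counts are ours rather than the system's actual code, plain log-counts stand in for Google 5-gram statistics, and the RATIOLM sketch omits Carlson et al.'s back-off from 5-grams to lower orders.

```python
import math


def log_count(c):
    # Log of a web-corpus count; a zero count contributes a zero feature value.
    return math.log(c) if c > 0 else 0.0


def superlm_predict(W, x):
    # SUPERLM (supervised): W[r] holds the learned weights for candidate c_r
    # over the log-count feature vector x; predict the highest-scoring row.
    scores = [sum(w * f for w, f in zip(row, x)) for row in W]
    return max(range(len(scores)), key=scores.__getitem__)


def sumlm_predict(pattern_counts):
    # SUMLM (unsupervised): sum the unweighted log-counts of every context
    # pattern filled with a filler; the filler with the highest sum wins.
    # pattern_counts: {filler: [count for each context pattern]}
    return max(pattern_counts,
               key=lambda f: sum(log_count(c) for c in pattern_counts[f]))


def trigram_predict(trigram_counts):
    # TRIGRAM: one count per filler, for the pattern {v_-1, f, v_1};
    # the filler with the largest count is taken as the label.
    return max(trigram_counts, key=trigram_counts.get)


def ratiolm_predict(counts_by_pattern):
    # RATIOLM: for each context pattern, take the ratio of the highest to the
    # second-highest filler count; answer with the top filler of the most
    # discriminating pattern. (Back-off to lower N-gram orders omitted.)
    best_filler, best_ratio = None, 0.0
    for filler_counts in counts_by_pattern:
        ranked = sorted(filler_counts.items(), key=lambda kv: -kv[1])
        if len(ranked) >= 2 and ranked[1][1] > 0:
            ratio = ranked[0][1] / ranked[1][1]
            if ratio > best_ratio:
                best_ratio, best_filler = ratio, ranked[0][0]
    return best_filler
```

For instance, with toy counts, sumlm_predict({"among": [50, 40], "between": [200, 1]}) returns "among": summing log-counts rewards consistent support across many patterns over a single large count.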