are available. This position-weighting (viz. feature-weighting) technique is similar to the decision-list weighting in [Yarowsky, 1994]. We refer to this approach as RATIOLM in our experiments.

3.4 Evaluation Methodology

We compare our supervised and unsupervised systems on three experimental tasks: preposition selection, context-sensitive spelling correction, and non-referential pronoun detection. We evaluate using accuracy: the percentage of correctly-selected labels. As a baseline (BASE), we state the accuracy of always choosing the most-frequent class. For spelling correction, we average accuracies across the five confusion sets. We also provide learning curves by varying the number of labeled training examples. It is worth reiterating that this data is used solely to weight the contribution of the different filler counts; the filler counts themselves do not change, as they are always extracted from the full Google 5-gram Corpus.

For training SUPERLM, we use a support vector machine (SVM). SVMs achieve good performance on a range of tasks (Chapter 2, Section 2.3.4). We use a linear-kernel multiclass SVM (the efficient SVM^multiclass instance of SVM^struct [Tsochantaridis et al., 2004]). It slightly outperformed one-versus-all SVMs in preliminary experiments (and a later, more extensive study in Chapter 4 confirmed that these preliminary intuitions were justified). We tune the SVM's regularization parameter on the development sets. We apply add-one smoothing to the counts used in SUMLM and SUPERLM, while we add 39 to the counts in RATIOLM, following the approach of Carlson et al. [2008] (40 is the count cut-off used in the Google Corpus). For all unsupervised systems, we choose the most frequent class if no counts are available. For SUMLM, we use the development sets to decide which orders of N-grams to combine, finding orders 3-5 optimal for preposition selection, 2-5 optimal for spelling correction, and 4-5 optimal for non-referential pronoun detection. Development experiments also showed RATIOLM works better starting from 4-grams, not the 5-grams originally used in [Carlson et al., 2008].

3.5 Preposition Selection

3.5.1 The Task of Preposition Selection

Choosing the correct preposition is one of the most difficult tasks for a second-language learner to master, and errors involving prepositions constitute a significant proportion of errors made by learners of English [Chodorow et al., 2007]. Several automatic approaches to preposition selection have recently been developed [Felice and Pulman, 2007; Gamon et al., 2008]. We follow the experiments of Chodorow et al. [2007], who train a classifier to choose the correct preposition among 34 candidates.⁴ In [Chodorow et al., 2007], feature vectors indicate words and part-of-speech tags near the preposition, similar to the features used in most disambiguation systems, and unlike the aggregate counts we use in our supervised preposition-selection N-gram model (Section 3.3.1).

⁴ Chodorow et al. do not identify the 34 prepositions they use. We use the 34 from the SemEval-07 preposition sense-disambiguation task [Litkowski and Hargraves, 2007]: about, across, above, after, against, along, among, around, as, at, before, behind, beneath, beside, between, by, down, during, for, from, in, inside, into, like, of, off, on, onto, over, round, through, to, towards, with.
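To make the contrast with Chodorow et al.'s feature-based classifier concrete, the following is a minimal Python sketch of how per-candidate N-gram counts could be turned into a SUMLM-style score or a SUPERLM-style feature vector for a preposition slot. The ngram_count lookup, the toy count table, and the truncated candidate list are illustrative assumptions, not the thesis's actual implementation, which queries the full Google 5-gram Corpus and trains SVM^multiclass on the resulting features.

```python
import math

# Illustrative subset of the 34 candidate prepositions (full list in footnote 4).
CANDIDATES = ["of", "in", "for", "to", "on", "at", "with"]

# Toy stand-in for the Google 5-gram Corpus; the real systems look up counts
# for every context pattern in the web-scale N-gram data.
TOY_COUNTS = {
    ("interested", "in", "learning"): 121000,
    ("interested", "at", "learning"): 45,
}

def ngram_count(gram):
    """Hypothetical count lookup (returns 0 for unseen N-grams)."""
    return TOY_COUNTS.get(tuple(gram), 0)

def filler_counts(left, right, candidate, orders=(3, 4, 5)):
    """Counts of every N-gram of the given orders spanning the target slot,
    with the candidate preposition filling the slot at each possible position."""
    counts = []
    for n in orders:
        for i in range(n):                      # i context tokens to the left
            gram = left[len(left) - i:] + [candidate] + right[:n - 1 - i]
            if len(gram) == n:                  # skip windows cut off at sentence edges
                counts.append(ngram_count(gram))
    return counts

def sumlm_score(left, right, candidate):
    """Unsupervised SUMLM-style score: sum of add-one-smoothed log counts."""
    return sum(math.log(c + 1) for c in filler_counts(left, right, candidate))

def superlm_features(left, right):
    """SUPERLM-style features: one smoothed log-count per candidate and per
    filler position, to be weighted by a multiclass linear SVM."""
    return [math.log(c + 1)
            for cand in CANDIDATES
            for c in filler_counts(left, right, cand)]

# Usage: the unsupervised system picks the candidate with the highest score.
left, right = ["he", "is", "interested"], ["learning", "more"]
best = max(CANDIDATES, key=lambda c: sumlm_score(left, right, c))
```

The sketch only illustrates the organisation of the counts: in SUMLM the per-candidate counts are summed directly, while in SUPERLM the same counts become features whose weights are learned from the labeled examples described below.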
Figure 3.1: Preposition selection learning curve. Accuracy (%) against the number of labeled training examples (100 to 1e+06, log scale) for SUPERLM, SUMLM, RATIOLM, and TRIGRAM.

For preposition selection, like all generation disambiguation tasks, labeled data is essentially free to create (i.e., the problem has natural automatic examples, as explained in Chapter 2, Section 2.5.4). Each preposition in edited text is assumed to be correct, automatically providing an example of that preposition's class. We extract examples from the New York Times (NYT) section of the Gigaword corpus [Graff, 2003]. We take the first 1 million prepositions in NYT as a training set, 10K from the middle as a development set, and 10K from the end as a final unseen test set. We tokenize the corpus and identify prepositions by string match. Our system uses no parsing or part-of-speech tagging to extract the examples or create the features.

3.5.2 Preposition Selection Results

Preposition selection is a difficult task with a low baseline: choosing the most-common preposition (of) in our test set achieves 20.3%. Training on 7 million examples, Chodorow et al. [2007] achieved 69% on the full 34-way selection. Tetreault and Chodorow [2008] obtained a human upper bound by removing prepositions from text and asking annotators to fill in the blank with the best preposition (using the current sentence as context). Two annotators achieved only 75% agreement with each other and with the original text.

In light of these numbers, the accuracies of the N-gram models are especially impressive. SUPERLM reaches 75.4% accuracy, equal to the human agreement (but on different data). Performance continually improves with more training examples, but only by 0.25% from 300K to 1M examples (Figure 3.1). SUMLM (73.7%) significantly outperforms RATIOLM (69.7%), and nearly matches the performance of SUPERLM. TRIGRAM performs worst (58.8%), but note it is the only previous web-scale approach applied to preposition selection [Felice and Pulman, 2007]. All differences are statistically significant (McNemar's test, p
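The significance claims above rest on McNemar's test, which compares two systems only on the test examples where their predictions disagree. The snippet below is a small illustrative sketch (not the thesis's evaluation code) of the standard continuity-corrected chi-square form of the test; the prediction-list names are hypothetical, and with very few disagreements an exact binomial version is preferable.

```python
from math import erf, sqrt

def mcnemar(preds_a, preds_b, gold):
    """McNemar's test on paired predictions from two classifiers.

    b = examples system A labels correctly and system B labels incorrectly;
    c = the reverse. Only these disagreements enter the statistic.
    Returns the continuity-corrected chi-square value and its p-value
    (chi-square distribution with 1 degree of freedom).
    """
    b = sum(1 for a, s, g in zip(preds_a, preds_b, gold) if a == g and s != g)
    c = sum(1 for a, s, g in zip(preds_a, preds_b, gold) if a != g and s == g)
    if b + c == 0:
        return 0.0, 1.0                     # the systems never disagree
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)  # Edwards' continuity correction
    # P(chi2_1 > x) = 2 * (1 - Phi(sqrt(x))), with Phi the standard normal CDF.
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + erf(sqrt(chi2) / sqrt(2.0))))
    return chi2, p_value

# Usage on hypothetical prediction lists over the same test examples:
# chi2, p = mcnemar(superlm_preds, sumlm_preds, gold_labels)
```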