are available. This position-weighting (viz. feature-weighting) technique is similar to the decision-list weighting in [Yarowsky, 1994]. We refer to this approach as RATIOLM in our experiments.

3.4 Evaluation Methodology

We compare our supervised and unsupervised systems on three experimental tasks: preposition selection, context-sensitive spelling correction, and non-referential pronoun detection. We evaluate using accuracy: the percentage of correctly-selected labels. As a baseline (BASE), we state the accuracy of always choosing the most-frequent class. For spelling correction, we average accuracies across the five confusion sets. We also provide learning curves by varying the number of labeled training examples. It is worth reiterating that this data is used solely to weight the contribution of the different filler counts; the filler counts themselves do not change, as they are always extracted from the full Google 5-gram Corpus.

For training SUPERLM, we use a support vector machine (SVM). SVMs achieve good performance on a range of tasks (Chapter 2, Section 2.3.4). We use a linear-kernel multiclass SVM (the efficient SVMmulticlass instance of SVMstruct [Tsochantaridis et al., 2004]). It slightly outperformed one-versus-all SVMs in preliminary experiments (and a later, more extensive study in Chapter 4 confirmed that these preliminary intuitions were justified). We tune the SVM's regularization parameter on the development sets. We apply add-one smoothing to the counts used in SUMLM and SUPERLM, while we add 39 to the counts in RATIOLM, following the approach of Carlson et al. [2008] (40 is the count cut-off used in the Google Corpus). For all unsupervised systems, we choose the most frequent class if no counts are available. For SUMLM, we use the development sets to decide which orders of N-grams to combine, finding orders 3-5 optimal for preposition selection, 2-5 optimal for spelling correction, and 4-5 optimal for non-referential pronoun detection. Development experiments also showed RATIOLM works better starting from 4-grams, not the 5-grams originally used in [Carlson et al., 2008].

3.5 Preposition Selection

3.5.1 The Task of Preposition Selection

Choosing the correct preposition is one of the most difficult tasks for a second-language learner to master, and errors involving prepositions constitute a significant proportion of errors made by learners of English [Chodorow et al., 2007]. Several automatic approaches to preposition selection have recently been developed [Felice and Pulman, 2007; Gamon et al., 2008]. We follow the experiments of Chodorow et al. [2007], who train a classifier to choose the correct preposition among 34 candidates.⁴ In [Chodorow et al., 2007], feature vectors indicate words and part-of-speech tags near the preposition, similar to the features used in most disambiguation systems, and unlike the aggregate counts we use in our supervised preposition-selection N-gram model (Section 3.3.1).

⁴ Chodorow et al. do not identify the 34 prepositions they use. We use the 34 from the SemEval-07 preposition sense-disambiguation task [Litkowski and Hargraves, 2007]: about, across, above, after, against, along, among, around, as, at, before, behind, beneath, beside, between, by, down, during, for, from, in, inside, into, like, of, off, on, onto, over, round, through, to, towards, with.
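To make the count-based scoring concrete, the following is a minimal sketch of a SUMLM-style scorer for the preposition task. It is illustrative only: counts is a hypothetical in-memory dictionary mapping N-gram tuples to corpus frequencies (standing in for the Google 5-gram Corpus lookup), the candidate list is truncated for brevity, and the combination is assumed to be a sum of add-one-smoothed log counts over the orders found optimal above (3-5); the exact scoring scheme is the one defined in Section 3.3.

```python
from math import log

# Hypothetical candidate set; the real system uses the 34 SemEval-07 prepositions.
CANDIDATES = ["of", "in", "for", "on", "with"]
MOST_FREQUENT = "of"  # back-off class, as for all unsupervised systems (BASE)

def ngrams_spanning_gap(left, right, filler, orders=(3, 4, 5)):
    """All N-grams of the given orders that include the filled-in gap position."""
    tokens = left + [filler] + right
    gap = len(left)  # index of the filler token
    for n in orders:
        for start in range(len(tokens) - n + 1):
            if start <= gap < start + n:  # keep only N-grams covering the gap
                yield tuple(tokens[start:start + n])

def sumlm_score(counts, left, right, filler):
    """Sum of add-one-smoothed log counts over all N-grams containing the filler."""
    return sum(log(counts.get(ng, 0) + 1)
               for ng in ngrams_spanning_gap(left, right, filler))

def choose_preposition(counts, left, right):
    scores = {c: sumlm_score(counts, left, right, c) for c in CANDIDATES}
    if all(s == 0.0 for s in scores.values()):  # no counts available at all
        return MOST_FREQUENT
    return max(scores, key=scores.get)
```

Only the candidate list and the N-gram orders would change for the spelling-correction and pronoun-detection tasks; the filler-scoring loop itself is the same.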

[Figure 3.1: Preposition selection learning curve — accuracy (%) vs. number of training examples, for SUPERLM, SUMLM, RATIOLM, and TRIGRAM.]

For preposition selection, like all generation disambiguation tasks, labeled data is essentially free to create (i.e., the problem has natural automatic examples as explained in Chapter 2, Section 2.5.4). Each preposition in edited text is assumed to be correct, automatically providing an example of that preposition's class. We extract examples from the New York Times (NYT) section of the Gigaword corpus [Graff, 2003]. We take the first 1 million prepositions in NYT as a training set, 10K from the middle as a development set, and 10K from the end as a final unseen test set. We tokenize the corpus and identify prepositions by string-match. Our system uses no parsing or part-of-speech tagging to extract the examples or create the features.

3.5.2 Preposition Selection Results

Preposition selection is a difficult task with a low baseline: choosing the most-common preposition (of) in our test set achieves 20.3%. Training on 7 million examples, Chodorow et al. [2007] achieved 69% on the full 34-way selection. Tetreault and Chodorow [2008] obtained a human upper bound by removing prepositions from text and asking annotators to fill in the blank with the best preposition (using the current sentence as context). Two annotators achieved only 75% agreement with each other and with the original text.

In light of these numbers, the accuracies of the N-gram models are especially impressive. SUPERLM reaches 75.4% accuracy, equal to the human agreement (but on different data). Performance continually improves with more training examples, but only by 0.25% from 300K to 1M examples (Figure 3.1). SUMLM (73.7%) significantly outperforms RATIOLM (69.7%), and nearly matches the performance of SUPERLM. TRIGRAM performs worst (58.8%), but note it is the only previous web-scale approach applied to preposition selection [Felice and Pulman, 2007]. All differences are statistically significant (McNemar's test, p < …).
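The example-extraction step described in Section 3.5.1 amounts to a simple scan over tokenized text. Below is a minimal sketch under assumed inputs: "nyt_tokenized.txt" is a hypothetical one-sentence-per-line, whitespace-tokenized file standing in for the NYT section of Gigaword, and the context window of four tokens per side is an assumption (chosen to match the 5-gram corpus); the actual split sizes are as stated above.

```python
# Sketch of automatic example extraction for preposition selection.
PREPOSITIONS = {"about", "across", "above", "after", "against", "along", "among",
                "around", "as", "at", "before", "behind", "beneath", "beside",
                "between", "by", "down", "during", "for", "from", "in", "inside",
                "into", "like", "of", "off", "on", "onto", "over", "round",
                "through", "to", "towards", "with"}

def extract_examples(path, window=4):
    """Yield (left_context, preposition, right_context) for every preposition token."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = line.split()
            for i, tok in enumerate(tokens):
                if tok.lower() in PREPOSITIONS:  # identified by string-match only
                    yield (tokens[max(0, i - window):i],
                           tok.lower(),
                           tokens[i + 1:i + 1 + window])

examples = list(extract_examples("nyt_tokenized.txt"))  # hypothetical file name
train = examples[:1_000_000]                                        # first 1M prepositions
dev = examples[len(examples) // 2:len(examples) // 2 + 10_000]      # 10K from the middle
test = examples[-10_000:]                                           # 10K from the end
```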
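The pairwise significance comparisons above use McNemar's test on the two systems' per-example correctness. The following self-contained sketch uses the exact (binomial) form of the test; the text does not specify which variant was actually applied.

```python
from math import comb

def mcnemar_exact(gold, pred_a, pred_b):
    """Exact two-sided McNemar's test on the examples where the two systems disagree."""
    b = sum(1 for g, a, p in zip(gold, pred_a, pred_b) if a == g and p != g)  # A right, B wrong
    c = sum(1 for g, a, p in zip(gold, pred_a, pred_b) if a != g and p == g)  # A wrong, B right
    n = b + c
    if n == 0:
        return 1.0
    # Under the null hypothesis, disagreements favour either system with probability 0.5.
    tail = sum(comb(n, k) for k in range(0, min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical usage, with gold test labels and two systems' predicted prepositions:
# p = mcnemar_exact(gold_labels, superlm_preds, sumlm_preds)
```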
