experiments, we took a very simple approach: we minimized the variance of all attributes that are weighted equally in the unsupervised baselines. If a feature is not included in an unsupervised baseline, it is regularized toward zero.

4.3 Experimental Details

We use the data sets from Chapter 3. Recall that in each case a classifier makes a decision for a particular word based on the word's surrounding context. The attributes of the classifier are the log counts of different fillers occurring in the context patterns. We retrieve counts from the web-scale Google Web 5-gram Corpus [Brants and Franz, 2006], which includes N-grams of length one to five. We apply add-one smoothing to all counts. Every classifier also has bias features (for every class): we simply include, where appropriate, attributes that are always unity.

We use LIBLINEAR [Fan et al., 2008] to train K-SVM and OvA-SVM, and SVMrank [Joachims, 2006] to train CS-SVM. For VAR-SVM, we solve the primal form of the quadratic program directly in CPLEX [CPLEX, 2005], a general optimization package.

We vary the number of training examples for each classifier. The C-parameters of all SVMs are tuned on development data. We evaluate using accuracy: the percentage of test examples that are classified correctly. We also provide the accuracy of the majority-class baseline and the best unsupervised system, as defined in Chapter 3.

As an alternative way to increase the learning rate, we augment a classifier's features using the output of the unsupervised system: for each class, we include one feature for the sum of all counts (in the unsupervised system) that predict that class. We denote these augmented systems with a +, as in K-SVM+ and CS-SVM+.
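To make the attribute construction and the "+" augmentation concrete, here is a minimal sketch in Python (not the code used in the thesis) of how a single example's feature vector might be assembled from raw pattern counts. The function name, the dictionary layout, and the toy counts for among/between are hypothetical; only the recipe (log counts with add-one smoothing, an always-1 bias attribute per class, and one per-class sum-of-counts attribute for the augmented systems) comes from the description above.

import math
from typing import Dict, List

def build_features(
    counts: Dict[str, Dict[str, float]],    # counts[class][pattern] = raw N-gram count
    classes: List[str],
    sumlm_patterns: Dict[str, List[str]],   # patterns the unsupervised (SUMLM) system sums per class
    augment: bool = False,                  # True for the "+" systems (K-SVM+, CS-SVM+)
) -> List[float]:
    """Assemble one example's attribute vector (illustrative sketch).

    Per class: one log-count attribute per context pattern (add-one smoothed),
    one bias attribute that is always 1.0, and optionally one augmented
    attribute holding the sum of the counts the unsupervised system credits
    to that class.
    """
    features: List[float] = []
    for c in classes:
        for _pattern, count in sorted(counts[c].items()):
            features.append(math.log(count + 1.0))   # add-one smoothing, then log
        features.append(1.0)                          # per-class bias attribute
        if augment:
            features.append(sum(counts[c][p] for p in sumlm_patterns[c]))
    return features

# Hypothetical toy usage with two candidate prepositions and two context patterns:
raw_counts = {
    "among":   {"3gram:divided_?_them": 40.0, "2gram:?_them": 120.0},
    "between": {"3gram:divided_?_them": 15.0, "2gram:?_them": 310.0},
}
sumlm = {c: ["3gram:divided_?_them"] for c in raw_counts}  # e.g., only 3-to-5-grams in SUMLM
x_plus = build_features(raw_counts, ["among", "between"], sumlm, augment=True)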
4.4 Applications

4.4.1 Preposition Selection

Following the set-up in Chapter 3, a classifier selects a preposition from 34 candidates, using counts for filled 2-to-5-gram patterns. We again use the same 100K training, 10K development, and 10K test examples. The unsupervised approach sums the counts of all 3-to-5-gram patterns for each preposition. We therefore regularize the variance of the 3-to-5-gram weights in VAR-SVM, and simultaneously minimize the norm of the 2-gram weights.

Results

The majority class is the preposition of; it occurs in 20.3% of test examples. The unsupervised system scores 73.7%. For further perspective on these results, recall that Chodorow et al. [2007] achieved 69% with 7M training examples, while Tetreault and Chodorow [2008] found that human performance was around 75%. However, these results are not directly comparable as they are on different data.

Table 4.1 gives the accuracy for different amounts of training data.

                       Training Examples
System          10      100     1K      10K     100K
OvA-SVM         16.0    50.6    66.1    71.1    73.5
K-SVM           13.7    50.0    65.8    72.0    74.7
K-SVM+          22.2    56.8    70.5    73.7    75.2
CS-SVM          27.1    58.8    69.0    73.5    74.2
CS-SVM+         39.6    64.8    71.5    74.0    74.4
VAR-SVM         73.8    74.2    74.7    74.9    74.9

Table 4.1: Accuracy (%) of preposition-selection SVMs. Unsupervised (SUMLM) accuracy is 73.7%.

Here, as in the other tasks, K-SVM mirrors the learning rate from Chapter 3. There are several distinct phases in the relative ranking of the systems. For smaller amounts of training data (≤1000 examples), K-SVM performs worst, while VAR-SVM is statistically significantly better than all other systems, and always exceeds the performance of the unsupervised approach.⁵ Augmenting the attributes with sum counts (the + systems) strongly helps with fewer examples, especially in conjunction with the more efficient CS-SVM. However, VAR-SVM clearly helps more. We noted earlier that VAR-SVM is guaranteed to do as well as the unsupervised system on the development data, but here we confirm that it can also exploit even small amounts of training data to further improve accuracy.

CS-SVM outperforms K-SVM except with 100K examples, while OvA-SVM is better than K-SVM for small amounts of data.⁶ K-SVM performs best with all the data; it uses the most expressive representation, but needs 100K examples to make use of it. On the other hand, feature augmentation and variance regularization provide diminishing returns as the amount of training data increases.

4.4.2 Context-Sensitive Spelling Correction

Recall that in context-sensitive spelling correction, for every occurrence of a word in a predefined confusion set (e.g. {cite, site, sight}), the classifier selects the most likely word from the set. We use the five confusion sets from Chapter 3; four are binary and one is a 3-way classification. We use 100K training, 10K development, and 10K test examples for each, and average accuracy across the sets. All 2-to-5-gram counts are used in the unsupervised system, so the variance of all weights is regularized in VAR-SVM.

                       Training Examples
System          10      100     1K      10K     100K
CS-SVM          86.0    93.5    95.1    95.7    95.7
CS-SVM+         91.0    94.9    95.3    95.7    95.7
VAR-SVM         94.9    95.3    95.6    95.7    95.8

Table 4.2: Accuracy (%) of spell-correction SVMs. Unsupervised (SUMLM) accuracy is 94.8%.

⁵ Significance is calculated using a χ² test over the test set correct/incorrect contingency table.
⁶ Rifkin and Klautau [2004] argue that OvA-SVM is as good as K-SVM, but this is "predicated on the assumption that the classes are 'independent'," i.e., that examples from class 0 are no closer to class 1 than to class 2. This is not true of this task (e.g., x̄_before is closer to x̄_after than to x̄_in, etc.).
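For concreteness, the following is a minimal sketch of how a variance-regularized multiclass SVM of the kind used above could be posed as a convex quadratic program, written in Python with the cvxpy modelling package standing in for the CPLEX solver mentioned in Section 4.3. The function name, the per-class grouping of the shared features, and the single-slack, Crammer-Singer-style margin constraints are illustrative assumptions, not the exact formulation from this chapter; only the two regularization ideas come from the text: weights of features that the unsupervised baseline weights equally are penalized via their variance, and weights of features absent from the baseline are penalized toward zero.

import numpy as np
import cvxpy as cp

def train_var_svm(X, y, n_classes, shared_idx, other_idx, C=1.0):
    """Illustrative primal QP for a variance-regularized multiclass SVM."""
    n, d = X.shape
    W = cp.Variable((n_classes, d))       # one weight vector per class
    xi = cp.Variable(n, nonneg=True)      # one slack per training example

    # Margin constraints: the true class must beat every other class by >= 1 - slack.
    constraints = []
    for i in range(n):
        yi = int(y[i])
        for r in range(n_classes):
            if r != yi:
                constraints.append(W[yi, :] @ X[i] - W[r, :] @ X[i] >= 1 - xi[i])

    reg = 0
    m = len(shared_idx)
    for k in range(n_classes):
        w_k = W[k, :]
        w_shared = w_k[shared_idx]
        mean_shared = cp.sum(w_shared) / m
        # Variance penalty: pull the weights the unsupervised baseline treats
        # equally toward a common (learned) value rather than toward zero.
        reg = reg + cp.sum_squares(w_shared - mean_shared * np.ones(m))
        if len(other_idx) > 0:
            # Features outside the unsupervised baseline: ordinary pull toward zero.
            reg = reg + cp.sum_squares(w_k[other_idx])

    cp.Problem(cp.Minimize(reg + C * cp.sum(xi)), constraints).solve()
    return W.value

# Hypothetical toy usage: 3 classes, 6 features; features 2..5 are shared with the baseline.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 6))
y = rng.integers(0, 3, size=30)
W = train_var_svm(X, y, n_classes=3, shared_idx=[2, 3, 4, 5], other_idx=[0, 1], C=1.0)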