By regularizing a classifier toward equal weights, a supervised predictor outperforms the unsupervised approach after only ten examples, and does as well with 1000 examples as the standard classifier does with 100,000.

Section 4.2 first describes a general multi-class SVM. We call the base vector of information used by the SVM the attributes. A standard multi-class SVM creates features for the cross-product of attributes and classes. For example, the attribute COUNT(Russia during 1997) is not only a feature for predicting the preposition during, but also for predicting the 33 other prepositions. The SVM must therefore learn to disregard many irrelevant features. We observe that this is not necessary, and develop an SVM that only uses the relevant attributes in the score for each class. Building on this efficient framework, we incorporate variance regularization into the SVM's quadratic program.

We apply our algorithms to the three tasks studied in Chapter 3: preposition selection, context-sensitive spelling correction, and non-referential pronoun detection. We reproduce the Chapter 3 results using a multi-class SVM. Our new models achieve much better accuracy with fewer training examples. We also exceed the accuracy of a reasonable alternative technique for increasing the learning rate: including the output of the unsupervised system as a feature in the classifier.

Variance regularization is an elegant addition to the suite of methods in NLP that improve performance when access to labeled data is limited. Section 4.5 discusses some related approaches. While we motivate our algorithm as a way to learn better weights when the features are counts from an auxiliary corpus, there are other potential uses of our method. We outline some of these in Section 4.6, and note other directions for future research.

4.2 Three Multi-Class SVM Models

We describe three max-margin multi-class classifiers and their corresponding quadratic programs. Although we describe linear SVMs, they can be extended to nonlinear cases in the standard way by writing the optimal function as a linear combination of kernel functions over the input examples.

In each case, after providing the general technique, we relate the approach to our motivating application: learning weights for count features in a discriminative web-scale N-gram model.

4.2.1 Standard Multi-Class SVM

We define a K-class SVM following [Crammer and Singer, 2001]. This is a generalization of binary SVMs [Cortes and Vapnik, 1995]. We have a set {(x̄_1, y_1), ..., (x̄_M, y_M)} of M training examples. Each x̄ is an N-dimensional attribute vector, and y ∈ {1, ..., K} are classes. A classifier, H, maps an attribute vector, x̄, to a class, y. H is parameterized by a K-by-N matrix of weights, W:

    H_W(\bar{x}) = \operatorname*{argmax}_{r \in \{1,\dots,K\}} \; \bar{W}_r \cdot \bar{x}    (4.1)

where W̄_r is the rth row of W. That is, the predicted label is the index of the row of W that has the highest inner product with the attributes, x̄.
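As a concrete illustration of the prediction rule in (4.1), the following minimal sketch (Python with NumPy; the dimensions, array names, and random values are ours for illustration, not the thesis implementation) scores an attribute vector against each row of W and returns the highest-scoring class.

import numpy as np

# Illustrative dimensions only: e.g., K = 34 preposition classes and
# N = 476 count attributes, as in the preposition-selection task below.
K, N = 34, 476
W = np.random.randn(K, N)   # the K-by-N weight matrix learned by the SVM
x = np.random.rand(N)       # attribute vector for one example

def predict(W, x):
    # Equation (4.1): return the r maximizing the inner product W̄_r · x̄
    scores = W @ x          # one score per class
    return int(np.argmax(scores))  # 0-based index; classes are 1..K in the text

y_hat = predict(W, x)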
We seek weights such that the classifier makes few errors on training data and generalizes well to unseen data. There are KN weights to learn, for the cross-product of attributes and classes. The most common approach is to train K separate one-versus-all binary SVMs, one for each class. The weights learned for the rth SVM provide the weights W̄_r in (4.1). We call this approach OvA-SVM. Note that in some settings various one-versus-one strategies may be more effective than one-versus-all [Hsu and Lin, 2002].

The weights can also be found using a single constrained optimization [Vapnik, 1998; Weston and Watkins, 1998]. Following the soft-margin version in [Crammer and Singer, 2001]:

    \min_{W, \xi_1, \dots, \xi_M} \; \frac{1}{2} \sum_{i=1}^{K} \|\bar{W}_i\|^2 + C \sum_{i=1}^{M} \xi_i
    subject to: \forall i: \; \xi_i \ge 0
                \forall i, \forall r \ne y_i: \; \bar{W}_{y_i} \cdot \bar{x}_i - \bar{W}_r \cdot \bar{x}_i \ge 1 - \xi_i    (4.2)

The constraints require the correct class to be scored higher than the other classes by a certain margin, with slack for non-separable cases. Minimizing the weights is a form of regularization. Tuning the C-parameter controls the emphasis on regularization versus separation of the training examples.

We call this the K-SVM. The K-SVM outperformed the OvA-SVM in [Crammer and Singer, 2001], but see [Rifkin and Klautau, 2004]. The popularity of K-SVM is partly due to convenience; it is included in popular SVM software like SVM-multiclass¹ and LIBLINEAR [Fan et al., 2008].

Note that with two classes, K-SVM is less efficient than a standard binary SVM. A binary classifier outputs class 1 if (w̄ · x̄ > 0) and class 2 otherwise. The K-SVM encodes a binary classifier using W̄_1 = w̄ and W̄_2 = −w̄, therefore requiring twice the memory of a binary SVM. However, both the binary and 2-class formulations have the same solution [Weston and Watkins, 1998].

Web-Scale N-gram K-SVM

The K-SVM was used to combine the N-gram counts in Chapter 3. This was the SUPERLM model. Recall that for preposition selection, attributes were web counts of patterns filled with 34 prepositions, corresponding to the 34 classes. Each preposition serves as the filler of each context pattern. Fourteen patterns were used for each filler: all five 5-grams, four 4-grams, three 3-grams, and two 2-grams spanning the position to be predicted. There are N = 14 × 34 = 476 total attributes, and therefore KN = 476 × 34 = 16,184 weights in the W matrix.

Figure 4.1 depicts the optimization problem for the preposition selection classifier. For the ith training example, the optimizer must set the weights such that the score for the true class (from) is higher than the scores of all the other classes by a margin of 1. Otherwise, it must use the slack variable, ξ_i. The score is the inner product of the preposition-specific weights, W̄_r, and all the features, x̄_i. For illustration, seven of the thirty-four total classes are depicted. Note that these constraints must be satisfied collectively across all training examples.

A K-SVM classifier can potentially exploit very subtle information for this task. Let W̄_in and W̄_before be the weights for the classes in and before. Notice some of the attributes

¹ http://svmlight.joachims.org/svm_multiclass.html
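To make the attribute layout of the web-scale N-gram K-SVM concrete, the sketch below (Python) builds the count vector for one preposition-selection example: fourteen 2- to 5-gram context patterns per filler, for each candidate preposition. The get_count lookup, the spanning_patterns helper, the seven-item preposition list, and the example sentence are hypothetical stand-ins, not the thesis code or data.

# Hypothetical sketch: assembling the attribute vector
# (14 context patterns x 34 preposition fillers = 476 counts) for one example.
PREPOSITIONS = ["in", "on", "at", "from", "during", "before", "after"]  # 34 in the thesis; 7 shown

def get_count(ngram):
    # Stand-in for a lookup into a web-scale N-gram corpus.
    return 0.0

def spanning_patterns(tokens, gap):
    # All 2-gram through 5-gram spans that include position `gap`:
    # two 2-grams, three 3-grams, four 4-grams, five 5-grams = 14 patterns.
    spans = []
    for n in range(2, 6):
        for start in range(gap - n + 1, gap + 1):
            if start >= 0 and start + n <= len(tokens):
                spans.append((start, start + n))
    return spans

def attribute_vector(tokens, gap):
    # COUNT(pattern filled with preposition), for every (pattern, filler) pair.
    x = []
    for prep in PREPOSITIONS:
        filled = tokens[:gap] + [prep] + tokens[gap + 1:]
        for start, end in spanning_patterns(filled, gap):
            x.append(get_count(" ".join(filled[start:end])))
    return x  # length 14 * len(PREPOSITIONS) when the gap is not near a sentence edge

# Example: the preposition slot ("during") is the gap at position 5.
x = attribute_vector(["stocks", "fell", "sharply", "in", "Russia", "during",
                      "1997", "amid", "financial", "turmoil"], gap=5)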