Figure 4.1: Multi-class classification for web-scale N-gram models

... weighted in the inner products $\bar{W}_{before} \cdot \bar{x}$ and $\bar{W}_{in} \cdot \bar{x}$ will be for counts of the preposition after. Relatively high counts for a context with after should deter us from choosing in more than from choosing before. These correlations can be encoded in the classifier via the corresponding weights on after-counts in $\bar{W}_{in}$ and $\bar{W}_{before}$. Our experiments address how useful these correlations are and how much training data is needed before they can be learned and exploited effectively.

4.2.2 SVM with Class-Specific Attributes

Suppose we can partition our attribute vectors into sub-vectors that only include attributes that we declare as relevant to the corresponding class: $\bar{x} = (\bar{x}_1, \ldots, \bar{x}_K)$. We develop a classifier that only uses the class-specific attributes in the score for each class. The classifier uses an N-dimensional weight vector, $\bar{w}$, which follows the attribute partition, $\bar{w} = (\bar{w}_1, \ldots, \bar{w}_K)$. The classifier is:

$H_{\bar{w}}(\bar{x}) = \operatorname{argmax}_{r=1}^{K} \{\bar{w}_r \cdot \bar{x}_r\}$    (4.3)

We call this classifier the CS-SVM (an SVM with Class-Specific attributes). The weights can be determined using the following (soft-margin) optimization:

$\min_{\bar{w}, \xi_1, \ldots, \xi_m} \ \frac{1}{2}\bar{w}^{T}\bar{w} + C\sum_{i=1}^{m}\xi_i$

subject to: $\forall i,\ \xi_i \geq 0$
$\forall r \neq y^i,\ \bar{w}_{y^i} \cdot \bar{x}^i_{y^i} - \bar{w}_r \cdot \bar{x}^i_r \geq 1 - \xi_i$    (4.4)
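As a concrete illustration of the decision rule in (4.3), the following minimal Python/NumPy sketch scores each class using only its own sub-vector of weights and attributes and returns the highest-scoring class. The function and variable names (predict_cs_svm, w_parts, x_parts) and the toy numbers are illustrative assumptions, not code or data from this thesis.

```python
import numpy as np

def predict_cs_svm(w_parts, x_parts):
    """Return the class index r that maximizes w_r . x_r (Equation 4.3).

    w_parts, x_parts: lists of K NumPy arrays.  w_parts[r] holds the weights
    for the attributes declared relevant to class r, and x_parts[r] holds the
    corresponding class-specific attribute values (e.g., counts for context
    patterns filled with preposition r).
    """
    scores = [np.dot(w_r, x_r) for w_r, x_r in zip(w_parts, x_parts)]
    return int(np.argmax(scores))

# Toy example: K = 3 classes with 2 class-specific attributes each,
# so only N = 6 weights are stored (a K-SVM would need K * N = 18).
w_parts = [np.array([0.5, 1.0]), np.array([2.0, -0.3]), np.array([0.1, 0.1])]
x_parts = [np.array([3.0, 1.0]), np.array([0.0, 2.0]), np.array([5.0, 5.0])]
print(predict_cs_svm(w_parts, x_parts))  # prints 0
```

Because each class is scored only with its own attributes, the model stores N weights rather than KN, which is the source of the savings discussed next.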
There are several advantages to this formulation. Foremost, rather than having KN weights, it can have only N. For linear classifiers, the number of examples needed to reach optimum performance is at most linear in the number of weights [Vapnik, 1998; Ng and Jordan, 2002]. In fact, both the total number of features and the number of active features per example decrease by a factor of K. Thus this reduction saves far more memory than what could be obtained by an equal reduction in dimensionality via pruning infrequent attributes. Also, note that unlike the K-SVM (Section 4.2.1), in the binary case the CS-SVM is completely equivalent (and thus equally efficient) to a standard SVM.

We will not always know a priori the class associated with each attribute. Also, some attributes may be predictive of multiple classes. In such cases, we can include ambiguous attributes in every sub-vector (needing N + D(K-1) total weights if D attributes are duplicated). In the degenerate case where every attribute is duplicated, the CS-SVM is equivalent to the K-SVM; both have KN weights.

Optimization as a Binary SVM

We could solve the optimization problem in (4.4) directly using a quadratic programming solver. However, through an equivalent transformation into a binary SVM, we can take advantage of efficient, custom SVM optimization algorithms.

We follow [Har-Peled et al., 2003] in transforming a multi-class example into a set of binary examples, each specifying a constraint from (4.4). We extend the attribute sub-vector corresponding to each class to be N-dimensional. We do this by substituting zero-vectors for all the other sub-vectors in the partition. The attribute vector for the rth class is then $\bar{z}_r = (\bar{0}, \ldots, \bar{0}, \bar{x}_r, \bar{0}, \ldots, \bar{0})$. This is known as Kesler's Construction and has a long history in classification [Duda and Hart, 1973; Crammer and Singer, 2003]. We then create binary rank constraints for a ranking SVM [Joachims, 2002] (ranking SVMs reduce to standard binary SVMs). We create K instances for each multi-class example $(\bar{x}^i, y^i)$, with the transformed vector of the true class, $\bar{z}_{y^i}$, assigned a higher rank than all the other, equally-ranked classes, $\bar{z}_{\{r \neq y^i\}}$. Training a ranking SVM using these constraints gives the same weights as solving (4.4), but allows us to use efficient, custom SVM software.[2] Note that the K-SVM can also be trained this way, by including every attribute in every sub-vector, as described earlier.

Web-Scale N-gram CS-SVM

Returning to our preposition selection example, an obvious attribute partition for the CS-SVM is to include as attributes for predicting preposition r only those counts for patterns filled with preposition r. Thus $\bar{x}_{in}$ will only include counts for context patterns filled with in, and $\bar{x}_{before}$ will only include counts for context patterns filled with before. With 34 sub-vectors and 14 attributes in each, there are only $14 \times 34 = 476$ total weights. In contrast, the K-SVM had 16184 weights to learn.

It is instructive to compare the CS-SVM in (4.3) to the unsupervised SUMLM approach in Chapter 3. That approach can be written as:

$H(\bar{x}) = \operatorname{argmax}_{r=1}^{K} \{\bar{1} \cdot \bar{x}_r\}$    (4.5)

[2] One subtlety is whether to use a single slack, $\xi_i$, for all K-1 constraints per example $i$ [Crammer and Singer, 2001], or a different slack for each constraint [Joachims, 2002]. Using the former may be better as it results in a tighter bound on empirical risk [Tsochantaridis et al., 2005].
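To make the ranking-SVM transformation above concrete, here is a minimal sketch of Kesler's Construction and the K-1 rank constraints produced for a single multi-class example. It assumes dense NumPy vectors; the function names and toy data are illustrative assumptions, not code from this thesis.

```python
import numpy as np

def kesler_vectors(x_parts):
    """Embed each class-specific sub-vector x_r into an N-dimensional vector
    z_r = (0, ..., 0, x_r, 0, ..., 0), where N is the total attribute count."""
    sizes = [len(x_r) for x_r in x_parts]
    offsets = np.cumsum([0] + sizes[:-1])
    n_total = sum(sizes)
    z = []
    for x_r, off in zip(x_parts, offsets):
        z_r = np.zeros(n_total)
        z_r[off:off + len(x_r)] = x_r
        z.append(z_r)
    return z

def rank_constraints(x_parts, y):
    """For one multi-class example (x, y), return the K-1 difference vectors
    z_y - z_r (r != y); a ranking SVM requires w . (z_y - z_r) >= 1 - slack
    for each of them, mirroring the constraints in (4.4)."""
    z = kesler_vectors(x_parts)
    return [z[y] - z[r] for r in range(len(z)) if r != y]

# Toy example: K = 3 classes with 2 class-specific attributes each (N = 6).
x_parts = [np.array([3.0, 1.0]), np.array([0.0, 2.0]), np.array([5.0, 5.0])]
for diff in rank_constraints(x_parts, y=1):
    print(diff)
```

Note also that fixing every weight in (4.3) to 1 recovers the unsupervised SUMLM rule in (4.5), since $\bar{1} \cdot \bar{x}_r$ simply sums the class-specific counts.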