Results

On this task, the majority-class baseline is much higher, 66.9%, and so is the accuracy of the top unsupervised system: 94.8%. Since four of the five sets are binary classifications, where K-SVM and CS-SVM are equivalent, we only give the accuracy of the CS-SVM (it does perform better than K-SVM on the one 3-way set). VAR-SVM again exceeds the unsupervised accuracy for all training sizes, and generally performs as well as the augmented CS-SVM+ using an order of magnitude less training data (Table 4.2). Differences for ≤1K training examples are significant.

4.4.3 Non-Referential Pronoun Detection

We use the same data and fillers as Chapter 3, and preprocess the N-gram corpus in the same way.

Recall that the unsupervised system picks non-referential if the difference between the summed count of it fillers and the summed count of they fillers is above a threshold (note this no longer fits Equation (4.5), with consequences discussed below). We thus separately minimize the variance of the it pattern weights and the they pattern weights. We use 1K training, 533 development, and 534 test examples.

Results

The most common class is referential, occurring in 59.4% of test examples. The unsupervised system again does much better, at 79.8%.

Annotated training examples are much harder to obtain for this task, and we experiment with a smaller range of training sizes (Table 4.3). The performance of VAR-SVM exceeds the performance of K-SVM across all training sizes (bold accuracies are significantly better than either CS-SVM for ≤100 examples).

                    Training Examples
System              10      100     1K
CS-SVM              59.0    71.0    84.3
CS-SVM+             59.4    74.9    84.5
VAR-SVM             70.2    76.2    84.5
VAR-SVM+FreeB       64.2    80.3    84.5

Table 4.3: Accuracy (%) of non-referential detection SVMs. Unsupervised (SUMLM) accuracy is 79.8%.
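The unsupervised decision rule described above, and the grouped variance penalty it motivates, can be sketched as follows. This is a minimal illustration: the counts, threshold, and index sets are hypothetical values, not the actual statistics from Chapter 3.

```python
import numpy as np

# Sketch of the unsupervised rule: classify the pronoun as non-referential
# when the summed count of "it" fillers exceeds the summed count of "they"
# fillers by more than a threshold. All numbers here are hypothetical.
def decide_nonreferential(it_counts, they_counts, threshold=100.0):
    return sum(it_counts) - sum(they_counts) > threshold

# Because the rule compares two groups of pattern counts, the variance
# regularizer is applied to each group separately: penalize the variance of
# the it-pattern weights plus the variance of the they-pattern weights,
# rather than the variance over all weights at once.
def grouped_variance_penalty(w, it_idx, they_idx):
    w = np.asarray(w, dtype=float)
    return np.var(w[it_idx]) + np.var(w[they_idx])
```

With uniform weights within each group the penalty is zero, matching the intuition that the regularizer pulls the classifier toward the count-summing unsupervised system.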
However, the gains were not as large as we had hoped, and accuracy remains worse than the unsupervised system when not using all the training data. When using all the data, a fairly large C-parameter performs best on development data, so regularization plays less of a role.

After development experiments, we speculated that the poor performance relative to the unsupervised approach was related to class bias. In the other tasks, the unsupervised system chooses the class with the highest summed score. Here, the difference between the it and they counts is compared to a threshold. Since the bias feature is regularized toward zero, then, unlike in the other tasks, using a low C-parameter does not reproduce the unsupervised system, so performance can begin below the unsupervised level.
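To make the bias issue concrete, the following toy sketch (hypothetical numbers and features, not the actual task setup or thesis code) compares an L2-penalized SVM objective that includes the bias in the penalty with one that exempts it. Encoding the unsupervised threshold requires a large bias, which the standard penalty makes costly under heavy regularization (low C):

```python
import numpy as np

def svm_objective(w, b, C, X, y, penalize_bias=True):
    """Hinge loss plus L2 penalty; the bias can optionally be exempted
    from the penalty (hypothetical toy setup)."""
    margins = y * (X @ w + b)
    hinge = np.maximum(0.0, 1.0 - margins).sum()
    penalty = w @ w + (b * b if penalize_bias else 0.0)
    return 0.5 * penalty + C * hinge

# Two toy examples with features (summed it-count, summed they-count);
# the classifier w = (1, -1), b = -100 encodes "difference > 100".
X = np.array([[250.0, 10.0],   # large difference: non-referential (+1)
              [ 50.0, 10.0]])  # small difference: referential (-1)
y = np.array([1.0, -1.0])
w = np.array([1.0, -1.0])

with_bias_penalty = svm_objective(w, -100.0, C=0.1, X=X, y=y, penalize_bias=True)
without_bias_penalty = svm_objective(w, -100.0, C=0.1, X=X, y=y, penalize_bias=False)
```

Both classifiers separate the toy data with zero hinge loss, but with the bias penalized the objective charges 0.5·b² = 5000 just for encoding the threshold, so a heavily regularized learner prefers a bias near zero and loses the threshold entirely; exempting the bias from the penalty (cf. VAR-SVM+FreeB) removes that cost.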
Since we wanted the system to learn this threshold even when highly regularized, we removed the regularization penalty from the bias weight, letting the optimization freely set the weight to minimize training error. With more freedom, the new classifier (VAR-SVM+FreeB) performs worse with 10 examples, but exceeds the unsupervised approach with 100 training points. Although this was somewhat successful, developing better strategies for the bias remains useful future work.

4.5 Related Work

There is a large body of work on regularization in machine learning, including work that uses positive semi-definite matrices in the SVM quadratic program. The graph Laplacian has been used to encourage geometrically-similar feature vectors to be classified similarly [Belkin et al., 2006]. An appealing property of these approaches is that they incorporate information from unlabeled examples. [Wang et al., 2006] use Laplacian regularization for the task of dependency parsing. They regularize such that features for distributionally-similar words have similar weights. Rather than penalize pairwise differences proportional to a similarity function as they do, we simply penalize weight variance.

In the field of computer vision, [Tefas et al., 2001] (binary) and [Kotsia et al., 2009] (multi-class) also regularize weights with respect to a covariance matrix. They use labeled data to find the sum of the sample covariance matrices from each class, similar to linear discriminant analysis. We propose the idea in general, and instantiate it with a different C matrix: a variance regularizer over ¯w. Most importantly, our instantiated covariance matrix does not require labeled data to generate.

In a Bayesian setting, [Raina et al., 2006] model feature correlations in a logistic regression classifier. They propose a method to construct a covariance matrix for a multivariate Gaussian prior on the classifier's weights.
Labeled data for other, related tasks is used to infer potentially correlated features on the target task. As in our results, they found that the gains from modeling dependencies diminish as more training data becomes available.

We also mention two related online learning approaches. Similar to our goal of regularizing toward a good unsupervised system, [Crammer et al., 2006] regularize ¯w toward a (different) target vector at each update, rather than strictly minimizing ||¯w||². The target vector is the vector learned from the cumulative effect of previous updates. [Dredze et al., 2008] maintain the variance of each weight and use this to guide the online updates. However, covariance between weights is not considered.

We believe new SVM regularizations in general, and variance regularization in particular, will increasingly be used in combination with related NLP strategies that learn better when labeled data is scarce. These may include: using more-general features, e.g. ones generated from raw text [Miller et al., 2004; Koo et al., 2008], leveraging out-of-domain examples to improve in-domain classification [Blitzer et al., 2007; Daumé III, 2007], active learning [Cohn et al., 1994; Tong and Koller, 2002], and approaches that treat unlabeled data as labeled, such as bootstrapping [Yarowsky, 1995], co-training [Blum and Mitchell, 1998], and self-training [McClosky et al., 2006a].

4.6 Future Work

The primary direction of future research will be to apply the VAR-SVM to new problems and tasks. There are many situations where a system designer has an intuition about the