As described in Chapter 2, discriminative techniques are widely used in NLP. They have been applied to the related tasks of word prediction and language modeling. Even-Zohar and Roth [2000] use a classifier to predict the most likely word to fill a position in a sentence (in their experiments: a verb) from a set of candidates (sets of verbs), by inspecting the context of the target token (e.g., the presence or absence of a particular nearby word in the sentence). This approach can therefore learn which specific arguments occur with a particular predicate. In comparison, our features are second-order: we learn what kinds of arguments occur with a predicate by encoding features of the arguments. The work of van den Bosch [2006] extends the work of Even-Zohar and Roth [2000] to all-word prediction. Recent distributed and latent-variable models also represent words with feature vectors [Bengio et al., 2003; Blitzer et al., 2005]. Many of these approaches learn both the feature weights and the feature representation. Vectors must be kept low-dimensional for tractability; learning and inference at larger scales is impractical. By partitioning our examples by predicate, we can efficiently use high-dimensional, sparse vectors.

6.3 Methodology

6.3.1 Creating Examples

To learn a discriminative model of selectional preference, we create positive and negative training examples automatically from raw text. To create the positives, we automatically parse a large corpus, and then extract the predicate-argument pairs that have a statistical association in this data. We measure this association using pointwise Mutual Information (MI) [Church and Hanks, 1990].
The MI between a verb predicate, v, and its object argument, n, is:

    MI(v,n) = log [ Pr(v,n) / (Pr(v) Pr(n)) ] = log [ Pr(n|v) / Pr(n) ]    (6.2)

If MI > 0, the probability that v and n occur together is greater than if they were independently distributed.

We create sets of positive and negative examples separately for each predicate, v. First, we extract all pairs where MI(v,n) > τ as positives. For each positive, we create pseudo-negative examples, (v,n′), by pairing v with a new argument, n′, that either has MI below the threshold or did not occur with v in the corpus. We require each negative n′ to have a similar frequency to its corresponding n. This prevents our learning algorithm from focusing on any accidental frequency-based bias. We mix in K negatives for each positive, sampling without replacement to create all the negatives for a particular predicate. For each v, 1/(K+1) of its examples will be positive. The threshold τ represents a trade-off between capturing a large number of positive pairs and ensuring these pairs have good association. Similarly, K is a trade-off between the number of examples and the computational efficiency. Ultimately, these parameters should be optimized for task performance.

Of course, some negatives will actually be plausible arguments that were unobserved due to sparseness. Fortunately, discriminative methods like soft-margin SVMs can learn in the face of label error by allowing slack, subject to a tunable regularization penalty (Chapter 2, Section 2.3.4).

If MI is a sparse and imperfect model of SP, what can DSP gain by training on MI's scores? We can regard DSP as learning a view of SP that is orthogonal to MI, in a co-training sense [Blum and Mitchell, 1998]. MI labels the data based solely on co-occurrence;
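The example-creation procedure above can be sketched in a few lines (a minimal illustration, not the thesis code: pair counts are assumed to come from an already-parsed corpus, and the helper names and the frequency-matching tolerance `freq_tol` are our own):

```python
import math
import random
from collections import Counter, defaultdict

def make_examples(pairs, tau=1.0, K=2, freq_tol=0.5, seed=0):
    """Create positive and pseudo-negative (v, n) training examples.

    pairs: list of observed (verb, noun) predicate-argument tokens.
    tau: MI threshold for positives; K: negatives mixed in per positive.
    """
    rng = random.Random(seed)
    pair_c = Counter(pairs)
    v_c = Counter(v for v, _ in pairs)
    n_c = Counter(n for _, n in pairs)
    total = len(pairs)

    def mi(v, n):
        # MI(v,n) = log Pr(v,n) / (Pr(v) Pr(n)); -inf for unseen pairs
        if pair_c[(v, n)] == 0:
            return float("-inf")
        return math.log(pair_c[(v, n)] * total / (v_c[v] * n_c[n]))

    examples = defaultdict(list)  # verb -> [(noun, +1/-1)]
    for v in v_c:
        positives = [n for n in n_c if mi(v, n) > tau]
        # candidate negatives: below-threshold or unseen with v
        neg_pool = set(n for n in n_c if mi(v, n) <= tau)
        for n in positives:
            examples[v].append((n, +1))
            # frequency-matched negatives: corpus count similar to n's
            similar = [n2 for n2 in neg_pool
                       if abs(math.log(n_c[n2] / n_c[n])) <= freq_tol]
            for n2 in rng.sample(similar, min(K, len(similar))):
                examples[v].append((n2, -1))
                neg_pool.discard(n2)  # sample without replacement
    return examples
```

The frequency match (`freq_tol` on the log-count ratio) stands in for the thesis's requirement that each n′ have "a similar frequency to its corresponding n", so the classifier cannot separate the classes on corpus frequency alone.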
DSP uses these labels to identify other regularities: ones that extend beyond co-occurring words. For example, many instances of n where MI(eat,n) > τ also have MI(buy,n) > τ and MI(cook,n) > τ. Also, compared to other nouns, a disproportionate number of eat-nouns are lower-case, single-token words, and they rarely contain digits, hyphens, or begin with a human first name like Bob. DSP encodes these interdependent properties as features in a linear classifier. This classifier can score any noun as a plausible argument of eat if indicative features are present; MI can only assign high plausibility to observed (eat,n) pairs. Similarity-smoothed models can make use of the regularities across similar verbs, but not the finer-grained string- and token-based features.

Our training examples are similar to the data created for pseudodisambiguation, the usual evaluation task for SP models [Erk, 2007; Keller and Lapata, 2003; Rooth et al., 1999]. This data consists of triples (v,n,n′) where (v,n) is a predicate-argument pair observed in the corpus and (v,n′) has not been observed. The models score correctly if they rank observed (thus plausible) arguments above corresponding unobserved (thus implausible) ones. We refer to this as Pairwise Disambiguation. Unlike this task, we classify each predicate-argument pair independently as plausible/implausible. We also use MI rather than frequency to define the positive pairs, ensuring that the positive pairs truly have a statistical association, and are not simply the result of parser error or noise.¹

6.3.2 Partitioning for Efficient Training

After creating our positive and negative training pairs, we must select a feature representation for our examples. Let Φ be a mapping from a predicate-argument pair (v,n) to a feature vector, Φ : (v,n) → ⟨φ_1, ..., φ_k⟩.
Predictions are made using a linear classifier, h(v,n) = w̄ · Φ(v,n), where w̄ is our learned weight vector.

We can make training significantly more efficient by using a special form of attribute-value features. Let every feature φ_i be of the form φ_i(v,n) = ⟨v = v̂ ∧ f(n)⟩. That is, every feature is an intersection of the occurrence of a particular predicate, v̂, and some feature of the argument, f(n). For example, a feature for a verb-object pair might be, "the verb is eat and the object is lower-case." In this representation, features for one predicate will be completely independent from those for every other predicate. Thus rather than a single training procedure, we can actually partition the examples by predicate, and train a classifier for each predicate independently. The prediction becomes h_v(n) = w̄_v · Φ_v(n), where w̄_v are the learned weights corresponding to predicate v and all features Φ_v(n) = f(n) depend on the argument only.²

Some predicate partitions may have insufficient examples for training. Also, a predicate may occur in test data that was unseen during training. To handle these cases, we cluster low-frequency predicates. For assigning SP to verb-object pairs, we cluster all verbs that have fewer than 250 positive examples, using clusters generated by the CBC algorithm [Pantel and Lin, 2002]. For example, the low-frequency verbs incarcerate, parole, and

¹ For a fixed verb, MI is proportional to Keller and Lapata [2003]'s conditional probability scores for pseudodisambiguation of (v,n,n′) triples: Pr(v|n) = Pr(v,n)/Pr(n), which was shown to be a better measure of association than co-occurrence frequency f(v,n). Normalizing by Pr(v) (yielding MI) allows us to use a constant threshold across all verbs. MI was also recently used for inference-rule SPs by Pantel et al. [2007].
² h_v(n) should not be confused with the multi-class classifiers presented in previous chapters. There, the r in W̄_r · x̄ indexed a class and W̄_r · x̄ gave the score for each class. All the class scores needed to be computed for each example at test time. Here, for a given v, there is only one linear combination to compute: w̄_v · f(n). A single binary plausible/implausible decision is made on the basis of this verb-specific decision function.
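The partitioned scheme can be illustrated with a short sketch (ours, not the thesis implementation): each predicate gets its own sparse weight vector over argument features f(n). For brevity the sketch trains each partition with a simple perceptron, where the thesis uses a soft-margin SVM; the particular string features shown are examples of the kind discussed above (case, digits, hyphens), not an exhaustive list.

```python
from collections import defaultdict

def argument_features(n):
    """Sparse string/token-level features of the argument, f(n)."""
    feats = {"bias": 1.0}
    if n.islower():
        feats["all_lower"] = 1.0
    if any(ch.isdigit() for ch in n):
        feats["has_digit"] = 1.0
    if "-" in n:
        feats["has_hyphen"] = 1.0
    feats["len=%d" % min(len(n), 10)] = 1.0
    return feats

def train_partitioned(examples, epochs=10):
    """examples: verb -> [(noun, label)], label in {+1, -1}.
    Trains one sparse linear classifier per predicate partition."""
    weights = {}
    for v, data in examples.items():
        w = defaultdict(float)  # sparse w_v, one per predicate
        for _ in range(epochs):
            for n, y in data:
                f = argument_features(n)
                score = sum(w[k] * x for k, x in f.items())
                if y * score <= 0:  # mistake-driven perceptron update
                    for k, x in f.items():
                        w[k] += y * x
        weights[v] = dict(w)
    return weights

def plausible(weights, v, n):
    """h_v(n) = w_v . f(n) > 0  ->  n is a plausible argument of v."""
    w = weights.get(v, {})
    f = argument_features(n)
    return sum(w.get(k, 0.0) * x for k, x in f.items()) > 0
```

Because every feature fires for exactly one predicate, training the partitions separately is exactly equivalent to training the single joint classifier over φ_i(v,n) = ⟨v = v̂ ∧ f(n)⟩, while each sub-problem stays small enough for high-dimensional sparse vectors.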