[Figure 6.2: Pronoun resolution precision-recall on MUC. Axes: interpolated precision vs. recall; curves: DSP>T, MI>T, DSP>0, MI>0.]

only once in the corpus. Even if we smooth MI by smoothing Pr(n|v) in Equation 6.2 using modified KN-smoothing [Chen and Goodman, 1998], the recall of MI>0 on SJM only increases from 44.1% to 44.9%, still far below DSP. Frequency-based models have fundamentally low coverage. As further evidence, if we build a model of MI on the SJM corpus and use it in our pseudodisambiguation experiment (Section 6.4.3), MI>0 gets a MacroAvg precision of 86% but a MacroAvg recall of only 12%.[9]

6.4.6 Pronoun Resolution

Finally, we evaluate DSP on a common application of selectional preferences: choosing the correct antecedent for pronouns in text [Dagan and Itai, 1990; Kehler et al., 2004]. We study the cases where a pronoun is the direct object of a verb predicate, v. A pronoun's antecedent must obey v's selectional preferences. If we have a better model of SP, we should be able to better select pronoun antecedents.[10]

We parsed the MUC-7 [1997] coreference corpus and extracted all pronouns in a direct-object relation. For each pronoun, p, modified by a verb, v, we extracted all preceding nouns within the current or previous sentence. Thirty-nine anaphoric pronouns had an antecedent in this window and are used in the evaluation. For each p, let N(p)+ be the set of preceding nouns coreferent with p, and let N(p)- be the remaining non-coreferent nouns. We take all (v, n+) where n+ ∈ N(p)+ as positive, and all other pairs (v, n-), n- ∈ N(p)-, as negative.

We compare MI and DSP on this set, classifying every (v, n) with MI>T (or DSP>T) as positive. By varying T, we get a precision-recall curve (Figure 6.2).
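The threshold-sweep evaluation can be sketched as follows. This is a minimal illustration, not the thesis code: the scores in `pairs` are invented, and `pr_curve` stands in for whatever scoring model (MI or DSP) produced them.

```python
def pr_curve(scored_pairs):
    """Sweep a threshold T over model scores, computing precision/recall.

    scored_pairs: list of (score, is_positive), where is_positive is True
    when the noun really is a coreferent antecedent of the pronoun.
    """
    total_pos = sum(1 for _, pos in scored_pairs if pos)
    # Sort by score, descending; lowering T admits pairs in exactly this order.
    ranked = sorted(scored_pairs, key=lambda x: -x[0])
    curve, tp = [], 0
    for i, (score, pos) in enumerate(ranked, start=1):
        if pos:
            tp += 1
        # Precision and recall when T is set just below this pair's score.
        curve.append((score, tp / i, tp / total_pos))
    return curve

# Hypothetical model scores for (verb, noun) candidate pairs.
pairs = [(2.1, True), (1.5, False), (0.9, True), (0.3, False), (-0.4, True)]
for t, p, r in pr_curve(pairs):
    print(f"T={t:+.1f}  precision={p:.2f}  recall={r:.2f}")
```

The right-most point of such a curve corresponds to accepting every scored pair (recall 1.0), which is the comparison point used below for MI>T.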
[9] Recall that even the Keller and Lapata [2003] system, built on the world's largest corpus, achieves only 34% recall (Table 6.1), with only 48% of positives and 27% of all pairs previously observed; note, on the other hand, that low-count N-grams have been filtered from the N-gram corpus, and therefore perhaps this effect is overstated.

[10] Note we are not trying to answer the question of whether selectional preferences are useful [Yang et al., 2005] or not [Kehler et al., 2004] for resolving pronouns when combined with features for recency, frequency, gender, syntactic role of the candidate, etc. We are only using this task as another evaluation for our models of selectional preference.

Precision is low
because, of course, there are many nouns that satisfy the predicate's SPs that are not coreferent. DSP>0 has both a higher recall and a higher precision than accepting every pair previously seen in text (the right-most point on MI>T). The DSP>T system achieves higher precision than MI>T for points where recall is greater than 60%. The recall of MI>0 is higher here than it is for general verb-objects (Section 6.4.5). On the subset of pairs with strong empirical association (MI>0), MI generally outperforms DSP at equivalent recall values.

We next compare MI and DSP as stand-alone pronoun resolution systems (Table 6.4). As a standard baseline, for each pronoun, we choose the most recent noun in text as the pronoun's antecedent, achieving 17.9% resolution accuracy. This baseline is low because many of the most-recent nouns are subjects of the pronoun's verb phrase, and therefore resolution would violate syntactic coreference constraints. If we choose the previous noun with the highest MI as antecedent, we get an accuracy of 28.2%, while choosing the previous noun with the highest DSP achieves 38.5%. DSP resolves 37% more pronouns correctly than MI.

System              Acc
Most-Recent Noun    17.9%
Maximum MI          28.2%
Maximum DSP         38.5%

Table 6.4: Pronoun resolution accuracy on nouns in current or previous sentence in MUC.

6.5 Conclusions and Future Work

We have proposed a simple, effective model of selectional preference based on discriminative training. Supervised techniques typically achieve better performance than unsupervised models, and we duplicate these gains with DSP. Here, however, these gains come at no additional labeling cost, as training examples are generated automatically from unlabeled text.

DSP allows an arbitrary combination of features, including verb co-occurrence features that yield high-quality similar-word lists as latent output.
These lists not only indicate which verbs are associated with a common set of nouns; they provide insight into a chain of narrative events in which a particular noun may participate (e.g., a particular noun may be bought, then cooked, then garnished, and then likely eaten). DSP therefore learns information similar to previous approaches that seek such narrative events directly and explicitly [Bean and Riloff, 2004; Chambers and Jurafsky, 2008]. However, note that we learn the association of events across a corpus rather than only in a particular segment of discourse, like a document or paragraph. One option for future work is to model the local and global distribution of nouns separately, allowing for finer-grained (and potentially sense-specific) plausibility predictions.

The features used in DSP only scratch the surface of possible feature mining; information from WordNet relations, Wikipedia categories, or parallel corpora could also provide valuable clues for selectional preference. Also, if any other system were to exceed DSP's performance, it could be included as one of DSP's features.

It would be interesting to expand our co-occurrence features, including co-occurrence counts across more grammatical relations and using counts from external, unparsed corpora