Large-Scale Semi-Supervised Learning for Natural Language ...
Chapter 6

Discriminative Learning of Selectional Preference from Unlabeled Text

“Saying ‘I’m sorry’ is the same as saying ‘I apologize.’ Except at a funeral.”
- Demetri Martin

6.1 Introduction

Selectional preferences (SPs) tell us which arguments are plausible for a particular predicate. For example, a letter, a report or an e-mail are all plausible direct-object arguments for the verb predicate write. On the other hand, a zebra, intransigence or nihilism are unlikely arguments for this predicate. People tend to exude confidence, but they don’t tend to exude pencils. Table 6.2 (Section 6.4.4) gives further examples. SPs can help resolve syntactic, word sense, and reference ambiguity [Clark and Weir, 2002], and so gathering them has received a lot of attention in the NLP community.

One way to determine SPs is from co-occurrences of predicates and arguments in text. Unfortunately, no matter how much text we use (even if, as in the previous chapters, we’re using Google N-grams, which come from all the text on the web), many acceptable pairs will be missing. Bikel [2004] found that only 1.49% of the bilexical dependencies considered by Collins’ parser during decoding were observed during training. In our parsed corpus (Section 6.4.1), for example, we find eat with nachos, burritos, and tacos, but not with the equally tasty quesadillas, chimichangas, or tostadas. Rather than relying solely on co-occurrence counts, we would like to use them to generalize to unseen pairs.

In particular, we would like to exploit a number of arbitrary and potentially overlapping properties of predicates and arguments when we assign SPs. We propose to do this by representing these properties as features in a linear classifier, and training the weights using discriminative learning. Positive examples are taken from observed predicate-argument pairs, while pseudo-negatives are constructed from unobserved combinations.
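This example construction can be sketched in a few lines. The vocabulary below is a toy illustration, and a simple perceptron stands in for the chapter's actual SVM learner; as the chapter does, one classifier is trained per verb partition.

```python
import random

# Observed (verb, object) pairs act as positive examples (toy data).
observed = {("eat", "nachos"), ("eat", "tacos"), ("write", "letter"),
            ("write", "report"), ("exude", "confidence")}
verbs = sorted({v for v, _ in observed})
nouns = sorted({n for _, n in observed})

def make_examples(verb, seed=0):
    """Positives: nouns observed as objects of `verb`.
    Pseudo-negatives: nouns never seen with `verb`, sampled at random."""
    rng = random.Random(seed)
    pos = [n for v, n in observed if v == verb]
    unseen = [n for n in nouns if (verb, n) not in observed]
    neg = rng.sample(unseen, min(len(unseen), len(pos)))
    return [(n, +1) for n in pos] + [(n, -1) for n in neg]

def features(noun):
    # Toy feature map: in DSP the features describe the noun's textual
    # distribution; here we use only the noun's identity.
    return {f"noun={noun}": 1.0}

def train_perceptron(examples, epochs=10):
    """Stand-in linear learner (the chapter trains an SVM instead)."""
    w = {}
    for _ in range(epochs):
        for noun, y in examples:
            score = sum(w.get(f, 0.0) * x for f, x in features(noun).items())
            if y * score <= 0:  # misclassified: additive update
                for f, x in features(noun).items():
                    w[f] = w.get(f, 0.0) + y * x
    return w

# One model per verb partition.
models = {v: train_perceptron(make_examples(v)) for v in verbs}
```

A candidate pair is then scored by the dot product of its verb's weight vector with the noun's features, e.g. `sum(models["eat"].get(f, 0.0) * x for f, x in features("tacos").items())`.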
We train a support vector machine (SVM) classifier to distinguish the positives from the negatives. We refer to our model’s scores as Discriminative Selectional Preference (DSP). By creating training vectors automatically, DSP enjoys all the advantages of supervised learning, but without the need for manual annotation of examples. Our proposed work here is therefore an example of a semi-supervised system that uses “Learning with Heuristically-Labeled Examples,” as described in Chapter 2, Section 2.5.4.

We evaluate DSP on the task of assigning verb-object selectional preference. We encode a noun’s textual distribution as feature information. The learned feature weights are linguistically interesting, yielding high-quality similar-word lists as latent information. With these features, DSP is also an example of a semi-supervised system that creates features from unlabeled data (Section 2.5.5). It thus encapsulates the two main thrusts of this dissertation.

Despite its representational power, DSP scales to real-world data sizes: examples are partitioned by predicate, and a separate SVM is trained for each partition. This allows us to efficiently learn with over 57 thousand features and 6.5 million examples. DSP outperforms recently proposed alternatives in a range of experiments, and better correlates with human plausibility judgments. It also shows strong gains over a Mutual Information-based co-occurrence model on two tasks: identifying objects of verbs in an unseen corpus and finding pronominal antecedents in coreference data.

A version of this chapter has been published as [Bergsma et al., 2008a].

6.2 Related Work

Most approaches to SPs generalize from observed predicate-argument pairs to semantically similar ones by modeling the semantic class of the argument, following Resnik [1996]. For example, we might have a class Mexican Food and learn that the entire class is suitable for eating. Usually, the classes are from WordNet [Miller et al., 1990], although they can also be inferred from clustering [Rooth et al., 1999]. Brockmann and Lapata [2003] compare a number of WordNet-based approaches, including Resnik [1996], Li and Abe [1998], and Clark and Weir [2002], and find that more sophisticated class-based approaches do not always outperform frequency-based models.

Another line of research generalizes using similar words.
Suppose we are calculating the probability of a particular noun, n, occurring as the object argument of a given verbal predicate, v. Let Pr(n|v) be the empirical maximum-likelihood estimate from observed text. Dagan et al. [1999] define the similarity-weighted probability, Pr_SIM, to be:

Pr_SIM(n|v) = ∑_{v′ ∈ SIMS(v)} Sim(v′, v) Pr(n|v′)    (6.1)

where Sim(v′, v) returns a real-valued similarity between two verbs v′ and v (normalized over all pair similarities in the sum). In contrast, Erk [2007] generalizes by substituting similar arguments, while Wang et al. [2005] use the cross-product of similar pairs. One key issue is how to define the set of similar words, SIMS(w). Erk [2007] compared a number of techniques for creating similar-word sets and found that both the Jaccard coefficient and Lin [1998a]’s information-theoretic metric work best. Similarity-smoothed models are simple to compute, potentially adaptable to new domains, and require no manually-compiled resources such as WordNet.

Selectional preferences have also been a recent focus of researchers investigating the learning of paraphrases and inference rules [Pantel et al., 2007; Roberto et al., 2007]. Inferences such as “[X wins Y] ⇒ [X plays Y]” are only valid for certain arguments X and Y. We follow Pantel et al. [2007] in using automatically-extracted semantic classes to help characterize plausible arguments.
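Equation (6.1) can be sketched directly in code. The co-occurrence counts and the similar-verb set below are toy assumptions, standing in for corpus statistics and for a similarity metric such as Lin [1998a]'s:

```python
# Toy co-occurrence counts: counts[verb][noun].
counts = {
    "eat":    {"nachos": 4, "tacos": 2},
    "devour": {"tacos": 3, "burritos": 3},
    "write":  {"letter": 5, "report": 1},
}

def prob(noun, verb):
    """Empirical maximum-likelihood estimate Pr(n|v)."""
    total = sum(counts[verb].values())
    return counts[verb].get(noun, 0) / total

def prob_sim(noun, verb, sims):
    """Similarity-weighted estimate (Equation 6.1):
    Pr_SIM(n|v) = sum over v' in SIMS(v) of Sim(v', v) * Pr(n|v'),
    with the similarities normalized over the verbs in the sum."""
    z = sum(sims.values())
    return sum(sim / z * prob(noun, v2) for v2, sim in sims.items())

# Hypothetical similar-verb set SIMS(eat) with similarity scores.
sims_eat = {"devour": 0.6, "write": 0.1}

# "burritos" was never observed with "eat" (Pr = 0), but the
# similar verb "devour" lends it probability mass:
smoothed = prob_sim("burritos", "eat", sims_eat)  # 3/7 ≈ 0.43
```

This illustrates the appeal of similarity smoothing noted above: an unseen but plausible pair receives a nonzero score purely from the behavior of distributionally similar verbs.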