Large-Scale Semi-Supervised Learning for Natural Language ...


Chapter 6

Discriminative Learning of Selectional Preference from Unlabeled Text

A version of this chapter has been published as [Bergsma et al., 2008a].

“Saying ‘I’m sorry’ is the same as saying ‘I apologize.’ Except at a funeral.”
- Demetri Martin

6.1 Introduction

Selectional preferences (SPs) tell us which arguments are plausible for a particular predicate. For example, a letter, a report or an e-mail are all plausible direct-object arguments for the verb predicate write. On the other hand, a zebra, intransigence or nihilism are unlikely arguments for this predicate. People tend to exude confidence but they don’t tend to exude pencils. Table 6.2 (Section 6.4.4) gives further examples. SPs can help resolve syntactic, word sense, and reference ambiguity [Clark and Weir, 2002], and so gathering them has received a lot of attention in the NLP community.

One way to determine SPs is from co-occurrences of predicates and arguments in text. Unfortunately, no matter how much text we use (even if, as in the previous chapters, we’re using Google N-grams, which come from all the text on the web), many acceptable pairs will be missing. Bikel [2004] found that only 1.49% of the bilexical dependencies considered by Collins’ parser during decoding were observed during training. In our parsed corpus (Section 6.4.1), for example, we find eat with nachos, burritos, and tacos, but not with the equally tasty quesadillas, chimichangas, or tostadas. Rather than solely relying on co-occurrence counts, we would like to use them to generalize to unseen pairs.

In particular, we would like to exploit a number of arbitrary and potentially overlapping properties of predicates and arguments when we assign SPs. We propose to do this by representing these properties as features in a linear classifier, and training the weights using discriminative learning. Positive examples are taken from observed predicate-argument pairs, while pseudo-negatives are constructed from unobserved combinations. We train a support vector machine (SVM) classifier to distinguish the positives from the negatives. We refer to our model’s scores as Discriminative Selectional Preference (DSP). By creating training vectors automatically, DSP enjoys all the advantages of supervised learning, but without the need for manual annotation of examples. Our proposed work here is therefore an example of a semi-supervised system that uses “Learning with Heuristically-Labeled Examples,” as described in Chapter 2, Section 2.5.4.

We evaluate DSP on the task of assigning verb-object selectional preference. We encode a noun’s textual distribution as feature information. The learned feature weights are linguistically interesting, yielding high-quality similar-word lists as latent information. With these features, DSP is also an example of a semi-supervised system that creates features from unlabeled data (Section 2.5.5). It thus encapsulates the two main thrusts of this dissertation.

Despite its representational power, DSP scales to real-world data sizes: examples are partitioned by predicate, and a separate SVM is trained for each partition. This allows us to efficiently learn with over 57 thousand features and 6.5 million examples. DSP outperforms recently proposed alternatives in a range of experiments, and better correlates with human plausibility judgments. It also shows strong gains over a Mutual Information-based co-occurrence model on two tasks: identifying objects of verbs in an unseen corpus and finding pronominal antecedents in coreference data.

6.2 Related Work

Most approaches to SPs generalize from observed predicate-argument pairs to semantically similar ones by modeling the semantic class of the argument, following Resnik [1996]. For example, we might have a class Mexican Food and learn that the entire class is suitable for eating. Usually, the classes are from WordNet [Miller et al., 1990], although they can also be inferred from clustering [Rooth et al., 1999]. Brockmann and Lapata [2003] compared a number of WordNet-based approaches, including Resnik [1996], Li and Abe [1998], and Clark and Weir [2002], and found that more sophisticated class-based approaches do not always outperform frequency-based models.

Another line of research generalizes using similar words. Suppose we are calculating the probability of a particular noun, n, occurring as the object argument of a given verbal predicate, v. Let Pr(n|v) be the empirical maximum-likelihood estimate from observed text. Dagan et al. [1999] define the similarity-weighted probability, Pr_SIM, to be:

    Pr_SIM(n|v) = Σ_{v′ ∈ SIMS(v)} Sim(v′, v) Pr(n|v′)    (6.1)

where Sim(v′, v) returns a real-valued similarity between two verbs v′ and v (normalized over all pair similarities in the sum). In contrast, Erk [2007] generalizes by substituting similar arguments, while Wang et al. [2005] use the cross-product of similar pairs. One key issue is how to define the set of similar words, SIMS(w). Erk [2007] compared a number of techniques for creating similar-word sets and found that both the Jaccard coefficient and Lin [1998a]’s information-theoretic metric work best. Similarity-smoothed models are simple to compute, potentially adaptable to new domains, and require no manually-compiled resources such as WordNet.

Selectional preferences have also been a recent focus of researchers investigating the learning of paraphrases and inference rules [Pantel et al., 2007; Roberto et al., 2007]. Inferences such as “[X wins Y] ⇒ [X plays Y]” are only valid for certain arguments X and Y. We follow Pantel et al. [2007] in using automatically-extracted semantic classes to help characterize plausible arguments.
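Concretely, the similarity-weighted probability of Equation 6.1 can be sketched in a few lines. The similarity scores and conditional probabilities below are toy values chosen for illustration, not corpus estimates:

```python
# A minimal sketch of Pr_SIM (Eq. 6.1) [Dagan et al., 1999].
# All numbers here are illustrative, not real corpus statistics.

def pr_sim(n, v, sims, pr):
    """sims maps each v' in SIMS(v) to Sim(v', v); pr maps (n, v') to the
    empirical Pr(n | v'). Similarities are normalized over their sum, as
    in Eq. 6.1; unseen (n, v') pairs contribute probability 0."""
    total = sum(sims.values())
    return sum((s / total) * pr.get((n, vp), 0.0) for vp, s in sims.items())

# Toy example: "eat quesadillas" is unseen, but similar verbs are not.
sims_eat = {"devour": 0.6, "consume": 0.4}        # Sim(v', eat), illustrative
pr_emp = {("quesadillas", "devour"): 0.2,         # empirical Pr(n | v')
          ("quesadillas", "consume"): 0.1}

score = pr_sim("quesadillas", "eat", sims_eat, pr_emp)  # 0.6*0.2 + 0.4*0.1
```

The unseen pair thus inherits a nonzero score from the verb's similar-word set, which is exactly the smoothing behavior this line of work exploits.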

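The heuristically-labeled training setup of Section 6.1 — observed predicate-argument pairs as positives, unobserved combinations as pseudo-negatives — can be sketched roughly as follows. The counts and sampling scheme here are toy illustrations, not the system's actual extraction; the real system additionally partitions examples by predicate and trains a separate SVM per verb:

```python
# A sketch of heuristic labeling: positives from observed (verb, noun)
# pairs, pseudo-negatives sampled from unobserved combinations.
# Toy counts; a real run would use a large parsed corpus.
import random
from collections import Counter

observed = Counter({("eat", "nachos"): 7, ("eat", "tacos"): 4,
                    ("write", "letter"): 9, ("write", "report"): 5})
verbs = sorted({v for v, _ in observed})
nouns = sorted({n for _, n in observed})

positives = list(observed)
# Candidate pseudo-negatives: combinations never seen in the corpus.
unseen = [(v, n) for v in verbs for n in nouns if (v, n) not in observed]
rng = random.Random(0)
negatives = rng.sample(unseen, k=min(len(positives), len(unseen)))

# Labeled training examples, produced with no manual annotation.
examples = [(p, +1) for p in positives] + [(p, -1) for p in negatives]
```

Any off-the-shelf linear classifier can then be trained on feature vectors derived from these automatically labeled pairs.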

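For comparison, the Mutual Information-based co-occurrence baseline mentioned in Section 6.1 can be sketched as a pointwise MI score over raw counts. The counts are toy data and the exact baseline formulation may differ; note how any unseen pair scores negative infinity, which is precisely the sparsity problem DSP is designed to address:

```python
# A sketch of a PMI-style co-occurrence score: log Pr(v,n) / (Pr(v) Pr(n)).
# Toy counts; unseen pairs get -inf, illustrating the sparsity problem.
import math
from collections import Counter

pairs = ([("eat", "nachos")] * 7 + [("eat", "tacos")] * 4 +
         [("write", "letter")] * 9)
pair_c = Counter(pairs)
v_c = Counter(v for v, _ in pairs)
n_c = Counter(n for _, n in pairs)
N = len(pairs)

def mi(v, n):
    """Pointwise mutual information of a (verb, noun) pair."""
    if pair_c[(v, n)] == 0:
        return float("-inf")
    return math.log((pair_c[(v, n)] / N) / ((v_c[v] / N) * (n_c[n] / N)))
```

Frequently co-occurring pairs like ("eat", "nachos") score above zero, while a perfectly plausible but unobserved pair like ("eat", "letter") is rejected outright.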