Large-Scale Semi-Supervised Learning for Natural Language ...
court-martial are all mapped to the same partition, while frequent verbs like arrest and execute each have their own partition. About 5.5% of examples are clustered, corresponding to 30% of the 7367 total verbs. 40% of verbs (but only 0.6% of examples) were not in any CBC cluster; these were mapped to a single backoff partition.

The parameters for each partition, $\bar{w}_v$, can be trained with any supervised learning technique. We use an SVM (Section 6.4.1) because it is effective in similar high-dimensional, sparse-vector settings (Chapter 2, Section 2.3.4), and has an efficient implementation [Joachims, 1999a]. In an SVM, the sign of $h_v(n)$ gives the classification. We can also use the scalar $h_v(n)$ as our DSP score (i.e., the positive distance of a point from the separating SVM hyperplane).

6.3.3 Features

This section details our argument features, $f(n)$, for assigning verb-object selectional preference. For a verb predicate (or partition) $v$ and object argument $n$, the form of our classifier is:

$$h_v(n) = \bar{w}_v \cdot f(n) = \sum_i w^v_i f_i(n)$$

Verb co-occurrence

We provide features for the empirical probability of the noun occurring as the object argument of other verbs, $\Pr(n|v')$. If we were to use only these features (indexing the feature weights by each verb $v'$), the form of our classifier would be:

$$h_v(n) = \sum_{v'} w^v_{v'} \Pr(n|v') \qquad (6.3)$$

Note the similarity between Equation (6.3) and Equation (6.1). Now the feature weights, $w^v_{v'}$, take the role of the similarity function, $\mathrm{Sim}(v',v)$. Unlike Equation (6.1), however, these weights are not set by an external similarity algorithm, but are optimized to discriminate the positive and negative training examples. We need not restrict ourselves to a short list of similar verbs; we include $\Pr_{obj}(n|v')$ features for every verb that occurs more than 10 times in our corpus. $w^v_{v'}$ may be positive or negative, depending on the relation between $v'$ and $v$.
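The linear classifier above is just a sparse dot product between a learned weight vector and the noun's feature vector. The following sketch illustrates the computation of $h_v(n)$ from Equation (6.3); the feature names and weight values are hypothetical, not taken from the trained models.

```python
# Sketch of the linear DSP score h_v(n) = w_v . f(n) (Equation 6.3).
# Weights and feature values below are illustrative only.

def dsp_score(weights, features):
    """Sparse dot product of a weight vector and a feature vector.
    Both are dicts mapping feature names (e.g. "obj:devour") to floats."""
    return sum(w * features.get(name, 0.0) for name, w in weights.items())

# Hypothetical weights learned for the verb partition "eat":
w_eat = {"obj:devour": 2.1, "obj:cook": 1.4, "subj:taste": 0.9, "obj:arrest": -1.7}

# Hypothetical co-occurrence features Pr(n|v') for the candidate object "sandwich":
f_sandwich = {"obj:devour": 0.02, "obj:cook": 0.05, "subj:taste": 0.01}

score = dsp_score(w_eat, f_sandwich)
plausible = score > 0  # the sign of h_v(n) gives the classification
```

Because the weights index every co-occurring verb $v'$, a single positive or negative weight plays the role that a similarity score $\mathrm{Sim}(v',v)$ plays in Equation (6.1).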
We also include features for the probability of the noun occurring as the subject of other verbs, $\Pr_{subj}(n|v')$. For example, nouns that can be the object of eat will also occur as the subject of taste and contain. Other contexts, such as adjectival and nominal predicates, could also aid the prediction, but have not been investigated.

The advantage of tuning similarity to the application of interest has been shown previously by Weeds and Weir [2005]. They optimize a few meta-parameters separately for the tasks of thesaurus generation and pseudodisambiguation. Our approach discriminatively sets millions of individual similarity values. Like Weeds and Weir [2005], our similarity values are asymmetric.

String-based

We include several simple character-based features of the noun string: the number of tokens, the capitalization, and whether the string contains digits, hyphens, an apostrophe, or other punctuation. We also include a feature for the first and last token, and fire indicator features if any token in the noun occurs on in-house lists of given names, family names, cities, provinces, countries, corporations, languages, etc. We also fire a feature if a token is a corporate designation (like inc. or ltd.) or a human one (like Mr. or Sheik).
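A minimal sketch of the string-based feature extraction described above. The exact feature inventory and the in-house gazetteer lists are not public, so the designator set and feature names here are assumptions for illustration.

```python
# Illustrative string-based features for a noun (Section 6.3.3).
# The designator list and feature names are assumptions, not the thesis's own.

CORP_DESIGNATORS = {"inc", "ltd", "corp"}  # stand-in for the in-house lists

def string_features(noun):
    tokens = noun.split()
    feats = {
        "num_tokens": len(tokens),
        "capitalized": noun[:1].isupper(),
        "has_digit": any(ch.isdigit() for ch in noun),
        "has_hyphen": "-" in noun,
        "has_apostrophe": "'" in noun,
        "first_tok=" + tokens[0].lower(): 1,
        "last_tok=" + tokens[-1].lower(): 1,
    }
    if any(t.lower().rstrip(".") in CORP_DESIGNATORS for t in tokens):
        feats["corp_designator"] = 1
    return feats

feats = string_features("Farley Industries Inc.")
```

Indicator features of this kind let the classifier learn, e.g., that capitalized multi-token nouns ending in a corporate designator are plausible objects of acquire but not of eat.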
Semantic classes

Motivated by previous SP models that make use of semantic classes, we generate word clusters using CBC [Pantel and Lin, 2002] on a 10 GB corpus, giving 3620 clusters. If a noun belongs in a cluster, a corresponding feature fires. If a noun is in none of the clusters, a no-class feature fires.

As an example, CBC cluster 1891 contains: sidewalk, driveway, roadway, footpath, bridge, highway, road, runway, street, alley, path, Interstate, . . .

In our training data, we have examples like widen highway, widen road and widen motorway. If we see that we can widen a highway, we learn that we can also widen a sidewalk, bridge, runway, etc.

We also made use of the person-name/instance pairs automatically extracted by Fleischman et al. [2003].³ This data provides counts for pairs such as “Edwin Moses, hurdler” and “William Farley, industrialist.” We have features for all concepts and therefore learn their association with each verb.

6.4 Experiments and Results

6.4.1 Set-up

We parsed the 3 GB AQUAINT corpus [Vorhees, 2002] using Minipar [Lin, 1998b], and collected verb-object and verb-subject frequencies, building an empirical MI model from this data. Verbs and nouns were converted to their (possibly multi-token) root, and string case was preserved. Passive subjects (the car was bought) were converted to objects (bought car). We set the MI-threshold, τ, to be 0, and the negative-to-positive ratio, K, to be 2.

Numerous previous pseudodisambiguation evaluations only include arguments that occur between 30 and 3000 times [Erk, 2007; Keller and Lapata, 2003; Rooth et al., 1999]. Presumably the lower bound is to help ensure the negative argument is unobserved because it is unsuitable, not because of data sparseness. We wish to use our model on arguments of any frequency, including those that never occurred in the training corpus (and therefore have empty co-occurrence features (Section 6.3.3)).
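The class-based features from Section 6.3.3 above reduce to simple cluster-membership indicators. A sketch, using a fragment of CBC cluster 1891 from the text (the dict-of-sets representation is an assumption):

```python
# Sketch of semantic-class features: a noun fires one feature per CBC cluster
# it belongs to, or a no-class feature if it is in none (Section 6.3.3).
# Cluster 1891's members are quoted from the text; the data structure is assumed.

clusters = {
    1891: {"sidewalk", "driveway", "roadway", "footpath", "bridge",
           "highway", "road", "runway", "street", "alley", "path"},
}

def class_features(noun):
    feats = {}
    for cid, members in clusters.items():
        if noun in members:
            feats["cbc_cluster=%d" % cid] = 1
    if not feats:
        feats["no_class"] = 1
    return feats
```

With such features, a positive example like widen highway raises the weight on the cluster-1891 feature, so unseen members of that cluster (sidewalk, runway) inherit the preference.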
We proceed as follows: first, we exclude pairs whenever the noun occurs less than 3 times in our corpus, removing many misspellings and other noise. Next, we omit verb co-occurrence features for nouns that occur less than 10 times, and instead fire a low-count feature. When we move to a new corpus, previously-unseen nouns are treated like these low-count training nouns. Note there are no specific restrictions on the frequency of pairs.

This processing results in a set of 6.8 million pairs, divided into 2318 partitions (192 of which are verb clusters (Section 6.3.2)). For each partition, we take 95% of the examples for training, 2.5% for development and 2.5% for a final unseen test set. We provide full results for two models: DSP_cooc, which only uses the verb co-occurrence features, and DSP_all, which uses all the features mentioned in Section 6.3.3. Feature values are normalized within each feature type. We train our (linear kernel) discriminative models using SVMlight [Joachims, 1999a] on each partition, but set meta-parameters C (regularization) and j (cost of positive vs. negative misclassifications; maximum at j=2) on the macro-averaged score across all

³ Available at http://www.mit.edu/˜mbf/instances.txt.gz
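The per-partition 95%/2.5%/2.5% split described above can be sketched as follows; the thesis does not specify the splitting procedure, so the deterministic slicing here is an assumption.

```python
# Sketch of the per-partition train/dev/test split (95% / 2.5% / 2.5%).
# Deterministic slicing is an assumption; the original procedure is unspecified.

def split_partition(examples):
    n = len(examples)
    n_train = round(n * 0.95)
    n_dev = round(n * 0.025)
    train = examples[:n_train]
    dev = examples[n_train:n_train + n_dev]
    test = examples[n_train + n_dev:]
    return train, dev, test

train, dev, test = split_partition(list(range(1000)))
```

One model is then trained per partition, with the dev portions pooled across partitions to tune the SVM meta-parameters (C and j) on a macro-averaged score.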