court-martial are all mapped to the same partition, while frequent verbs like arrest and execute each have their own partition. About 5.5% of examples are clustered, corresponding to 30% of the 7367 total verbs. 40% of verbs (but only 0.6% of examples) were not in any CBC cluster; these were mapped to a single backoff partition.

The parameters for each partition, $\bar{w}_v$, can be trained with any supervised learning technique. We use an SVM (Section 6.4.1) because it is effective in similar high-dimensional, sparse-vector settings (Chapter 2, Section 2.3.4) and has an efficient implementation [Joachims, 1999a]. In an SVM, the sign of $h_v(n)$ gives the classification. We can also use the scalar $h_v(n)$ as our DSP score (i.e., the positive distance of a point from the separating SVM hyperplane).

6.3.3 Features

This section details our argument features, $f(n)$, for assigning verb-object selectional preference. For a verb predicate (or partition) $v$ and object argument $n$, the form of our classifier is

$$h_v(n) = \bar{w}_v \cdot f(n) = \sum_i w^v_i f_i(n).$$

Verb co-occurrence

We provide features for the empirical probability of the noun occurring as the object argument of other verbs, $\Pr(n|v')$. If we were to use only these features (indexing the feature weights by each verb $v'$), the form of our classifier would be:

$$h_v(n) = \sum_{v'} w^v_{v'} \Pr(n|v') \qquad (6.3)$$

Note the similarity between Equation (6.3) and Equation (6.1). Now the feature weights, $w^v_{v'}$, take the role of the similarity function, $\mathrm{Sim}(v',v)$. Unlike Equation (6.1), however, these weights are not set by an external similarity algorithm, but are optimized to discriminate the positive and negative training examples. We need not restrict ourselves to a short list of similar verbs; we include $\Pr_{obj}(n|v')$ features for every verb that occurs more than 10 times in our corpus. $w^v_{v'}$ may be positive or negative, depending on the relation between $v'$ and $v$. We also include features for the probability of the noun occurring as the subject of other verbs, $\Pr_{subj}(n|v')$. For example, nouns that can be the object of eat will also occur as the subject of taste and contain. Other contexts, such as adjectival and nominal predicates, could also aid the prediction, but have not been investigated.

The advantage of tuning similarity to the application of interest has been shown previously by Weeds and Weir [2005]. They optimize a few meta-parameters separately for the tasks of thesaurus generation and pseudodisambiguation. Our approach discriminatively sets millions of individual similarity values. Like Weeds and Weir [2005], our similarity values are asymmetric.

String-based

We include several simple character-based features of the noun string: the number of tokens, the capitalization, and whether the string contains digits, hyphens, an apostrophe, or other punctuation. We also include a feature for the first and last token, and fire indicator features if any token in the noun occurs on in-house lists of given names, family names, cities, provinces, countries, corporations, languages, etc. We also fire a feature if a token is a corporate designation (like inc. or ltd.) or a human one (like Mr. or Sheik).
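To make the per-partition classifier concrete, the following is a minimal sketch (not the thesis implementation) of how $\bar{w}_v$ could be trained and how the DSP score could be read off as the signed distance from the hyperplane, using only the verb co-occurrence features described above. The helper names (cooccurrence_features, counts_obj, counts_subj, verb_index) are illustrative assumptions, and scikit-learn's LinearSVC stands in for the SVM-light package [Joachims, 1999a] used in our experiments.

```python
# Sketch: per-partition linear SVM over verb co-occurrence features.
# Assumed inputs: counts_obj[v'][n] / counts_subj[v'][n] are corpus counts of
# noun n as the object / subject of verb v'; verb_index maps each verb seen
# more than 10 times to a feature column.
import numpy as np
from sklearn.svm import LinearSVC

def cooccurrence_features(noun, counts_obj, counts_subj, verb_index):
    """Build f(n): empirical Pr_obj(n|v') and Pr_subj(n|v') for every verb v'."""
    f = np.zeros(2 * len(verb_index))
    for v, col in verb_index.items():
        total_obj = sum(counts_obj[v].values()) or 1
        total_subj = sum(counts_subj[v].values()) or 1
        f[col] = counts_obj[v].get(noun, 0) / total_obj                       # Pr_obj(n|v')
        f[len(verb_index) + col] = counts_subj[v].get(noun, 0) / total_subj   # Pr_subj(n|v')
    return f

def train_partition_svm(nouns, labels, counts_obj, counts_subj, verb_index):
    """Train the weights for one verb partition from (noun, +/-1) examples."""
    X = np.vstack([cooccurrence_features(n, counts_obj, counts_subj, verb_index)
                   for n in nouns])
    clf = LinearSVC()  # stand-in for SVM-light
    clf.fit(X, labels)
    return clf

def dsp_score(clf, noun, counts_obj, counts_subj, verb_index):
    """Signed distance from the separating hyperplane, used as the DSP score."""
    f = cooccurrence_features(noun, counts_obj, counts_subj, verb_index)
    return float(clf.decision_function(f.reshape(1, -1))[0])
```

In practice the feature vectors are extremely sparse, so a sparse matrix representation would replace the dense arrays above; the sketch keeps them dense only for readability.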
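The string-based features can be sketched in the same spirit. The function below is illustrative rather than the original extractor: the gazetteers argument is a placeholder for the in-house lists of given names, cities, countries, and so on, and the designation lists are abbreviated examples.

```python
import re

def string_features(noun, gazetteers=None):
    """Illustrative string features of a noun (a sketch, not the thesis code).

    `gazetteers` maps list names (e.g. 'city', 'given_name') to sets of
    lowercase tokens, standing in for the in-house lists mentioned above.
    """
    tokens = noun.split()
    feats = {
        "num_tokens": len(tokens),
        "is_capitalized": noun[:1].isupper(),
        "has_digit": bool(re.search(r"\d", noun)),
        "has_hyphen": "-" in noun,
        "has_apostrophe": "'" in noun,
        "has_other_punct": bool(re.search(r"[^\w\s'\-]", noun)),
    }
    if tokens:
        feats["first_token=" + tokens[0].lower()] = True
        feats["last_token=" + tokens[-1].lower()] = True
    # Corporate and human designations (small illustrative lists).
    corporate = {"inc.", "ltd.", "corp.", "co."}
    human = {"mr.", "mrs.", "ms.", "dr.", "sheik"}
    feats["has_corp_designator"] = any(t.lower() in corporate for t in tokens)
    feats["has_human_designator"] = any(t.lower() in human for t in tokens)
    # Indicator features for membership in gazetteer lists (names, cities, ...).
    for list_name, entries in (gazetteers or {}).items():
        if any(t.lower() in entries for t in tokens):
            feats["in_list=" + list_name] = True
    return feats
```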
