Large-Scale Semi-Supervised Learning for Natural Language Processing

Franz, 2006]. This corpus simply lists, for sequences of words from length two to length five, how often the sequence occurs in their web corpus. The web corpus was generated from approximately 1 trillion tokens of online text. In this data, tokens appearing fewer than 200 times have been mapped to the 〈UNK〉 symbol, and only N-grams appearing more than 40 times are included. A number of researchers have begun using this N-gram corpus, rather than search engines, to collect their web-scale statistics [Vadas and Curran, 2007a; Felice and Pulman, 2007; Yuret, 2007; Kummerfeld and Curran, 2008; Carlson et al., 2008; Bergsma et al., 2008b; Tratz and Hovy, 2010]. Although this N-gram data is much smaller than the source text from which it was taken, it is still a very large resource, occupying approximately 24 GB compressed and containing billions of N-grams in hundreds of files. Special strategies are needed to query large numbers of counts efficiently. These strategies include pre-sorting queries to reduce passes through the data, hashing [Hawker et al., 2007], storing the data in a database [Carlson et al., 2008], and using a trie structure [Sekine, 2008].
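To make the pre-sorting strategy concrete, the sketch below performs a single merge-style pass over one sorted N-gram file. It is an illustration rather than the procedure used in this work, and it assumes a tab-separated "N-gram<TAB>count" line format with lines in lexicographic order; the file name in the usage comment is hypothetical.

    def batch_lookup(queries, ngram_file):
        """Return a dict mapping each query N-gram (a string) to its count.

        Queries are sorted up front so the sorted data file can be read in a
        single pass, merge-join style, instead of being scanned once per query.
        N-grams that never appear in the file keep a count of 0.
        """
        counts = dict.fromkeys(queries, 0)
        pending = sorted(counts)              # pre-sorted, de-duplicated queries
        i = 0
        with open(ngram_file, encoding="utf-8") as f:
            for line in f:
                if i >= len(pending):
                    break                     # every query has been resolved
                ngram, count = line.rstrip("\n").rsplit("\t", 1)
                # Queries that sort before the current data line cannot occur
                # later in a sorted file, so they stay at count 0.
                while i < len(pending) and pending[i] < ngram:
                    i += 1
                if i < len(pending) and pending[i] == ngram:
                    counts[ngram] = int(count)
                    i += 1
        return counts

    # Hypothetical usage, e.g. for two filled 5-gram patterns:
    # batch_lookup(["decide among the two confusable",
    #               "decide between the two confusable"],
    #              "5gms/5gm-0042")   # hypothetical file name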
Our work in this area led to our recent participation in the 2009 Johns Hopkins University, Center for Speech and Language Processing, Workshop on Unsupervised Acquisition of Lexical Knowledge from N-Grams, led by Dekang Lin.² A number of ongoing projects using web-scale N-gram counts have arisen from this workshop, and we discuss some of these in Chapter 5. Lin et al. [2010] provides an overview of our work at the workshop, including the construction of a new web-scale N-gram corpus.

In this chapter, all N-gram counts are taken from the standard Google N-gram data. One thing that N-gram data does not provide is the document co-occurrence counts that have proven useful in some applications discussed above. It could therefore be beneficial to the community to have a resource along the lines of the Google N-gram corpus, but where the corpus simply states how often pairs of words (or phrases) co-occur within a fixed window on the web. I am putting this on my to-do list.

3.3 Disambiguation with N-gram Counts

Section 3.2.1 described how lexical disambiguation, for both generation and analysis tasks, can be performed by scoring various context sequences using a statistical model. We formalize the context used by web-scale systems and then discuss various statistical models that use this information.

For a word in text, v_0, we wish to assign an output, c_i, from a fixed set of candidates, C = {c_1, c_2, ..., c_K}. Assume that our target word v_0 occurs in a sequence of context tokens: V = {v_{-4}, v_{-3}, v_{-2}, v_{-1}, v_0, v_1, v_2, v_3, v_4}. The key to improved web-scale models is that they make use of a variety of context segments, of different sizes and positions, that span the target word v_0. We call these segments context patterns. The words that replace the target word are called pattern fillers. Let the set of pattern fillers be denoted by F = {f_1, f_2, ..., f_|F|}. Recall that for generation tasks, the filler set will usually be identical to the set of output candidates (e.g., for word selection tasks, F = C = {among, between}). For analysis tasks, we must use other fillers, chosen as surrogates for one of the semantic labels (e.g., for WSD of bass, C = {Sense1, Sense2} and F = {tenor, alto, pitch, snapper, mackerel, tuna}).

Each length-N context pattern, with a filler in place of v_0, is an N-gram, for which we can retrieve a count from an auxiliary corpus. We retrieve counts from the web-scale Google Web 5-gram Corpus, which includes N-grams of length one to five (Section 3.2.2).

² http://www.clsp.jhu.edu/workshops/ws09/groups/ualkn/
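As a minimal sketch of this formalization (not the system's actual implementation), the following function enumerates the filled context patterns for a target position; the function name and the placeholder token in the usage comment are illustrative choices.

    def context_patterns(tokens, i, fillers, max_n=5):
        """Generate every filled context pattern that spans position i.

        tokens  -- the sentence as a list of words
        i       -- index of the target word v_0 in tokens
        fillers -- the pattern fillers F (e.g. ["among", "between"])

        For each length N in 2..max_n, every length-N window covering
        position i becomes a context pattern, and the target word is
        replaced by each filler in turn.  With the full 4-word context
        on both sides this yields 14 patterns, i.e. 14 * |F| filled patterns.
        """
        filled_patterns = []
        for n in range(2, max_n + 1):
            # start positions of the length-n windows that include position i
            for start in range(i - n + 1, i + 1):
                if start < 0 or start + n > len(tokens):
                    continue                  # window falls outside the sentence
                window = tokens[start:start + n]
                for f in fillers:
                    filled = list(window)
                    filled[i - start] = f     # put the filler in place of v_0
                    filled_patterns.append(" ".join(filled))
        return filled_patterns

    # Example (1) from Section 3.1, with the target at index 4:
    # context_patterns("system tried to decide X the two confusable words".split(),
    #                  4, ["among", "between"])
    # returns 28 filled patterns (14 context patterns x 2 fillers).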

For each target word v_0, there are five 5-gram context patterns that may span it. For Example (1) in Section 3.1, we can extract the following 5-gram patterns:

    system tried to decide v_0
    tried to decide v_0 the
    to decide v_0 the two
    decide v_0 the two confusable
    v_0 the two confusable words

Similarly, there are four 4-gram patterns, three 3-gram patterns and two 2-gram patterns spanning the target. With |F| fillers, there are 14|F| filled patterns with relevant N-gram counts. For example, for F = {among, between}, there are two filled 5-gram patterns that begin with the word decide: “decide among the two confusable” and “decide between the two confusable.” We collect counts for each of these, along with all the other filled patterns for this example. When F = {among, between}, there are 28 relevant counts for each example. We now describe various systems that use these counts.

3.3.1 SUPERLM

We use supervised learning to map a target word and its context to an output. There are two steps in this mapping: a) converting the word and its context into a feature vector, and b) applying a classifier to determine the output class.

In order to use the standard x, y notation for classifiers, we write things as follows: Let x̄ = Φ(V) be a mapping of the input to a feature representation, x̄. We might also think of the feature function as being parameterized by the set of fillers, F, and the N-gram corpus, R, so that x̄ = Φ_{F,R}(V). The feature function Φ_{F,R}(·) outputs the count (in logarithmic form) of the different context patterns with the different fillers. Each of these has a corresponding dimension in the feature representation. If N = 14|F| counts are used, then each x̄ is an N-dimensional feature vector.

Now, the classifier outputs the index of the highest-scoring candidate in the set of candidate outputs, C = {c_1, c_2, ..., c_K}. That is, we let y ∈ {1, ..., K} be the set of classes that can be produced by the classifier. The classifier, H, is therefore a K-class classifier, mapping an attribute vector, x̄, to a class, y. Using the standard [Crammer and Singer, 2001]-style multi-class formulation, H is parameterized by a K-by-N matrix of weights, W:

    H_W(x̄) = argmax_{r=1,...,K} { W̄_r · x̄ }                    (3.1)

where W̄_r is the rth row of W. That is, the predicted class is the index of the row of W that has the highest inner product with the attributes, x̄. The weights are optimized using a set of M training examples, {(x̄_1, y_1), ..., (x̄_M, y_M)}.

This differs a little from the linear classifier that we presented in Section 2.2. Here we actually have K linear classifiers. Although there is only one set of N features, there is a different linear combination for each row of W. Therefore, the weight on a particular count depends on the class we are scoring (corresponding to the row of W, r), as well as the filler, the context position, and the context size, all of which select one of the 14|F| base features. There are therefore a total of 14|F|K count-weight parameters. Chapter 4 formally describes how these parameters are learned using a multi-class SVM. Chapter 4 also discusses enhancements to this model that can enable better performance with fewer training examples.
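The sketch below ties these pieces together: a feature map that turns the filled-pattern counts into log-count features, and a scorer implementing Equation (3.1). The function names and the add-one inside the logarithm are assumptions made for illustration; the weight matrix W itself would be learned as described in Chapter 4.

    import math

    def log_count_features(filled_patterns, counts):
        """Sketch of the feature map x̄ = Φ_{F,R}(V): one log count per pattern.

        filled_patterns -- the 14 * |F| filled context patterns, in a fixed order
        counts          -- dict mapping an N-gram string to its corpus count
        The add-one inside the logarithm is only an assumption to keep zero
        counts finite; the text says only that counts are used in log form.
        """
        return [math.log(counts.get(p, 0) + 1) for p in filled_patterns]

    def superlm_predict(x, W):
        """Equation (3.1): return the argmax over the K rows r of W̄_r · x̄.

        W is a K-by-N matrix (list of weight rows), assumed to have been
        learned elsewhere, e.g. with the multi-class SVM of Chapter 4.
        """
        scores = [sum(w_j * x_j for w_j, x_j in zip(row, x)) for row in W]
        return max(range(len(scores)), key=scores.__getitem__)

    # With C = {among, between}: K = 2 classes and N = 28 features, so W has
    # 2 rows of 28 weights; superlm_predict returns 0 or 1, indexing into C.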
