...but without counts for the class prior.³ Naive Bayes has a long history in disambiguation problems [Manning and Schütze, 1999], so it is not entirely surprising that our SUMLM system, which has a similar form to naive Bayes, is also effective.

³ In this case, we can think of the features, $x_i$, as being the context patterns, and the classes, $y$, as being the fillers. In a naive Bayes classifier, we select the class, $y$, that has the highest score under:

\begin{align*}
H(\bar{x}) &= \operatorname*{argmax}_{r=1}^{K} \Pr(y_r \mid \bar{x}) && \text{Bayes decision rule} \\
&= \operatorname*{argmax}_{r=1}^{K} \Pr(y_r)\,\Pr(\bar{x} \mid y_r) \\
&= \operatorname*{argmax}_{r=1}^{K} \Pr(y_r) \prod_i \Pr(x_i \mid y_r) && \text{naive Bayes assumption} \\
&= \operatorname*{argmax}_{r=1}^{K} \log\Pr(y_r) + \sum_i \log\Pr(x_i \mid y_r) \\
&= \operatorname*{argmax}_{r=1}^{K} \log\Pr(y_r) + \sum_i \left[\log \mathrm{cnt}(x_i, y_r) - \log \mathrm{cnt}(y_r)\right] \\
&= \operatorname*{argmax}_{r=1}^{K} g(y_r) + \sum_i \log \mathrm{cnt}(x_i, f_r) && y_r = f_r
\end{align*}

where we collect all the terms that depend solely on the class into $g(y_r)$. Our SUMLM system is exactly the same as this naive Bayes classifier if we drop the $g(y_r)$ term. We tried various ways to model the class priors using N-gram counts and to incorporate them into our equations, but nothing performed as well as simply dropping them altogether. Another option we have not explored is having a single class bias parameter for each class, $\lambda_r = g(y_r)$, to be added to the filler counts. We would tune the $\lambda_r$'s by hand for each task where SUMLM is applied. However, this would make the model require some labeled data to tune, whereas our current SUMLM is parameter-free and entirely unsupervised.

3.3.3 TRIGRAM

Previous web-scale approaches are also unsupervised. Most use one context pattern for each filler: the trigram with the filler in the middle, $\{v_{-1}, f, v_{1}\}$. $|F|$ counts are needed for each example, and the filler with the most counts is taken as the label [Lapata and Keller, 2005; Liu and Curran, 2006; Felice and Pulman, 2007]. Using only one count per label is usually all that is feasible when the counts are gathered with an Internet search engine, which limits the number of queries that can be retrieved. With limited context and somewhat arbitrary search-engine page counts, performance is limited. Web-based systems are regarded as "baselines" compared to standard approaches [Lapata and Keller, 2005] or, worse, as scientifically unsound [Kilgarriff, 2007]. Rather than using search engines, higher accuracy and reliability can be obtained using a large corpus of automatically downloaded web documents [Liu and Curran, 2006]. We evaluate the trigram pattern approach, with counts from the Google 5-gram Corpus, and refer to it as TRIGRAM in our experiments.

3.3.4 RATIOLM

Carlson et al. [2008] proposed an unsupervised method for spelling correction that also uses counts for various pattern fillers from the Google 5-gram Corpus. For every context pattern spanning the target word, the algorithm calculates the ratio between the highest and second-highest filler counts. The position with the highest ratio is taken as the "most discriminating," and the filler with the higher count in this position is chosen as the label. The algorithm starts with 5-grams and backs off to lower orders if no 5-gram counts are available.
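To make the procedure concrete, the following is a minimal sketch of this ratio-based, back-off selection. The pattern representation and the count-lookup function `cnt` are illustrative assumptions, not Carlson et al.'s actual implementation; `cnt` stands in for a lookup into the Google 5-gram Corpus and is assumed to return 0 for unseen N-grams.

```python
def ratiolm_label(patterns_by_order, fillers, cnt):
    """Sketch of RATIOLM: pick the filler at the most discriminating
    position, backing off from 5-grams to lower N-gram orders.

    patterns_by_order: dict mapping N-gram order (5 down to 2) to the
                       context patterns of that order spanning the
                       target slot; each pattern is a tuple of tokens
                       with None marking the slot.
    fillers:           candidate words for the slot (assumes >= 2).
    cnt:               assumed count lookup into the Google 5-gram
                       Corpus, returning 0 for unseen N-grams.
    """
    def fill(pattern, f):
        return tuple(f if tok is None else tok for tok in pattern)

    for order in sorted(patterns_by_order, reverse=True):  # 5, 4, 3, 2
        best_ratio, best_filler = 0.0, None
        for pat in patterns_by_order[order]:
            # Rank fillers by their count in this position.
            ranked = sorted(((cnt(fill(pat, f)), f) for f in fillers),
                            reverse=True)
            (top, top_f), (second, _) = ranked[0], ranked[1]
            if top == 0:
                continue  # no counts at this position
            # Ratio of highest to second-highest count; the largest
            # ratio marks the "most discriminating" position.
            ratio = top / max(second, 1)
            if ratio > best_ratio:
                best_ratio, best_filler = ratio, top_f
        if best_filler is not None:
            return best_filler  # counts found at this order: stop
    return None  # no counts at any order
```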
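For comparison, the SUMLM scoring derived in footnote 3 reduces to summing log pattern counts for each filler, i.e. naive Bayes with the $g(y_r)$ term dropped. A sketch under the same assumed representations:

```python
import math

def sumlm_label(context_patterns, fillers, cnt):
    """Sketch of SUMLM: score each filler by the sum of log counts of
    its filled-in context patterns (the naive Bayes score of footnote 3
    with the class-dependent term g(y_r) dropped)."""
    def fill(pattern, f):
        return tuple(f if tok is None else tok for tok in pattern)

    best_filler, best_score = None, float("-inf")
    for f in fillers:
        # Unseen patterns (count 0) are skipped here; smoothing the
        # counts instead would be another reasonable choice.
        score = sum(math.log(c) for pat in context_patterns
                    if (c := cnt(fill(pat, f))) > 0)
        if score > best_score:
            best_filler, best_score = f, score
    return best_filler
```

In this framing, TRIGRAM is simply the special case where `context_patterns` contains the single pattern $\{v_{-1}, f, v_{1}\}$.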
