[Figure 3.3: Context-sensitive spelling correction learning curve. Accuracy (%) versus number of training examples (100 to 100,000) for SUPERLM, SUMLM, RATIOLM, and TRIGRAM.]

we select the most likely word from the set. The importance of using large volumes of data has previously been noted [Banko and Brill, 2001; Liu and Curran, 2006]. Impressive levels of accuracy have been achieved on the standard confusion sets; for example, Golding and Roth [1999] report 100% accuracy on disambiguating both {affect, effect} and {weather, whether}. We thus restricted our experiments to the five confusion sets (of twenty-one in total) where the performance reported in [Golding and Roth, 1999] is below 90% (an average of 87%): {among, between}, {amount, number}, {cite, sight, site}, {peace, piece}, and {raise, rise}. We again create labeled data automatically from the NYT portion of Gigaword. For each confusion set, we extract 100K examples for training, 10K for development, and 10K for a final test set.

3.6.2 Context-sensitive Spelling Correction Results

Figure 3.3 provides the spelling correction learning curve, while Table 3.2 gives results on the five confusion sets. Choosing the most frequent label averages 66.9% on this task (BASE). TRIGRAM scores 88.4%, comparable to the trigram (page-count) results reported in [Lapata and Keller, 2005]. SUPERLM again achieves the highest performance (95.7%), and it reaches this performance using many fewer training examples than were needed for preposition selection. This is because the number of parameters grows with the number of fillers times the number of labels (recall, there are 14|F|K count-weight parameters), and there are 34 prepositions but only two to three confusable spellings. Note that we also include the performance reported in [Golding and Roth, 1999], although those results are from a different corpus.

SUPERLM achieves a 24% relative reduction in error over RATIOLM (94.4%), which was the previous state of the art [Carlson et al., 2008]. SUMLM (94.8%) also improves on RATIOLM, although results are generally similar across the different confusion sets. On {raise, rise}, SUPERLM's supervised weighting of the counts by position and size does not improve over SUMLM (Table 3.2). On all the other sets the performance is higher; for example, on {among, between}, the accuracy improves by 2.3%. On this set, counts for fillers near the beginning of the context pattern are more important, as the object of the preposition is crucial for distinguishing these two classes ("between the two" but "among the three"). SUPERLM can exploit the relative importance of the different positions and thereby achieve higher performance.
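To make the source of these 14|F|K parameters concrete, the sketch below assembles the count features for a single example: for each candidate filler, every N-gram pattern of size two to five that spans the target slot contributes one log-count feature, giving 2 + 3 + 4 + 5 = 14 patterns per filler. This is a minimal sketch rather than the exact implementation used in our experiments; in particular, ngram_count is a hypothetical placeholder for a lookup into a web-scale N-gram collection.

    from math import log

    def ngram_count(tokens):
        # Hypothetical stand-in for querying a web-scale N-gram collection
        # (e.g., an indexed, pre-filtered table of web N-gram counts).
        # Placeholder only: always returns 0 here.
        return 0

    def count_features(left, right, fillers, max_order=5):
        """One log-count feature per (filler, pattern size, filler position).

        left/right: up to max_order-1 context tokens on each side of the target slot.
        fillers: the candidate words, e.g. a confusion set like ["peace", "piece"].
        """
        features = {}
        left = left[-(max_order - 1):]
        right = right[:max_order - 1]
        for filler in fillers:
            window = left + [filler] + right
            slot = len(left)                   # index of the filler in the window
            for n in range(2, max_order + 1):  # pattern sizes 2..5
                for start in range(slot - n + 1, slot + 1):
                    if start < 0 or start + n > len(window):
                        continue               # pattern would fall outside the context
                    pattern = window[start:start + n]
                    position = slot - start    # where the filler sits in the pattern
                    features[(filler, n, position)] = log(1.0 + ngram_count(pattern))
        return features

In a linear multi-class model with one weight per feature and per class, this yields the 14|F|K count-weight parameters noted above: 14 × 34 × 34 = 16,184 weights for preposition selection, but only 14 × 2 × 2 = 56 for a two-word confusion set, which is consistent with the far smaller amount of training data needed to reach peak performance on this task.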
Set               BASE   [Golding and Roth, 1999]   TRIGRAM   SUMLM   SUPERLM
among/between     60.3           86.0                 80.8     90.5     92.8
amount/number     75.6           86.2                 83.9     93.2     93.7
cite/sight/site   87.1           85.3                 94.3     96.3     97.6
peace/piece       60.8           88.0                 92.3     97.7     98.0
raise/rise        51.0           89.7                 90.7     96.6     96.6
Average           66.9           87.0                 88.4     94.8     95.7

Table 3.2: Context-sensitive spelling correction accuracy (%) on different confusion sets

3.7 Non-referential Pronoun Detection

We now present an application of our approach to a difficult analysis problem: detecting non-referential pronouns. In fact, SUPERLM was originally devised for this task, and then subsequently evaluated as a general solution to all lexical disambiguation problems. More details on this particular application are available in our ACL 2008 paper [Bergsma et al., 2008b].

3.7.1 The Task of Non-referential Pronoun Detection

Coreference resolution determines which noun phrases in a document refer to the same real-world entity. As part of this task, coreference resolution systems must decide which pronouns refer to preceding noun phrases (called antecedents) and which do not. In particular, a long-standing challenge has been to correctly classify instances of the English pronoun it. Consider the sentences:

(1) You can make it in advance.

(2) You can make it in Hollywood.

In Example (1), it is an anaphoric pronoun referring to some previous noun phrase, like "the sauce" or "an appointment." In Example (2), it is part of the idiomatic expression "make it," meaning "succeed." A coreference resolution system should find an antecedent for the first it but not the second. Pronouns that do not refer to preceding noun phrases are called non-anaphoric or non-referential pronouns.

The word it is one of the most frequent words in the English language, accounting for about 1% of tokens in text and over a quarter of all third-person pronouns.5 Usually between a quarter and a half of it instances are non-referential. As with other pronouns, the preceding discourse can affect its interpretation. For example, Example (2) can be interpreted as referential if the preceding sentence is "You want to make a movie?" We show, however,

5 e.g. http://ucrel.lancs.ac.uk/bncfreq/flists.html