Large-Scale Semi-Supervised Learning for Natural Language ...
Table 3.1: SUMLM accuracy (%) combining N-grams from order Min to Max

                 Max
  Min      2      3      4      5
    2   50.2   63.8   70.4   72.6
    3          66.8   72.1   73.7
    4                 69.3   70.6
    5                        57.8

[Figure 3.2: Preposition selection over high-confidence subsets, with and without language constraints (-FR, -DE). Accuracy (%) versus coverage (%) for SUPERLM, SUPERLM-FR, SUPERLM-DE, TRIGRAM, TRIGRAM-FR, and TRIGRAM-DE.]

performance of the standard TRIGRAM approach (58.8%), because the standard TRIGRAM approach only uses a single trigram (the one centered on the preposition), whereas SUMLM always uses the three trigrams that span the confusable word.

Coverage is the main issue affecting the 5-gram model: only 70.1% of the test examples had a 5-gram count for any of the 34 fillers. 93.4% of test examples had at least one 4-gram count, and 99.7% of examples had at least one trigram count. Summing counts from orders 3 to 5 results in the best performance on the development and test sets.

We compare our use of the Google Corpus to extracting page counts from a search engine, via the Google API (no longer in operation as of August 2009, but similar services exist). Since the number of queries allowed to the API is restricted, we test on only the first 1000 test examples. Using the Google Corpus, TRIGRAM achieves 61.1%, dropping to 58.5% with search-engine page counts. Although this is a small difference, the real issue is the restricted number of queries allowed. For each example, SUMLM would need 14 counts for each of the 34 fillers instead of just one. For training SUPERLM, which has 1 million training examples, we need counts for 267 million unique N-grams. Using the Google API with a 1000-query-per-day quota, it would take over 732 years to collect all the counts for training. This is clearly why some web-scale systems use such limited context.
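As a rough illustration (not the thesis implementation), SUMLM's scoring of a candidate filler can be sketched as follows. Here `get_count` is a hypothetical lookup into a web-scale N-gram corpus, and the particular context lengths and default orders are assumptions; for order n, the filler can sit at any of n positions within a spanning N-gram.

```python
def sumlm_score(left_ctx, right_ctx, filler, get_count,
                min_order=3, max_order=5):
    """Score one candidate filler by summing counts of every N-gram
    (orders min_order..max_order) that spans the filler word."""
    total = 0
    for n in range(min_order, max_order + 1):
        for pos in range(n):  # filler sits at position `pos` in the N-gram
            if pos > len(left_ctx) or (n - 1 - pos) > len(right_ctx):
                continue  # not enough context on one side
            ngram = tuple(left_ctx[len(left_ctx) - pos:]
                          + [filler]
                          + right_ctx[:n - 1 - pos])
            total += get_count(ngram)
    return total

def choose_filler(left_ctx, right_ctx, fillers, get_count):
    """Pick the confusable word with the highest summed count."""
    return max(fillers,
               key=lambda f: sumlm_score(left_ctx, right_ctx, f, get_count))
```

With `min_order = max_order = 3` this reduces to summing the three trigrams that span the confusable word, as described above.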
We also follow Carlson et al. [2001] and Chodorow et al. [2007] in extracting a subset of decisions where our system has higher confidence. We only propose a label if the ratio between the highest and second-highest score from our classifier is above a certain threshold, and then vary this threshold to produce accuracy at different coverage levels (Figure 3.2). The SUPERLM system can obtain close to 90% accuracy when deciding on 70% of examples, and above 95% accuracy when deciding on half the examples. The TRIGRAM performance rises more slowly as coverage drops, reaching 80% accuracy when deciding on only 57% of examples.

Many of SUPERLM's errors involve choosing between prepositions that are unlikely to be confused in practice, e.g. with/without. Chodorow et al. [2007] wrote post-processor rules to prohibit corrections in the case of antonyms. Note that the errors made by an English learner also depend on their native language. A French speaker looking to translate au-dessus de has one option in some dictionaries: above. A German speaker looking to translate über has, along with above, many more options. When making corrections, we could combine SUPERLM (a source model) with the likelihood of each confusion depending on the writer's native language (a channel model). The channel model could be trained on text written by second-language learners who speak, as a first language, the particular language of interest.

In the absence of such data, we only allow our system to make corrections in English if the proposed replacement shares a foreign-language translation in a particular Freelang online bilingual dictionary (www.freelang.net/dictionary/). Put another way, we reduce the size of the preposition confusion set dynamically, depending on the preposition that was used and the native language of the speaker.
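As an illustrative sketch (not the thesis code), the two filters described above, the confidence ratio and the shared-translation constraint, might be combined as follows. The `bilingual` mapping is a hypothetical stand-in for a Freelang dictionary, mapping each English preposition to its set of foreign-language translations, and the threshold value is an assumption:

```python
def confusable_with(orig_prep, bilingual):
    """Prepositions a learner might confuse with `orig_prep`: those
    sharing at least one foreign-language translation with it."""
    src = bilingual.get(orig_prep, set())
    return {p for p, trans in bilingual.items()
            if p != orig_prep and trans & src}

def propose_correction(scores, orig_prep, bilingual, threshold=2.0):
    """Suggest a replacement only if (a) it satisfies the translation
    constraint and (b) the top classifier score beats the runner-up
    within the reduced confusion set by at least `threshold`."""
    allowed = confusable_with(orig_prep, bilingual) | {orig_prep}
    ranked = sorted(((s, p) for p, s in scores.items() if p in allowed),
                    reverse=True)
    if len(ranked) < 2:
        return None  # nothing the writer could plausibly have confused
    (s1, best), (s2, _) = ranked[0], ranked[1]
    if best != orig_prep and (s2 <= 0 or s1 / s2 >= threshold):
        return best
    return None
```

Raising the threshold trades coverage for accuracy, which is exactly the sweep plotted in Figure 3.2; shrinking `allowed` via the dictionary is what the -FR and -DE constraints do.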
A particular preposition is only suggested as a correction if both the correction and the original preposition could have been confused by a foreign-language speaker translating a particular foreign-language preposition without regard to context.

To simulate the use of this module, we randomly flip 20% of our test-set prepositions to confusable ones, and then apply our classifier with the aforementioned confusability (and confidence) constraints. We experimented with French and German lexicons (Figure 3.2). These constraints strongly benefit both SUPERLM and TRIGRAM, with French constraints (-FR) helping slightly more than German (-DE) at higher coverage levels. There are fewer confusable prepositions in the French lexicon compared to German. As a baseline, if we assign our labels random scores, adding the French and German constraints results in 20% and 14% accuracy, respectively (compared to 1/34 = 2.9% unconstrained). At 50% coverage, both constrained SUPERLM systems achieve close to 98% accuracy, a level that could provide very reliable feedback in second-language learning software.

3.6 Context-Sensitive Spelling Correction

3.6.1 The Task of Context-Sensitive Spelling Correction

Context-sensitive spelling correction, or real-word error/malapropism detection [Golding and Roth, 1999; Hirst and Budanitsky, 2005], is the task of identifying errors when a misspelling results in a real word in the lexicon, e.g., using site when sight or cite was intended. Contextual spell checkers are among the most widely used NLP technologies, as they are included in commercial word-processing software [Church et al., 2007].

For every occurrence of a word in a pre-defined confusion set (like {among, between}),