Large-Scale Semi-Supervised Learning for Natural Language Processing


Table 3.1: SUMLM accuracy (%) combining N-grams from order Min to Max

  Min \ Max      2      3      4      5
      2       50.2   63.8   70.4   72.6
      3          -   66.8   72.1   73.7
      4          -      -   69.3   70.6
      5          -      -      -   57.8

[Figure 3.2: Preposition selection over high-confidence subsets, with and without language constraints (-FR, -DE). The figure plots accuracy (%) against coverage (%) for SUPERLM, SUPERLM-FR, SUPERLM-DE, TRIGRAM, TRIGRAM-FR, and TRIGRAM-DE.]

SUMLM exceeds the performance of the standard TRIGRAM approach (58.8%), because the standard TRIGRAM approach only uses a single trigram (the one centered on the preposition), whereas SUMLM always uses the three trigrams that span the confusable word.

Coverage is the main issue affecting the 5-gram model: only 70.1% of the test examples had a 5-gram count for any of the 34 fillers, whereas 93.4% of test examples had at least one 4-gram count and 99.7% had at least one trigram count. Summing counts from orders 3 to 5 results in the best performance on the development and test sets.

We compare our use of the Google Corpus to extracting page counts from a search engine, via the Google API (no longer in operation as of August 2009, but similar services exist). Since the number of queries allowed to the API is restricted, we test on only the first 1000 test examples. Using the Google Corpus, TRIGRAM achieves 61.1%, dropping to 58.5% with search-engine page counts. Although this is a small difference, the real issue is the restricted number of queries allowed. For each example, SUMLM would need 14 counts for each of the 34 fillers instead of just one. For training SUPERLM, which has 1 million training examples, we need counts for 267 million unique N-grams. Using the Google API with a 1000-query-per-day quota, it would take over 732 years to collect all the counts for training. This is clearly why some web-scale systems use such limited context.
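To make the summed-count scoring concrete, here is a minimal sketch of SUMLM-style filler selection. The `ngram_count` lookup is a hypothetical stand-in for the Google N-gram data, and all names are illustrative rather than the system's actual code:

```python
# Minimal sketch of SUMLM-style scoring. `ngram_count` is a hypothetical
# lookup into a web-scale N-gram corpus (e.g. a dict of counts keyed by
# token tuples); names are illustrative, not the thesis implementation.

def sumlm_score(tokens, pos, filler, ngram_count, min_n=3, max_n=5):
    """Sum the corpus counts of every N-gram of order min_n..max_n
    that spans position `pos` once `filler` is substituted there."""
    sent = tokens[:pos] + [filler] + tokens[pos + 1:]
    total = 0
    for n in range(min_n, max_n + 1):
        # The n start offsets at which an n-gram still covers `pos`.
        for start in range(pos - n + 1, pos + 1):
            if 0 <= start and start + n <= len(sent):
                total += ngram_count(tuple(sent[start:start + n]))
    return total

def choose_filler(tokens, pos, fillers, ngram_count):
    # Select the candidate whose spanning N-grams are most frequent.
    return max(fillers, key=lambda f: sumlm_score(tokens, pos, f, ngram_count))
```

With `min_n = max_n = 3` this reduces to summing the three trigrams that span the confusable word, and summing orders 3 to 5 corresponds to the best-performing configuration in Table 3.1.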

We also follow Carlson et al. [2001] and Chodorow et al. [2007] in extracting a subset of decisions where our system has higher confidence. We only propose a label if the ratio between the highest and second-highest score from our classifier is above a certain threshold, and we vary this threshold to produce accuracy at different coverage levels (Figure 3.2). The SUPERLM system can obtain close to 90% accuracy when deciding on 70% of examples, and above 95% accuracy when deciding on half the examples. The TRIGRAM performance rises more slowly as coverage drops, reaching 80% accuracy when deciding on only 57% of examples.

Many of SUPERLM's errors involve choosing between prepositions that are unlikely to be confused in practice, e.g. with/without. Chodorow et al. [2007] wrote post-processor rules to prohibit corrections in the case of antonyms. Note that the errors made by an English learner also depend on their native language. A French speaker looking to translate au-dessus de has one option in some dictionaries: above. A German speaker looking to translate über has, along with above, many more options. When making corrections, we could combine SUPERLM (a source model) with the likelihood of each confusion depending on the writer's native language (a channel model). The channel model could be trained on text written by second-language learners who speak, as a first language, the particular language of interest.

In the absence of such data, we only allow our system to make corrections in English if the proposed replacement shares a foreign-language translation with the original preposition in a particular Freelang online bilingual dictionary (www.freelang.net/dictionary/). Put another way, we dynamically reduce the size of the preposition confusion set depending on the preposition that was used and the native language of the writer. A particular preposition is only suggested as a correction if both the correction and the original preposition could have been confused by a foreign-language speaker translating a particular foreign-language preposition without regard to context.

To simulate the use of this module, we randomly flip 20% of our test-set prepositions to confusable ones, and then apply our classifier with the aforementioned confusability (and confidence) constraints. We experimented with French and German lexicons (Figure 3.2). These constraints strongly benefit both SUPERLM and TRIGRAM, with French constraints (-FR) helping slightly more than German (-DE) at higher coverage levels; there are fewer confusable prepositions in the French lexicon than in the German one. As a baseline, if we assign our labels random scores, adding the French and German constraints results in 20% and 14% accuracy, respectively (compared to 1/34 ≈ 2.9% unconstrained). At 50% coverage, both constrained SUPERLM systems achieve close to 98% accuracy, a level that could provide very reliable feedback in second-language learning software.
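The confidence filtering described above amounts to comparing the two top classifier scores; a minimal sketch, assuming each candidate label receives a positive score (the function name is illustrative):

```python
def confident_label(scores, threshold):
    """Propose the top-scoring label only when the ratio of the highest
    to the second-highest score exceeds `threshold`; otherwise abstain.
    `scores` maps each candidate label to a positive classifier score."""
    ranked = sorted(scores.values(), reverse=True)
    best = max(scores, key=scores.get)
    if ranked[1] == 0 or ranked[0] / ranked[1] > threshold:
        return best
    return None  # abstain on this example
```

Sweeping the threshold upward trades coverage for accuracy, tracing out curves like those in Figure 3.2.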
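The confusability constraint can likewise be sketched as a filter over the confusion set. Here `translations` is a hypothetical mapping from each English preposition to its set of translations in the writer's native language (e.g., as extracted from a Freelang dictionary); names are illustrative:

```python
def confusable_set(original, fillers, translations):
    """Restrict the confusion set to prepositions that share at least
    one foreign-language translation with the preposition actually
    written; `translations` maps an English preposition to a set of
    translations in the writer's native language (hypothetical data)."""
    source = translations.get(original, set())
    return [f for f in fillers
            if f == original or translations.get(f, set()) & source]
```

A replacement survives the filter only if some foreign-language preposition translates to both it and the original, mirroring the constraint described above.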

3.6 Context-Sensitive Spelling Correction

3.6.1 The Task of Context-Sensitive Spelling Correction

Context-sensitive spelling correction, or real-word error/malapropism detection [Golding and Roth, 1999; Hirst and Budanitsky, 2005], is the task of identifying errors where a misspelling results in a real word in the lexicon, e.g., using site when sight or cite was intended. Contextual spell checkers are among the most widely used NLP technologies, as they are included in commercial word-processing software [Church et al., 2007].

For every occurrence of a word in a pre-defined confusion set (like {among, between}), the system selects the member of the set that best fits the context; if the selected word differs from the one actually written, a potential error is flagged.
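As a sketch of this basic checking loop (the `score` argument stands in for any contextual model, such as the SUMLM-style scorer sketched earlier; the confusion sets shown are just the examples from the text):

```python
# Hypothetical sketch of a confusion-set spell checker; `score` is any
# context model that rates a candidate word at a position, e.g. the
# SUMLM-style scorer above with its count lookup partially applied.

CONFUSION_SETS = [{"among", "between"}, {"site", "sight", "cite"}]

def check_real_word_errors(tokens, score):
    """Yield (position, written, suggested) wherever a confusion-set
    member appears but a different member scores higher in context."""
    for i, word in enumerate(tokens):
        for confusion_set in CONFUSION_SETS:
            if word in confusion_set:
                best = max(confusion_set, key=lambda w: score(tokens, i, w))
                if best != word:
                    yield i, word, best
```

In practice the confidence and confusability filters above would be applied on top of this loop before any correction is actually proposed.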
