Large-Scale Semi-Supervised Learning for Natural Language Processing
System                     IN    O1    O2
Baseline                   70.5  66.8  84.1
Dependency model           74.7  82.8  84.4
SVM with N-GM features     89.5  81.6  86.2
SVM with LEX features      81.1  70.9  79.0
SVM with N-GM + LEX        91.6  81.6  87.4

Table 5.5: NC-bracketing accuracy (%). SVM trained on WSJ, tested on WSJ (IN) and out-of-domain Grolier (O1) and Medline (O2).

[Figure 5.7: In-domain NC-bracketer learning curve. Accuracy (%) versus number of labeled examples (10 to 1e3) for N-GM+LEX, N-GM, and LEX.]

All web-based models (including the dependency model) exceed 81.5% on Grolier, which is the level of human agreement [Lauer, 1995b]. N-GM + LEX is highest on Medline, and close to the 88% human agreement [Nakov and Hearst, 2005a]. Out-of-domain, the LEX approach performs very poorly, close to or below the baseline accuracy. With little training data and cross-domain usage, N-gram features are essential.

5.6 Verb Part-of-Speech Disambiguation

Our final task is POS-tagging. We focus on one frequent and difficult tagging decision: the distinction between a past-tense verb (VBD) and a past participle (VBN). For example, in the troops stationed in Iraq, the verb stationed is a VBN; troops is the head of the (noun) phrase. On the other hand, for the troops vacationed in Iraq, the verb vacationed is a VBD and also the head. Some verbs make the distinction explicit (eat has VBD ate, VBN eaten), but most require context for resolution. This is exactly the task we presented in Section 1.3 as an example of where unlabeled data can be useful. Recall that the verb in Bears won is a VBD while the verb in trophy won is a VBN.

Conflating VBN/VBD is damaging because it affects downstream parsers and semantic role labelers. The task is difficult because nearby POS tags can be identical in both cases. When the verb follows a noun, tag assignment can hinge on world-knowledge, i.e., the global lexical relation between the noun and verb (e.g., troops tends to be the object of stationed but the subject of vacationed).⁶ Web-scale N-gram data might help improve the VBN/VBD distinction by providing relational evidence, even if the verb, noun, or verb-noun pair were not observed in training data. This is what we showed in Section 1.3.

⁶ HMM-style taggers, like the fast TnT tagger used on our web corpus, do not use bilexical features, and so perform especially poorly on these cases. One motivation for our work was to develop a fast post-processor to fix VBN/VBD errors.
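As a rough illustration of this kind of relational evidence, the following sketch (in Python, not part of the thesis system) compares aggregate log-counts of a few subject-like and object-like patterns for a noun-verb pair. The pattern lists, the get_count lookup, and the counts themselves are hypothetical stand-ins for queries against a web-scale N-gram collection; the system described below instead feeds such log-counts to a classifier as features.

import math

# Subject-like (VBD) and object/passive-like (VBN) patterns, as in the text.
VBD_PATTERNS = ["{n} that {v}", "{n} have {v}"]
VBN_PATTERNS = ["{n} that were {v}", "{n} {v} by", "{v} some {n}"]

def get_count(ngram):
    """Placeholder for a count lookup in a web-scale N-gram collection."""
    fake_counts = {
        "troops that were stationed": 1200,   # hypothetical counts for illustration
        "troops stationed by": 900,
        "troops that vacationed": 450,
        "troops have vacationed": 300,
    }
    return fake_counts.get(ngram, 0)

def vbd_or_vbn(noun, verb):
    """Compare aggregate log-counts of the two pattern sets for a noun-verb pair."""
    vbd = sum(math.log(1 + get_count(p.format(n=noun, v=verb))) for p in VBD_PATTERNS)
    vbn = sum(math.log(1 + get_count(p.format(n=noun, v=verb))) for p in VBN_PATTERNS)
    return "VBD" if vbd >= vbn else "VBN"

print(vbd_or_vbn("troops", "stationed"))   # VBN: object-like evidence dominates
print(vbd_or_vbn("troops", "vacationed"))  # VBD: subject-like evidence dominates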
We extract nouns followed by a VBN/VBD in the WSJ portion of the Treebank [Marcus et al., 1993], getting 23K training, 1091 development, and 1130 test examples from sections 2-22, 24, and 23, respectively. For out-of-domain data, we get 21K examples from the Brown portion of the Treebank and 6296 examples from tagged Medline abstracts in the PennBioIE corpus [Kulick et al., 2004]. The majority-class baseline is to choose VBD.

5.6.1 Supervised Verb Disambiguation

There are two orthogonal sources of information for predicting VBN/VBD: 1) the noun-verb pair, and 2) the context around the pair. Both N-GM and LEX features encode both these sources.

LEX features

For 1), we use indicators for the noun and verb, the noun-verb pair, whether the verb is on an in-house list of said-verbs (like warned, announced, etc.), whether the noun is capitalized, and whether it is upper-case. Note that in training data, 97.3% of capitalized nouns are followed by a VBD and 98.5% of said-verbs are VBDs. For 2), we provide indicator features for the words before the noun and after the verb.

N-GM features

For 1), we characterize a noun-verb relation via features for the pair's distribution in Google V2. Characterizing a word by its distribution has a long history in NLP; we apply similar techniques to relations, like [Turney, 2006], but with a larger corpus and richer annotations. We extract the 20 most-frequent N-grams that contain both the noun and the verb in the pair. For each of these, we convert the tokens to POS-tags, except for tokens that are among the 100 most frequent unigrams in our corpus, which we include in word form. We mask the noun of interest as N and the verb of interest as V. This converted N-gram is the feature label. The value is the pattern's log-count. A high count for patterns like (N that V), (N have V) suggests the relation is a VBD, while patterns (N that were V), (N V by), (V some N) indicate a VBN. (Again, refer to Section 1.3 for some more example patterns.) As always, the classifier learns the association between patterns and classes.

For 2), we use counts for the verb's context co-occurring with a VBD or VBN tag in our web corpus (again exploiting the fact that our web corpus contains tags). For example, we see whether VBD cases like troops ate or VBN cases like troops eaten are more frequent. Although our corpus contains many VBN/VBD errors, we hope the errors are random enough for aggregate counts to be useful. The context is an N-gram spanning the VBN/VBD. We have log-count features for all five such N-grams in the (previous-word, noun, verb, next-word) quadruple. The log-count is indexed by the position and length of the N-gram. We include separate count features for contexts matching the specific noun and for when the noun token can match any word tagged as a noun.
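To make the pattern-distribution features concrete, here is a minimal sketch of the conversion just described, under assumed inputs: the noun of interest is masked as N, the verb as V, tokens among the most frequent unigrams keep their word form, all other tokens are replaced by POS tags, and the pattern's log-count becomes the feature value. The FREQUENT_UNIGRAMS subset, the pos_tag lookup, and the example N-grams and counts are illustrative assumptions, not the actual Google V2 data or interface.

import math

# Illustrative subset of the "100 most frequent unigrams" kept in word form.
FREQUENT_UNIGRAMS = {"the", "that", "were", "have", "by", "of", "in", "some"}

def pos_tag(token):
    """Placeholder for the POS annotation carried by the tagged web corpus."""
    return {"yesterday": "NN", "several": "JJ"}.get(token, "XX")

def pattern_features(noun, verb, ngram_counts, k=20):
    """Turn the k most frequent (N-gram, count) pairs into {pattern: log-count} features."""
    features = {}
    for ngram, count in sorted(ngram_counts, key=lambda x: -x[1])[:k]:
        pattern = []
        for tok in ngram.split():
            if tok == noun:
                pattern.append("N")            # mask the noun of interest
            elif tok == verb:
                pattern.append("V")            # mask the verb of interest
            elif tok in FREQUENT_UNIGRAMS:
                pattern.append(tok)            # very frequent unigrams stay as words
            else:
                pattern.append(pos_tag(tok))   # all other tokens become POS tags
        features[" ".join(pattern)] = math.log(count)
    return features

# "troops that were stationed in" (count 1200) becomes the feature
# "N that were V in" with value log(1200); "the troops stationed by" becomes "the N V by".
print(pattern_features("troops", "stationed",
                       [("troops that were stationed in", 1200),
                        ("the troops stationed by", 800)]))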