ourselves.¹ We chose Gutenberg and Medline in order to provide challenging, distinct domains from our training corpora. Our Gutenberg corpus consists of out-of-copyright books, automatically downloaded from the Project Gutenberg website.² The Medline data consists of a large collection of online biomedical abstracts. We describe how labeled adjective and spelling examples are created from these corpora in the corresponding sections.

5.2.3 Web-Scale Auxiliary Data

The most widely-used N-gram corpus is the Google 5-gram Corpus [Brants and Franz, 2006]. For our tasks, we also use Google V2: a new N-gram corpus (also with N-grams of length one-to-five) that we created from the same one-trillion-word snapshot of the web as the Google 5-gram Corpus, but with several enhancements. These include: 1) reducing noise by removing duplicate sentences and sentences with a high proportion of non-alphanumeric characters (together filtering about 80% of the source data), 2) pre-converting all digits to the 0 character to reduce sparsity for numeric expressions, and 3) including the part-of-speech (POS) tag distribution for each N-gram. The source data was automatically tagged with TnT [Brants, 2000], using the Penn Treebank tag set. [Lin et al., 2010] provide more details on the N-gram data and N-gram search tools. Other recent projects that have made use of this new data include [Ji and Lin, 2009; Bergsma et al., 2010a; Pitler et al., 2010].

The third enhancement is especially relevant here, as we can use the POS distribution to collect counts for N-grams of mixed words and tags.
For example, we have developed an N-gram search engine that can count how often the adjective unprecedented precedes another adjective in our web corpus (113K times) and how often it follows one (11K times). Thus, even if we haven't seen a particular adjective pair directly, we can use the positional preferences of each adjective in order to order them.

Early web-based models used search engines to collect N-gram counts, and thus could not use capitalization, punctuation, and annotations such as part-of-speech [Kilgarriff and Grefenstette, 2003]. For example, recall that we had to develop fillers to serve as "surrogates for nouns" for the non-referential pronoun detection system presented in Chapter 3, as nouns were not labeled in the data directly. If we had a POS-tagged web corpus, we could have looked up noun counts directly. Using a POS-tagged web corpus goes a long way toward addressing earlier criticisms of web-based NLP.

5.3 Prenominal Adjective Ordering

Prenominal adjective ordering strongly affects text readability. For example, while the unprecedented statistical revolution is fluent, the statistical unprecedented revolution is not. Many NLP systems need to handle adjective ordering robustly. In machine translation, if a noun has two adjective modifiers, they must be ordered correctly in the target language.

¹ http://webdocs.cs.ualberta.ca/~bergsma/Robust/ provides our Gutenberg corpus, a link to Medline, and also the generated examples for both Gutenberg and Medline.
² www.gutenberg.org. All books were just released in 2009 and are thus unlikely to occur in the source data for our N-gram corpus (from 2006). Of course, with removal of sentence duplicates and also N-gram thresholding, the possible presence of a test sentence in the massive source data is unlikely to affect results. [Carlson et al., 2008] reach a similar conclusion and provide some further justification.
Adjective ordering is also needed in Natural Language Generation systems that produce information from databases; for example, to convey information (in sentences) about medical patients [Shaw and Hatzivassiloglou, 1999].

We focus on the task of ordering a pair of adjectives independently of the noun they modify and achieve good performance in this setting. Following the set-up of [Malouf, 2000], we experiment on the 263K adjective pairs Malouf extracted from the British National Corpus (BNC). We use 90% of pairs for training, 5% for testing, and 5% for development. This forms our in-domain data.³

We create out-of-domain examples by tokenizing Medline and Gutenberg (Section 5.2.2), then POS-tagging them with CRFTagger [Phan, 2006]. We create examples from all sequences of two adjectives followed by a noun. Like [Malouf, 2000], we assume that edited text has adjectives ordered fluently. As was the case for preposition selection and spelling correction, adjective ordering also permits the extraction of natural automatic examples, as explained in Chapter 2, Section 2.5.4. We extract 13K and 9.1K out-of-domain pairs from Gutenberg and Medline, respectively.⁴

The input to the system is a pair of adjectives, (a1, a2), ordered alphabetically. The task is to classify this order as correct (the positive class) or incorrect (the negative class).
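The extraction of natural automatic examples just described can be sketched as follows. This is a minimal illustration, not the thesis implementation: the function name, the toy tagged sentence, and the tag-matching logic are assumptions; the real pipeline runs over CRFTagger output for Medline and Gutenberg.

```python
# Sketch: build labeled adjective-ordering examples from POS-tagged text.
# Pairs are alphabetized; the label records whether the observed order
# was already alphabetical (+1, "correct") or not (-1, "incorrect").

def extract_pairs(tagged_sentence):
    """Yield (a1, a2, label) triples from adjective-adjective-noun runs."""
    examples = []
    for i in range(len(tagged_sentence) - 2):
        (w1, t1), (w2, t2), (w3, t3) = tagged_sentence[i:i + 3]
        # Penn Treebank adjective tags start with "JJ"; noun tags with "NN"
        if t1.startswith("JJ") and t2.startswith("JJ") and t3.startswith("NN"):
            w1, w2 = w1.lower(), w2.lower()  # pairs are lower-cased
            a1, a2 = sorted([w1, w2])        # alphabetical order
            label = +1 if (a1, a2) == (w1, w2) else -1
            examples.append((a1, a2, label))
    return examples

sent = [("the", "DT"), ("unprecedented", "JJ"),
        ("statistical", "JJ"), ("revolution", "NN")]
# "unprecedented statistical" is not alphabetical, so the alphabetized
# pair (statistical, unprecedented) receives the negative label.
print(extract_pairs(sent))
```

Because edited text is assumed to be fluently ordered, every extracted triple is a labeled training example and no manual annotation is required.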
Since both classes are equally likely, the majority-class baseline is around 50% on each of the three test sets.

5.3.1 Supervised Adjective Ordering

LEX features

Our adjective ordering model with LEX features is a novel contribution of this work. We begin with two features for each pair: an indicator feature for a1, which gets a feature value of +1, and an indicator feature for a2, which gets a feature value of -1. The parameters of the model are therefore weights on specific adjectives. The higher the weight on an adjective, the more it is preferred in the first position of a pair. If the alphabetic ordering is correct, the weight on a1 should be higher than the weight on a2, so that the classifier returns a positive score. If the reverse ordering is preferred, a2 should receive a higher weight. Training the model in this setting is a matter of assigning weights to all the observed adjectives such that the training pairs are maximally ordered correctly. The feature weights thus implicitly produce a linear ordering of all observed adjectives. The examples can also be regarded as rank constraints in a discriminative ranker [Joachims, 2002]. Transitivity is achieved naturally in that if we correctly order pairs a ≺ b and b ≺ c in the training set, then a ≺ c by virtue of the weights on a and c.

While exploiting transitivity has been shown to improve adjective ordering, there are many conflicting pairs that make a strict linear ordering of adjectives impossible [Malouf, 2000]. We therefore provide an indicator feature for the pair a1a2, so the classifier can memorize exceptions to the linear ordering, breaking strict order transitivity.
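The LEX feature scheme can be sketched as follows. The weights shown are toy values chosen for illustration, not learned ones; in the model they are fit by a discriminative linear classifier over the training pairs.

```python
# Sketch of the LEX representation: per-adjective indicator features
# (+1 for the first position, -1 for the second) plus a pair feature
# that lets the classifier memorize exceptions to the linear ordering.

def lex_features(a1, a2):
    """Features for an alphabetically ordered pair (a1, a2)."""
    return {a1: +1.0,          # first-position adjective
            a2: -1.0,          # second-position adjective
            (a1, a2): 1.0}     # pair indicator for exceptions

def score(weights, a1, a2):
    """Positive score => the alphabetic order 'a1 a2' is predicted correct."""
    return sum(weights.get(f, 0.0) * v
               for f, v in lex_features(a1, a2).items())

# Toy weights: "unprecedented" is preferred in first position more
# strongly than "statistical", so the alphabetic order is rejected
# and "unprecedented statistical" is predicted instead.
w = {"statistical": 0.5, "unprecedented": 1.2}
print(score(w, "statistical", "unprecedented"))  # 0.5 - 1.2 < 0
```

With only the per-adjective features, the weights induce a single linear ordering of all observed adjectives; a large weight on a specific pair feature can override that ordering for just that pair.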
Our classifier thus operates along the lines of rankers in the preference-based setting [Ailon and Mohri, 2008].

³ BNC is not a domain per se (rather a balanced corpus), but has a style and vocabulary distinct from our OOD data.
⁴ Like [Malouf, 2000], we convert our pairs to lower-case. Since the N-gram data includes case, we merge counts from the upper and lower case combinations.