so within some branches of psychology, linguistics and artificial intelligence even today. Manning and Schütze believe that

“much of the skepticism towards probabilistic models for language (and cognition in general) stems from the fact that the well-known early probabilistic models (developed in the 1940s and 1950s) are extremely simplistic. Because these simplistic models clearly do not do justice to the complexity of human language, it is easy to view probabilistic models in general as inadequate.”

The stochastic paradigm became much more influential again after the 1970s and early 1980s when N-gram models were successfully applied to speech recognition by the IBM Thomas J. Watson Research Center [Jelinek, 1976; Bahl et al., 1983] and by James Baker at Carnegie Mellon University [Baker, 1975]. Previous efforts in speech recognition had been rather “ad hoc and fragile, and were demonstrated on only a few specially selected examples” [Russell and Norvig, 2003]. The work by Jelinek and others soon made it apparent that data-driven approaches simply work better. As Hajič and Hajičová [2007] summarize:

“[The] IBM Research group under Fred Jelinek’s leadership realized (and experimentally showed) that linguistic rules and Artificial Intelligence techniques had inferior results even when compared to very simplistic statistical techniques. This was first demonstrated on phonetic baseforms in the acoustic model for a speech recognition system, but later it became apparent that this can be safely assumed almost for every other problem in the field (e.g., Jelinek [1976]). Statistical learning mechanisms were apparently and clearly superior to any human-designed rules, especially those using any preference system, since humans are notoriously bad at estimating quantitative characteristics in a system with many parameters (such as a natural language).”

Probabilistic and machine learning techniques such as decision trees, clustering, EM, and maximum entropy gradually became the foundation of speech processing [Fung and Roth, 2005]. The successes in speech then inspired a range of empirical approaches to natural language processing. Simple statistical techniques were soon applied to part-of-speech tagging, parsing, machine translation, word-sense disambiguation, and a range of other NLP tasks. While there was only one statistical paper at the ACL conference in 1990, virtually all papers in ACL today employ statistical techniques [Hajič and Hajičová, 2007].

Of course, the fact that statistical techniques currently work better is only partly responsible for their rise to prominence. There was a fairly large gap in time between their proven performance on speech recognition and their widespread acceptance in NLP. Advances in computer technology and the greater availability of data resources also played a role. According to Church and Mercer [1993]:

“Back in the 1970s, the more data-intensive methods were probably beyond the means of many researchers, especially those working in universities... Fortunately, as a result of improvements in computer technology and the increasing availability of data due to numerous data collection efforts, the data-intensive methods are no longer restricted to those working in affluent industrial laboratories.”

Two other important developments were the practical application and commercialization of NLP algorithms and the emphasis that was placed on empirical evaluation. A greater
emphasis on “deliverables and evaluation” [Church and Mercer, 1993] created a demand for robust techniques, empirically validated on held-out data. Performance metrics from speech recognition and information retrieval were adopted in many NLP sub-fields. People stopped evaluating on their training set, and started using standard test sets. Machine learning researchers, always looking for new sources of data, began evaluating their approaches on natural language, and publishing at high-impact NLP conferences.

Flexible discriminative ML algorithms like maximum entropy [Berger et al., 1996] and conditional random fields [Lafferty et al., 2001] arose as natural successors to earlier statistical techniques like naive Bayes and hidden Markov models (generative approaches; Section 2.3.3). Indeed, since machine learning algorithms, especially discriminative techniques, could be specifically tuned to optimize a desired performance metric, ML systems achieved superior performance in many competitions and evaluations. This has led to a shift in the overall speech and language processing landscape. Originally, progress in statistical speech processing inspired advances in NLP; today, many ML algorithms (such as structured perceptrons and support vector machines) are first developed for NLP and information retrieval applications and only later applied to speech tasks [Fung and Roth, 2005].

In the initial rush to adopt statistical techniques, many NLP tasks were decomposed into sub-problems that could be solved with well-understood and readily-available binary classifiers. In recent years, NLP systems have adopted more sophisticated ML techniques. These algorithms are now capable of producing an entire annotation (like a parse tree or translation) as a single global output, and suffer less from the propagation of errors common in a pipelined, local-decision approach. These so-called “structured prediction” techniques include conditional random fields [Lafferty et al., 2001], structured perceptrons [Collins, 2002], structured SVMs [Tsochantaridis et al., 2004], and rerankers [Collins and Koo, 2005]. Others have explored methods to produce globally-consistent structured output via linear programming formulations [Roth and Yih, 2004]. While we have also had success in using global optimization techniques like integer linear programming [Bergsma and Kondrak, 2007b] and re-ranking [Dou et al., 2009], the models used in this dissertation are relatively simple linear classifiers, which we discuss in the following section. This dissertation focuses on a) developing better features and b) automatically producing more labeled examples. The advances we make are also applicable when using more sophisticated learning methods.

Finally, we note that recent years have also seen a strong focus on the development of semi-supervised learning techniques for NLP. This is also the focus of this dissertation. We describe semi-supervised approaches more generally in Section 2.5.

2.2 The Linear Classifier

A linear classifier is a very simple concept. We explain it in the context of text categorization, which will help make the equations more concrete for the reader. Text categorization is the problem of deciding whether an input document is a member of a particular category or not.
For example, we might want to classify a document as being about sports or not.

Let’s refer to the input as d. So for text categorization, d is a document. We want to decide if d is about sports or not. On what shall we base this decision? We always base the decision on some features of the input. For a document, we base the decision on the words in the document. We define a feature function Φ(d). This function takes the input d and maps it to a vector of feature values.
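To make the feature function and the resulting decision rule concrete, the following is a minimal sketch in Python, not the implementation used in this dissertation. It uses binary bag-of-words features for Φ(d) and a linear decision rule over those features; the vocabulary, the hand-set weights and bias, and the function names phi and classify are illustrative assumptions.

```python
# Minimal sketch of a linear classifier for text categorization (sports vs. not sports).
# The vocabulary, weights, and bias below are illustrative assumptions, set by hand.

def phi(document, vocabulary):
    """Feature function Phi(d): map a document to a binary bag-of-words vector."""
    words = set(document.lower().split())
    return [1.0 if w in words else 0.0 for w in vocabulary]

def classify(document, weights, bias, vocabulary):
    """Linear decision rule: predict the 'sports' class if w . Phi(d) + b > 0."""
    features = phi(document, vocabulary)
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return score > 0

vocabulary = ["game", "score", "team", "election", "market"]
weights = [1.2, 0.8, 1.0, -0.9, -1.1]
bias = -0.5

print(classify("The team tied the game on a late score", weights, bias, vocabulary))  # True
print(classify("The market fell after the election", weights, bias, vocabulary))      # False
```

In practice the weights would not be set by hand but learned from labeled training documents; the sketch only illustrates how a document’s word features combine linearly to produce a yes/no decision.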