word alignment [Kondrak et al., 2003], sentence alignment [Simard et al., 1992; Church, 1993; McEnery and Oakes, 1996; Melamed, 1999] and learning translation lexicons [Mann and Yarowsky, 2001; Koehn and Knight, 2002]. The related task of identifying transliterations has also received much recent attention [Klementiev and Roth, 2006; Zelenko and Aone, 2006; Yoon et al., 2007; Jiampojamarn et al., 2010]. Extending dictionaries with automatically-acquired knowledge of cognates and transliterations can improve machine translation systems [Knight et al., 1995]. Also, cognates have been used to help assess the readability of a foreign language text by new language learners [Uitdenbogerd, 2005]. Developing automatic ways to identify these cognates is thus a prerequisite for a robust automatic readability assessment.

We propose an alignment-based, discriminative approach to string similarity and we evaluate this approach on the task of cognate identification. Section 7.2 describes previous approaches and their limitations. In Section 7.3, we explain our technique for automatically creating a cognate-identification training set. A novel aspect of this set is the inclusion of competitive counter-examples for learning. Section 7.4 shows how discriminative features are created from a character-based, minimum-edit-distance alignment of a pair of strings. In Section 7.5, we describe our bitext and dictionary-based experiments on six language pairs, including three based on non-Roman alphabets. In Section 7.6, we show significant improvements over traditional approaches, as well as significant gains over more recent techniques by Ristad and Yianilos [1998], Tiedemann [1999], Kondrak [2005], and Klementiev and Roth [2006].

7.2 Related Work

String similarity is a fundamental concept in a variety of fields and hence a range of techniques have been developed.
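Three of the simplest such techniques, used later in this chapter as baselines, can be sketched in Python. These are illustrative implementations of the standard definitions (normalized edit distance, longest common subsequence ratio, and longest common prefix ratio), not the thesis code:

```python
# Illustrative sketches of three classic string-similarity baselines;
# function names and code are mine, not the thesis implementation.

def edit_distance(s, t):
    """Levenshtein distance: minimum insertions, deletions, substitutions."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (cs != ct)))   # substitution
        prev = cur
    return prev[-1]

def ned(s, t):
    """Normalized edit distance: edit distance over the longer length."""
    return edit_distance(s, t) / max(len(s), len(t))

def lcs_length(s, t):
    """Length of the longest common subsequence of s and t."""
    prev = [0] * (len(t) + 1)
    for cs in s:
        cur = [0]
        for j, ct in enumerate(t, 1):
            cur.append(prev[j - 1] + 1 if cs == ct
                       else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def lcsr(s, t):
    """Longest common subsequence ratio (LCSR)."""
    return lcs_length(s, t) / max(len(s), len(t))

def prefix(s, t):
    """Longest common prefix ratio (PREFIX)."""
    p = 0
    for cs, ct in zip(s, t):
        if cs != ct:
            break
        p += 1
    return p / max(len(s), len(t))
```

For example, `lcsr("colour", "color")` is 5/6, while `prefix("light", "licht")` is 2/5; lower NED and higher LCSR/PREFIX both indicate greater similarity.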
We focus on approaches that have been applied to words, i.e., uninterrupted sequences of characters found in natural language text. The most well-known measure of the similarity of two strings is the edit distance or Levenshtein distance [Levenshtein, 1966]: the number of insertions, deletions and substitutions required to transform one string into another. In our experiments, we use normalized edit distance (NED): edit distance divided by the length of the longer word. Other popular measures include Dice's Coefficient (DICE) [Adamson and Boreham, 1974], and the length-normalized measures longest common subsequence ratio (LCSR) [Melamed, 1999], the length of the longest common subsequence divided by the length of the longer word (used by [Melamed, 1998]), and longest common prefix ratio (PREFIX) [Kondrak, 2005], the length of the longest common prefix divided by the longer word length (a four-letter prefix match was used by [Simard et al., 1992]). These baseline approaches have the important advantage of not requiring training data. We can also include in the non-learning category Kondrak's [2005] longest common subsequence formula (LCSF), a probabilistic measure designed to mitigate LCSR's preference for shorter words.

Although simple to use, the untrained measures cannot adapt to the specific spelling differences between a pair of languages. Researchers have therefore investigated adaptive measures that are learned from a set of known cognate pairs. [Ristad and Yianilos, 1998] developed a stochastic transducer version of edit distance learned from unaligned string pairs. [Mann and Yarowsky, 2001] saw little improvement over edit distance when applying this transducer to cognates, even when filtering the transducer's probabilities into different weight classes to better approximate edit distance. [Tiedemann, 1999] used various measures to learn the recurrent spelling changes between English and Swedish, and used these changes to re-weight LCSR to identify more cognates, with modest performance improvements. [Mulloni and Pekar, 2006] developed a similar technique to improve NED for English/German.

Essentially, all these techniques improve on the baseline approaches by using a set of positive (true) cognate pairs to re-weight the costs of edit operations or the score of sequence matches. Ideally, we would prefer a more flexible approach that can learn positive or negative weights on substring pairings in order to better identify related strings. One system that can potentially provide this flexibility is the discriminative string-similarity approach to named-entity transliteration by [Klementiev and Roth, 2006]. Although not compared to other similarity measures in the original paper, we show that this discriminative technique can strongly outperform traditional methods on cognate identification.

Unlike many recent generative systems, the Klementiev and Roth approach does not exploit the known positions in the strings where the characters match. For example, [Brill and Moore, 2000] combine a character-based alignment with the expectation maximization (EM) algorithm to develop an improved probabilistic error model for spelling correction. [Rappoport and Levent-Levi, 2006] apply this approach to learn substring correspondences for cognates. [Zelenko and Aone, 2006] recently showed a [Klementiev and Roth, 2006]-style discriminative approach to be superior to alignment-based generative techniques for name transliteration. Our work successfully uses the alignment-based methodology of the generative approaches to enhance the feature set for discriminative string similarity. In work concurrent to our original contribution in [Bergsma and Kondrak, 2007a], Yoon et al.
[2007] apply a discriminative approach to recognizing transliterations at the phoneme level. They include binary features over aligned phoneme pairs, but do not use features over phoneme subsequences as would be the analog of our work.

Finally, [Munteanu and Marcu, 2005] propose a similar approach to detect sentences that are translations in non-parallel corpora. The heart of their algorithm is a classifier that inspects a pair of sentences and decides whether they are translations. Like us, they also align the sentences and compute features based on the alignment, but they use more general features (e.g., the number of words in a row that are aligned) rather than, say, phrase pairs that are consistent with the alignment, which would be the direct analogue of our method. Although we originally developed our approach unaware of the connection to this work, the two approaches ultimately face many similar issues and developed similar solutions. In particular, they also automatically generate training pairs from both true sentence translations (positives) and competitive counter-examples (negatives). Since they can also generate many examples using this technique, it is surprising they did not also explore much richer, finer-grained features like those explored in this chapter.

7.3 The Cognate Identification Task

Given two string lists, E and F, the task of cognate identification is to find all pairs of strings (e, f) that are cognate. In other similarity-driven applications, E and F could be misspelled and correctly spelled words, or the orthographic and the phonetic representations of words, etc. The task remains to link strings with common meaning in E and F using only the string similarity measure.

We can facilitate the application of string similarity to cognates by using a definition of cognation not dependent on etymological analysis. For example, [Mann and Yarowsky,
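The task setting of Section 7.3 can be made concrete with a minimal sketch: score every cross-list candidate pair with a string-similarity measure and keep the high-scoring pairs. Here Python's standard-library `difflib` ratio serves as a stand-in measure, and the 0.58 threshold and word lists are illustrative only; the chapter's experiments instead use NED, LCSR, PREFIX, and the learned discriminative measure:

```python
# Minimal sketch of cognate identification over two word lists E and F,
# using only a string-similarity score. The measure, threshold, and toy
# data are illustrative assumptions, not the thesis setup.
from difflib import SequenceMatcher

def similarity(e, f):
    """Similarity in [0, 1]; higher means the strings are more alike."""
    return SequenceMatcher(None, e, f).ratio()

def find_cognates(E, F, threshold=0.58):
    """Return all cross-list pairs (e, f) scoring above the threshold."""
    return [(e, f) for e in E for f in F if similarity(e, f) > threshold]

# Spanish/English toy lists: only the orthographically close pair survives.
pairs = find_cognates(["color", "noche"], ["colour", "night", "milk"])
```

Note that a purely orthographic measure links "color"/"colour" but misses dissimilarly spelled cognate pairs such as "noche"/"night", which is precisely the limitation the adaptive, learned measures of this chapter aim to address.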