Large-Scale Semi-Supervised Learning for Natural Language ...


Chapter 1

Introduction

Natural language processing (NLP) is a field that develops computational techniques for analyzing human language. NLP provides the algorithms for spelling correction, speech recognition, and automatic translation that are used by millions of people every day.

Recent years have seen an explosion in the availability of language in the form of electronic text. Web pages, e-mail, search-engine queries, and text-messaging have created a staggering and ever-increasing volume of language data. Processing this data is a great challenge. Users of the Internet want to find the right information quickly in a sea of irrelevant pages. Governments, businesses, and hospitals want to discover important trends and patterns in their unstructured textual records.

The challenge of unprecedented volumes of data also presents a significant opportunity. Online text is one of the largest and most diverse bodies of linguistic evidence ever compiled. We can use this evidence to train and test broad and powerful language-processing tools. In this dissertation, I explore ways to extract meaningful statistics from huge volumes of raw text, and I use these statistics to create intelligent NLP systems. Techniques from machine learning play a central role in this work; machine learning provides principled ways to combine linguistic intuitions with evidence from big data.

1.1 What NLP Systems Do

Before we discuss exactly how unlabeled data can help improve NLP systems, it is important to clarify exactly what modern NLP systems do and how they work. NLP systems take sequences of words as input and automatically produce useful linguistic annotations as output. Suppose the following sentence exists on the web somewhere:

• “The movie sucked.”

Suppose you work for J.D. Power and Associates Web Intelligence Division.
You create systems that automatically analyze blogs and other web pages to find out what people think about particular products, and then you sell this information to the producers of those products (and occasionally surprise them with the results). You might want to annotate the whole sentence for its sentiment: whether the sentence is positive or negative in its tone:

• “The movie sucked → 〈Sentiment=NEGATIVE〉”

Or suppose you are Google, and you wish to translate this sentence for a German user. The translation of the word sucked is ambiguous. Here, it likely does not mean, “to be drawn in by establishing a partial vacuum,” but rather, “to be disagreeable.” So another potentially useful annotation is word sense:

• “The movie sucked → The movie sucked〈Sense=IS-DISAGREEABLE〉.”

More directly, we might consider the German translation itself as the annotation:

• “The movie sucked → Der Film war schrecklich.”

Finally, if we’re the company Powerset, our stated objective is to produce “parse trees” for the entire web as a preprocessing step for our search engine. One part of parsing is to label the syntactic category of each word (i.e., which are nouns, which are verbs, etc.). The part-of-speech annotation might look as follows:

• “The movie sucked → The\DT movie\NN sucked\VBD”

where DT means determiner, NN means a singular or mass noun, and VBD means a past-tense verb.¹ Again, note the potential ambiguity for the tag of sucked; it could also be labeled VBN (verb, past participle). For example, sucked is a VBN in the phrase, “the movie sucked into the vacuum cleaner was destroyed.”

These outputs are just a few of the possible annotations that can be produced for textual natural language input. Other branches and fields of NLP may operate over speech signals rather than actual text. Also, in the natural language generation (NLG) community, the input may not be text, but information in another form, with the desired output being grammatically-correct English sentences. Most of the work in the NLP community, however, operates exactly in this framework: text comes in, annotations come out. But how does an NLP system produce these annotations automatically?

1.2 Writing Rules vs. Machine Learning

One might imagine writing some rules to produce these annotations automatically. For part-of-speech tagging, we might say, “if the word is movie, then label the word as NN.” These word-based rules fail when the word can have multiple tags (e.g. saw, wind, etc. can be nouns or verbs).
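A word-based rule table of this kind is easy to write down and just as easy to break. As a minimal sketch (the tiny lexicon here is invented for illustration, not taken from any real tagger):

```python
# A naive word-based tagger: exactly one fixed tag per known word.
WORD_TAGS = {"the": "DT", "movie": "NN", "saw": "NN", "wind": "NN"}

def tag_word(word):
    # Ambiguous words like "saw" and "wind" are forced into a single tag,
    # and any word outside the lexicon gets no useful tag at all.
    return WORD_TAGS.get(word.lower(), "UNKNOWN")

print([tag_word(w) for w in ["The", "movie", "sucked"]])
```

Here "sucked" falls outside the lexicon entirely, and "saw" would be tagged NN even when used as a verb, which is exactly the failure mode described above.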
Also, no matter how many rules we write, there will always be new or rare words that didn’t make our rule set. For ambiguous words, we could try to use rules that depend on the word’s context. Such a rule might be, “if the previous word is The and the next word ends in -ed, then label as NN.” But this rule would fail for “the Oilers skated,” since here the tag is not NN but NNPS: a plural proper noun. We could change the rule to: “if the previous word is The and the next word ends in -ed, and the word is lower-case, then label as NN.” But this would fail for “The begrudgingly viewed movie,” where now “begrudgingly” is an adverb, not a noun. We might imagine adding many, many more rules.

Also, we might wish to attach scores to our rules, to resolve conflicting rules in a principled way. We could say, “if the word is wind, give the score for being a NN a ten and for being a VB a two,” and this score could be combined with other context-based scores, to produce a different cumulative score for each possible tag. The highest-scoring tag would be taken as the output.

¹ Refer to Appendix A for definitions and examples from the Penn Treebank tag set, the most commonly-used part-of-speech tag set.
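The scored-rule idea can be sketched concretely. In this toy example (the particular rules and weights are invented for illustration), each rule votes for a tag with some weight, the votes are summed per tag, and the highest-scoring tag wins:

```python
# Sketch of scored tagging rules: rules add weighted votes for tags,
# and the tag with the highest cumulative score is chosen.

def score_tags(word, prev_word, next_word):
    scores = {}

    def vote(tag, amount):
        scores[tag] = scores.get(tag, 0) + amount

    # Word-based rule: "wind" is usually a noun, sometimes a verb.
    if word == "wind":
        vote("NN", 10)
        vote("VB", 2)
    # Context rule: after "The", with a following -ed word, prefer NN...
    if prev_word == "The" and next_word is not None and next_word.endswith("ed"):
        vote("NN", 5)
    # ...but a capitalized word suggests a proper noun, which can outvote it.
    if word[0].isupper():
        vote("NNPS" if word.endswith("s") else "NNP", 6)

    return max(scores, key=scores.get) if scores else "NN"

print(score_tags("wind", "the", "blew"))      # only the word-based rule fires
print(score_tags("Oilers", "The", "skated"))  # context and capitalization rules conflict
```

Note how the second call resolves the conflict between the context rule (voting NN) and the capitalization rule (voting NNPS) purely by comparing cumulative scores, rather than by hand-ordering the rules.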
