
[Daumé III, 2007] Hal Daumé III. Frustratingly easy domain adaptation. In ACL, 2007.

[Denis and Baldridge, 2007] Pascal Denis and Jason Baldridge. Joint determination of anaphoricity and coreference using integer programming. In NAACL-HLT, 2007.

[Dou et al., 2009] Qing Dou, Shane Bergsma, Sittichai Jiampojamarn, and Grzegorz Kondrak. A ranking approach to stress prediction for letter-to-phoneme conversion. In ACL-IJCNLP, 2009.

[Dredze et al., 2008] Mark Dredze, Koby Crammer, and Fernando Pereira. Confidence-weighted linear classification. In ICML, 2008.

[Duda and Hart, 1973] Richard O. Duda and Peter E. Hart. Pattern Classification and Scene Analysis. John Wiley & Sons, 1973.

[Erk, 2007] Katrin Erk. A simple, similarity-based model for selectional preference. In ACL, 2007.

[Etzioni et al., 2005] Oren Etzioni, Michael Cafarella, Doug Downey, Ana-Maria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. Unsupervised named-entity extraction from the web: an experimental study. Artif. Intell., 165(1), 2005.

[Evans, 2001] Richard Evans. Applying machine learning toward an automatic classification of it. Literary and Linguistic Computing, 16(1), 2001.

[Even-Zohar and Roth, 2000] Yair Even-Zohar and Dan Roth. A classification approach to word prediction. In NAACL, 2000.

[Evert, 2004] Stefan Evert. Significance tests for the evaluation of ranking methods. In COLING, 2004.

[Fan et al., 2008] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. JMLR, 9:1871–1874, 2008.

[Felice and Pulman, 2007] Rachele De Felice and Stephen G. Pulman. Automatically acquiring models of preposition use. In ACL-SIGSEM Workshop on Prepositions, 2007.

[Fleischman et al., 2003] Michael Fleischman, Eduard Hovy, and Abdessamad Echihabi. Offline strategies for online question answering: answering questions before they are asked. In ACL, 2003.

[Fung and Roth, 2005] Pascale Fung and Dan Roth. Guest editors' introduction: Machine learning in speech and language technologies. Machine Learning, 60(1-3):5–9, 2005.

[Gale et al., 1992] William A. Gale, Kenneth W. Church, and David Yarowsky. One sense per discourse. In DARPA Speech and Natural Language Workshop, 1992.

[Gamon et al., 2008] Michael Gamon, Jianfeng Gao, Chris Brockett, Alexandre Klementiev, William B. Dolan, Dmitriy Belenko, and Lucy Vanderwende. Using contextual speller techniques and language modeling for ESL error correction. In IJCNLP, 2008.

[Ge et al., 1998] Niyu Ge, John Hale, and Eugene Charniak. A statistical approach to anaphora resolution. In Proceedings of the Sixth Workshop on Very Large Corpora, 1998.

[Gildea, 2001] Dan Gildea. Corpus variation and parser performance. In EMNLP, 2001.

[Golding and Roth, 1999] Andrew R. Golding and Dan Roth. A Winnow-based approach to context-sensitive spelling correction. Mach. Learn., 34(1-3):107–130, 1999.

[Graff, 2003] David Graff. English Gigaword. LDC2003T05, 2003.
