Friday Evening Posters 3023–3029
…primary experiment used cohort pairs (windmill/window), familiarizing only one member in a given voice. Then eye movements were monitored while subjects identified words from a screen containing target, cohort, and unrelated pictures. When hearing the familiar voice, normalization models predict heightened early activation for target and cohort, since experience with wind- applies to both. Episodic models predict bias only toward words previously heard. The fixations roughly followed an episodic pattern, but only after the disambiguation point (/m/ in windmill). Thus, contrary to both models, indexical information may facilitate perception of sublexical units (not words).
(3023)
Prosodic Influences on Segmental Context Effects: An Analysis of Neural Dynamics. DAVID W. GOW, JR. & JENNIFER A. SEGAWA, Massachusetts General Hospital—The structure of spoken language is organized around a hierarchy of units ranging from feature cues to words, phrases, sentences, and discourses. Gow's (2003) feature cue parsing theory suggests that perceptual grouping or unitization processes produce progressive and regressive perceptual context effects in the perception of assimilated speech. In the present work, we explore the role of higher-order prosodic boundaries in this unitization process. We examined the neural dynamics that produce or block assimilation context effects, using a multimodal imaging strategy involving fMRI, MEG, EEG, and anatomical MRI data and analyses of gamma phase locking and Granger causation. These analysis tools allowed us to identify activation dependencies between ROIs making up a large distributed cell assembly responsible for different elements of spoken language processing. The results are discussed in the context of the general problems of unitization and the integration of multiple representations in speech perception.
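For readers unfamiliar with the technique, the sketch below shows what a Granger-causality test between two ROI time series looks like in practice. It is a minimal illustration on simulated signals using the statsmodels library, not the authors' actual pipeline; all variable names, lags, and parameters are assumptions.

```python
# Minimal sketch: does one simulated ROI "Granger-cause" another?
# Illustrative only; not the analysis pipeline used in the abstract.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500  # time points, e.g., samples of ROI activity within an epoch

# Simulate a "source" ROI and a "target" ROI that lags it by 2 samples.
source = rng.standard_normal(n)
target = 0.6 * np.concatenate([np.zeros(2), source[:-2]]) \
         + 0.4 * rng.standard_normal(n)

# grangercausalitytests expects a two-column array and tests whether
# the second column helps predict the first beyond its own history.
data = np.column_stack([target, source])
results = grangercausalitytests(data, maxlag=4)
```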
(3024)
The Role of Orthography in Spoken Word Recognition. LARISSA J. RANBOM & CYNTHIA M. CONNINE, Binghamton University (sponsored by Cynthia M. Connine)—Two experiments investigated the role of orthography in the representation and processing of spoken words. The experiments capitalized on English spelling conventions, which can include letters that are not pronounced in the spoken form (e.g., the t in castle). Processing of silent-letter words pronounced with and without the silent letter (e.g., castle pronounced with or without a /t/) was compared with that of control words with no silent letter (e.g., hassle pronounced with or without a /t/). In Experiment 1, a same/different task for words pronounced correctly or with the inserted phoneme showed greater confusability for the silent-letter words (i.e., when the inserted phoneme matched the orthography) than for the controls. In Experiment 2, equivalent priming effects for the correct and segment-added pronunciations were found only for silent-letter words. We suggest that phonological representations computed from an orthographic form are represented in the lexicon and are active during spoken word recognition.
(3025)
Years of Exposure to English Predicts Perception and Production of /r/ and /l/ by Native Speakers of Japanese. ERIN M. INGVALSON, Carnegie Mellon University, JAMES L. MCCLELLAND, Stanford University, & LORI L. HOLT, Carnegie Mellon University—Length of residency (LOR) in a second-language (L2) environment is a reliable predictor of overall L2 proficiency. We examined whether LOR in an English-speaking society would predict native Japanese (NJ) speakers' proficiency with English /r/ and /l/, notoriously difficult sounds for NJ speakers. NJ participants with under 2, 2–5, or 10+ years of residency were assessed on perception and production of /r/ and /l/, plus other proficiency and exposure variables. Of interest was the onset frequency of the third formant (F3), the most reliable cue differentiating English /r/ and /l/ but one that NJ speakers find very difficult to use. Longer LOR was associated with greater reliance on F3, and F3 weighting was correlated with the rated English-likeness of productions. Reliance on F3 approached native levels for some listeners in the longest LOR group, suggesting that adult plasticity extends to some of the most difficult aspects of L2 proficiency.
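The abstract does not state how F3 reliance was quantified. One common approach is to regress listeners' categorization responses on the acoustic cues and compare coefficient magnitudes; the sketch below illustrates that approach on simulated data. The cues, weights, and reliance index are illustrative assumptions, not the authors' method.

```python
# Minimal sketch: estimating perceptual cue weights for /r/-/l/
# categorization via logistic regression. Simulated data; illustrative
# only -- not the measure used in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200  # trials for one hypothetical listener

# Standardized onset F2 and F3 values for hypothetical /r/-/l/ stimuli.
f2 = rng.standard_normal(n)
f3 = rng.standard_normal(n)

# Simulate a listener who weights F3 heavily (a native-like pattern):
# low F3 onset favors /r/ (coded 1), high F3 favors /l/ (coded 0).
p_r = 1.0 / (1.0 + np.exp(0.3 * f2 + 2.5 * f3))
resp = (rng.random(n) < p_r).astype(int)

model = LogisticRegression().fit(np.column_stack([f2, f3]), resp)
w_f2, w_f3 = model.coef_[0]

# One simple index of F3 reliance that could be correlated with LOR:
# the relative magnitude of the F3 coefficient.
print(abs(w_f3) / (abs(w_f2) + abs(w_f3)))
```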
(3026)
Apparent Lexical Compensation for Coarticulation Effects Are Due to Experimentally Induced Biases. ALEXANDRA JESSE & JAMES M. MCQUEEN, Max Planck Institute for Psycholinguistics, & DENNIS NORRIS, MRC Cognition and Brain Sciences Unit—An empirical pillar of the interactive view of speech perception (McClelland et al., 2006) is the evidence of apparent top-down influence of lexical knowledge on compensation for coarticulation. Magnuson et al. (2003), for example, showed that lexical knowledge about an ambiguous fricative ("sh" but not "s" forms a word in "bru?") appeared to change perception of a following ambiguous plosive (more "t" responses in "bru?-?apes"), just as hearing an unambiguous fricative would. A series of phonetic-categorization experiments with the Magnuson et al. materials shows that their result was due to experimentally induced response biases rather than to lexical knowledge. The direction of the effect varied as a function of the probability of word (bliss/brush) and nonword (blish/bruss) trials during practice. With these probabilities equated, no lexical compensation-for-coarticulation effect was found. These findings suggest that speech perception is not interactive, and that listeners are sensitive to biases created by only 16 practice trials.
(3027)
Gradient Sensitivity to Continuous Acoustic Detail: Avoiding the Lexical Garden-Path. BOB MCMURRAY, University of Iowa, & RICHARD N. ASLIN & MICHAEL K. TANENHAUS, University of Rochester (sponsored by Richard N. Aslin)—Spoken word recognition is gradiently sensitive to cues like voice onset time (VOT; McMurray, Tanenhaus, & Aslin, 2002). We ask how long such detail is retained and whether it facilitates recognition. A lexical garden-path paradigm used pairs such as barricade/parakeet. If voicing is ambiguous, the system must wait a considerable time to identify the referent. VOT was varied from the target (barricade) toward a garden-path-inducing nonword (parricade). If VOT is retained, it could facilitate reactivation at the point of disambiguation (POD); if it is lost, there should be no effect of VOT on disambiguation. Using eyetracking, we examined cases in which listeners overtly committed to (fixated) the competitor prior to the POD. Recovery time was linearly related to VOT. This result replicated when the screen did not contain the competitor. Thus, sensitivity to continuous acoustic detail persists and can facilitate online recognition. TRACE can model this only under restricted parameter sets, suggesting important constraints on the model.
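As a worked illustration of the reported linear relation, the sketch below fits recovery time to VOT on simulated data. The continuum steps, trial counts, effect size, and noise level are assumptions for illustration only, not the study's data.

```python
# Minimal sketch: a linear fit of recovery time to VOT, the kind of
# relation the abstract reports. All numbers are simulated.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical VOT steps (ms) along a barricade-to-parricade continuum,
# with 20 simulated trials per step.
vot = np.repeat(np.arange(0.0, 40.0, 5.0), 20)

# Simulated recovery times (ms): a linear effect of VOT plus noise.
recovery = 200.0 + 2.5 * vot + rng.normal(0.0, 15.0, vot.size)

# np.polyfit with deg=1 returns [slope, intercept].
slope, intercept = np.polyfit(vot, recovery, deg=1)
r = np.corrcoef(vot, recovery)[0, 1]
print(f"recovery ~= {slope:.2f} * VOT + {intercept:.1f} ms (r = {r:.2f})")
```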
(3028)
Word Segmentation in the "Real World" of Conversational Speech. JOSEPH D. W. STEPHENS & MARK A. PITT, Ohio State University—Language comprehension requires segmentation of continuous speech into discrete words. An understanding of the problems faced by the perceptual system during segmentation can be gained from analyses of corpora of spontaneous speech. Analyses of phonetic transcriptions of the Buckeye Corpus were used to define the problem in detail. The results revealed that in some high-frequency environments (e.g., following schwa), acoustic and phonetic information was ambiguous with respect to the location of word boundaries. Experiments were then performed to investigate how listeners resolve these ambiguities. In line with the work of Mattys et al. (2005), the results suggest that contextual information drives segmentation because strong acoustic cues to word boundaries are often absent.
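One way a boundary-ambiguity analysis of this kind might be set up is sketched below: count how often the diphone spanning a word boundary also occurs word-internally. The toy transcriptions and the ambiguity measure are assumptions for illustration, not the authors' Buckeye Corpus analysis.

```python
# Minimal sketch: tallying cross-boundary vs. word-internal diphones in
# phonetically transcribed utterances. Toy data; illustrative only.
from collections import Counter

# Utterances as lists of words, each word a list of phone symbols
# ("AX" stands in for schwa).
utterances = [
    [["DH", "AX"], ["K", "AE", "T"]],   # "the cat"
    [["AX", "B", "AW", "T"]],           # "about" (word-internal AX+B)
    [["DH", "AX"], ["B", "OY"]],        # "the boy"
]

boundary = Counter()   # diphones spanning a word boundary
internal = Counter()   # diphones inside a word

for utt in utterances:
    for i, word in enumerate(utt):
        for a, b in zip(word, word[1:]):
            internal[(a, b)] += 1
        if i + 1 < len(utt):
            boundary[(word[-1], utt[i + 1][0])] += 1

# A diphone is ambiguous to the extent it occurs both across and within
# word boundaries; here AX+B appears in both positions.
for diphone, n_cross in boundary.items():
    print(diphone, "across:", n_cross, "within:", internal[diphone])
```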
(3029)
Effects of Nonspeech Contexts on Speech Categorization: A Critical Examination. NAVIN VISWANATHAN, JAMES S. MAGNUSON, & CAROL A. FOWLER, University of Connecticut and Haskins Laboratories—On the general auditory account of speech perception, compensation for coarticulation results from spectral contrast rather than …