Abstracts 2005 - The Psychonomic Society

Saturday Morning Papers 195–199

10:20–10:35 (195)
Rapid Lexical–Semantic Integration in Speech Processing: Evidence From Pause Detection. SVEN L. MATTYS & CHRISTOPHER W. PLEYDELL-PEARCE, University of Bristol—In this study, we use pause detection (PD) as a new tool for studying the online integration of lexical and semantic information during speech comprehension. When listeners were asked to detect 200-msec pauses inserted into the last word of a spoken sentence, their detection latencies were influenced by the lexical–semantic information provided by the sentence. Pauses took longer to detect when they were inserted within a word that had multiple potential endings in the context of the sentence than when inserted within words with a unique ending. An event-related potential variant of the PD procedure revealed brain correlates of pauses as early as 101–125 msec following pause onset and patterns of lexical–semantic integration that mirrored those obtained with PD within 160 msec. Thus, both the behavioral and the electrophysiological responses to pauses suggest that lexical and semantic processes are highly interactive and that their integration occurs rapidly during speech comprehension.
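[Editorial illustration] A minimal sketch, not the authors' stimulus code, of how the PD manipulation described above could be implemented: splicing a 200-msec silent pause into a digitized sentence. The sample rate, insertion point, and function name are assumptions.

```python
import numpy as np

def insert_pause(signal: np.ndarray, sample_rate: int,
                 insert_at_sec: float, pause_ms: float = 200.0) -> np.ndarray:
    """Splice a silent pause into a mono waveform at a given time point.

    Hypothetical illustration of the pause-detection (PD) manipulation:
    the abstract specifies 200-msec pauses inserted into the last word
    of a spoken sentence; the insertion point here is assumed.
    """
    cut = int(insert_at_sec * sample_rate)               # sample index of the splice
    silence = np.zeros(int(pause_ms / 1000 * sample_rate),
                       dtype=signal.dtype)               # 200 ms of silence
    return np.concatenate([signal[:cut], silence, signal[cut:]])

# Example (values invented): insert a pause 1.8 s into a 16-kHz recording.
# stimulus = insert_pause(sentence_wave, 16000, insert_at_sec=1.8)
```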

10:40–10:55 (196)
Spatiotemporal Properties of Brain Activation Underlying Lexical Influences on Speech Perception. DAVID W. GOW & CHRISTINA CONGLETON, Massachusetts General Hospital, & SEPPO P. AHLFORS & ERIC HALGREN, MGH/MIT/HMS Athinoula A. Martinos Center—Behavioral evidence from a variety of paradigms suggests that there is a two-way relationship between fine-grained phonetic factors and lexical activation. Within-category phonetic variation affects the course of lexical activation, and lexical knowledge affects the interpretation of phonetic ambiguity. We examined this relationship in a study combining MEG and fMRI to provide high spatiotemporal resolution imaging data while participants performed a phonetic categorization task. In each trial, listeners heard a token of “_ampoo” or “_andal” in which the initial segment was /s/, /ʃ/, or a fricative judged to be intermediate between /s/ and /ʃ/. Listeners showed a robust behavioral Ganong effect, interpreting the ambiguous fricative as /ʃ/ in “_ampoo” and /s/ in “_andal.” Physiological data showed a distinctive pattern of activation reflecting the interaction between phonetic and lexical activation. The results are discussed in the context of the general problem of interaction between top-down and bottom-up perceptual processes.
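[Editorial illustration] A hedged sketch, not the authors' analysis, of the behavioral pattern named above: the Ganong effect can be summarized as a lexically induced shift in the category boundary of a logistic identification function. All parameter values below are invented for demonstration.

```python
import numpy as np

def p_sh_response(step: np.ndarray, boundary: float, slope: float = 1.5) -> np.ndarray:
    """Probability of a /sh/ response along an /s/-to-/sh/ continuum (logistic)."""
    return 1.0 / (1.0 + np.exp(-slope * (step - boundary)))

continuum = np.arange(1, 8)          # 7-step /s/-to-/sh/ continuum (hypothetical)
# Lexical bias shifts the boundary: more /sh/ responses in "_ampoo" (shampoo),
# more /s/ responses in "_andal" (sandal). Boundary values are invented.
p_ampoo = p_sh_response(continuum, boundary=3.5)   # boundary pulled toward the /s/ end
p_andal = p_sh_response(continuum, boundary=4.5)   # boundary pushed toward the /sh/ end
ganong_effect = p_ampoo - p_andal                  # positive at the ambiguous midpoint
print(ganong_effect.round(2))
```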

11:00–11:15 (197)
Entering the Lexicon: Form and Function. LAURA LEACH & ARTHUR G. SAMUEL, SUNY, Stony Brook (read by Arthur G. Samuel)—Lexical entries contain semantic, syntactic, phonological, and orthographic information. However, they are not static repositories; lexical entries dynamically interact. We are studying the acquisition and development of new lexical entries (e.g., for “figondalis”). Adult participants either incidentally learned new “words” while doing a phoneme monitoring task or were trained to associate each word with a picture of an unusual object. We then measured both form learning (recognizing words in noise) and functionality (the ability of the word to support perceptual learning of its constituent phonemes). Across 5 days of training and testing, form learning increased steadily and substantially in both training conditions. However, new words were not well integrated into the lexicon when trained via phoneme monitoring; their ability to support perceptual learning was small and did not increase over training. In contrast, learning words as the names of objects rapidly produced lexical entries with both form and function.

11:20–11:35 (198)
Auditory Language Perception Is Unimpaired by Concurrent Saccades and Visuospatial Attention Shifts. WERNER SOMMER, Humboldt University, Berlin, OLAF DIMIGEN, University of Potsdam and Humboldt University, Berlin, ULRIKE SCHILD, Humboldt University, Berlin, & ANNETTE HOHLFELD, Universidad Complutense de Madrid—Language perception at the semantic level—as indicated by the N400 component in the event-related brain potential—can be severely delayed in time when other tasks are performed concurrently. This interference could be explained by a central processing bottleneck or by attention shifts elicited by the additional tasks. Here, we assessed whether the additional requirement of performing saccades of 10° to the left and right would aggravate the N400 delay. In a first experiment, delays of N400 were more pronounced when the additional task involved saccades than when it did not. However, when the saccade-induced delay of information input in the additional task was compensated by correcting SOA, the effects of additional tasks with and without saccades on N400 latency were indistinguishable. It is concluded that language perception is unimpaired by exogenously triggered visuospatial attention shifts, lending further support to the bottleneck account of concurrent task effects.
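[Editorial illustration] A minimal sketch of the SOA-correction logic described above, with invented durations: if a saccade delays stimulus uptake in the additional task, lengthening the nominal SOA by that delay restores the intended effective SOA. This is an assumed reading of the compensation, not the authors' procedure.

```python
def effective_soa(nominal_soa_ms: float, saccade_delay_ms: float = 0.0) -> float:
    """Effective stimulus onset asynchrony at the point of information uptake.

    Hypothetical arithmetic: a concurrent saccade delays input to the
    additional task, so the effective SOA shrinks by that delay.
    """
    return nominal_soa_ms - saccade_delay_ms

# Uncorrected: a 40-ms saccade delay (duration invented) shortens the effective SOA.
print(effective_soa(200.0, saccade_delay_ms=40.0))   # 160.0
# Corrected: lengthen the nominal SOA by the same amount to restore 200 ms.
print(effective_soa(240.0, saccade_delay_ms=40.0))   # 200.0
```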

11:40–11:55 (199)
The Time Course of Shifts in Overt Attention Toward Visual Objects During Language-Mediated Visual Search. JAMES M. MCQUEEN, Max Planck Institute for Psycholinguistics, & FALK HUETTIG, Ghent University—A visual world study is reported investigating the time course of how phonological, visual shape, and semantic information accessed from spoken words is used to direct gaze toward objects in a visual environment. During the acoustic unfolding of Dutch words (e.g., “kerk,” church), eye movements were monitored to pictures of phonological, shape, and semantic competitors (e.g., a cherry [“kers”], a house [“huis”], and a grave [“graf”]) and to pictures of objects unrelated on all three dimensions. Time course differences were observed, with attentional shifts to phonological competitors preceding shifts to shape competitors, which in turn preceded shifts to semantic competitors. These data suggest that, during language-mediated visual search, attention is directed to the item with the currently highest priority ranking. This ranking is codetermined by the type of lexical information that becomes available as the spoken word unfolds and by the types of featural information that are copresent in the visual display.
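[Editorial illustration] A hedged sketch, not the authors' pipeline, of the kind of time-course analysis implied here: binning gaze samples relative to spoken-word onset and computing the proportion of fixations on each competitor type. Labels, bin size, and window are assumptions.

```python
import numpy as np

def fixation_proportions(times_ms: np.ndarray, fixated: np.ndarray,
                         bin_ms: int = 50, window_ms: int = 1000):
    """Proportion of gaze samples on each object type per time bin.

    times_ms : sample times relative to spoken-word onset
    fixated  : object type fixated at each sample, e.g. "phon", "shape",
               "sem", "unrelated" (labels are hypothetical)
    """
    edges = np.arange(0, window_ms + bin_ms, bin_ms)
    total = np.histogram(times_ms, bins=edges)[0]
    props = {}
    for kind in np.unique(fixated):
        hits = np.histogram(times_ms[fixated == kind], bins=edges)[0]
        props[kind] = hits / np.maximum(total, 1)      # avoid divide-by-zero
    return edges[:-1], props

# Predicted ordering from the abstract: the rise for phonological competitors
# precedes shape competitors, which in turn precede semantic competitors.
```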
