Abstracts 2005 - The Psychonomic Society

Friday Evening Posters 3065–3070

of a word may affect the process of spoken word recognition. Two cross-modal recognition priming experiments examined word-final flapping, in which a final /t/ is expressed as a more /d/-like flap. In Experiment 1, a smaller priming effect was found for flapped productions of words such as eat, as compared with the typical form. The priming disadvantage for the flapped production may result from phonological mismatch, orthographic mismatch, or both types of mismatch. Experiment 2 used a second class of words, such as looked, with a typical spoken form (/lυkt/) that does not correspond with their orthography. Flapped productions of these words result in an orthographic match but a phonological mismatch. The flapped productions yielded priming effects comparable to those of the typical productions, suggesting that both phonological and orthographic characteristics influence spoken word recognition.

(3065)
Learning New Phonological Variants in Spoken Word Recognition: Episodes and Abstraction. ELENI N. PINNOW & CYNTHIA M. CONNINE, SUNY, Binghamton—We investigated phonological variant acquisition (schwa vowel deletion) for two- and three-syllable words with high and low deletion rates. During training, schwa-deleted variants were presented with a visual version. The test, a lexical decision task, occurred without training (control) or with identical (repetition) or new (transfer) words. Three-syllable words (low deletion) showed a repetition effect, as compared with the control. Two-syllable low-deletion words showed equivalent accuracy gains for repetition and transfer conditions. The transfer effect for two-syllable low-deletion words was replicated using segmentally matched training and test sets. A delay between training and test did not eliminate training effects. Increased stimulus repetition during training did not alter accuracy rates but facilitated responses for low-deletion stimuli. Changing talker voice from training to test did not alter accuracy or reaction time effects. Results are discussed in terms of episodic and abstract representations of spoken words.

(3066)
Performance on a SPIN Task by Second-Language Learners: Effects of Age of Acquisition and Time of Exposure. KIRSTEN M. WESTERGARD & MAGDALENE H. CHALIKIA, Minnesota State University, Moorhead—Age of acquisition and time of exposure may account for different language abilities found among second-language learners. Phonological representation may be a more sensitive measure of age of acquisition, since speech-in-noise (SPIN) tasks have previously helped differentiate native from nonnative listeners and early from late bilinguals. College students and elementary school students were given a SPIN task. High- and low-frequency words were presented in silence and in noise. Time of exposure was a covariate. College students perceived significantly fewer words in noise, relative to the younger students, and fewer words in noise than in silence, a finding that was not significant for the younger students. Our results indicate that older students learning a language perform worse on a SPIN task than younger students do. This supports previous hypotheses that learning a language at a younger age can lead to a better phonological representation of L2.

(3067)
Perceptual Adaptation to Spanish-Accented Speech. SABRINA K. SIDARAS, Emory University, JENNIFER S. QUEEN, Rollins College, & JESSICA E. DUKE & LYNNE C. NYGAARD, Emory University—Recent research suggests that adult listeners are sensitive to talker-specific properties of speech and that perceptual processing of speech changes as a function of exposure to and familiarity with these properties. The present study investigates adult listeners’ perceptual learning of talker- and accent-specific properties of spoken language. Mechanisms involved in perceptual learning were examined by evaluating the effects of exposure to foreign-accented speech. Adult native speakers of American English transcribed English words produced by six native Spanish-speaking adults. Prior to this transcription task, listeners were trained with items produced by Spanish-accented talkers or with items produced by native speakers of American English, or did not receive any training. Listeners were most accurate at test if they had been exposed to Spanish-accented speech during training. Similar results were found using sentence-length utterances. Even with brief exposure, adult listeners perceptually adapt to both talker-specific and accent-general regularities in spoken language.

(3068)
The Time Course of Audiovisual Integration and Lexical Access: Evidence From the McGurk Effect. LAWRENCE BRANCAZIO, Southern Connecticut State University and Haskins Laboratories, JULIA R. IRWIN, Haskins Laboratories, & CAROL A. FOWLER, Haskins Laboratories and University of Connecticut—Previous findings demonstrated lexical influences on the McGurk effect (visual influence on heard speech with audiovisually discrepant stimuli). We exploited this effect to test whether audiovisual integration precedes lexical activation. Experiment 1 paired auditory words/nonwords (mesh, met; meck, mep) with a visual spoken nonword (/nε/); participants sometimes perceived words/nonwords (net, neck vs. nesh, nep) not present in either modality. This McGurk effect was influenced by both auditory lexicality (fewer /n/ responses for mesh, met than for meck, mep) and integrated lexicality (more /n/ responses for meck [neck], met [net] than for mesh [nesh], mep [nep]); the latter finding indicates that audiovisual integration precedes lexical activation. Experiment 2 incorporated audiovisual asynchrony to address the time course of these processes. Auditory lead (100 msec) increased the auditory lexicality effect without modulating the integrated lexicality effect, indicating a complex relationship between uptake of auditory/visual phonetic information and lexical access. Implications for speech perception models are addressed.

(3069)
Motion Information Analogically Conveyed Through Acoustic Properties of Speech. HADAS SHINTEL & HOWARD C. NUSBAUM, University of Chicago—Language is generally thought of as conveying meaning using arbitrary symbols, such as words. However, analogue variation in the acoustic properties of speech can also convey meaning (Shintel, Okrent, & Nusbaum, 2003). We examined whether listeners routinely use speech rate as information about the motion of a described object and whether they perceptually represent motion conveyed exclusively through speech rate. Listeners heard a sentence describing an object (e.g., The dog is brown) spoken quickly or slowly. Listeners then saw an image of the object in motion or at rest. Listeners were faster at recognizing the object when speech rate was consistent with the depicted motion in the picture. Results suggest that listeners routinely use information conveyed exclusively through acoustic properties of speech as a natural part of the comprehension process and incorporate this information into a perceptual representation of the described object.

(3070)
Recognition of Basic Emotions From Speech Prosody as a Function of Language and Sex. MARC D. PELL, McGill University, SONJA KOTZ & SILKE PAULMANN, Max Planck Institute for Human Cognitive and Brain Sciences, & AREEJ ALASSERI, McGill University (sponsored by Dorothee J. Chwilla)—This study investigated the vocal expression of emotion in three distinct languages (German, English, and Arabic) in order to understand factors that influence how listeners of each language identify basic emotions from native spoken language input. Two female and two male speakers of each language were recorded producing a series of semantically anomalous “pseudosentences” in seven distinct emotional tones, where the emotion was communicated strictly through vocal-prosodic features of the utterance. A group of 20 native listeners of each language then judged the intended emotion represented by a randomized set of utterances elicited by speakers of the native language in a perceptual judgment task. Findings were analyzed both within and across languages in order to eval-
