
Sunday Morning Papers 323–328

…dents received a final test covering the facts they reviewed through testing versus rereading, and some that they never reviewed. Retention was greatest for facts that were reviewed through testing after a 3-month delay, suggesting that the principles of testing and spacing can be readily applied to improving retention of U.S. history.

11:40–11:55 (323)
The Role of Attention in Episodic Memory Impairment During Nicotine Withdrawal. PAUL S. MERRITT, ADAM COBB, & LUKE MOISSINAC, Texas A&M University, Corpus Christi, & ELLIOT HIRSHMAN, George Washington University—Previous research has shown reductions in memory performance following 24 h of abstinence from tobacco use (Hirshman et al., 2004). A central question from this research is whether this is a primary effect of withdrawal from nicotine or due to reductions in attention also observed during withdrawal (Hirshman et al., 2004). We tested 25 moderate to heavy smokers when smoking normally (ad lib) and after 24 h without tobacco use (abstinent). Participants completed a recognition memory test under both full and divided attention encoding conditions, in addition to digit span, selective attention, and mental rotation tasks. The most significant finding was a reduction in memory performance during abstinence that was equivalent across full and divided attention conditions. No effects of withdrawal from nicotine were observed for the other tasks. Tobacco abstinence appears to have a primary effect on episodic memory performance, which may have consequences for individuals abstaining from tobacco.

Speech Recognition
Shoreline, Sunday Morning, 10:20–12:00
Chaired by Heather Bortfeld, Texas A&M University

10:20–10:35 (324)
Early Word Recognition May Be Stress-Full. HEATHER BORTFELD, Texas A&M University, & JAMES MORGAN, Brown University—In a series of studies, we examined how mothers naturally stress words across multiple mentions in speech to their infants and how this marking influences infants’ recognition of words in fluent speech. We first collected samples of mothers’ infant-directed speech using a technique that induced multiple repetitions of target words. Acoustic analyses revealed that mothers systematically alternated between emphatic and nonemphatic stress when talking to their infants. Using the headturn preference procedure, we then tested 7.5-month-old infants on their ability to detect familiarized bisyllabic words in fluent speech. Stress of target words (emphatic and nonemphatic) was systematically varied across familiarization and recognition phases of four experiments. The results indicated that, although infants generally prefer listening to words produced with emphatic stress, recognition was enhanced when the degree of emphatic stress at familiarization matched the degree of emphatic stress at recognition.
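As a point of reference for the acoustic analyses mentioned above, stress is commonly quantified through correlates such as duration, amplitude, and fundamental frequency. Below is a minimal Python sketch of measuring these for a single word token; the file name, token boundaries, and F0 range are illustrative assumptions, not the authors’ materials or method.

    import numpy as np
    import librosa

    # Hypothetical recording and word boundaries; values are illustrative only.
    y, sr = librosa.load("mother_speech.wav", sr=None)
    start, end = 1.20, 1.55                    # assumed token boundaries (s)
    token = y[int(start * sr):int(end * sr)]

    duration = end - start                     # duration correlate (s)
    rms = float(np.sqrt(np.mean(token ** 2)))  # amplitude correlate

    # F0 correlate via probabilistic YIN; range chosen for adult female speech.
    f0, voiced_flag, voiced_prob = librosa.pyin(token, fmin=120, fmax=500, sr=sr)
    mean_f0 = float(np.nanmean(f0))            # ignore unvoiced (NaN) frames

    print(f"duration={duration:.2f} s  rms={rms:.4f}  mean F0={mean_f0:.1f} Hz")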

10:40–10:55 (325)
Influence of Visual Speech on Phonological Processing by Children. SUSAN JERGER, University of Texas, Dallas, MARKUS F. DAMIAN, University of Bristol, MELANIE SPENCE, University of Texas, Dallas, & NANCY TYE-MURRAY, Washington University School of Medicine—Speech perception is multimodal in nature in infants, yet dominated by auditory input in children. Apparent developmental differences may be specious, however, to the extent performance has been assessed implicitly in infants and explicitly in children. We assessed implicitly the influence of visual speech on phonological processing in 100 typically developing children between 4 and 14 years. We applied the online cross-modal picture–word task. Children named pictures while attempting to ignore auditory or audiovisual distractors whose onsets were congruent or conflicting (in place of articulation or voicing) relative to picture–name onsets. Overall, congruent onsets speeded naming and conflicting onsets slowed naming. Visual speech significantly (1) enhanced phonological effects in preschoolers and preteen/teenagers but (2) exerted no influence on performance in young elementary school children. Patterns of results will be related to abilities such as speechreading, input/output phonology, vocabulary, visual perception, and visual processing speed.

11:00–11:15 (326)
Asynchrony Tolerance in the Multimodal Organization of Speech. ROBERT E. REMEZ, DARIA F. FERRO, & KATHRYN R. DUBOWSKI, Barnard College—Studies of multimodal presentation of speech reveal that perceivers tolerate large temporal discrepancy in integrating audible and visible properties. Perceivers combine multimodal samples of speech, resolving syllables, words, and sentences at asynchronies greater than 180 msec. A unimodal test exploiting sine wave speech revealed that asynchrony tolerance in auditory speech differs critically from audiovisual speech perception. Is this difference in tolerance due to the reliance on dynamic sensory attributes, or a true difference between uni- and multimodal organization? New tests used sine wave synthesis of speech in an audiovisual presentation. Perceivers transcribed audiovisual sentences differing in asynchrony of a tone analog of the second formant relative to a visible face articulating a sentence. Asynchronies ranged from a 250-msec lead to a 250-msec lag. The results revealed time-critical similarities and differences between perceptual organization of unimodal and multimodal speech. The implications for understanding perceptual organization and analysis of speech will be discussed.
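For readers unfamiliar with the technique, sine wave speech replaces each formant of a natural utterance with a time-varying sinusoid that traces the formant’s center frequency. Below is a minimal Python sketch of a tone analog of a single formant; the F2 trajectory is invented for illustration, whereas real analogs follow formant tracks measured from a recorded sentence.

    import numpy as np
    from scipy.io import wavfile

    fs = 16000                             # sample rate (Hz)
    t = np.arange(0, 1.0, 1 / fs)          # 1 s of signal

    # Invented F2 trajectory: a glide from 1100 Hz to 1800 Hz.
    f2 = np.linspace(1100.0, 1800.0, t.size)

    # Integrate instantaneous frequency to obtain phase, then synthesize the tone.
    phase = 2 * np.pi * np.cumsum(f2) / fs
    tone = 0.3 * np.sin(phase)

    wavfile.write("f2_analog.wav", fs, (tone * 32767).astype(np.int16))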

11:20–11:35 (327)
Effects of Time Pressure on Eye Movements to Visual Referents During the Recognition of Spoken Words. DELPHINE DAHAN, University of Pennsylvania—Eye movements to visual referents are increasingly being used as a measure of lexical processing during spoken-language comprehension. Typically, participants see a display with four pictures and are auditorily instructed to move one of them. The probability of fixating a picture as participants hear the target’s name has been assumed to reflect lexical activation of this picture’s name. Here, we more closely examined the functional relation between fixations and lexical processing by manipulating task demands. Half of the participants were asked to complete the task as quickly as possible, and half were under no time pressure. The frequency of target- and distractor-picture names was varied. Time pressure affected the speed with which participants fixated the target picture. Importantly, it also greatly amplified the impact of lexical frequency on fixation probabilities. Thus, the relation between lexical activation and fixation behavior is only indirect, and, we argue, mediated by a decisional component.

11:40–11:55 (328)
Audiovisual Alignment Facilitates the Detection of Speaker Intent in a Word-Learning Setting. ELIZABETH JOHNSON & ALEXANDRA JESSE, Max Planck Institute for Psycholinguistics—Caregivers produce distinctive speech-accompanying movements when addressing children. We hypothesized that the alignment between speakers’ utterances and the motion they impose upon objects could facilitate joint attention in caregiver–child interactions. Adults (N = 8) were videotaped as they taught the proper name of a toy to 24-month-olds. The toy’s motion was extracted from the video to animate a photograph of the toy. In a forced-choice task, adults (N = 24) watched side-by-side animations (a forward and reversed version of the same animation) and were asked to choose which toy the speaker was labeling. Performance was above chance (75% correct). Performance was hindered when the speech was reversed, but not when it was low-pass filtered, suggesting that adults rely on the alignment between the motion imposed on the labeled toy and the utterance’s prosody to detect speaker intent. We are currently testing whether this amodal information modulates 24-month-olds’ attention in a word-learning setting.
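Low-pass filtering of this kind is a standard way to remove segmental detail from speech while leaving the prosodic contour audible. Below is a minimal Python sketch of producing such a control stimulus; the 400-Hz cutoff and file names are assumptions, not the parameters used in this study.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, filtfilt

    fs, speech = wavfile.read("utterance.wav")   # hypothetical input file
    speech = speech.astype(np.float64)
    if speech.ndim > 1:                          # mix stereo down to mono
        speech = speech.mean(axis=1)

    # 4th-order Butterworth low-pass at an assumed 400-Hz cutoff; filtfilt
    # applies it forward and backward so no phase delay is introduced.
    b, a = butter(4, 400 / (fs / 2), btype="low")
    filtered = filtfilt(b, a, speech)

    out = (filtered / np.max(np.abs(filtered)) * 32767).astype(np.int16)
    wavfile.write("utterance_lp.wav", fs, out)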
