Abstracts 2005 - The Psychonomic Society

Posters 3057–3064 Friday Evening

(3057)
Does Memory Retrieval Interference Require a Verbal Competing Task? ANA M. FRANCO-WATKINS, TIM C. RICKARD, & HAROLD PASHLER, University of California, San Diego—Fernandes and Moscovitch (2000) found that a (verbal) memory retrieval task suffered interference from a verbal competing task, but not from a numerical competing task. We examined both verbal and numerical competing tasks with varying levels of difficulty. Each of these competing tasks was performed either alone or paired with a memory retrieval task. In addition to analyzing overall performance levels, we conducted a more microscopic analysis of the relative timing of responses on the two tasks in order to shed light on the degree and nature of interference. The results provide little support for the view that memory retrieval is especially susceptible to interference from concurrent verbal processing.

(3058)
Attentional Limits in Memory Retrieval—Revisited. COLLIN GREEN & JAMES C. JOHNSTON, NASA Ames Research Center, & ERIC RUTHRUFF, University of New Mexico—Carrier and Pashler (1995) concluded that memory retrieval is subject to a central bottleneck. Using locus-of-slack logic in a dual-task paradigm, they provided evidence that memory retrieval (both recall and recognition) on Task 2 was delayed until after the bottleneck caused by performing a tone discrimination Task 1. New experiments explored the limitations of Carrier and Pashler’s conclusions. To increase the likelihood of observing parallel processing during memory retrieval, our experiments used more typical dual-task instructions and used preferred stimulus–response modality pairings. In addition, we considered the hypothesis that central resources are required for the initiation and/or termination of memory retrieval, but not for the retrieval process itself.
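
The locus-of-slack logic can be made concrete with a minimal sketch; the symbols and cases below are illustrative assumptions added here, not quantities reported by Carrier and Pashler. Let $a$ be Task 2's pre-bottleneck processing time, $b$ the duration of the retrieval stage, $c$ the remaining response stages, and $T$ the time after Task 2 onset at which Task 1 releases the central bottleneck (so $T$ shrinks as SOA grows):

$$
\mathrm{RT}_2 =
\begin{cases}
\max(a,\, T) + b + c & \text{if retrieval requires the bottleneck,}\\[4pt]
\max(a + b,\, T) + c & \text{if retrieval can run during the slack.}
\end{cases}
$$

In the first case, lengthening retrieval by $\Delta$ raises $\mathrm{RT}_2$ by $\Delta$ at every SOA (additive effects); in the second, the same $\Delta$ is hidden in the slack $T - a$ at short SOAs (underadditive effects). On this logic, an additive pattern is what places retrieval at or after the bottleneck.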

(3059)
Probing the Link Between Orienting and IOR Using a Dual-Task Procedure. TROY A. W. VISSER, University of British Columbia, Okanagan, ROBERT BOURKE, University of Melbourne, & JENEVA L. OHAN, University of British Columbia, Okanagan—A nonpredictive visual cue presented at the same location as a target facilitates responses when the interval between cue and target (cue–target onset asynchrony; CTOA) is short (e.g., 100 msec) but slows responses when the CTOA is longer (e.g., 800 msec). This slowing is commonly referred to as inhibition of return (IOR). Although IOR is clearly linked to the attentional shift caused by the appearance of the cue, the relationship between attention and IOR is still unclear. To investigate this issue, the present work combined a conventional cuing paradigm with a dual-task procedure. Observers were presented with a central letter target, followed by a nonpredictive peripheral cue and a peripheral target. The interval between the target letter and the cue was manipulated to vary attentional availability for the cue. Results suggest that limiting attention influenced early facilitation and later IOR, indicating that both effects were subsumed by common mechanisms.

• SPEECH PERCEPTION •<br />

(3060)
The Effect of Word/Emotion Congruency on Dichotic Laterality Effects. CHERYL TECHENTIN & DANIEL VOYER, University of New Brunswick (sponsored by Daniel Voyer)—The present study investigated the effect of word/emotion congruency in dichotic listening. Eighty participants were dichotically presented with pairs of words expressing emotions in one of two report conditions (blocked or randomized). Words and emotions were combined in congruent (e.g., “glad” pronounced in a happy tone) and noncongruent (e.g., “glad” in a sad tone) pairs. Participants identified the presence of either a target word or an emotion in separate blocks or in a randomized fashion. In addition to an overall right-ear advantage (REA) for words and a left-ear advantage (LEA) for emotions, a material (word or emotion) × congruency × ear interaction was obtained only for randomized testing. It indicated an REA for words congruent with the expressed emotion, whereas emotions showed an LEA only for incongruent stimuli. Implications of these findings for research claiming functional complementarity in the cerebral representation of verbal and nonverbal tasks are discussed.

(3061)
Word and Subword Units in Speech Perception. IBRAHIMA GIROUX & ARNAUD REY, LEAD-CNRS, Université de Bourgogne, Dijon (sponsored by Arnaud Rey)—Saffran et al. (1996) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. In order to account for this early word segmentation ability, Simple Recurrent Networks (SRN: Elman, 1990) suggest that associations between subword units are strengthened with time. Alternatively, according to Parser (Perruchet & Vinter, 1998), only lexical units are strengthened, independently of the weight of subword units. In the present study, we compared word and subword recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The data are consistent with Parser’s predictions, showing improved performance on words after 10 min, but not on subwords. This result suggests that word segmentation abilities are not simply due to stronger associations between subword units but to the emergence of stronger lexical representations during the development of speech perception processes.
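
The statistical regularities at issue can be made concrete with a short sketch. The code below is a minimal illustration of syllable-to-syllable transitional probabilities, the statistic underlying Saffran et al.'s (1996) segmentation findings; it is not the SRN or Parser models cited above, and the three "words" and stream length are invented for the example.

# Illustrative sketch (not the SRN or Parser models cited above): transitional
# probabilities between adjacent syllables in an artificial-language stream,
# the statistic Saffran et al. (1996) showed infants track. The three "words"
# and the stream length are invented for this example.
from collections import Counter
import random

words = ["tupiro", "golabu", "bidaku"]                  # hypothetical trisyllabic words
syllabify = lambda w: [w[i:i + 2] for i in range(0, len(w), 2)]

# Concatenate randomly ordered words into a continuous syllable stream.
stream = []
for w in random.choices(words, k=300):
    stream.extend(syllabify(w))

pair_counts = Counter(zip(stream, stream[1:]))          # adjacent syllable pairs
first_counts = Counter(stream[:-1])                     # how often each syllable leads a pair

def transitional_probability(s1, s2):
    """P(s2 | s1): high within a word, lower across word boundaries."""
    return pair_counts[(s1, s2)] / first_counts[s1] if first_counts[s1] else 0.0

print(transitional_probability("tu", "pi"))   # within-word pair: 1.0
print(transitional_probability("ro", "go"))   # across a boundary: roughly 1/3 here

Within-word pairs approach a probability of 1, whereas pairs spanning a word boundary are diluted by whichever word happens to follow; that contrast is what a statistical learner could exploit to locate word edges.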

(3062)
Using Pronunciation Data to Constrain Models of Spoken Word Recognition. LAURA DILLEY & MARK A. PITT, Ohio State University, & KEITH JOHNSON, University of California, Berkeley—Many of the mysteries of how spoken words are recognized have evolved out of the observation that the acoustics of speech are highly variable, yet perception is amazingly stable (i.e., listeners perceive the words intended by talkers). Proposed solutions to this perceptual constancy problem can be process oriented, in which mental processes restore or recover the intended word form en route to lexical memory, or representation oriented, in which the variation itself is encoded in the word’s lexical entry. The viability of both approaches was examined by studying the phonological and acoustic variability found in the Buckeye corpus of conversational speech, specifically in environments associated with regressive assimilation (“green” becomes “greem” in “green ball”). The results highlight obstacles that models of both classes must overcome.

(3063)<br />

<strong>The</strong> Perception and Representation of an R-Dropping Dialect.<br />

MEGHAN SUMNER & ARTHUR G. SAMUEL, SUNY, Stony Brook—<br />

Much variation a listener is exposed to is due to differing phonological<br />

characteristics of dialects. For example, in American English,<br />

speakers of the Long Island dialect (LID) regularly drop the “er”<br />

sound word finally (e.g., “mother” sounds similar to “moth-uh”). An<br />

important question is how listeners within and across dialects perceive<br />

and store such variation. Four speakers from two different dialect<br />

groups (LID and non-LID) were used to address this question. Listeners<br />

participated in either a long-term priming task or a semantic<br />

priming task, with two speakers from each dialect. Examining dialectal<br />

variation in this way enabled us to see whether speakers of an<br />

r-dropping dialect stored representations similar to their own productions<br />

(regardless of the input form). It also clarified whether the predictability<br />

of the variation enables non-LID listeners to encode these<br />

variants into the already existing representations of their own dialect, or<br />

whether they created new speaker- or dialect-specific representations.<br />

(3064)
Orthographic Influence in Phonological Variant Recognition. LARISSA J. RANBOM & CYNTHIA M. CONNINE, SUNY, Binghamton (sponsored by Cynthia M. Connine)—Although spoken word recognition is influenced by phonology, orthographic characteristics
