Thursday Evening Posters 1008–1013

changes in discrimination and perceived similarity within and across category boundaries, even when category learning is unsupervised. The online mixture estimation model (Vallabha, McClelland, Pons, Werker, & Amano, in press) provides an account of unsupervised vowel category learning by treating categories as Gaussian distributions whose parameters are gradually estimated. We extend the model to pairwise discrimination, defined as the extent to which two stimuli are members of different estimated categories. We address three findings: Infants’ discrimination of speech sounds is better after exposure to a bimodal rather than a unimodal distribution (Maye, Werker, & Gerken, 2002), infants’ discrimination of vowels is affected by acoustic distance (Sabourin & Werker, 2004), and perceived similarity is affected by order of typical versus atypical category exemplars (Polk, Behensky, Gonzalez, & Smith, 2002). The model also makes predictions about the development of discrimination during perceptual learning.
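A minimal sketch of this kind of online mixture estimator may help fix ideas. The update rule, learning rate, and initialization below are illustrative assumptions, not the published parameterization; pairwise discrimination is computed as the abstract defines it, the probability that two stimuli fall into different estimated categories.

```python
import numpy as np

class OnlineMixture:
    """One-dimensional online Gaussian mixture estimator (illustrative)."""

    def __init__(self, n_cats, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.mu = rng.uniform(0.0, 1.0, n_cats)   # category means
        self.var = np.full(n_cats, 0.1)           # category variances
        self.pi = np.full(n_cats, 1.0 / n_cats)   # mixing weights
        self.lr = lr

    def responsibilities(self, x):
        # Posterior probability that x was generated by each category.
        lik = (self.pi * np.exp(-0.5 * (x - self.mu) ** 2 / self.var)
               / np.sqrt(2.0 * np.pi * self.var))
        return lik / lik.sum()

    def update(self, x):
        # Nudge each category's parameters toward the input, weighted by
        # its responsibility (a stochastic EM-style step).
        r = self.responsibilities(x)
        d = x - self.mu
        self.mu += self.lr * r * d
        self.var += self.lr * r * (d ** 2 - self.var)
        self.pi = (1.0 - self.lr) * self.pi + self.lr * r  # stays normalized

    def discriminate(self, x1, x2):
        # Probability the two stimuli are members of different categories.
        return 1.0 - np.dot(self.responsibilities(x1),
                            self.responsibilities(x2))

# Bimodal exposure (cf. Maye, Werker, & Gerken, 2002): two token clusters.
model = OnlineMixture(n_cats=2, seed=1)
rng = np.random.default_rng(2)
tokens = np.concatenate([rng.normal(0.3, 0.05, 500),
                         rng.normal(0.7, 0.05, 500)])
rng.shuffle(tokens)
for x in tokens:
    model.update(x)
print(model.discriminate(0.3, 0.7))  # high after bimodal exposure
```

Under unimodal exposure the same learner collapses both Gaussians onto one cluster, so the discrimination score for the same stimulus pair stays low, which is the qualitative pattern the abstract addresses.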
(1008)
Nonsimultaneous Context Effects for Speech-Like Stimuli. JEREMY R. GASTON, RICHARD E. PASTORE, JESSE D. FLINT, & ANJULI BOSE, Binghamton University—A well-known speech perception finding is that the physical properties that cue consonant identity can differ substantially as a function of vowel context. Might vowel properties alter the accessibility of consonantal cues? Addressing this question, the present work investigates the effects of vowel formant-like stimuli on the ability to recognize aspects of speech-like targets. For both frequency transitions and brief noise bursts, a subsequent stimulus provided significant backward recognition masking relative to those same stimuli in isolation, with the amount of interference dependent on both the target–masker frequency relationship and the complexity of the masker. The opposite effect, forward recognition contrast, was found for the temporal reversal of the stimuli. If these results for speech-like stimuli generalize to speech, they would, at a minimum, provide a perceptual account for the physical changes in consonantal cues across vowel contexts.

(1009)
Perception of Cross-Modal Speech Alignment. RACHEL M. MILLER, KAUYUMARI SANCHEZ, & LAWRENCE D. ROSENBLUM, University of California, Riverside (sponsored by Lawrence D. Rosenblum)—Talkers are known to produce utterances that partly imitate, or align to, the speech they hear. Alignment occurs both during live conversation and when a talker is asked to shadow recorded words. Recent evidence reveals that talkers also align toward the speech they see. When asked to shadow lipread words from a speaker, subjects will produce utterances that sound more like those of the speaker. Previously, alignment has been determined by naive raters asked to judge the auditory similarity between a speaker’s and subjects’ utterances. To determine whether alignment is also apparent across modalities, raters were asked to judge the relative similarity between a speaker’s visual (video) utterance and two audio utterances of the subject. Raters judged the speaker’s visual utterance as more similar to subjects’ shadowed utterances than to subjects’ baseline (read text) utterances. This suggests that raters are indeed sensitive to the cross-modal similarity that occurs during alignment.

(1010)
Speeded Choice Responses to Audiovisual Speech. KAUYUMARI SANCHEZ, RACHEL M. MILLER, & LAWRENCE D. ROSENBLUM, University of California, Riverside—Speakers are nearly as fast at shadowing an unpredicted syllable (/ba/, /va/, or /da/; choice condition) as they are at uttering a single assigned syllable (/ba/; simple condition) following presentation by a model. One explanation for these speeded choice reaction times is that listeners are extracting articulatory information that can serve to prime their own gestures. To test the articulatory versus auditory nature of this priming information, an audiovisual test of choice reaction times was conducted. Speeded choice reaction times were observed for an audiovisually fused syllable (audio /ba/ + visual /ga/, perceived as “da”), suggesting that priming information does exist at the gestural level. These findings suggest that the common currency facilitating a speeded production can take an informational form that exists cross-modally. The results are also consistent with transcranial magnetic stimulation findings that show responsiveness in specific articulatory muscles dependent on audiovisually integrated speech information.

(1011)
Cognitive Consequences of Parallel Language Processing in Bilinguals. HENRIKE K. BLUMENFELD & VIORICA MARIAN, Northwestern University—Bilinguals’ activation and control of word networks across languages were examined using eyetracking and negative priming. Experiment 1 found that, when listening to words, bilinguals activated cross-linguistic competitors from sparse, but not dense, phonological neighborhoods, suggesting that cross-linguistic competitors from large neighborhoods are inhibited by similar words. Competition resolution from cross-linguistic networks may place additional demands on inhibitory control in bilinguals. Experiment 2 tested the hypothesis that bilingual experience modulates cognitive inhibition mechanisms during auditory comprehension. The results suggest that monolinguals inhibit within-language competitors to resolve ambiguity, whereas bilinguals may resolve competition equally efficiently without the use of overt inhibition. To draw connections between the consequences of bilingualism in the linguistic and nonlinguistic domains, Experiment 3 compared performance on the eyetracking/negative priming paradigm with performance on measures of executive control (linguistic Stroop, nonlinguistic Stroop, and Simon tasks). Together, the results suggest a link between parallel language activation in bilinguals and executive control mechanisms.
(1012)
The Nature of Memory Representations for Surface Form of Spoken Language. SABRINA K. SIDARAS & LYNNE C. NYGAARD, Emory University (sponsored by Lynne C. Nygaard)—This study examined the nature of memory for surface characteristics of spoken words by evaluating the effects of indexical and allophonic variation in implicit and explicit memory tasks. During the study phase for all tasks, participants were presented with a list of words that varied by speaker and pronunciation. In the test phase, words were repeated either in the same or a different voice and with the same or a different pronunciation. The results showed that in both explicit and implicit memory tasks, changes from study to test in pronunciation and voice interacted to influence memory performance. A memory benefit was found for items repeated in the same rather than in a different voice, but only when pronunciation stayed the same from study to test. These findings suggest that the relative specificity of memory representations for spoken language may depend on the interaction of surface and phonological form.

(1013)
Cognitive Predictors of Lipreading Ability in Hearing Adults. JULIA E. FELD & MITCHELL S. SOMMERS, Washington University (sponsored by Mitchell S. Sommers)—Previous research has shown extensive individual variability in lipreading ability, even among participants sampled from homogeneous populations. Despite considerable research on factors that predict individual differences in lipreading, it remains unclear what traits or abilities underlie the observed variability in visual-only speech perception. In part, the absence of reliable predictors of lipreading can be attributed to the use of narrow sets of predictor variables within individual studies. The present study therefore examined the ability of a broad range of factors, including working memory, processing speed, verbal learning, personality, and perceptual closure, to account for individual differences in lipreading ability. Multiple measures of each of these factors were obtained, along with lipreading measures for consonants, words, and sentences. To date, only verbal processing speed has emerged as a significant predictor of lipreading ability for all three types of stimuli.

Thursday Evening Posters 1014–1020

(1014)
Comparing Perceptual Adaptation With Naturally Produced Fast Speech and Time-Compressed Speech. IVY L. LIU, University at Buffalo, CONSTANCE M. CLARKE-DAVIDSON, University of Alberta, & PAUL A. LUCE, University at Buffalo—Our purpose was to explore perception of the dual components of speech produced at a fast rate: the rate of information flow and phonetic differences from normal-rate speech. Previous research has shown that time-compressed speech is easier to process than naturally produced fast speech, presumably due to less careful articulation in fast speech (Janse, 2004). Other research has demonstrated that listeners adapt within 10 sentences to time-compressed speech in which the rate of information is increased but phonetic characteristics are unaltered (Dupoux & Green, 1997). We further explored differences in the perception of time-compressed and natural fast speech in the context of perceptual adaptation. We compared adaptation to semantically anomalous sentences produced at a fast rate versus time-compressed from a normal rate. Initial results indicate a different pattern of adaptation to time-compressed and natural fast speech. Based on these findings, we consider the possibility that rate and phonetic adaptation are separate processes.

(1015)
Talker Specificity Effects in the Perception of Foreign-Accented Speech. CONOR T. MCLENNAN, Cleveland State University, & JULIO GONZÁLEZ, Universitat Jaume I—Our research examines the circumstances in which talker variability affects spoken word perception. Based on previous time-course work, we hypothesized that talker specificity effects would be more robust when processing is relatively slow. We further hypothesized that spoken word processing would be significantly slower for listeners presented with foreign-accented speech than for listeners presented with speech produced by native speakers (and thus produced without a foreign accent). Consequently, we predicted that more robust talker specificity effects would be obtained for listeners presented with foreign-accented speech. Our results confirmed these hypotheses: Listeners presented with foreign-accented speech made lexical decision responses significantly more slowly than listeners presented with nonaccented speech. Crucially, talker specificity effects were obtained only for listeners presented with foreign-accented speech. The results are consistent with previous time-course findings and add to our knowledge of the circumstances under which variability affects the perception of spoken words.

• MOTOR CONTROL •

(1016)
Resistance to Slow Motion: Strategies for Moving Near Preferred Speeds. ROBRECHT P. R. D. VAN DER WEL & DAVID A. ROSENBAUM, Pennsylvania State University (sponsored by David A. Rosenbaum)—A great deal of research has focused on how people respond to the challenge of moving above preferred movement rates. Much less work has focused on how people respond to the challenge of moving below preferred rates. To address this issue, we asked participants to move a dowel back and forth between two large targets in time with an auditory metronome whose rate varied from slow to fast. The kinematics of participants’ movements at each of the driving frequencies revealed that participants did not simply scale their movement rates with driving frequency, but used one or more of the following strategies to avoid moving slowly: (1) increasing dwell times, (2) subdividing movement time intervals, and/or (3) increasing movement path length. The results suggested that the selection of movement speed is constrained at the low end of the frequency spectrum as well as at the high end.

(1017)
Response–Response Interference in Simultaneously Executed Oculomotor and Manual Responses. LYNN HUESTEGGE & IRING KOCH, RWTH Aachen University—Previous research on the coordination of eye and hand movements has mainly focused on grasping movements, implying experimental paradigms in which subjects have to respond with both effector systems to a common target. In the present study, we analyze at a more general level the extent to which concurrently performed eye and hand movements interact. For this purpose, in Experiment 1, subjects had to respond to an auditory stimulus with either a buttonpress (manual response), a saccade to a visual target (oculomotor response), or both. In Experiments 2 and 3, the difficulty of response selection in the manual task was increased: Subjects had to cross their hands and respond to the auditory stimulus with either the spatially corresponding hand or button. The results indicate that both manual and oculomotor responses generally suffer under dual-task conditions, and that oculomotor response times are severely prolonged with increasing difficulty of the simultaneous manual task.

(1018)
Effects of Perceived Distance, Time-To-Contact, and Momentum on Obstacle Avoidance: The Chainmail Experiment. HUGO BRUGGEMAN & WILLIAM H. WARREN, JR., Brown University (sponsored by William H. Warren, Jr.)—Is obstacle avoidance controlled by perceived distance or by time-to-contact with an obstacle? To dissociate these hypotheses, we vary physical walking speed, body weight, and the visual gain in a virtual environment. Fajen and Warren’s (JEP:HPP, 2003) locomotor dynamics model predicts later turns with higher walking speed or greater weight if distance is the control variable, but predicts the opposite if time-to-contact is the control variable. Participants walked to a goal around an obstacle whose position varied in an ambulatory virtual environment. Visual gain was manipulated by making the optical motion in the display slower than, matched to, or faster than actual walking speed. Body weight was increased by 25% using chainmail and a weight vest. Model predictions are evaluated against the human data to determine empirically whether obstacle avoidance is controlled by distance or by time-to-contact. The weight manipulation allows us to analyze the influence of momentum and to specify the model’s damping term.
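For readers unfamiliar with the model, the following sketch gives the general shape of a Fajen-and-Warren-style steering law: heading behaves like a damped angular spring attracted to the goal and repelled by obstacles. The parameter values and the demo numbers are placeholders, not the fitted values from the paper.

```python
import numpy as np

def heading_accel(phi, dphi, psi_g, d_g, psi_o, d_o,
                  b=3.25, k_g=7.5, c1=0.4, c2=0.4,
                  k_o=198.0, c3=6.5, c4=0.8):
    """Angular acceleration of heading phi (radians); illustrative params.

    Goal attraction grows with the goal-heading error (phi - psi_g) and
    decays with goal distance d_g; obstacle repulsion grows with the
    obstacle-heading error and decays with obstacle distance d_o.
    Swapping d_o for time-to-contact (d_o / walking speed) gives the
    rival control law the abstract pits against the distance version.
    """
    goal = -k_g * (phi - psi_g) * (np.exp(-c1 * d_g) + c2)
    obstacle = (k_o * (phi - psi_o)
                * np.exp(-c3 * abs(phi - psi_o)) * np.exp(-c4 * d_o))
    return -b * dphi + goal + obstacle

# One Euler step of the dynamics (dt in seconds), illustrative only.
phi, dphi, dt = 0.0, 0.0, 0.01
ddphi = heading_accel(phi, dphi, psi_g=0.2, d_g=8.0, psi_o=0.1, d_o=4.0)
dphi += ddphi * dt
phi += dphi * dt
```

Because the distance version decays repulsion in meters while the time-to-contact version decays it in seconds, the two laws make opposite predictions when walking speed or momentum is manipulated, which is what the experiment exploits.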
(1019)
Why Are Two Hands Better Than One? AMANDA L. STEWART, J. DEVIN MCAULEY, & STEVEN M. SEUBERT, Bowling Green State University (sponsored by J. Devin McAuley)—Within-hand timing variability during bimanual rhythmic performance (e.g., repetitive tapping) is reduced in comparison with unimanual rhythmic performance (Helmuth & Ivry, 1996). To explain the bimanual advantage, Helmuth and Ivry proposed that in-phase bimanual movements involve averaging the output of two clocks prior to execution, whereas unimanual movements involve only a single clock. The present study replicated the bimanual advantage using methods that matched Helmuth and Ivry (1996, Experiment 1) but additionally found differences in the amount of temporal drift between the bimanual and unimanual conditions that were positively correlated with the magnitude of the bimanual advantage. The latter result suggests that the bimanual advantage is at least partially an artifact of Weber’s law. Overall, the results of this study suggest that a comprehensive explanation of the bimanual timing advantage is multifaceted.
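The statistical intuition behind the two-clock account is simple: the mean of two independent, equal-variance timers has half the variance of either one, so the clock component of within-hand variability shrinks by a factor of sqrt(2). A toy simulation with arbitrary numbers (not Helmuth and Ivry's data, and ignoring the separate motor-implementation variance real taps also carry):

```python
import numpy as np

rng = np.random.default_rng(0)
target_ms, clock_sd_ms, n_taps = 500.0, 20.0, 100_000

# Unimanual: each intertap interval timed by a single clock.
unimanual = rng.normal(target_ms, clock_sd_ms, n_taps)

# Bimanual: outputs of two independent clocks averaged before execution.
bimanual = rng.normal(target_ms, clock_sd_ms, (n_taps, 2)).mean(axis=1)

print(unimanual.std())  # ~20 ms
print(bimanual.std())   # ~14.1 ms, i.e. 20 / sqrt(2)
```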
(1020)
Action Effects in the PRP Paradigm: Locating Processes of Intentional Response Coding. MARKO PAELECKE & WILFRIED KUNDE, Martin Luther University Halle-Wittenberg—Ideomotor theories of action control assume that actions are represented and accessed by codes of their sensory effects. Consistent with this view, Hommel (1993) demonstrated that the direction of the Simon effect can be inverted by intentionally recoding the responses in terms of their response-incongruent action effects. In the present study, we examined the contribution of several dissociated processes to this inversion of the Simon effect. Participants made two choice reactions in response to stimuli presented in rapid succession at variable stimulus onset asynchronies.
