Abstracts 2005 - The Psychonomic Society

…such representations, though, remains questionable. Work on the continued influence effect, conceptual change, and resonance models of memory suggests that, under most circumstances, memory is resistant to spontaneous updating. Thus, determining the conditions that facilitate updating is necessary for theories of comprehension. In this study, we examined updating processes for a narrative dimension usually considered important for text comprehension—story protagonists. Participants read stories describing character behaviors associated with particular traits; this was followed by confirming evidence or by statements refuting the validity of the trait inferences. Later in the stories, characters behaved in either trait-consistent or trait-inconsistent ways. Judgments and reading latencies revealed that readers failed to completely update their models despite the refutation statements. Extended explanation was necessary to invoke successful updating.

9:20–9:35 (22)
The Use of Other Side Information: Explaining the My Side Bias in Argumentation. CHRISTOPHER R. WOLFE, Miami University, & M. ANNE BRITT, Northern Illinois University—Skillful writers address the other side of an argument, whereas less skillful writers exhibit “my side” biases. The first of four experiments analyzed 34 “authentic arguments.” Nearly all rebutted or dismissed other side arguments to outline the parameters of debate. Experiment 2 used opposing claims and reasons to test the hypothesis that agreement is primarily associated with claims and quality is primarily associated with supporting reasons. These predictions were supported. We hypothesized in Experiment 3 that argumentation schemata are deficient in two ways: overemphasizing factual support and ignoring opposition. Thus, we introduced two interventions, “know the opposition” and “more than just the facts,” before participants wrote essays. The “opposition” intervention led to more previously presented other side information; the “facts” manipulation led to more novel other side information. The final experiment tested the consequences of concession, rebuttal, and dismissal on perceived persuasiveness, quality, and author credibility. Argumentation schemata are discussed.

9:40–9:55 (23)
When Eyegaze Speaks Louder Than Words: The Advantages of Shared Gaze for Coordinating a Collaborative Search Task. SUSAN E. BRENNAN, SUNY, Stony Brook, CHRISTOPHER A. DICKINSON, University of Delaware, & XIN CHEN, MARK B. NEIDER, & GREGORY J. ZELINSKY, SUNY, Stony Brook—Eyegaze is a potent means of communication. We had remotely located pairs collaborate on a perceptually difficult task: searching for a target (Q) among similar objects (Os) on a shared display. Either partner could press a target present/absent key. Each pair communicated in one of four conditions: speech alone, shared gaze alone (the cursor from each partner’s head-mounted eyetracker was superimposed on the other’s display), both shared gaze and speech, or neither (no communication). As was expected, four eyes were better than two, and communication improved efficiency. With shared gaze, speech, or both, partners divided the labor (e.g., one searched the left and the other, the right). Surprisingly, the fastest and most efficient condition was shared gaze alone, especially when targets were absent; in this time-critical task, speaking actually incurred costs.
Visual evidence about where a partner is looking is an efficient resource for coordinating collaborative spatial tasks, with implications for mediated communication.

Auditory–Visual Integration
Civic Ballroom, Friday Morning, 8:00–9:40
Chaired by Jean Vroomen, Tilburg University

8:00–8:15 (24)
The Spatial Constraint in Intersensory Pairing: No Role in Temporal Ventriloquism. JEAN VROOMEN & MIRJAM KEETELS, Tilburg University—A sound presented in close temporal proximity to a flash can alter the perceived temporal occurrence of that flash (= temporal ventriloquism). Here, we explored whether temporal ventriloquism is harmed by spatial discordance between the sound and the flash. Participants judged, in a visual temporal order judgment (TOJ) task, which of two lights appeared first, while they heard task-irrelevant sounds before the first and after the second light. Temporal ventriloquism manifested itself in that performance was most accurate (lowest just noticeable difference) when the sound–light interval was ~100 msec. Surprisingly, temporal ventriloquism was unaffected by whether the sounds came from the same or different positions as the lights, whether the sounds were static or moved laterally, or whether the sounds and lights came from the same or opposite side of the median. These results challenge the common view that spatial correspondence is an important requirement for intersensory interactions in general.
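For readers unfamiliar with the measure: the just noticeable difference (JND) in a TOJ task is standardly obtained by fitting a psychometric function to order judgments across stimulus onset asynchronies and reading off its slope. A minimal sketch of that computation, assuming a cumulative Gaussian fit with SciPy; the data and parameter values below are invented for illustration and are not from this study:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Invented TOJ data: stimulus onset asynchronies (msec; negative means
# the left light came first) and the proportion of "right first" responses.
soa = np.array([-75.0, -50.0, -25.0, 0.0, 25.0, 50.0, 75.0])
p_right_first = np.array([0.05, 0.12, 0.30, 0.52, 0.71, 0.88, 0.96])

def cum_gauss(x, pse, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(cum_gauss, soa, p_right_first, p0=[0.0, 30.0])

# One common convention: the JND is half the distance between the 25%
# and 75% points, which for a cumulative Gaussian is sigma * z(0.75).
jnd = sigma * norm.ppf(0.75)
print(f"PSE = {pse:.1f} msec, JND = {jnd:.1f} msec")
```

In the experiment, a lower fitted JND at the ~100-msec sound–light interval is what indexes the temporal ventriloquism effect.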
8:20–8:35 (25)
Two-Dimensional Localization With Spatially and Temporally Congruent Visual–Auditory Stimuli. MARTINE GODFROY & ROBERT B. WELCH, NASA Ames Research Center (sponsored by Robert B. Welch)—An experiment evaluated the predicted enhancement of perceptual localization with spatially and temporally congruent visual–auditory stimuli in a two-dimensional, reference-free environment. On the basis of the “information reliability hypothesis,” it was expected that the relative weighting of the visual and auditory modalities in the perceptual response would vary according to the main directional component (i.e., azimuth vs. elevation) of the stimuli. Ten participants were instructed to position a visual cursor toward randomly presented visual, auditory, and visual–auditory targets. Analyses of the responses in terms of their precision, orientation, centering, and variability confirmed optimal integration, as well as the predicted weighting of the two senses in relation to the directional component. For example, in the vertical dimension, in which auditory targets are relatively poorly localized, the influence of vision was greater than in the horizontal dimension. This experiment was repeated using virtual 3-D sound sources, together with a comparison of structured versus unstructured visual fields.
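The “information reliability hypothesis” invoked here is commonly formalized as maximum-likelihood cue combination, in which each modality contributes in proportion to its inverse variance. A minimal sketch under that standard formalization; the variance values are invented for illustration, not measured in the experiment:

```python
import numpy as np

def mle_combine(x_v, var_v, x_a, var_a):
    """Reliability-weighted (maximum-likelihood) combination of a visual
    and an auditory location estimate along one dimension."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)  # visual weight
    x_hat = w_v * x_v + (1 - w_v) * x_a          # combined estimate
    var_hat = 1 / (1 / var_v + 1 / var_a)        # combined variance
    return x_hat, var_hat, w_v

# Invented example variances (deg^2): audition is much less reliable in
# elevation than in azimuth, whereas vision is comparable in both.
for dim, var_v, var_a in [("azimuth", 1.0, 16.0), ("elevation", 1.0, 64.0)]:
    _, _, w_v = mle_combine(0.0, var_v, 0.0, var_a)
    print(f"{dim}: visual weight = {w_v:.2f}")
# The visual weight comes out larger in elevation, matching the reported
# greater influence of vision in the vertical dimension.
```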

8:40–8:55 (26)
Visual Recalibration and Selective Adaptation in Auditory–Visual Speech Perception. PAUL BERTELSON, Université Libre de Bruxelles, & JEAN VROOMEN, SABINE VAN LINDEN, & BÉATRICE DE GELDER, Tilburg University—Exposure to incongruent auditory–visual speech can produce both recalibration and selective adaptation of speech identification. In an earlier study, exposure to an ambiguous auditory token (intermediate between /aba/ and /ada/) dubbed onto the video of a face articulating either /aba/ or /ada/ recalibrated the perceived identity of auditory targets in the direction of the visual component, whereas exposure to congruent nonambiguous /aba/ or /ada/ pairs created selective adaptation—that is, a shift of perceived identity in the opposite direction. Here, we examined the build-up course of the aftereffects produced by the same two types of bimodal adapters over a range of 1–256 presentations. The aftereffects of nonambiguous congruent adapters increased linearly across that range, whereas those of ambiguous incongruent adapters followed a curvilinear course, going up and then down with increasing exposure. This late decline might reflect selective adaptation to the recalibrated ambiguous sound.

9:00–9:15 (27)
Asynchrony Tolerance in the Perceptual Organization of Speech. ROBERT E. REMEZ, DARIA F. FERRO, STEPHANIE C. WISSIG, & CLAIRE A. LANDAU, Barnard College—Studies of multimodal presentation of speech reveal that perceivers tolerate temporal discrepancy in integrating audible and visible properties. Perceivers have combined seen and heard samples of speech to resolve syllables, words, and sentences at asynchronies as great as 180 msec. Although explanations have appealed to general flexibility in perceptual organization, a unimodal test is required to know whether asynchrony tolerance in audiovisual speech perception differs critically from that in auditorily apprehended speech. Using sine wave synthesis to force perceivers to resolve phonetic properties dynamically, we tested two conditions of unimodal asynchrony tolerance. Listeners transcribed sentences at each degree of asynchrony of the tone analogue of the first or second formant, relative to the remaining tones of the sentence, ranging from 250-msec lead to 250-msec lag. The results revealed time-critical perceptual organization of unimodal heard speech. The implications for amodal principles of the perceptual organization and analysis of speech are discussed.
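Sine wave synthesis replaces each formant of an utterance with a single time-varying sinusoid, so one formant analogue can be shifted in time independently of the remaining tones. A minimal sketch of that manipulation; the formant tracks here are invented placeholders, whereas real sine wave speech uses frequency and amplitude tracks measured from a natural sentence:

```python
import numpy as np

FS = 16000  # sampling rate (Hz)

def tone_from_track(freqs, amps, fs=FS):
    """Synthesize one time-varying sinusoid (a 'tone analogue' of a
    formant) from per-sample frequency and amplitude tracks."""
    phase = 2 * np.pi * np.cumsum(freqs) / fs  # integrate frequency
    return amps * np.sin(phase)

def shift(signal, lag_ms, fs=FS):
    """Delay (positive lag) or advance (negative lag) a tone by padding
    with silence, keeping overall length constant."""
    n = int(abs(lag_ms) * fs / 1000)
    pad = np.zeros(n)
    return np.concatenate([pad, signal])[:len(signal)] if lag_ms >= 0 \
        else np.concatenate([signal[n:], pad])

# Invented, schematic formant tracks for a 1-sec "sentence".
t = np.linspace(0, 1, FS)
f1 = 500 + 200 * np.sin(2 * np.pi * 3 * t)   # first-formant analogue
f2 = 1500 + 400 * np.sin(2 * np.pi * 2 * t)  # second-formant analogue
amp = 0.3 * np.ones_like(t)

tone1 = tone_from_track(f1, amp)
tone2 = tone_from_track(f2, amp)

# Desynchronize the second-formant analogue by 250 msec relative to the
# rest of the sentence, as at the lag extreme of the experiment.
stimulus = tone1 + shift(tone2, lag_ms=250)
```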
9:20–9:35 (28)
Dissociating Uni- From Multimodal Perception in Infants Using Optical Imaging. HEATHER BORTFELD & ERIC WRUCK, Texas A&M University, & DAVID BOAS, Harvard Medical School—Near-infrared spectroscopy is an optical imaging technique that measures relative changes in total hemoglobin concentration and oxygenation as an indicator of neural activation. Recent research suggests that optical imaging is a viable procedure for assessing the relation between perception and brain function in human infants. We examined the extent to which increased neural activation, as measured using optical imaging, could be observed in a neural area known to be involved in speech processing, the superior temporal cortex, during exposure to fluent speech. Infants 6–9 months of age were presented with a visual event paired with fluent speech (visual + audio) and a visual event without additional auditory stimuli (visual only). We observed a dissociation of neural activity during the visual + audio event and the visual-only event. Results have important implications for research in language development, developmental neuroscience, and infant perception.

Face Processing
Conference Rooms B&C, Friday Morning, 8:00–10:00
Chaired by Christian Dobel, Westfälische Wilhelms-Universität Münster

8:00–8:15 (29)
Learning of Faces and Objects in Prosopagnosia. CHRISTIAN DOBEL & JENS BÖLTE, Westfälische Wilhelms-Universität Münster—We investigated a group of congenital prosopagnosics with a neuropsychological testing battery. Their performance was characterized by an impairment in recognizing individual faces; other aspects of face processing were affected to a lesser degree. In a subsequent eyetracking experiment, we studied the ability of these subjects to learn novel faces, objects with faces, and objects presented in an upright and an inverted manner. Controls mostly attended central regions of stimuli, more so for faces than for objects and more strongly for upright than for inverted stimuli. Prosopagnosics performed as accurately as controls, but their latencies were strongly delayed. In contrast to controls, they devoted more attention to outer parts of the stimuli. These studies confirm the assumption that prosopagnosics use a more feature-based approach to recognize visual stimuli and that configural processing might be the locus of their impairment.

8:20–8:35 (30)
On the Other Hand: The Concurrence Effect and Self-Recognition. CLARK G. OHNESORGE, Carleton College, & NICK PALMER, JUSTIN KALEMKIARIAN, & ANNE SWENSON, Gustavus Adolphus College (read by Clark G. Ohnesorge)—Several recent studies of hemispheric specialization for facial self-recognition in which either visual field or response hand was manipulated have returned contrasting results. In three studies of self-recognition, we simultaneously manipulated visual field and response hand and found evidence for a concurrence effect—that is, an interaction of visual field and response hand indicating better performance when the “viewing” hemisphere also controls the hand used for response. The absence of main effects for either visual field or response hand is evidence against strong claims for hemispheric specialization in self-recognition. We investigated the generality of the concurrence effect in three further studies and found that it also occurs for identification of unfamiliar faces but disappears when a task is chosen (distinguishing circles from ellipses) that more strongly favors the right hemisphere. The several task- and stimulus-related performance asymmetries we observed are discussed in terms of communication and cooperation between the hemispheres.

8:40–8:55 (31)
Environmental Context Effects in Episodic Recognition of Novel Faces. KERRY A. CHALMERS, University of Newcastle, Australia—Effects of context on recognition were investigated in three experiments. During study, novel faces were presented in one of two contexts created by varying screen position and background color. At test, old (studied) and new (nonstudied) faces were presented in the same context as studied faces or in a different context that was either a context seen at study (Experiments 1 and 3) or a new context (Experiment 2). Participants judged whether faces were “old” (studied) or “new” (Experiments 1 and 2) or whether they had been studied in the “same” or “different” context or were “new” faces (Experiment 3). Match between study and test contexts had no effect on correct recognition of faces, even when the study context was correctly identified at test. False recognition was higher when the test context was old than when it was new. Implications for global matching models and dual-process accounts of memory are considered.

9:00–9:15 (32)
Processing the Trees and the Forest During Initial Stages of Face Perception: Electrophysiological Evidence. SHLOMO BENTIN & YULIA GOLLAND, Hebrew University of Jerusalem, ANASTASIA FLAVERIS, University of California, Berkeley, LYNN C. ROBERTSON, Veterans Affairs Medical Center, Martinez, and University of California, Berkeley, & MORRIS MOSCOVITCH, University of Toronto—Although global configuration is a hallmark of face processing, most contemporary models of face perception posit a dual-code view, according to which face recognition relies on the extraction of featural codes, involving local analysis of individual face components, as well as on the extraction of configural codes, involving the components themselves and computation of the spatial relations among them. We explored the time course of processing configural and local component information during face processing by recording the N170, an ERP component that manifests early perception of physiognomic information. The physiognomic value of local and global information was manipulated by substituting objects or faces for the eyes in the global configuration of a schematic face or by placing the same stimuli in random positions inside the global face. The results suggest that the global face configuration imposes (local) analysis of information in the “eyes” position, which determines the overall physiognomic value of the global stimulus.
9:20–9:35 (33)
Facial Conjunctions May Block Recollection: ERP Evidence. KALYAN SHASTRI, JAMES C. BARTLETT, & HERVÉ ABDI, University of Texas, Dallas (read by James C. Bartlett)—Although conjunctions of previously viewed faces are sometimes falsely judged as “old,” they often are correctly rejected as “new.” This could be due to (1) successful recollection of configural information or (2) low familiarity and/or failure of recollection. To distinguish these ideas, we compared ERPs in a recognition test for hits to old faces and correct rejections of (1) conjunction faces, (2) entirely new faces, and (3) repetitions of new faces. Focusing on differences in ERP positivity, 400 to 800 msec poststimulus, over midline and left parietal sites (CP3, CPZ, P3, and PZ), we replicated the “parietal old/new effect” of greater positivity for old faces than for new faces, a difference frequently attributed to recollection. A comparison of repeated new faces and conjunctions showed this same effect, and, critically, the ERP functions for repeated new faces closely matched those for old faces, whereas the functions for conjunctions closely matched those for new faces.
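The parietal old/new effect described here is conventionally quantified as the mean amplitude in the 400–800-msec poststimulus window, averaged over the parietal sites, and compared across conditions. A minimal sketch of that measurement using MNE-Python; the file name, event labels, and channel casing are hypothetical assumptions, not the authors’ pipeline:

```python
import mne

# Hypothetical epochs file with event labels for the four trial types.
epochs = mne.read_epochs("face_recognition-epo.fif")
parietal = ["CP3", "CPz", "P3", "Pz"]  # the CP3/CPZ/P3/PZ sites

def mean_window_amplitude(evoked, channels, tmin=0.4, tmax=0.8):
    """Mean amplitude (volts) over a poststimulus window and channel
    set, a standard index of the parietal old/new effect."""
    ev = evoked.copy().pick(channels).crop(tmin, tmax)
    return ev.data.mean()

# Hypothetical condition labels corresponding to hits to old faces and
# correct rejections of conjunction, new, and repeated new faces.
for cond in ["old", "conjunction", "new", "repeated_new"]:
    amp = mean_window_amplitude(epochs[cond].average(), parietal)
    print(f"{cond}: {amp * 1e6:.2f} µV")
# An old/new effect appears as greater positivity for old (and, per the
# abstract, repeated new) faces than for new and conjunction faces.
```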
