Abstracts 2005 - The Psychonomic Society
Friday Morning Papers 28–33
audiovisual speech perception differs critically from auditorily apprehended speech. Using sine wave synthesis to force perceivers to resolve phonetic properties dynamically, we tested two conditions of unimodal asynchrony tolerance. Listeners transcribed sentences at each degree of asynchrony of the tone analogue of the first or second formant, relative to the remaining tones of the sentence, ranging from 250-msec lead to 250-msec lag. The results revealed time-critical perceptual organization of unimodal heard speech. The implications for amodal principles of the perceptual organization and analysis of speech are discussed.
9:20–9:35 (28)
Dissociating Uni- From Multimodal Perception in Infants Using Optical Imaging. HEATHER BORTFELD & ERIC WRUCK, Texas A&M University, & DAVID BOAS, Harvard Medical School—Near-infrared spectroscopy is an optical imaging technique that measures relative changes in total hemoglobin concentration and oxygenation as an indicator of neural activation. Recent research suggests that optical imaging is a viable procedure for assessing the relation between perception and brain function in human infants. We examined the extent to which increased neural activation, as measured using optical imaging, could be observed in a neural area known to be involved in speech processing, the superior temporal cortex, during exposure to fluent speech. Infants 6–9 months of age were presented with a visual event paired with fluent speech (visual + audio) and a visual event without additional auditory stimuli (visual only). We observed a dissociation of neural activity during the visual + audio event and the visual-only event. The results have important implications for research in language development, developmental neuroscience, and infant perception.
Face Processing
Conference Rooms B&C, Friday Morning, 8:00–10:00
Chaired by Christian Dobel
Westfälische Wilhelms-Universität Münster
8:00–8:15 (29)
Learning of Faces and Objects in Prosopagnosia. CHRISTIAN DOBEL & JENS BÖLTE, Westfälische Wilhelms-Universität Münster—We investigated a group of congenital prosopagnosics with a neuropsychological testing battery. Their performance was characterized by an impairment in recognizing individual faces; other aspects of face processing were affected to a lesser degree. In a subsequent eyetracking experiment, we studied the ability of these subjects to learn novel faces, objects with faces, and objects presented in an upright and an inverted manner. Controls mostly attended to central regions of the stimuli, doing so more for faces than for objects and more strongly for upright than for inverted stimuli. Prosopagnosics performed as accurately as controls, but their latencies were strongly delayed. In contrast to controls, they devoted more attention to the outer parts of the stimuli. These studies confirm the assumption that prosopagnosics use a more feature-based approach to recognize visual stimuli and that configural processing might be the locus of their impairment.
8:20–8:35 (30)
On the Other Hand: The Concurrence Effect and Self-Recognition. CLARK G. OHNESORGE, Carleton College, & NICK PALMER, JUSTIN KALEMKIARIAN, & ANNE SWENSON, Gustavus Adolphus College (read by Clark G. Ohnesorge)—Several recent studies of hemispheric specialization for facial self-recognition in which either visual field or response hand was manipulated have returned contrasting results. In three studies of self-recognition, we simultaneously manipulated visual field and response hand and found evidence for a concurrence effect—that is, an interaction of visual field and response hand indicating better performance when the “viewing” hemisphere also controls the hand used for response. The absence of main effects for either visual field or response hand is evidence against strong claims for hemispheric specialization in self-recognition. We investigated the generality of the concurrence effect in three further studies and found that it also occurs for identification of unfamiliar faces but disappears when a task is chosen (distinguishing circles from ellipses) that more strongly favors the right hemisphere. The several task- and stimulus-related performance asymmetries we observed are discussed in terms of communication and cooperation between the hemispheres.
8:40–8:55 (31)
Environmental Context Effects in Episodic Recognition of Novel Faces. KERRY A. CHALMERS, University of Newcastle, Australia—Effects of context on recognition were investigated in three experiments. During study, novel faces were presented in one of two contexts created by varying screen position and background color. At test, old (studied) and new (nonstudied) faces were presented in the same context as studied faces or in a different context that was either a context seen at study (Experiments 1 and 3) or a new context (Experiment 2). Participants judged whether faces were “old” (studied) or “new” (Experiments 1 and 2) or whether they had been studied in the “same” or “different” context or were “new” faces (Experiment 3). Match between study and test contexts had no effect on correct recognition of faces, even when study context was correctly identified at test. False recognition was higher when the test context was old than when it was new. Implications for global matching models and dual-process accounts of memory are considered.
9:00–9:15 (32)
Processing the Trees and the Forest During Initial Stages of Face Perception: Electrophysiological Evidence. SHLOMO BENTIN & YULIA GOLLAND, Hebrew University of Jerusalem, ANASTASIA FLAVERIS, University of California, Berkeley, LYNN C. ROBERTSON, Veterans Affairs Medical Center, Martinez, and University of California, Berkeley, & MORRIS MOSCOVITCH, University of Toronto—Although global configuration is a hallmark of face processing, most contemporary models of face perception posit a dual-code view, according to which face recognition relies on the extraction of featural codes, involving local analysis of individual face components, as well as on the extraction of configural codes, involving the components themselves and computation of the spatial relations among them. We explored the time course of processing configural and local component information during face processing by recording the N170, an ERP component that manifests early perception of physiognomic information. The physiognomic value of local and global information was manipulated by substituting objects or faces for eyes in the global configuration of the schematic face or by placing the same stimuli in random positions inside the global face. The results suggest that the global face configuration imposes (local) analysis of information in the “eyes” position, which determines the overall physiognomic value of the global stimulus.
9:20–9:35 (33)
Facial Conjunctions May Block Recollection: ERP Evidence. KALYAN SHASTRI, JAMES C. BARTLETT, & HERVÉ ABDI, University of Texas, Dallas (read by James C. Bartlett)—Although conjunctions of previously viewed faces are sometimes falsely judged as “old,” they often are correctly rejected as “new.” This could be due to (1) successful recollection of configural information or (2) low familiarity and/or failure of recollection. To distinguish these ideas, we compared ERPs in a recognition test for hits to old faces and correct rejections of (1) conjunction faces, (2) entirely new faces, and (3) repetitions of new faces. Focusing on differences in ERP positivity, 400 to 800 msec poststimulus, over midline and left parietal sites (CP3, CPZ, P3, and PZ), we replicated the “parietal old/new effect” of greater positivity for old faces than for new faces, a difference frequently attributed to recollection. A comparison of repeated new faces and conjunctions showed this same effect, and, critically, the ERP functions for repeated new faces closely matched those for old faces, whereas the functions for conjunctions closely matched those for new faces.