S1 (FriAM 1-65) - The Psychonomic Society
Sunday Morning Papers 284–289<br />
Based on evidence from spoken language corpora, Clark and Fox Tree<br />
(2002) hypothesized that the fillers “um” and “uh” represent signals<br />
of anticipated major and minor delay, respectively. But if they are genuine<br />
signals of delay, then they should show effects on the listener.<br />
Two mouse-tracking experiments supported this prediction. Listeners<br />
expected a speaker to refer to something new following an “um” but<br />
not following an “uh,” and only when “um” was followed by a sufficiently<br />
long pause (Experiment 1). Furthermore, this expectation was<br />
based on active perspective taking rather than on a learned association<br />
between a particular pattern of disfluency and new information:<br />
Listeners expected information that would be new for the current<br />
speaker, even though that information was old for them (Experiment<br />
2). These findings suggest that “uh” and “um” are metacognitive<br />
collateral signals that draw listeners’ attention to a speaker’s cognitive<br />
state.<br />
Movement, Distance, and Depth Perception<br />
Seaview, Sunday Morning, 8:00–9:20<br />
Chaired by Maggie Shiffrar, Rutgers University, Newark<br />
8:00–8:15 (284)<br />
Facing Apparent Motion: A Translating Eyeball Illusion. SONGJOO<br />
OH & MAGGIE SHIFFRAR, Rutgers University, Newark (read by<br />
Maggie Shiffrar)—Traditional models of the human visual system assume<br />
that all classes of visual images are initially analyzed in the same<br />
way. From this perspective, the same processes are employed when observers<br />
view crashing waves and smiling children. We tested this assumption<br />
by investigating whether face perception changes motion<br />
perception. In psychophysical studies, we tested the perception of classic<br />
apparent motion phenomena. Wertheimer (1912) initiated Gestalt<br />
psychology with the finding that two sequentially presented static dots<br />
can appear as one translating dot. When these two dots are positioned<br />
in the eye sockets of an upright face, the perception of translation stops.<br />
Face processing also impacts a modified Ternus (1926) display such<br />
that dots that would otherwise appear to move independently instead<br />
appear to move together as eyes in a face. These results suggest that the visual analysis<br />
of facial motion differs from other motion analyses.<br />
8:20–8:35 (285)<br />
Why Do Moving Objects Interfere With the Visibility of Stationary<br />
Ones? JUDITH AVRAHAMI & OREN FLEKSER, Hebrew University—The<br />
fact that moving objects interfere with the visibility of stationary<br />
ones has been known for a long time (Bonneh, Cooperman, &<br />
Sagi, 2001; Grindley & Townsend, 1966; MacKay, 1960), but its<br />
cause is still in dispute. To gain insight into the phenomenon, we<br />
measured the time required to detect a gradually appearing Gabor stimulus<br />
against a background of moving dots. The direction and<br />
speed of the dots and the orientation and spatial frequency of the<br />
Gabor were manipulated. When its spatial frequency was high, the<br />
Gabor stimulus was harder to detect when its orientation was orthogonal<br />
to the direction of the moving dots than when parallel; the difference<br />
increased with faster dots. Surprisingly, the opposite was true<br />
when the spatial frequency of the Gabor was low. These results provide<br />
clues as to what the eye must be doing when watching moving<br />
objects and when perceiving stationary ones.<br />
8:40–8:55 (286)<br />
Testing Two Accounts of a Failure of Perceptual Separability.<br />
STEPHEN C. DOPKINS, George Washington University—In the<br />
complex distance task, the stimuli vary on two spatial dimensions and<br />
the error rate for distance judgments regarding one dimension depends<br />
on the interstimulus distance on both dimensions. According to<br />
the mean-shift integrality (MSI) account, this phenomenon reflects<br />
the mental representation of the stimuli; the mean of the distribution<br />
for a stimulus on each dimension of the representation depends on the<br />
level of the stimulus on both spatial dimensions. According to the derived<br />
distance (DD) account, the phenomenon reflects the distance estimation<br />
process; the distance between a pair of stimuli on a given dimension<br />
is derivative of the distance between them on both dimensions—distance<br />
on a given dimension can only be assessed to the degree<br />
that the dimension’s scale is made greater than the scale of the<br />
other dimension. The DD account fit the data from several experiments<br />
better than the MSI account did.<br />
9:00–9:15 (287)<br />
A Substantial Genetic Contribution to Stereoscopic Depth Judgments<br />
Further Than Fixation. JEREMY B. WILMER & BENJAMIN T.<br />
BACKUS, University of Pennsylvania—One in three individuals<br />
is blind to some range of stereoscopic depth for briefly<br />
presented stimuli (1). We tested the precision of depth estimation from<br />
stereopsis in 65 identical and 35 fraternal twin pairs using a recently<br />
developed test (2). Precision for each individual was calculated as the<br />
increment in disparity that caused an increment in reported depth on<br />
75% of trials. Using structural equation modeling we estimated the influences<br />
of genetic and environmental factors on stereoscopic precision.<br />
Almost all reliable individual variation in “far” precision (beyond<br />
fixation) was attributable to genes (57%, 38%–70%*), but genes<br />
did not contribute to individual variation in “near” precision (closer<br />
than fixation; 0%, 0%–26%*). Thus, specific genetic markers may<br />
correlate with far stereopsis, and therapeutic interventions may be<br />
most successful if they target near stereopsis. *±1SE. (1) Richards, W.<br />
(1970) Experimental Brain Research, 10, 380-388. (2) van Ee, R., &<br />
Richards, W. (2002) Perception, 31, 51-64.<br />
9:20–9:35 (288)<br />
Extremal Edges and Gradient Cuts: New Cues to Depth and Figure–<br />
Ground Perception. STEPHEN E. PALMER & TANDRA GHOSE,<br />
University of California, Berkeley—Extremal edges (EEs) and gradient<br />
cuts (GCs) are powerful cues to depth and figure–ground organization<br />
that arise from shading and texture gradients, where convex,<br />
smoothly curved surfaces occlude themselves (EEs) or are occluded by<br />
other surfaces (GCs). Ecological constraints imply that the EE side of<br />
the shared edge should be seen as closer and figural, and experimental<br />
evidence shows that they are. Indeed, EEs readily dominate even<br />
combinations of well-known classical figure–ground cues (e.g., size<br />
and convexity). The GC side of a shared edge tends to be seen as a farther/ground<br />
surface. The strength of GC effects depends strongly on<br />
the relation between the shared edge and the gradient’s equiluminance<br />
contours, including the angle between them and the alignment of inflection<br />
points along the edge with luminance minima and maxima<br />
along the shading gradient. Together they strongly determine the perception<br />
of relative depth across an edge and figure–ground assignment.<br />
Picture Processing and Imagery<br />
Shoreline, Sunday Morning, 8:00–10:00<br />
Chaired by James R. Brockmole, University of Edinburgh<br />
8:00–8:15 (289)<br />
Prioritizing New Objects for Eye Fixation in Scenes: Effects of<br />
Object–Scene Consistency. JAMES R. BROCKMOLE & JOHN M.<br />
HENDERSON, University of Edinburgh—Recent research suggests<br />
that new objects appearing in real-world scenes are prioritized for eye<br />
fixations and, by inference, for attentional processing. We examined<br />
whether semantic consistency modulates the degree to which new objects<br />
appearing in a scene are prioritized for viewing. New objects<br />
were added to photographs of real-world scenes during a fixation<br />
(new object with transient onset) or during a saccade (new object<br />
without transient onset). The added object was either consistent or inconsistent<br />
with the scene’s meaning. Object consistency did not affect<br />
the efficacy with which transient onsets captured attention, suggesting<br />
that transient motion signals capture attention in a bottom-up manner.<br />
Without a transient motion signal, the semantic consistency of the<br />
new object affected its prioritization, with new inconsistent objects<br />
fixated sooner than new consistent objects, suggesting that attention