
Friday Afternoon Papers 99–104

Observers overestimate the slopes of hills with verbal measures, but are more accurate with body-based motor matching techniques, an example of the contrast between cognitive and sensorimotor visual information. We replicated this effect using arm posture measured with digital photography rather than a tilt board for the motor measure, eliminating body contact with the hardware. Judged slopes are too steep at long distances, measured by having observers estimate segments of slopes between themselves and markers; at 16 m an 11° hill is verbally estimated at nearly 30°; the motor estimate is much lower. At 1 m, however, estimates are more accurate with both measures. What happens when observers traverse the slope before estimating it, giving them short-distance, presumably more accurate perceptual information at every point on the slope? Their overestimates are just as great as those of observers who did not traverse the slope. Appearance dominates knowledge of the terrain.

3:10–3:25 (99)

The Perception of Four-Dot Configurations. MARY C. PORTILLO, CARL HAMMARSTEN, SHAIYAN KESHVARI, STEPHEN W. JEWELL, & JAMES R. POMERANTZ, Rice University (read by James R. Pomerantz)—Perceivers see stars in the night sky configured into dozens of nameable constellations. With simpler stimuli, two-point configurations are organized into a straight line, and three points into a triangle. How do we perceive configurations of four points, as when four coins are tossed randomly on the floor: as quadrilaterals, as straight lines, curves, Y patterns, Ls, Ts, or yet others? We presented 328 patterns that systematically sampled the space of all possible 4-dot arrangements (ignoring size scale, orientation, and reflections) for subjects to free-classify based on perceived similarity. We then cluster-analyzed their responses. The structure of their classifications was captured by a hierarchy of 14 clusters of patterns. The first bifurcation occurred between patterns having 3 dots in a straight line versus those that did not. Further branches suggest an ordered set of rules for grouping dot patterns, including grouping by proximity, linearity, parallelism, and symmetry.
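
An illustrative sketch, assuming SciPy and NumPy (not the authors' code or data), of how a free-classification co-occurrence matrix could be hierarchically clustered and cut at the 14 clusters reported above; the similarity matrix below is random placeholder data standing in for subjects' groupings:

# Sketch: hierarchical clustering of free-classification data.
# Entry [i, j] of "co" counts how often two patterns were sorted together.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n_patterns = 328                                       # patterns in the study
co = rng.integers(0, 20, (n_patterns, n_patterns))     # placeholder counts
co = (co + co.T) / 2                                   # similarity must be symmetric
np.fill_diagonal(co, co.max())

# Convert similarity to distance and build the cluster hierarchy.
dist = co.max() - co
np.fill_diagonal(dist, 0)
tree = linkage(squareform(dist, checks=False), method="average")

# Cut the tree into 14 clusters, matching the hierarchy in the abstract.
labels = fcluster(tree, t=14, criterion="maxclust")
print(np.bincount(labels)[1:])                         # cluster sizes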

3:30–3:45 (100)

Spationumerical Associations Between Perception and Semantics. PETER KRAMER, IVILIN STOIANOV, CARLO UMILTÀ, & MARCO ZORZI, University of Padua (sponsored by Johannes C. Ziegler)—Stoianov, Kramer, Umiltà, and Zorzi (Cognition, in press) found an interaction between visuospatial and numerical information that is independent of response selection effects (e.g., the SNARC effect). This Spatial-Numerical Association between Perception and Semantics (SNAPS) emerges when a spatial prime follows (backward priming), but not when it precedes (forward priming), a numerical target. Here, we investigate the time course and nature of the SNAPS effect. We used nonspatial, verbal parity judgments and number comparisons and, to dissociate the SNAPS effect from other numerical effects, we compared conditions with and without priming. The results show that the SNAPS effect is inhibitory and peaks when the prime follows the target by about 100 msec. Moreover, we observed a main effect of number size even in the parity judgment task, contrary to earlier claims. This latter finding has important implications for current models of the representation of numerical magnitude.

Selective Attention
Regency ABC, Friday Afternoon, 4:10–5:30
Chaired by Zhe Chen, University of Canterbury

4:10–4:25 (101)

Implicit Perception in Object Substitution Masking. ZHE CHEN, University of Canterbury, & ANNE TREISMAN, Princeton University—Object substitution masking (OSM; Enns & Di Lollo, 1997) refers to reduced target discrimination when the target is surrounded by a sparse mask that does not overlap with the target in space but trails it in time. In two experiments, we used a novel approach to investigate the extent of processing of a masked target in OSM. We measured response compatibility effects between target and mask, both when the offsets were simultaneous and when the mask offset was delayed. Participants made a speeded response to the mask, followed by an accuracy-only response to the target, and then categorized their response to the target as “see” or “guess.” Targets and masks matched or differed at a feature level in Experiment 1 and at a categorical level in Experiment 2. Evidence for OSM, as well as a dissociation between perception and awareness, was found in both experiments.

4:30–4:45 (102)

Intertrial Biasing of Selective Attention Leads to Blink-Like Misses in RSVP Streams. ALEJANDRO LLERAS & BRIAN LEVINTHAL, University of Illinois, Urbana-Champaign—When participants are required to report the case of the color-oddball letter in a single-target RSVP stream, their ability to do so is modulated by the position of the target in the RSVP stream and, more crucially, by the match or mismatch between the current target color and the color of distractors in the prior RSVP stream. When the target is presented in the color of the distractors in the previous trial's RSVP stream, participants very often miss the target (performance is at chance) when the target is presented early in the RSVP stream. Performance recovers for later target positions in the RSVP stream. The pattern of performance (cost and recovery) is strongly reminiscent of the attentional blink, even though only one target is to be detected on any given trial and more than 2 sec have elapsed since the end of the previous trial.

4:50–5:05 (103)

The Time Course of Goal-Driven Saccadic Selection. WIESKE VAN ZOEST & MIEKE DONK, Vrije Universiteit, Amsterdam (sponsored by Mieke Donk)—Four experiments were performed to investigate goal-driven modulation in saccadic target selection as a function of time. Observers were presented with displays containing multiple homogeneously oriented background lines and two singletons. Observers were instructed to make a speeded eye movement to one singleton in one condition and to the other singleton in another condition. Simultaneously presented singletons were defined in different dimensions (orientation and color in Experiment 1) or in the same dimension (i.e., orientation in Experiment 2, color in Experiments 3 and 4). The results showed that goal-driven selectivity increased as a function of saccade latency and depended on the specific singleton combination. Yet selectivity was not a function of whether both singletons were defined within or across dimensions. Instead, the rate of goal-driven selectivity was related to the similarity between the singletons: when singletons were dissimilar, accuracy as a function of time increased more rapidly than when they were similar.

5:10–5:25 (104)

Tracking of Visual Objects Containing Textual Information. LAURI OKSAMA & JUKKA HYÖNÄ, University of Turku (sponsored by Jukka Hyönä)—Do properties of textual identity information associated with moving targets influence visual tracking? In real-world visual environments, such as air traffic displays, targets to be tracked contain textual information (e.g., call signs). In the present study, the textual information appeared within rectangles that moved around the computer screen. Four factors were manipulated in the experiments: (1) number of targets (2–6), (2) length of textual information (5- vs. 10-character words), (3) familiarity of textual information (existing words vs. pronounceable pseudowords), and (4) speed of object movement. We observed that performance accuracy decreased as a function of target set size, text length, word unfamiliarity, and target speed. We argue that the results are consistent with the recently proposed serial model of dynamic identity–location binding (MOMIT), which states that identity–location bindings for multiple moving objects become more difficult when target identification consumes more time (e.g., as text length increases).
