Abstracts 2005 - The Psychonomic Society

Papers 293–299, Sunday Morning

of phoneme repetition in the spoken production of short phrases were explored. Participants named colored pictures with an adjective–noun phrase; the central manipulation was whether segments within the adjective and the noun matched (green gun, red rug) or mismatched (red gun, green rug). A robust facilitatory effect of phoneme match was found, suggesting that speakers planned the entire phrase before they initiated a response and that repeated selection of the same phoneme conferred a processing benefit. Further experiments demonstrated that the effect of phoneme repetition is phonological, not articulatory, in nature, and that it is largely independent of the position of the matched segment within each word. Implications for theories of phonological encoding and advance planning are discussed.

9:40–9:55 (293)
Speech Errors Reflect Newly Learned Phonotactic Constraints. JILL A. WARKER & GARY S. DELL, University of Illinois, Urbana-Champaign (read by Gary S. Dell)—If speakers repeatedly produce a set of syllables in which all occurrences of, say, /f/ are syllable onsets, and all /s/s are codas, their speech errors will rapidly come to reflect these constraints. For example, when /f/s slip, they will slip to other onset positions, not to coda positions. We attribute this effect to the implicit learning of the phonotactic constraints within the experiment. In four experiments, we show that more complex constraints, such as /f/ appearing in an onset only if the vowel is /ae/, can also be acquired and can influence speech errors. These constraints are learned much more slowly, however. We present a model of the data to illustrate our view that the language production system adapts to recent experience while also continuing to reflect the accumulated experience of a lifetime of speaking and listening.
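
The contrast between simple and conditional constraints can be made concrete with a toy tally of distributional evidence. The sketch below is purely illustrative (it is not Warker and Dell's model, and the syllable generator is hypothetical): under a conditional constraint, first-order phoneme-by-position statistics are uninformative, and the conjunction cells that do carry the constraint each receive only a fraction of the trials.

```python
from collections import Counter
import random

random.seed(1)

def conditional_syllable():
    # Hypothetical stream obeying the abstract's example constraint:
    # /f/ is an onset only when the vowel is /ae/; otherwise /f/ is a coda.
    vowel = random.choice(["ae", "i"])
    return ("f", vowel, "s") if vowel == "ae" else ("s", vowel, "f")

slot_counts = Counter()         # first-order cells: (phoneme, slot)
conjunction_counts = Counter()  # second-order cells: (phoneme, slot, vowel)
for _ in range(200):
    onset, vowel, coda = conditional_syllable()
    for phoneme, slot in ((onset, "onset"), (coda, "coda")):
        slot_counts[(phoneme, slot)] += 1
        conjunction_counts[(phoneme, slot, vowel)] += 1

# The first-order cells are ambiguous (/f/ occurs in both slots equally
# often), so only the conjunction cells can encode the constraint, and
# each holds only about half of the 200 trials' worth of /f/ tokens --
# one reason a conditional constraint might be acquired more slowly.
print("f as onset (any vowel):", slot_counts[("f", "onset")])
print("f as coda  (any vowel):", slot_counts[("f", "coda")])
print("f as onset given /ae/: ", conjunction_counts[("f", "onset", "ae")])
```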

10:00–10:15 (294)
Self-Monitoring of Sign Language: Implications for the Perceptual Loop Hypothesis. KAREN D. EMMOREY, San Diego State University—Models of speech production suggest that an inner, prearticulatory loop and an external, auditory loop monitor the ongoing speech flow. Speakers can monitor their speech output by listening to their own voice—a perceptual loop feeds back to the speech comprehension mechanism. Herein lies a critical difference between signed and spoken language. The visual input from one's own signing is quite distinct from the visual input of another's signing. To investigate how signers monitor their production, normally sighted signers and signers with tunnel vision due to Usher's syndrome were studied. Evidence suggests that signers visually monitor the location of their hands in space but may not parse this visual input via the sign comprehension mechanism. In addition, prearticulatory monitoring and perceptual monitoring of another's signing operate at different representational levels (phonological vs. phonetic). I argue that self-monitoring via a perceptual loop may be less critical for signed than for spoken language.

Movement and Perception
Conference Rooms B&C, Sunday Morning, 8:00–9:40
Chaired by Michael K. McBeath, Arizona State University

8:00–8:15 (295)
Pursuers Maintain Linear Optical Trajectory When Navigating to Intercept Robots Moving Along Complex Pathways. MICHAEL K. MCBEATH, WEI WANG, THOMAS G. SUGAR, IGOR DOLGOV, & ZHENG WANG, Arizona State University—This study explores the lateral navigational strategy used to intercept moving robots that approach along complex pathways. Participants ran across a gymnasium equipped with an eight-camera, high-resolution motion capture system and tried to catch a robot that varied in speed and direction. Participant behavior was compared with predictions of three lateral control models, each based on maintenance of a particular optical angle relative to the target: (1) constant alignment angle (CAA), (2) constant bearing angle (CBA), and (3) linear optical trajectory (LOT). The results were most consistent with maintenance of a LOT and least consistent with CBA. The findings support the idea that pursuers use the same simple optical control mechanism to navigate toward complexly moving targets that they use when intercepting simple ballistic ones. Maintenance of a linear optical trajectory appears to be a robust, general-purpose strategy for navigating to interception of targets headed off to the side.
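
The three control models are geometric and can be stated directly in code. The following is a minimal sketch of one common formalization, not the authors' analysis pipeline; the function name, the fixed-gaze assumption behind the lateral angle, and the target_height parameter are all assumptions introduced here.

```python
import numpy as np

def control_angles(pursuer_xy, target_xy, target_height=0.3):
    """Per-frame values of the three candidate control variables.
    pursuer_xy, target_xy: (T, 2) arrays of ground-plane positions;
    target_height: hypothetical eye-relative target height in meters."""
    to_target = target_xy - pursuer_xy
    dist = np.linalg.norm(to_target, axis=1)
    # CAA: alignment angle -- orientation of the pursuer-target line in
    # room coordinates; the model predicts it is held constant.
    alignment = np.degrees(np.arctan2(to_target[:, 1], to_target[:, 0]))
    # CBA: bearing angle -- angle between the pursuer's heading
    # (estimated from frame-to-frame velocity) and the target line.
    heading = np.gradient(pursuer_xy, axis=0)
    cos_b = np.einsum("ij,ij->i", heading, to_target) / (
        np.linalg.norm(heading, axis=1) * dist + 1e-9)
    bearing = np.degrees(np.arccos(np.clip(cos_b, -1.0, 1.0)))
    # LOT: the target's image position, taken here as (lateral angle,
    # elevation angle) with gaze direction assumed fixed; the model
    # predicts these image points fall on a straight line, so the
    # residual of a line fit indexes the violation.
    elevation = np.degrees(np.arctan2(target_height, dist))
    lot_residual = np.polyfit(alignment, elevation, 1, full=True)[1]
    return alignment, bearing, lot_residual
```

Comparing the trial-by-trial variability of each per-frame quantity (or, for LOT, the straightness residual) is one way to ask which variable a runner holds invariant.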

8:20–8:35 (296)
Event Path Perception: Recognition of Transposed Spatiotemporal Curves. THOMAS F. SHIPLEY, Temple University—An event can be recognized as similar to another (although, logically, events are seen only once). How can we understand this achievement? Aspects of event perception may allow direct analogies to the better-understood domain of object perception. If object recognition models are to serve as the basis for models of event recognition, events should be recognized following spatial transpositions, just as objects are recognized following translation, rotation, or size change. Object paths (the trajectories of objects through space) are recognized despite transpositions. In this experiment, paths shown at one scale (e.g., a human walking along a complex path in an open field) were accurately matched to paths with different objects and different scales (e.g., a hand tracing a path in the air). This is consistent with a model of event perception in which paths of moving objects are decomposed at curvature extrema, with recognition based on the spatiotemporal shape of the fragments.
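
The proposed decomposition can be illustrated concretely. Below is a minimal sketch, assuming a smooth and evenly sampled 2-D path; the function and its boundary rule are illustrative rather than the paper's actual method.

```python
import numpy as np

def segment_at_curvature_extrema(x, y):
    """Split a sampled 2-D path at local extrema of signed curvature --
    a toy version of the decomposition the abstract describes."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # Signed curvature of a parametric plane curve (x(t), y(t)).
    kappa = (dx * ddy - dy * ddx) / ((dx**2 + dy**2) ** 1.5 + 1e-12)
    # Interior sign changes in successive differences of curvature mark
    # local extrema, which serve as fragment boundaries.
    i = np.arange(1, len(kappa) - 1)
    cuts = i[(kappa[i] - kappa[i - 1]) * (kappa[i + 1] - kappa[i]) < 0]
    bounds = [0, *cuts, len(x) - 1]
    # Fragments would then be normalized for scale and orientation
    # before matching, giving transposition-invariant recognition.
    return [(x[a:b + 1], y[a:b + 1]) for a, b in zip(bounds[:-1], bounds[1:])]
```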

8:40–8:55 (297)
Representational Momentum and Motion Capture. TIMOTHY L. HUBBARD, Texas Christian University—In representational momentum, memory for the final location of a moving target is displaced in the direction of target motion. In motion capture, a stationary stimulus is perceived to move in the same direction as a nearby moving target. In the experiments reported here, a stationary stimulus was briefly presented near the end or middle of a moving target's trajectory. Memory for the location of a stationary stimulus presented near the end of the moving target's trajectory was displaced in the direction of target motion, and this displacement was larger with faster target velocities and when the stationary stimulus was closer to the target. Memory for the location of a stationary stimulus presented near the middle of the moving target's trajectory was not displaced. Representational momentum of a moving target can influence memory for a nearby stationary stimulus. Implications for theories of representational momentum and of motion capture are considered.
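
The qualitative pattern can be captured in a toy predictor. Everything below is hypothetical: the functional form, constants, and names are invented for illustration and are not Hubbard's model; the sketch merely encodes the three reported effects (speed scaling, proximity scaling, and no displacement at mid-trajectory).

```python
def predicted_displacement(target_speed, stimulus_distance, at_end):
    """Hypothetical toy model of the reported pattern: forward memory
    displacement of the stationary stimulus grows with target speed,
    falls off with stimulus-target distance, and is absent when the
    stimulus appears mid-trajectory. K and SCALE are made-up constants."""
    if not at_end:          # stimulus near the middle of the trajectory
        return 0.0
    K, SCALE = 0.05, 2.0    # arbitrary units of visual angle
    return K * target_speed / (1.0 + stimulus_distance / SCALE)

# Faster targets and closer stimuli yield larger forward displacement:
print(predicted_displacement(8.0, 1.0, at_end=True))   # ~0.27
print(predicted_displacement(4.0, 1.0, at_end=True))   # ~0.13
print(predicted_displacement(8.0, 4.0, at_end=True))   # ~0.13
print(predicted_displacement(8.0, 1.0, at_end=False))  # 0.0
```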

9:00–9:15 (298)
Scene Movement Versus Observer Movement. CORY FINLAY, MICHAEL MOTES, & MARIA KOZHEVNIKOV, Rutgers University, Newark (read by Maria Kozhevnikov)—This research examined the hypothesis that observer movement automatically updates representations of scenes. In our initial study, observers memorized a spatial array of ten objects from a single perspective. Then, either the scene was rotated or the participants moved around the scene (from 0° to 360°), and participants judged whether the interobject spatial relations in the array had changed. Regardless of whether the scene was rotated or the observers moved, greater angular disparity between judged and encoded views produced slower RTs, suggesting that memory for the scene was not automatically updated following observer movement. Furthermore, when set size (4, 6, 8, or 10 objects) and the delay interval between encoding and movement (0, 6, or 12 sec) were varied, larger angular disparity between encoded and judged views still produced decreased accuracy and increased RTs, regardless of set size and delay interval. These data raise important questions regarding the conditions under which spatial updating does not occur.
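
One standard way to quantify the disparity effect the abstract reports is a linear fit of RT against angular disparity, folding rotations beyond 180° since disparity is symmetric. The sketch below is an illustration of that analysis, not the authors' code, and the example numbers are hypothetical rather than data from the study.

```python
import numpy as np

def disparity_slope(rotation_deg, rt_ms):
    """Regress RT on angular disparity, folding rotations above 180 deg
    back onto 0-180 deg (a 270 deg rotation is a 90 deg disparity in the
    opposite direction). Returns (slope, intercept)."""
    rotation = np.asarray(rotation_deg, float) % 360
    disparity = np.minimum(rotation, 360 - rotation)
    slope, intercept = np.polyfit(disparity, np.asarray(rt_ms, float), 1)
    return slope, intercept  # RT cost in ms per degree of disparity

# Hypothetical numbers, for shape only -- not data from the study:
print(disparity_slope([0, 90, 180, 270, 360], [820, 1130, 1480, 1150, 830]))
```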

9:20–9:35 (299)
It's All About Me (Well Mostly): Identifying People From Their Actions. SAPNA PRASAD, FANI LOULA, MAGDALENA GALAZYN, & MAGGIE SHIFFRAR, Rutgers University, Newark (read by
