S1 (FriAM 1-65) - The Psychonomic Society
Friday Evening Posters 3106–3111
consistency after 6 months was 100%. Skin conductance responses (SCRs) were monitored while we left each subject alone in a room so they would not need to “fake” specific emotional expressions to convince us of their authenticity. A hidden camera videotaped the expressions evoked by different textures, and the tapes were subsequently analyzed by blinded researchers to rule out confabulation. Evaluators’ ratings correlated significantly with the valence of the synesthetes’ subjective reports, and SCR was significantly enhanced for negative synesthetic emotions. We suggest that this effect arises from increased cross-activation between S2 somatosensory cortex and the insula for “basic” emotions and from fronto-limbic hyperactivation for more subtle ones.
(3106)
Cross-Modal Comparisons of Time Intervals Presented or Not Presented in Sequences. SIMON GRONDIN, University of Laval, & J. DEVIN MCAULEY, Bowling Green State University—In four experiments, participants were presented with two sequences, each consisting of 1 or 4 intervals (marked by 2 or 5 signals), and were asked to indicate whether the interval(s) of the second sequence was (were) shorter or longer than the interval(s) of the first. On each trial, the standard sequence, which could occur first or second, delineated a fixed 500-msec interval, whereas the comparison sequence delineated a variable interval that was 500 ± 15, 45, 75, or 105 msec. Markers in Sequence 1 and Sequence 2 were, respectively, sounds and flashes (Experiment 1), flashes and sounds (Experiment 2), both flashes (Experiment 3), and both sounds (Experiment 4). In general, the results showed that discrimination was better (1) when four intervals were presented, rather than one (especially in Sequence 2); (2) when the standard interval(s) was (were) presented before the comparison interval(s); and (3) when sequences were marked by sounds, rather than by flashes.
• SPATIAL UPDATING •
(3107)
Path Integration and Spatial Updating in Humans: Behavioral Principles and Underlying Neural Mechanisms. THOMAS WOLBERS & MARY HEGARTY, University of California, Santa Barbara, CHRISTIAN BUECHEL, University Medical Center, Hamburg-Eppendorf, & JACK LOOMIS, University of California, Santa Barbara—Path integration, the sensing of self-motion to keep track of changes in orientation and position, constitutes a fundamental mechanism of spatial navigation. Here, we show that humans can reliably estimate self-motion from optic flow in virtual space, an ability that relied upon the dynamic interplay of self-motion processing in area MST, higher-level spatial processes in the hippocampus, and spatial working memory in medial prefrontal cortex. A subsequent eye movement study revealed that when the positions of external objects have to be updated simultaneously, humans do not simply track remembered locations by means of saccadic eye movements. Instead, incoming self-motion cues are integrated with stored representations in the precuneus to enable online computation of changing object coordinates and to generate motor plans for potential actions in dorsal premotor cortex. These results will be discussed in the context of an emerging theoretical model of navigational learning.
(3108)
Where Am I? Updating Nested Spatial Memory Learned From Different Sources. A. REYYAN BILGE & HOLLY A. TAYLOR, Tufts University—People mentally update environment locations relative to their own position as they move, a process called spatial updating. Wang and Brockmole (2003) found that when learning nested environments from direct experience, the more immediate (or proximal) surrounding (room) received updating priority, whereas the remote one (i.e., campus) required more effortful updating. The present work examined nested updating after map learning. Participants learned locations within a room nested within a campus either through direct experience or via maps. After learning both environments, they updated their location with respect to one of the environments and then completed tasks assessing knowledge of both environments. The results suggested that learning format selectively favors different levels of a nested environment. People more accurately represented the proximal environment (room) following navigation, whereas they more accurately represented the remote environment (campus) after map learning. These results have implications for best practice in representing environments at different scales.
(3109)
Knowledge Updating on the Basis of Learning From Spatial Actions and Spatial Language. ALEXANDRA PETERS & MARK MAY, Helmut Schmidt University (sponsored by Mark May)—Do spatial actions and spatial language lead to functionally equivalent or to functionally distinct types of spatial representation? In two experiments, we used real and imagined perspective switches to examine this question. Blindfolded participants were asked to learn object locations, either by exploring the locations with a cane (spatial action) or by hearing verbal descriptions (spatial language). In Experiment 1, with bodily switches (i.e., self-rotations between 0º and 180º), pointing latencies were longer after language than after action learning, especially for the more difficult testing perspectives (45º and 135º). In Experiment 2, with imagined switches to the same perspectives, spatial disparity between the real and imagined perspective had a significant effect on latencies, with both learning conditions being similarly affected. Implications of these findings for single- versus dual-code conceptions of the underlying spatial representations and processes are discussed.
(3110)
The Effect of Active Selection in Path Integration. XIAOANG WAN, RANXIAO FRANCES WANG, & JAMES A. CROWELL, University of Illinois, Urbana-Champaign (sponsored by Ranxiao Frances Wang)—Many species can integrate information about self-motion to estimate their current position and orientation relative to the origin, a phenomenon known as path integration. We used a homing task in virtual hallway mazes to investigate the effect of active selection/planning in path integration. Participants traveled along hallways and attempted to return directly to the origin upon seeing a golden apple. Half of the participants freely decided the direction and distance of each hallway by themselves (completely free selection condition). The other half followed the identical outbound pathways selected by their counterparts (passive following condition). The two groups received the same perceptual and motor information but differed in the voluntary selection of the path structure. We found no overall facilitation effect of active selection on homing performance, possibly due to a trade-off between the advantage of planning and the cost of increased working memory load and task complexity in the active selection condition.
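The homing computation at the heart of such a task can be illustrated with a simple dead-reckoning sketch: accumulate each outbound segment into a position estimate, then derive the distance and turn needed to return to the origin. This is only a minimal 2-D illustration of the path-integration concept, not the authors' experimental software; the function name and the (turn angle, distance) segment format are assumptions for the example.

```python
import math

def integrate_path(segments):
    """Dead-reckon through (turn_deg, distance) segments and return
    (distance back to origin, homing bearing relative to final heading).

    Illustrative sketch of 2-D path integration; positive turns are
    counterclockwise, bearings are normalized to (-180, 180] degrees.
    """
    x, y, heading = 0.0, 0.0, 0.0  # start at the origin, facing +x
    for turn_deg, dist in segments:
        heading += math.radians(turn_deg)  # update facing direction
        x += dist * math.cos(heading)      # advance along new heading
        y += dist * math.sin(heading)
    home_dist = math.hypot(x, y)
    # Direction of the origin relative to the traveler's current heading
    bearing = math.degrees(math.atan2(-y, -x)) - math.degrees(heading)
    bearing = (bearing + 180.0) % 360.0 - 180.0
    return home_dist, bearing

# Two 2-unit legs with a 90° left turn between them: the origin lies
# sqrt(8) units away, 135° to the traveler's left.
d, b = integrate_path([(0.0, 2.0), (90.0, 2.0)])
```

In the active-selection condition of the study, participants would effectively generate the `segments` themselves; in the passive condition they would merely experience them, with the integration demands otherwise identical.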
(3111)
Intrinsic Reference Direction in Sequentially Learning a Layout. XIANYUN LIU & WEIMIN MOU, Chinese Academy of Sciences, & TIMOTHY P. MCNAMARA, Vanderbilt University (sponsored by Weimin Mou)—Mou, Liu, and McNamara (2007) showed that preferred directions in pointing judgments (e.g., “Imagine you are standing at X, facing Y; please point to Z”) were consistent with the sequence that participants used to learn the locations of objects, suggesting that the learning sequence may determine the spatial reference direction in memory. In this project, participants learned a layout of 7 objects with a symmetric axis different from the learning view. In Experiment 1, the objects’ locations were illustrated by circular disks that were always present during learning, and the objects were presented sequentially in a random order. In Experiment 2, the disks were removed and the objects were presented sequentially along the symmetric axis. The results showed that the preferred heading was determined by the symmetric axis in Experiment 1 but by the learning direction in Experiment 2. These results suggest that spatial reference directions are established before the learning sequence comes into play.