
Thursday Evening Posters 1008–1013

stimuli. Participants performed object-based and perspective-based judgments for rooms and bodies in different combinations of reference frame congruence, defined by the position of the person and the computer monitor. Rather than showing one dominant reference frame, results revealed that upright coincided with any two congruent reference frames, suggesting flexibility in reference frame use for spatial problem solving.

(1008)
Perspective and Instruction Effects on Mentally Representing a Virtual Environment. HOLLY A. TAYLOR, Tufts University, & FRANCESCA PAZZAGLIA, University of Padua (sponsored by Holly A. Taylor)—How do instructions to focus on particular aspects of an environment and learning from different spatial perspectives affect one’s cognitive map? Participants learned an urban virtual environment from either a survey or a route perspective and were instructed to focus either on landmarks or on intersections. The route group learned the environment by watching a virtual person walking through it, whereas the survey group learned by watching a dot moving through a map. While learning, participants were stopped at critical points, and their attention was focused on either landmarks or intersections. After learning, all participants performed several spatial tasks: navigation, map drawing, and pointing. Individual differences in the cognitive style of spatial representation were recorded. Results showed that spatial perspective, instructions, and individual differences in spatial representations interacted to affect performance. These results will be discussed in the context of spatial mental models and their influences.

(1009)
Encoding Direction During the Processing of Proximity Terms. AARON L. ASHLEY & LAURA A. CARLSON, University of Notre Dame—A target’s location may be described by spatially relating it to a reference object. Different types of spatial terms emphasize different aspects of this spatial relation, with projective terms (e.g., “above”) explicitly conveying direction (but not distance), and proximity terms (e.g., “near”) explicitly conveying distance (but not direction). It has been suggested that only aspects of the relation that are explicitly conveyed by the spatial term are encoded when interpreting a spatial description. However, recent research has demonstrated that distance information is encoded during the processing of projective spatial relations, because such information is important for finding the target. In the present research, we demonstrate that direction information is similarly encoded during the processing of proximity terms that convey a close distance (“near,” “approach”), but not for those conveying a far distance (“far,” “avoid”), in support of the idea that multiple aspects of the spatial relation are encoded when they assist in locating the target.

(1010)
The Effect of Recipient Perspective on Direction-Giving Processes. ALYCIA M. HUND & KIMBERLY M. HOPKINS, Illinois State University—Getting to unfamiliar destinations often involves relying on directions from others. These directions contain several cues, including landmarks, streets, distances, and turns. The goal of this project was to understand the cues people use when giving directions for navigation. In particular, does the information provided depend on whether a route or survey perspective is employed? Sixty-four participants provided directions to help a fictitious recipient get from starting locations to destinations in a fictitious model town. On half of the trials, the recipient was driving in the town (a route perspective). On the remaining trials, the recipient was looking at a map of the town (a survey perspective). As predicted, people included significantly more landmarks and left/right descriptions when addressing a recipient driving in the town. In contrast, they used significantly more cardinal descriptors when addressing a recipient looking at a map. These findings suggest that perspective affects direction-giving processes.


• COGNITIVE SKILL ACQUISITION •

(1011)
The Separation of Words and Rules: Implicit Learning of Abstract Rules for Word Order. ANDREA P. FRANCIS, Michigan State University, GWEN L. SCHMIDT & BENJAMIN A. CLEGG, Colorado State University—Artificial grammar learning studies have implied that people can learn grammars implicitly. Two studies using word strings, rather than traditional letter strings, examined the incidental learning of three-word orders. English speakers practiced unfamiliar strings ordered as either “verb noun noun” or “noun noun verb.” Despite possible prior associations between the words and the “noun verb noun” order, self-timed reading speed decreased following exposure to the unfamiliar rule. This pattern generalized beyond the specific instances encountered during practice, suggesting that learning of the structure was abstract. A second experiment found learning when nouns were replaced with pseudowords, showing that learning was possible in the absence of preexisting meaning and meaningful relationships between items. These findings suggest that word orders can be learned implicitly and that words and orders can be dissociated during learning. These results extend artificial grammar learning to more “language-like” materials and are consistent with accounts emerging from structural priming research.

(1012)
Implicit Learning of Artificial Grammars: Under What Conditions? ESTHER VAN DEN BOS & FENNA H. POLETIEK, Leiden University—Numerous artificial grammar learning (AGL) experiments have shown that memorizing grammatical letter strings enables participants to subsequently discriminate between grammatical and ungrammatical strings at least as well as does looking for underlying rules. The present study examined the circumstances triggering implicit learning. We suggest that implicit learning occurs during memorizing, because structure knowledge facilitates this task. In general, we propose that implicit learning occurs whenever structure knowledge contributes to fulfilling a person’s current goal. This goal-directedness hypothesis was tested in an AGL study. Adults and children performed an induction task for which knowledge of the grammar could be more or less functional. Both groups showed the same pattern of performance on a subsequent grammaticality judgment test: Functional conditions (identifying semantic referents, memorizing) outperformed nonfunctional conditions (identifying different semantic referents, rating likeability, computing values associated with sentences). These results suggest that implicit learning is goal-directed, occurring whenever structure knowledge facilitates one’s current task.

(1013)
Movement Matters: Enhancing Artificial Grammar Performance With Animation. BILL J. SALLAS, ROBERT C. MATHEWS, & SEAN M. LANE, Louisiana State University, & RON SUN, Rensselaer Polytechnic Institute—When learning abstract material, one approach involves exposure to many examples of the corpus (memory-based), while a second approach involves learning the underlying structure (model-based). Research (Domangue et al., 2004) using an artificial grammar task has found that memory-based processing leads to fast but relatively inaccurate performance, and model-based processing leads to slow but accurate performance at test. Attempts to integrate memory- and model-based training to facilitate fast and accurate performance were unsuccessful. The present experiment utilized a computer-animated training task, whereas previous research used a pen-and-paper task. Training with an animated representation, or diagram, of the grammatical rules led to fast and accurate performance at test. Animation without this explicit representation led to fast but inaccurate performance. Our results suggest that it is possible to integrate memory- and model-based processing to enhance performance. In addition, our results bear on the current debate on the utility of animation for learning.
