S1 (FriAM 1-65) - The Psychonomic Society

Posters 4083–4089 Saturday Noon

(4083)
On the Use of Signal Detection Models in Contrasting Yes–No and Forced-Choice Discrimination. MOSES M. LANGLEY, Iowa State University (sponsored by Robert West)—Signal detection (SD) models predict the superiority of two-alternative forced-choice (2AFC) detection over yes–no (YN) detection by a factor of √2. Thus, to make balanced comparisons between performances in these tasks, the equation for estimating 2AFC detection performance involves a division by √2. Although the detection literature provides a long and consistent history of confirmation for this prediction (Wickelgren, 1968), the prediction often fails when extended to discrimination tasks (Creelman & Macmillan, 1979). Nevertheless, SD models have been widely used to contrast discrimination in YN and 2AFC tasks. The present experiments explicitly examined the √2 prediction under theoretically ideal conditions for the use of SD models in discrimination estimation; the √2 prediction was generally unsupported for both the equal-variance and the unequal-variance SD model. These results challenge prior assertions that SD model estimates are the appropriate means for contrasting YN and 2AFC discrimination.
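As an illustrative sketch (not part of the abstract), the √2 relation under the equal-variance SD model can be checked numerically: 2AFC proportion correct is predicted to be Φ(d′/√2), so a d′ recovered from 2AFC data must be rescaled by √2 to match the YN estimate. The hit and false-alarm rates below are arbitrary example values.

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf    # standard normal CDF
z = NormalDist().inv_cdf  # its inverse (z-score)

# Yes-no detection: equal-variance d' from a hit rate and a false-alarm rate
hit, fa = 0.80, 0.20
d_yn = z(hit) - z(fa)

# Equal-variance SD models predict 2AFC proportion correct Pc = Phi(d'/sqrt(2)),
# so an observer with the same underlying d' should score:
pc_2afc = Phi(d_yn / math.sqrt(2))

# Recovering d' from 2AFC therefore uses sqrt(2) * z(Pc); this factor of
# sqrt(2) is the correction the abstract refers to.
d_2afc = math.sqrt(2) * z(pc_2afc)
assert abs(d_2afc - d_yn) < 1e-9
```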

(4084)
Some-or-None Recollection in Recognition Memory: Evidence From Conjoint Modeling of Item and Source Information. SERGE V. ONYPER, St. Lawrence University, & MARC W. HOWARD, Syracuse University—Recent years have seen the development of consensus that recognition memory is subserved by familiarity and a recall-like recollection process. However, considerable debate still exists about the form of the recollection process and the relationship between recollection and familiarity. Evidence from conjoint judgments of item and source information shows curvilinear source ROCs when conditionalized on item confidence. This finding has been taken as evidence against a two-process account (e.g., Slotnick & Dodson, 2005). We conducted an experiment in which subjects rated both confidence and source for words or pictures. We successfully fit a dual-process model with some-or-none recollection simultaneously to the item and source data. This variable recollection model postulates that recollection is a continuous process, but that it also exhibits threshold behavior (see also DeCarlo, 2002, 2003). This model approximates single-process accounts and all-or-none dual-process accounts as limiting cases and should provide a general framework for describing recognition performance across a wide variety of circumstances.
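A minimal sketch, in the spirit of the mixture signal-detection models the abstract cites (DeCarlo, 2002), of how a curvilinear ROC arises when only a proportion of old items receives a recollection boost. All parameter values (lam, mu_r, mu_f) are illustrative assumptions, not the authors' fitted estimates.

```python
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF

def roc_point(c, lam=0.5, mu_r=2.5, mu_f=1.0):
    """One (false-alarm, hit) pair at criterion c for a two-component
    mixture: a proportion lam of old items gets a recollection boost
    (mean mu_r); the rest carry familiarity alone (mean mu_f); new
    items have mean 0 and unit variance throughout."""
    hit = lam * Phi(mu_r - c) + (1 - lam) * Phi(mu_f - c)
    fa = Phi(-c)
    return fa, hit

# Sweeping the criterion traces an ROC that is curvilinear in probability
# coordinates, unlike the straight z-ROC of a single equal-variance process.
roc = [roc_point(c) for c in (-1.0, 0.0, 1.0, 2.0)]
```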

(4085)
Using Factors Selectively Influencing Processes to Construct a Multinomial Processing Tree. SHENGBAO CHEN, JMW Truss, & RICHARD J. SCHWEICKERT, Purdue University (sponsored by Richard J. Schweickert)—Data often indicate that manipulating an experimental factor changed a single probability in a multinomial processing tree, evidence that the factor selectively influenced a process. (Such evidence often arises when the process dissociation procedure is used.) A systematic way is presented to test whether two experimental factors each selectively influenced a different process in a multinomial processing tree; and, if so, to construct a multinomial processing tree accounting for the data. We show that if the data were produced by any underlying multinomial processing tree, there is an equivalent relatively simple tree. The equivalent tree must have one of two forms. In one form, the selectively influenced processes are sequential; in the other, they are not sequential. If the processes are sequential, the data sometimes indicate the order in which they were executed.
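As a sketch of the basic formalism (a deliberately tiny, hypothetical tree, not one of the trees from the abstract): in a multinomial processing tree, each response category's probability is the sum over the root-to-leaf paths reaching it, and selective influence means an experimental factor moves exactly one branch probability.

```python
# Hypothetical two-branch tree: a process succeeds with probability p;
# if it fails, the observer guesses correctly with probability g. Each
# category probability sums the root-to-leaf paths that reach it.
def mpt_probs(p, g):
    return {
        "correct": p + (1 - p) * g,    # succeed, or fail and guess right
        "error":   (1 - p) * (1 - g),  # fail and guess wrong
    }

# Selective influence: a factor that changes only p leaves g untouched
# across conditions -- the kind of invariance the proposed method tests.
weak = mpt_probs(p=0.4, g=0.5)
strong = mpt_probs(p=0.7, g=0.5)
```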

• DEPTH AND MOVEMENT PERCEPTION •

(4086)
Induced Motion Effects in a Target Throwing Task. CRYSTAL D. OBERLE, JOANNA M. BAUGH, & HEIDI S. LOVEJOY, Texas State University—Computerized induced motion experiments have revealed that target pointing is less affected than perceptual judgments (Abrams & Landgraf, 1990; Post & Welch, 2004) or is completely immune to the illusory motion (Bridgeman et al., 1981; Rival et al., 2004). The present research investigated the effects of background motion on throwing performance. Forty participants completed 50 trials of a target throwing task, 10 for each of the following motion background conditions: none, rightward, leftward, upward, and downward. Although condition did not affect vertical deviation from the target’s center [F(4,156) = 2.51, p = .06], it did affect horizontal deviation [F(4,156) = 6.01, p = .0002]. Planned comparisons revealed that only performance in the rightward and leftward motion background conditions differed from performance in the no-motion background condition. Relative to the latter control trials, rightward background motion caused leftward throwing errors, and leftward background motion caused rightward throwing errors, consistent with the induced motion effect.

(4087)
Effects of Motion Type and Response Method on Representational Momentum in a Large Visual Display. SUSAN E. RUPPEL, University of South Carolina Upstate, & TIMOTHY L. HUBBARD, Texas Christian University—Effects of motion type (i.e., whether motion appears continuous or implied) and response type (pointing, probe judgment) on memory for the final location of a moving target (representational momentum; RM) have been debated. Also, displays in previous studies of RM typically subsumed a relatively small portion of observers’ visual fields, and it is possible that visual information outside the boundaries of the display anchored (or otherwise influenced) observers’ judgments. The present study presented continuous motion and implied motion, and used natural pointing, mouse pointing, and probe judgment response measures. Motion was displayed on a Smartboard touchscreen (77-in. diagonal) that subsumed a relatively large portion of observers’ visual fields. RM was not influenced by motion type; RM was larger with mouse pointing and natural pointing when continuous motion was presented, whereas RM was larger with mouse pointing and probe judgment when implied motion was presented. Implications for theories of RM are discussed.

(4088)
Visual Perception of Self-Rotation in a Virtual Environment. ANDREAS E. FINKELMEYER & MARC M. SEBRECHTS, Catholic University of America (sponsored by James H. Howard, Jr.)—In three experiments, we investigated the ability of human observers to indicate the amount of a previously presented self-rotation in a virtual environment. Experiment 1 compared active–physical rotations with passive–visual rotation modes. Experiment 2 investigated the influence of the horizontal and vertical field-of-view (FOV), and Experiment 3 compared conditions of mismatching physical and rendering FOV. In all three experiments, observers gave their judgment by rotating back to the start, or pointing to it. Observers generally underestimated the amount of performed rotation, and had increasing errors with larger angles. Response mode had the largest effect on these errors: Judgments were less biased and more accurate when rotating back than when pointing, suggesting different cognitive processes for the two response modes. Despite dramatic differences in appearance, movement mode (Experiment 1) and FOV manipulations (Experiments 2 and 3) had relatively smaller effects on the observers’ judgments, primarily in the form of interactions.

(4089)
Biomechanical Capture in Rotational Locomotion. HUGO BRUGGEMAN, Brown University, & HERBERT L. PICK, JR., University of Minnesota—In human terrestrial locomotion, activation of the inertial system is causally related to self-generated activity of the biomechanical system. The biomechanical information has been proven to be dominant in guiding terrestrial locomotion without vision or hearing. However, we wanted to know whether such biomechanical capture of guidance might still be influenced by the inertial information from stepping. In a series of three experiments, participants completed updating tasks of spatial rotation for several different conditions of
