Abstracts 2005 - The Psychonomic Society


Posters 3001–3007 Friday Evening

POSTER SESSION III

Sheraton Hall, Friday Evening, 5:30–7:00

• TOUCH •

(3001)
Training Reduces the Crossed-Hands Deficit in Temporal Order Judgments. JAMES C. CRAIG & ADRIENNE N. BELSER, Indiana University—It has been shown that crossing the hands results in tactile temporal order thresholds that may be more than three times larger than those with uncrossed hands, a crossed-hands deficit (CHD). These results suggest that with crossed hands, subjects have difficulty remapping the tactile inputs to correspond with the spatial positions of the hands. The effect of training on the CHD was examined. At the beginning of training, the crossed threshold was 458 msec and the uncrossed threshold was 65 msec, a CHD of 393 msec. At the end of training, the comparable values were 111 and 50 msec, a CHD of 61 msec. In another experiment, the CHD initially was 79 msec, dropping to only 16 msec in the last session. Training did not eliminate but did greatly reduce the CHD. The implication is that significant remapping occurs with modest amounts of training.

(3002)
The Effects of Force and Conformance on Tactile Sensitivity. GREGORY O. GIBSON & JAMES C. CRAIG, Indiana University (sponsored by Gabriel P. Frommer)—The effects of force and conformance on intensive and spatial processing were examined with several measures of tactile sensitivity. Measures were made at two locations (fingerpad, fingerbase) that differ in sensitivity and density of innervation. Psychometric functions were generated for two measures of spatial sensitivity and one measure of intensive sensitivity at two forces (50 and 200 g). Results indicated that increasing force led to improvement on the intensive task, but not on the spatial tasks. Skin conformance measurements were made at both test sites. Conformance was found to be a joint function of force and groove width. Furthermore, performance on the intensive task could be predicted by conformance. The results are consistent with the view that increasing conformance increases neural activity in the afferent fibers; this increase improves performance on intensive tasks but has little effect on the quality of the spatial image.

(3003)
Haptic Concepts in the Blind. DONALD HOMA, KANAV KAHOL, PRIYAMVADA TRIPATHI, LAURA BRATTON, & SETHURAMAN PANCHANATHAN, Arizona State University—The acquisition of haptic concepts by the blind was investigated. Each subject—either blind or normally sighted—initially classified eight objects into two categories, using a study/test format, followed by a recognition/classification test involving old, new, and prototype forms. Each object varied along three relevant dimensions—shape, size, and texture—with each dimension having five values. The categories were linearly separable in three dimensions, and no single dimension permitted 100% accurate classification. The results revealed that blind subjects learned the categories slightly more quickly than their sighted controls and performed at least as well on the later memory tests. On the classification test, both groups performed equivalently, with the category prototype classified more accurately than the old or new stimuli. On the recognition test, all subjects, including the blind, false alarmed to the category prototype more than to any new pattern. These results are discussed in terms of current views of categorization.

• MULTISENSORY INTEGRATION •

(3004)
Kinesthetic Egocenter Is Used in Visually Directed Manual Pointing. KOICHI SHIMONO, Tokyo University of Marine Science and Technology, & ATSUKI HIGASHIYAMA, Ritsumeikan University—We examined the hypothesis (Howard, 1982; Shimono & Higashiyama, 2005) that if we point to a target manually without viewing our hands, its direction is judged from the kinesthetic egocenter, not from the visual egocenter. For each of 8 observers, we estimated the locations of the visual and the kinesthetic egocenters using the Howard and Templeton method and required them to point to a near or far target without viewing their hands. The angle formed by the sagittal plane passing through the egocenter (visual or kinesthetic) and the line joining the egocenter to the pointed position was determined for each target. The angles for the near targets were better described as a function of the angles for the far targets when they were represented using the kinesthetic, rather than the visual, egocenter.

(3005)
Contrast Effects Between Concurrently Perceiving and Producing Movement Directions. JAN ZWICKEL, MARC GROSJEAN, & WOLFGANG PRINZ, Max Planck Institute for Human Cognitive and Brain Sciences—Schubö, Aschersleben, and Prinz (2001) proposed a model to account for the contrast effects (CEs) that arise during the concurrent perception and production of feature-overlapping events. For example, the model can explain why producing a medium-amplitude movement while simultaneously watching a large-amplitude motion leads to a reduction in size of the produced movement (i.e., a CE in action) and to an increase in size of the perceived motion (i.e., a CE in perception). Using movement direction as the overlapping perception–action dimension, the present experiments sought to evaluate two untested predictions of the model: (1) The size of the CEs in perception and action should be monotonically related, and (2) the size of the CEs should depend on the angular proximity between perceived and produced movements. In agreement with the model, CEs were found in both perception and action; however, neither of these specific predictions was confirmed.

(3006)
Cross-Modal Interactions in the Perception of Auditory Spatial Sequences. SHARON E. GUTTMAN, LEE A. GILROY, & RANDOLPH BLAKE, Vanderbilt University—To create meaningful descriptions of reality, the perceptual system must combine inputs from multiple sensory modalities. Previously, we have shown that one consequence of multimodal integration is cross-modal encoding: Temporal information presented through visual input automatically becomes represented using an auditory code. Here, we investigate the converse and ask whether spatial information presented through auditory input is automatically represented using a visual code. Participants made same/different judgments regarding two auditory sequences consisting of white noise bursts presented serially at four distinct spatial locations. Auditory/visual interactions suggested cross-modal encoding: Incongruent visual–spatial information diminished task performance relative to a baseline condition, whereas congruent visual–spatial information improved performance. Further experimentation suggested that this cross-modal interference is partially attributable to visual capture of the auditory spatial information. Together, these results indicate that the perceptual system employs multiple, situationally dependent strategies to create unitary representations from multimodal input.

(3007)
Differentiable Effects of Size Change on Repetition Priming in Vision and Haptics. ALAN C. SCOTT & RANDOLPH D. EASTON, Boston College—Previous research has demonstrated that object identification on repeated exposures is performed more quickly than initial identification—a phenomenon referred to as priming (e.g., Cave & Squire, 1992). Introducing a size change between study and test has no effect on the facilitating effects of repeated exposures when items are presented visually but does reduce facilitation when items are presented haptically. In recent research, we have demonstrated haptic-to-visual cross-modal priming with no effects of size change, thereby suggesting the existence of bimodal object processing. It was believed that if object processing was entirely bimodal in nature, increasing the delay between study and test would eliminate the unique effects of
