Abstracts 2005 - The Psychonomic Society


…difficult to make empirically. We show how a multiple-target search methodology that is analyzed using sequential sampling models can effectively discriminate between capacity-limited parallel search and truly serial search. We find that there are, at most, two stimulus classes that require serial inspection of targets and distractors.

STM and Language Processing
Dominion Ballroom, Saturday Morning, 8:00–10:00
Chaired by Elisabet M. Service, University of Helsinki and Dalhousie University

8:00–8:15 (149)
Phonological Short-Term Memory Span and the Quality of Phonological Traces. ELISABET M. SERVICE, University of Helsinki and Dalhousie University, & SINI E. MAURY & EMILIA L. LUOTONIEMI, University of Helsinki—A correlation between vocabulary learning and STM performance (e.g., digit span, nonword repetition) has been explained by assuming that phonological WM modulates what gets transferred to LTM. Our results suggest a different explanation. Two groups with relatively good versus poor nonword span were tested. In Experiment 1, lists of three CVCVCV nonwords constructed from a pool of 12 stimuli were presented to participants for immediate serial recall. In a surprise test after the STM task, participants were given free recall, cued recall, and recognition tests for the nonwords in the stimulus pool. The good group remembered more in all tests. In Experiment 2, experience of correct output was controlled by presenting span + 1 lists. Despite equal STM, the good group again outperformed the poor group in memory for the stimulus pool items. We suggest that the good group forms stronger phonological representations, available for both STM and LTM tasks.

8:20–8:35 (150)
When Does Silence Speak Louder Than Words? Using Recall Dynamics and Recall Quality to Investigate Working Memory. JOHN N. TOWSE, Lancaster University, NELSON COWAN, University of Missouri, Columbia, & NEIL J. HORTON, Lancaster University—Working memory span tasks combine nontrivial processing requirements with explicit memory demands. They are extensively used as indices of working memory capacity and are linked with a wide range of important higher level cognitive skills. Drawing upon evidence that includes silent pauses in sequence recall, we report four studies that assess the interdependence between processing and memory in reading span. Each experiment compares two types of reading span, in which memoranda are either integrated with or independent of sentence processing. Absolute levels of recall vary substantially between these conditions, and the chronometry of recall strongly suggests that reconstructive processes contribute to commonly employed forms of reading span. We assess reading span using both spoken recall and an item and order reconstruction paradigm involving nonspoken responses. We also examine the contribution of a generation effect (Slamecka & Graf, 1978) to reading span performance. The data provide fresh theoretical perspectives on working memory.

8:40–8:55 (151)
Articulation Without Suppression: Dissociable Processing Streams for Phonology and Articulation. TIMOTHY C. RICKARD, University of California, San Diego—Subjects were instructed to synchronize vocal repetition of “ta” with subvocal recitation of the alphabet to a specified letter. Another group was instructed to interleave these tasks.
For the synchronize group, subvocal recitation accuracy was high at all rates (1,000–400 msec per letter), whereas for the interleave group, accuracy was lower and decreased markedly with increasing rate. Several factors eliminate the possibility that subjects recited the letters visually, rather than phonologically. These results are consistent with the subjective experience that subvocalization is possible while reading under articulatory suppression and with related results from the speech production literature. Previous findings to the contrary, using the phonological similarity task, may reflect task difficulty and/or failure of subjects to adopt a synchronize strategy. Data from an ongoing experiment, as well as implications for working memory theory and for the use of articulatory suppression as a research tool, are discussed.

9:00–9:15 (152)
Artificially Induced Valence of Distractors Increases the Irrelevant Speech Effect. AXEL BUCHNER & BETTINA MEHL, Heinrich Heine University, Düsseldorf, KLAUS ROTHERMUND, Friedrich Schiller University, Jena, & DIRK WENTURA, Saarland University—In a game context, nonwords were artificially associated with negative valence, or they were neutral or irrelevant. Subsequently, participants memorized target words in silence or while ignoring the irrelevant, neutral, or negatively valent distractor nonwords. The presence of distractor nonwords impaired recall performance, but negative distractor nonwords caused more disruption than did neutral and irrelevant distractors, which did not differ in how much disruption they caused. These findings conceptually replicate earlier results showing disruption due to valence with natural language words, and they extend those results by demonstrating that auditory features that may be confounded with valence in natural language words cannot be the cause of the observed disruption. Working memory models that specify an explicit role of attention in the maintenance of information for immediate serial recall can explain this pattern of results, whereas structural models of working memory cannot.

9:20–9:35 (153)
Integrating Verbal Information in Working Memory With Language Knowledge. GRAHAM J. HITCH, ALAN D. BADDELEY, & RICHARD J. ALLEN, University of York (sponsored by Philip Thomas Quinlan)—We investigated the hypothesis that an episodic buffer (Baddeley, 2000) is necessary for integrating verbal information held in working memory with knowledge about sequential redundancy in language. We did this by comparing the effects of various concurrent tasks on immediate serial recall of constrained sentences and scrambled word lists. The concurrent tasks were designed to interfere with phonological storage, visuospatial storage, or executive components of working memory. Results suggest that the beneficial effect of sequential redundancy is largely automatic when a sentence is spoken but involves executive processes when presentation is visual and access to the phonological store is blocked. The first finding is consistent with a direct link between the phonological store and language knowledge that does not require access to an episodic buffer. The second finding suggests that episodic buffer storage is required when the direct link is not available.

9:40–9:55 (154)
Neuroimaging Evidence for a Single Lexical–Semantic Buffer Involved in Language Comprehension and Production. RANDI C. MARTIN & PHILIP BURTON, Rice University, &
A. CRIS HAMILTON, University of Pennsylvania—Patients with semantic short-term memory deficits have difficulty comprehending sentences in which several word meanings must be held prior to integration (Martin & He, 2004) and with producing phrases containing multiple content words (Martin & Freedman, 2001). These results suggest that the same lexical–semantic buffer is used in comprehension and production. As these patients’ lesions include the left inferior frontal gyrus (LIFG), the present fMRI study sought evidence from neurally intact subjects for LIFG involvement in semantic retention. Our experiments contrasted neural activation for delayed versus immediate word meaning integration during sentence anomaly detection (Experiment 1) and for the production of adjective–noun phrases versus copular sentences (Experiment 2). Results provide converging evidence for the involvement of the same lexical–semantic buffer in comprehension and production and for a LIFG localization of this buffer.

Visual Cognition
Civic Ballroom, Saturday Morning, 8:00–10:20
Chaired by James R. Brockmole, Michigan State University

8:00–8:15 (155)
Contextual Cuing in Real-World Scenes. JAMES R. BROCKMOLE & JOHN M. HENDERSON, University of Edinburgh—We investigated whether contextual cuing in real-world scenes is driven by memory for local objects and features or whether a scene’s gist guides attention to targets. During learning, twice as many repetitions were required to observe maximal learning benefits for inverted scenes (which are harder to interpret) as for upright scenes, indicating a role for semantic memory in cuing. Following learning, scenes were mirror reversed, spatially translating features and targets while preserving gist. Observers first moved their eyes toward the target’s previous position in the display. This localization error increased search time, since additional fixations were required to locate the target. The disruption was not absolute; when initial search failed, the eyes quickly moved toward the target’s new position. This suggests that the scene’s gist initially guides attention to the target and that localized feature information is used if the gist association fails.

8:20–8:35 (156)
Varieties of Emergent Features in Visual Perceptual Organization. JAMES R. POMERANTZ & MARY C. PORTILLO, Rice University—At the heart of Gestalt effects in perception are nonlinearities, where the whole is perceived differently from the sum of its parts or, put differently, where elements are perceived and discriminated differently in different contexts. We study such Gestalt organization by searching for configural superiority effects (CSEs), instances in which stimuli are discriminated from each other more quickly and accurately in the presence of context elements, even when those elements by themselves provide no information relevant to the discrimination. In most instances, adding such context hinders performance: Discriminating A from B is easier than discriminating AC from BC (where C is the context). We demonstrate a dozen or so cases where context actually helps (sometimes greatly), and we try to extract general principles to explain CSEs in terms of emergent features arising when elements group.

8:40–8:55 (157)
Fixation Position Changes During Fixations in Reading. ALBRECHT W. INHOFF & ULRICH WEGER, SUNY, Binghamton, SETH GREENBERG, Union College, & RALPH R. RADACH, Florida State University—Readers move the eyes to apply high-acuity vision to different text segments and then keep the eyes relatively stationary (fixated) so that linguistic detail can be obtained. The study examined whether small position changes that occur during fixations are a consequence of oculomotor reflexes and/or ongoing processing demands. Two tasks were used: silent reading and reading plus detection of a target letter. The left eye was more stable than the right eye in both tasks. A strong directional bias was present in the reading task, with larger and more frequent position changes toward the right than toward the left. This bias was eliminated when letter detection was added to the task. An ongoing “pull mechanism” appears to shift the eyes toward to-be-identified words during fixations in reading. However, when readers are forced to consider linguistic detail such as a target letter, the pull mechanism is apparently suppressed.

9:00–9:15 (158)
Forgetting Pictures: Visual Versus Conceptual Information. MARY C.
POTTER & LAURA F. FOX, Massachusetts Institute of Technology—Rapidly presented pictures are quickly, but not immediately, forgotten (Potter, Staub, & O’Connor, 2004). We assessed memory for conceptual gist and memory for visual and spatial information in experiments using title or picture recognition tests. In addition, subjects made a forced choice between two versions of a picture that differed in visual/spatial features, such as color, left–right orientation, or presence/absence of details. Although gist memory declined over testing, visual/spatial memory (poorer to begin with) did not, suggesting that if gist is remembered, so is any visual/spatial information encoded with the gist. Hence, the conceptual gist of a picture is extracted early and determines what is remembered. We also compare gist and visual/spatial information in longer term memory for pictures.

9:20–9:35 (159)
Tunnel Vision During Visual Scanning: Do Smaller Views Yield Smaller Representations? HELENE INTRAUB & KAREN K. DANIELS, University of Delaware—Regions of real 3-D scenes delimited by “windows” are remembered as having included more of the world than was actually shown (boundary extension [BE]; Intraub, 2004). Does the scope of each exploratory fixation constrain BE? In Experiment 1, viewers (N = 60) studied scene regions with multiple objects for 30 sec each. Vision was binocular (normal) or monocular through tunnel vision goggles with large (3 cm) or small (0.6 cm) apertures. Binocular vision encompassed the entire region and surrounding space; tunnel viewing required effortful head movements to examine the regions. At test, viewers reconstructed the windows to recreate the delimited areas. In all conditions, viewers remembered seeing about one third more area; dramatic differences in visual scope did not influence spatial extrapolation. Experiment 2 (N = 40) replicated the results and demonstrated that viewers were experiencing true BE—not simply creating “conventionally sized” regions. Layout extrapolation was constrained by the targeted view—not by the scope of the input.

9:40–9:55 (160)
The Role of Visual Short-Term Memory in Gaze Control. ANDREW HOLLINGWORTH, ASHLEIGH M. RICHARD, & STEVEN J. LUCK, University of Iowa—It is well established that object representations are maintained in visual short-term memory (VSTM) across saccades, but the functional role of VSTM in gaze control is not well understood. Saccades are often inaccurate, and when the eyes miss a saccade target, multiple objects will lie near fixation, especially in cluttered, real-world scenes. We tested the hypothesis that VSTM stores target information across the saccade to identify the target among other local objects, supporting an efficient corrective saccade to the target. A new paradigm was developed to simulate saccade error: during a saccade, a stimulus array was shifted so that the eyes landed between the saccade target object and a distractor. Accurate gaze correction to the target required transsaccadic memory for the target’s visual form. VSTM-based gaze correction in this paradigm was accurate, fast, and automatic, demonstrating that VSTM plays an important role in directing gaze to goal-relevant objects.

10:00–10:15 (161)
Fixational Eye Movements at the Periphery. JUDITH AVRAHAMI & OREN FLEKSER, Hebrew University of Jerusalem—Although it has long been known that the eye moves constantly, the role of fixational eye movements (FEMs) is still in dispute.
Whatever their role, it is structurally clear that, since the eye is a ball, the size of these movements diminishes for locations closer to the poles. Here, we propose a new perspective on the role of FEMs, from which we derive a prediction of a three-way interaction among a stimulus’s orientation, location, and spatial frequency. Measuring time to disappearance for gratings located in the periphery, we find that, as predicted, gratings located to the left and right of fixation fade faster when horizontal than when vertical at low spatial frequencies and faster when vertical than when horizontal at high spatial frequencies. The opposite is true for gratings located above and below fixation.
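
The geometric premise of (161), that rotations of the eyeball displace points near the poles less than points near the equator, can be made concrete with a short worked equation; this is an illustrative sketch whose notation is not taken from the abstract. Treat the eye as a rigid sphere of radius R rotated through a small angle δ about a fixed axis. A surface point at angular distance φ from that axis moves along a circle of radius R sin φ, so its displacement is approximately

\[
d(\varphi) \approx R\,\delta \sin\varphi, \qquad d(0) = 0, \qquad d(\pi/2) = R\,\delta,
\]

vanishing at the poles and maximal at the equator, which is the sense in which FEM size diminishes for locations closer to the poles.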

