S1 (FriAM 1-65) - The Psychonomic Society


psychonomic.org

Friday Noon Posters 2022–2027

targets are embedded in a rapid serial visual presentation stream of distractors, perception of the second target is impaired if the intertarget lag is less than 500 msec. This phenomenon, called the attentional blink, has been attributed to the temporary unavailability of attentional resources. Nevertheless, a recent study found that observers could monitor two streams concurrently for up to four items presented in close succession, suggesting a much larger visual capacity limit. However, such high-capacity performance could be obtained by a rapid attentional shift, rather than by concurrent monitoring of multiple locations. Therefore, we examined these alternatives. The results indicate that observers can concurrently monitor two noncontiguous locations, even when targets and distractors are drawn from different categories, such as digits, alphabetic letters, Japanese characters, and pseudocharacters. These results can be explained in terms of a modified input-filtering model in which a multidimensional attentional set can be flexibly configured at different spatial locations.

(2022)
The Effects of Noise Reduction on Listening Effort. ANASTASIOS SARAMPALIS, University of California, Berkeley, SRIDHAR KALLURI & BRENT W. EDWARDS, Starkey Hearing Research Center, & ERVIN R. HAFTER, University of California, Berkeley (sponsored by William Prinzmetal)—Hearing-aid users frequently complain of difficulty understanding speech in the presence of noise. Signal processing algorithms attempt to improve the quality, ease of listening, and/or intelligibility of speech in noisy environments. Hearing-aid users often report that speech sounds easier to understand with a noise reduction (NR) algorithm, even though there are no documented intelligibility improvements, suggesting that NR may lead to a reduction in listening effort. We investigated this hypothesis using a dual-task paradigm with normal-hearing and hearing-impaired listeners. They were asked to repeat sentences or words presented in noise while performing either a memory or a reaction-time task. Our results showed that degrading speech by reducing the signal-to-noise ratio increased demand for cognitive resources, demonstrated as a drop in performance in the cognitive task. Use of an NR algorithm mitigated some of the deleterious effects of noise by reducing cognitive effort and improving performance in the competing task.

(2023)
Ideal Observer Analysis in Dual Attention Task. REIKO YAKUSHIJIN, Aoyama Gakuin University, & AKIRA ISHIGUCHI, Ochanomizu University—We applied ideal observer analysis to a dual-task situation in order to investigate directly how attentional resources are allocated. Two sets of component circles arrayed circularly were displayed side by side. The sets differed both in the size of the components and in the size of the circular configurations. On each trial, participants were asked to discriminate the size of the components (local task) or the size of the circular configurations (global task). The required task varied in random order across trials, and the priority of each task was manipulated across trial sequences. Gaussian noise was added to those sizes so that ideal performance could be calculated quantitatively for each task. The results showed that, although statistical efficiency was higher in the prioritized task, the summed efficiency of the two tasks was constant irrespective of the priority manipulations. This suggests that the total amount of attentional resources was constant regardless of the top-down allocation of attention.

(2024)
Working Memory Consolidation Causes the Attentional Blink. HIROYUKI TSUBOMI, University of Tokyo, & NAOYUKI OSAKA, Kyoto University (sponsored by Naoyuki Osaka)—In an RSVP stream, the first target interferes with the second if it is presented within a few hundred milliseconds. This phenomenon, called the attentional blink (AB), is often thought to reflect the temporal limit of visual working memory (VWM) consolidation. However, this relationship has been assumed rather than directly tested. We presented an array of 24 black or green Landolt C rings. After 3 sec, one of the green rings turned red for 100 msec, followed by a 100-msec mask. Subsequently, a black ring turned red for 100 msec, followed by a 100-msec mask. The participants were instructed to report the gap direction of the two red rings. The AB for the second red ring occurred only when there were more than three green rings in the array. Using the same type of stimuli, VWM capacity was independently measured to be three items. This indicates a clear connection between VWM and the AB.

(2025)
Does the Response Criterion Affect Redundancy Gain in Simple RT? JEFF O. MILLER, University of Otago—In simple RT, responses are faster when two stimuli are presented than when one is—an example of “redundancy gain.” Models of redundancy gain typically assume that two stimuli cause faster accumulation of sensory evidence, thus satisfying a response criterion more rapidly when both are presented. That account predicts that, all other things being equal, redundancy gain should be larger when the response criterion is higher. We tested this prediction in two experiments manipulating the response criterion. One experiment compared blocks of trials with infrequent versus frequent catch trials, assuming that observers would set a higher criterion when catch trials were frequent. The other experiment compared blocks including no-stimulus catch trials against blocks including a distractor stimulus on catch trials, assuming that observers would set a higher criterion in the latter blocks to avoid incorrectly responding to distractors. The results have implications for models of redundancy gain.

(2026)
Dual-Task Performance With Consistent and Inconsistent Stimulus–Response Mappings. KIM-PHUONG L. VU, California State University, Long Beach, & ROBERT W. PROCTOR, Purdue University—This study examined benefits of consistent stimulus–response mappings for two tasks in a psychological-refractory-period paradigm. In Experiment 1, participants performed three-choice spatial tasks, using combinations of corresponding and mirror-opposite mappings. Both mappings showed a consistency benefit for stimuli in the two outer positions, for which the correct response differed across mappings, but only the corresponding mapping showed a consistency benefit for the center position, for which the middle response was always correct. Experiments 2 and 3 used four-choice spatial tasks. A consistency benefit was obtained for both mappings when the mapping for each task was completely corresponding or mirror opposite. However, a consistency benefit was not always obtained for a mixed mapping for which corresponding responses were made to two positions and opposite responses to the other two positions. These results suggest that the consistency benefit is mainly evident when the mappings for both tasks can be characterized by a single rule.

(2027)
Interhemispheric Collaboration for Matching Emotions Signified by Words and Faces. JOSEPH B. HELLIGE, URVI J. PATEL, JIYE KIM, & PATRICIA GEORGE, University of Southern California—Previous visual laterality studies indicate that the benefits of dividing an information processing load across both cerebral hemispheres outweigh the costs of interhemispheric transfer as tasks become more difficult or complex. The present experiment indicates that this finding does not generalize to a complex task that requires matching emotions represented by two different visual formats whose perceptual processing involves different cortical areas: words (e.g., sad) and cartoon faces (e.g., a face with a sad expression). Combined with other recent studies that mix stimulus formats (e.g., digits and dots to represent numeric quantity), the present results suggest that when stimuli are simultaneously presented in appropriately different visual formats, identification of those stimuli may take place in parallel, via different cortical access routes. Consequently, there is little interference, even when the stimuli are presented to a single cerebral hemisphere, so there is little or no benefit from spreading processing across both hemispheres.
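The evidence-accumulation account described in abstract 2025 makes its prediction easy to see in a simulation: if evidence grows at some rate toward a response criterion, and redundant stimuli roughly double that rate, then raising the criterion widens the gap between the two mean crossing times. The sketch below is illustrative only; the drift rates, noise level, and criterion values are arbitrary assumptions, not the author's model.

```python
import random

def first_crossing_time(rate, criterion, noise_sd=0.1, dt=0.002, rng=random):
    """Time for a noisy accumulator with drift `rate` to first reach `criterion`."""
    x, t = 0.0, 0.0
    while x < criterion:
        x += rate * dt + rng.gauss(0.0, noise_sd) * dt ** 0.5
        t += dt
    return t

def mean_rt(rate, criterion, n_trials=400, seed=1):
    rng = random.Random(seed)
    return sum(first_crossing_time(rate, criterion, rng=rng)
               for _ in range(n_trials)) / n_trials

def redundancy_gain(criterion):
    """Single-stimulus mean RT minus redundant-stimulus (doubled-rate) mean RT."""
    return mean_rt(rate=1.0, criterion=criterion) - mean_rt(rate=2.0, criterion=criterion)

low, high = redundancy_gain(0.5), redundancy_gain(1.0)
print(f"gain at low criterion:  {low:.3f} s")
print(f"gain at high criterion: {high:.3f} s")
```

With a noiseless accumulator the gain is criterion/rate minus criterion/(2 x rate), so it scales linearly with the criterion; the noisy simulation shows the same ordering, which is the prediction the two catch-trial experiments test behaviorally.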

Posters 2028–2034 Friday Noon

(2028)
Retrieval Interference Effects: The Role of Discrete Versus Continuous Concurrent Tasks. ANA M. FRANCO-WATKINS, Auburn University, & HAL PASHLER & TIMOTHY C. RICKARD, University of California, San Diego—Previous research has demonstrated some degree of dual-task interference when a concurrent task is paired with a continuous memory-retrieval task; however, this interference appears modest in magnitude. Meanwhile, in PRP designs, there are indications that memory retrieval may be completely postponed by a concurrent task, implying a more extreme degree of interference (Carrier & Pashler, 1995). The present experiments sought to determine which features of the dual-task design produced memory-retrieval interference and whether the interference was graded as opposed to total. Interference was assessed both at an aggregate level (e.g., total items recalled and overall concurrent-task performance) and at a more microscopic level (e.g., examining the relative timing of the two tasks on individual trials) in order to shed light upon the factors modulating memory-retrieval interference.

(2029)
Word Frequency and the P3: Evidence That Visual Word Processing Requires Central Attention. SARAH HULSE, Oregon State University, PHIL A. ALLEN, University of Akron, ERIC RUTHRUFF, University of New Mexico, & MEI-CHING LIEN, Oregon State University—Some behavioral studies have suggested that visual word processing cannot proceed when central attention is devoted to another task, whereas other studies suggest otherwise. To address this issue, we used the P3 component of the event-related potential (ERP), known to be sensitive to stimulus probability. Participants performed a lexical decision Task 2 while performing a tone discrimination Task 1. The critical manipulation was Task 2 word frequency. Previous single-task ERP studies have shown a larger P3 for high-frequency than for low-frequency words, suggesting that low-frequency words require more processing capacity (e.g., Polich & Donchin, 1988). Critically, the P3 difference between high and low frequency indicates that the word was read successfully. We found that the P3 difference was much smaller when Task 2 words were presented simultaneously with Task 1, suggesting that word processing is not fully automatic, but rather requires central attention.

• PSYCHOLINGUISTICS •

(2030)
Iconicity in American Sign Language: Word Processing Effects. ROBIN L. THOMPSON, DAVID P. VINSON, & GABRIELLA VIGLIOCCO, University College London (sponsored by Gabriella Vigliocco)—We investigated the potential language-processing advantage of iconicity (the transparent relationship between meaning and form) for American Sign Language (ASL) signers. ASL signers were asked whether a picture (e.g., a bird) and a sign (e.g., “bird,” produced with thumb and forefinger representing a bird’s beak) refer to the same object. In one condition, the iconic property/feature of the sign was salient (a picture of a bird, beak well in view), while in the second the iconic property was not salient (a picture of a bird flying). Analysis of response latencies revealed a benefit for ASL signers in comparison with English-speaking controls; signers responded faster when the iconic property was made salient in the picture. Furthermore, this facilitation effect did not differ between near-native and late L2 signers, providing evidence that signers are sensitive to iconicity as a cue for lexical retrieval regardless of the age at which ASL is acquired.

(2031)
New Methods for Studying Syntactic Acquisition. MALATHI THOTHATHIRI & JESSE SNEDEKER, Harvard University (sponsored by Elizabeth S. Spelke)—There is a lively debate in language acquisition about the characterization of young children’s linguistic knowledge. Is it based on item-specific templates or on a more abstract grammar? Most research to date has asked whether children generalize grammatical knowledge to novel verbs. But this paradigm places children in unnatural situations, and the meanings of novel verbs are hard to learn. We combined structural priming and eyetracking to investigate children’s online comprehension of known verbs in a naturalistic task. Does a syntactic structure with one verb influence children’s interpretation of subsequent sentences with other verbs? By varying the syntactic and semantic overlap between prime and target sentences, we determined that 4-year-olds expect verb-general mappings between semantic roles and syntactic positions (Load the truck with the hay primes Pass the monkey the hat; Load the hay on the truck primes Pass the money to the bear). Studies under way ask whether 2-year-olds employ similar generalizations.

(2032)
Linguistic and Nonlinguistic Strategies in Artificial Language Learning. SARA FINLEY & WILLIAM BADECKER, Johns Hopkins University—The artificial language learning paradigm has successfully addressed questions about learnability and linguistic representations. We present results that expose a potential problem for this paradigm: the task may not differentiate linguistic and nonlinguistic learning strategies. In natural language, vowel harmony is determined either by a dominant feature or by direction, but never by “majority rules” (e.g., spread round only if the majority of input vowels are round). In one experiment, participants were trained on a harmony process in which three single-syllable words (e.g., [bi], [do], [gu]) combined to form a three-syllable word conforming to round harmony (e.g., [bidegi]/[budogu]). At test, participants exhibited a strong “majority rules” preference, which suggests that they employed a nonlinguistic strategy to determine the best harmonic concatenation. Follow-up experiments examine whether more language-like training inputs (e.g., training with pairs of morphologically related words with harmonizing affixes) affect participants’ learning strategies.

(2033)
Sentence Comprehension and Bimanual Coordination: Implications for Embodied Cognition. ANNIE J. OLMSTEAD, NAVIN VISWANATHAN, KAREN AICHER, & CAROL A. FOWLER, University of Connecticut and Haskins Laboratories (sponsored by Carol A. Fowler)—Much of the embodied cognition literature exploits stimulus–response compatibility (e.g., Glenberg & Kaschak, 2002; Zwaan & Taylor, 2006) to expose the recruitment of the motor system in language understanding. Typically, response times indicate a facilitatory consequence of compatibility between language input and motor response. Insights into the nature of motor–language coupling may be gained by investigating how language comprehension affects motor control variables more generally. We investigated the effects of sentence judgments on continuous motor variables in a bimanual pendulum-swinging paradigm (Kugler & Turvey, 1987). Stimulus–response compatibility was not manipulated; rather, participants performed an unrelated activity (swinging pendulums about the wrist) while they made verbal judgments about the sensibility of sentences having different semantic characteristics. Preliminary results suggest that relative phase shift is differentially affected by different sentence types. The results are discussed, and an evaluation of the methodology is provided.

(2034)
BOLD Signal Correlations With Plausibility in Sentence Comprehension. LOUISE A. STANCZAK, Boston University, DAVID N. CAPLAN, Massachusetts General Hospital, GINA R. KUPERBERG, Tufts University and Massachusetts General Hospital, GLORIA S. WATERS, Boston University, LAURA BABBITT, Massachusetts General Hospital, & NEAL J. PEARLMUTTER, Northeastern University—Most neuroimaging studies examining pragmatic or semantic meaning properties in sentence comprehension compare an entirely acceptable condition with a clearly anomalous condition using subtraction, with a comprehension task requiring an overt acceptabil-
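Several of the abstracts above lean on signal-detection quantities; the ideal observer analysis in abstract 2023, for instance, summarizes performance as statistical efficiency, the squared ratio of the human observer's sensitivity (d') to that of an ideal observer facing the same Gaussian stimulus noise. A minimal sketch of that computation follows; the hit and false-alarm rates are hypothetical values chosen only to illustrate the arithmetic, not data from the study.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: z(hit rate) minus z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

def statistical_efficiency(d_human, d_ideal):
    """Fraction of the available stimulus information the human observer uses."""
    return (d_human / d_ideal) ** 2

# Hypothetical rates: the ideal observer is limited only by the external noise.
d_h = d_prime(0.85, 0.20)
d_i = d_prime(0.99, 0.02)
print(f"human d' = {d_h:.2f}, ideal d' = {d_i:.2f}, "
      f"efficiency = {statistical_efficiency(d_h, d_i):.2f}")
```

Because efficiency is bounded between 0 and 1 for each task, a constant sum of the two tasks' efficiencies under shifting priorities is what licenses the abstract's fixed-resource interpretation.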

