
Friday Afternoon Papers 72–78

Visual Search

Regency DEFH, Friday Afternoon, 1:30–3:30

Chaired by Jeremy M. Wolfe, Brigham & Women’s Hospital and Harvard Medical School

1:30–1:45 (72)
Is Pink Special? The Evidence From Visual Search. JEREMY M. WOLFE, Brigham & Women’s Hospital and Harvard Medical School, ANINA N. RICH, Macquarie University, ANGELA BROWN & DELWIN LINDSEY, Ohio State University, & ESTER REIJNEN, University of Basel—Desaturated red is named “pink.” Other desaturated colors lack such color terms in English. They are named for objects (lavender) or, more often, using compound words relative to the saturated hue (pale green). Does pink have a special “categorical” status that makes it easier to find in visual search tasks? Observers searched for desaturated targets midway (in xyY space) between saturated and desaturated distractors. Search was faster and more efficient when the saturated distractors and desaturated targets were in the red range than when they were of any other hue. This result was obtained (1) with items as bright and saturated as possible, (2) when all items were isoluminant, and (3) when the separation in CIE xy from target to distractors was equated across hues. But is pink really special? Maybe not. Search remained fast and efficient with a range of targets that were not categorically “pink” but might be characterized as skin tones.
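[For readers unfamiliar with the stimulus construction, a minimal Python sketch of what “midway in xyY space” means. The chromaticity values below are hypothetical placeholders, not those used in the study.]

def midpoint_xyY(c1, c2):
    """Return the point halfway between two colors in CIE xyY space."""
    return tuple((a + b) / 2 for a, b in zip(c1, c2))

# (x, y, Y): chromaticity coordinates plus luminance; values invented
saturated_red = (0.64, 0.33, 20.0)
desaturated_red = (0.40, 0.33, 20.0)   # a paler red of equal luminance

pink_target = midpoint_xyY(saturated_red, desaturated_red)
print(pink_target)  # (0.52, 0.33, 20.0)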

1:50–2:05 (73)
Efficient Segregation of Moving and Stationary Objects in Visual Search. TODD S. HOROWITZ, Harvard Medical School, & ANINA N. RICH, Macquarie University—How efficiently can the visual system guide attention to moving or stationary objects? We constructed displays of two spatially interleaved search sets, composed of randomly moving and stationary disks. Each set consisted of 4, 8, or 12 gray disks marked with white lines. Observers were instructed to search for a vertical line target among distractors tilted ±30°, in either the moving or the stationary set (blocked). The target could be present or absent in each set independently. Segregation by motion was highly efficient. Target-present RTs for both conditions were unaffected by the number of items in the irrelevant set. However, the irrelevant set was not completely suppressed; a target in the irrelevant set slowed target-absent RTs. Finally, search through randomly moving disks (21 msec/item) was just as efficient as search through stationary disks (23 msec/item). The visual system makes optimal use of motion information in visual search.
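[The efficiency figures quoted above, in msec/item, are search slopes: the slope of mean reaction time regressed on set size. A minimal Python sketch, with invented RTs chosen so the slopes land near the reported 21 and 23 msec/item.]

import numpy as np

# Search efficiency = slope of RT on set size (msec/item).
# The RTs below are invented for illustration only.
set_sizes = np.array([4, 8, 12])
mean_rts_moving = np.array([620, 705, 788])      # hypothetical ms
mean_rts_stationary = np.array([640, 730, 824])  # hypothetical ms

slope_mov, _ = np.polyfit(set_sizes, mean_rts_moving, 1)
slope_sta, _ = np.polyfit(set_sizes, mean_rts_stationary, 1)

print(f"moving: {slope_mov:.0f} msec/item")      # 21 msec/item
print(f"stationary: {slope_sta:.0f} msec/item")  # 23 msec/item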

2:10–2:25 (74)
The Effect of Task-Irrelevant Objects on Learning the Spatial Context in Visual Search. ADRIAN VON MÜHLENEN, University of Warwick, & MARKUS CONCI, LMU Munich—During visual search, the spatial configuration of the stimuli can be learned when the same displays are presented repeatedly. This in turn can facilitate finding the target (contextual cuing effect). This study investigated how this effect is influenced by the presence of a task-irrelevant object. Experiment 1 used a standard T/L search task with “old” display configurations presented repeatedly among “new” displays. A green filled square appeared at unoccupied locations within the search display. The results showed that the typical contextual cuing effect was completely eliminated when a square was added to the display. In Experiment 2, the contextual cuing effect was reinstated by simply including trials where the square could appear at an occupied location (i.e., below a stimulus). These findings are discussed in terms of an account that depends on whether the square is perceived as part of the search display or as part of the display background.

2:30–2:45 (75)
Different Causes for Attention Interference in Focused and Divided Attention Tasks. ASHER COHEN & GERSHON BEN SHAKHAR, Hebrew University—Distractors carrying task-relevant information often affect performance in both focused and divided attention (e.g., visual search) tasks. In divided attention tasks it is generally assumed that task-relevant distractors “capture” attention. There is less agreement on the cause of interference in focused attention tasks. Typically, the nature of task-relevant distractors differs between the two paradigms, rendering a direct comparison difficult. In the present study, we created a paradigm in which focused and divided attention tasks can be compared directly, using the same type of response-related distractors for both tasks. Several experiments show that these distractors interfere with performance in both tasks, but there is a fundamental difference between the two types of interference. Whereas response-related task-relevant distractors indeed capture attention in visual search, an attention gradient rather than attention capture causes interference in focused attention tasks.

2:50–3:05 (76)
Experience-Guided Search: A Theory of Attentional Control. MICHAEL C. MOZER, University of Colorado, & DAVID BALDWIN, Indiana University—Visual search data are often explained by the Guided Search model (Wolfe, 1994, 2007), which assumes visual-field locations are prioritized by a saliency map whose activity is effectively a weighted sum of primitive-feature activities. The weights are determined by the task so as to yield high saliency for locations containing targets. Many models based on this key idea have appeared, and to explain human data, all must be “dumbed down” by restricting the weights and/or corrupting the saliency map with noise. We present a formulation of Guided Search in which the weights are determined by statistical inference based on experience with the task over a series of trials. The weights can be cast as optimal under certain assumptions about the statistical structure of the environment. We show that this mathematically elegant and parsimonious formulation yields accounts of human performance across a range of visual search tasks.
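[A minimal Python sketch of the core computation described above: saliency at each location as a weighted sum of primitive-feature maps. The particular feature maps and weight values are illustrative assumptions, not the authors’ model.]

import numpy as np

rng = np.random.default_rng(0)
H, W = 10, 10

# Primitive-feature activity maps over the visual field (invented)
feature_maps = {
    "color": rng.random((H, W)),        # e.g., redness at each location
    "orientation": rng.random((H, W)),  # e.g., tilt contrast
    "motion": rng.random((H, W)),       # e.g., local motion energy
}

# Task-tuned weights; in Experience-Guided Search these would be
# inferred statistically from experience over a series of trials.
weights = {"color": 0.7, "orientation": 0.2, "motion": 0.1}

# Saliency map = weighted sum of feature activities
saliency = sum(w * feature_maps[name] for name, w in weights.items())

# Attention is deployed to locations in order of saliency
peak = np.unravel_index(np.argmax(saliency), saliency.shape)
print("first attended location:", peak)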

3:10–3:25 (77)
Not All Visual Memories Are Created Equal. CARRICK C. WILLIAMS, Mississippi State University—Two experiments investigated differences in the impact of number of presentations and viewing time on visual memory for search objects. In Experiment 1, participants searched for real-world targets (e.g., a green door) 2, 4, 6, or 8 times in a field of real-world conjunction distractors, followed by a memory test for the presented objects. Visual memory improved across presentations, but the rate of improvement was unequal across object types: Target memory improved more with each presentation than did memory for distractors. In Experiment 2, eye movements were monitored while participants searched arrays either 2 or 4 times, followed by a memory test. The overall memory results replicated Experiment 1. Importantly, regression analyses indicated that the number of search presentations had a large effect on target memory with little additional impact of total viewing time, whereas the opposite was true of distractors. Both experiments demonstrate differences in the processing of target and distractor memories.
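[A minimal Python sketch of the kind of regression analysis described: memory accuracy regressed jointly on number of presentations and total viewing time. All data, coefficients, and variable names here are invented for illustration.]

import numpy as np

rng = np.random.default_rng(1)
n = 100
presentations = rng.choice([2, 4], size=n)       # number of searches
viewing_time = rng.uniform(0.5, 5.0, size=n)     # total seconds fixated

# Simulated target-memory accuracy: strong presentation effect,
# weak viewing-time effect (hypothetical generative assumptions)
accuracy = (0.4 + 0.08 * presentations + 0.01 * viewing_time
            + rng.normal(0, 0.05, size=n))

# Ordinary least squares: accuracy ~ presentations + viewing_time
X = np.column_stack([np.ones(n), presentations, viewing_time])
coef, *_ = np.linalg.lstsq(X, accuracy, rcond=None)
print("intercept, b_presentations, b_viewing_time:", coef)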

Judgment and Decision Making<br />

Beacon A, Friday Afternoon, 1:30–3:30<br />

Chaired by John S. Shaw, Lafayette College<br />

1:30–1:45 (78)
Public Predictions of Future Performance. JOHN S. SHAW & SARAH A. FILONE, Lafayette College—Two experiments tested whether public predictions about one’s performance on an anagram task would have an impact on the number of anagrams actually solved. Before working on two sets of anagrams, 243 participants made predictions about how many anagrams they would solve in each set. Manipulated variables included Prediction Privacy (public vs. private) and Performance Privacy (public vs. private). Consistent with self-
