Bachelor's Thesis - Christian Hoffmann - Cognitive Science ...
Universität Osnabrück
Cognitive Science Bachelor Program
Bachelor's Thesis
The influence of prepositions on attention during the processing of referentially ambiguous sentences
Christian Hoffmann
September 29, 2009
Supervisors:
Prof. Dr. Peter Bosch
Computational Linguistics Working Group, Institute of Cognitive Science, University of Osnabrück, Germany
Prof. Peter König
Neurobiopsychology Working Group, Institute of Cognitive Science, University of Osnabrück, Germany
Abstract

The present study uses eye-tracking to investigate the role of prepositions in resolving referential ambiguities. Playmobil® sceneries and prerecorded sentences were presented, and fixation behaviour on possible referents of the discourse was recorded.

The sentences investigated contained a subject NP whose head noun refers to two objects in the scenery, modified by a PP that uniquely identifies the referential object of the subject NP. The hypothesis was that when a preposition can uniquely identify an object in a scenery, the fixation probability of that object should rise already prior to the processing of the following prepositional NP. If the preposition does not uniquely identify an object, the fixation probability of the referential object should rise only after the prepositional NP has been processed. The results suggest that there are no major differences in fixation probabilities connected to the prepositions, although bootstrapping analyses revealed some significant differences, namely more fixations on the target in the ambiguous block.
Contents<br />
1 Introduction 4<br />
2 Methods 8<br />
2.1 Participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8<br />
2.2 Experimental stimuli . . . . . . . . . . . . . . . . . . . . . . . 8<br />
2.2.1 Visual stimuli . . . . . . . . . . . . . . . . . . . . . . . 10<br />
2.2.2 Auditory stimuli . . . . . . . . . . . . . . . . . . . . . 11<br />
2.2.3 Filler . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14<br />
2.3 Apparatus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14<br />
2.4 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15<br />
2.5 Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 16<br />
2.5.1 Regions of Interest . . . . . . . . . . . . . . . . . . . . 16<br />
2.5.2 Statistics . . . . . . . . . . . . . . . . . . . . . . . . . 16<br />
3 Results 19<br />
3.1 Subject Validity . . . . . . . . . . . . . . . . . . . . . . . . . . 19<br />
3.2 Stimulus Validity . . . . . . . . . . . . . . . . . . . . . . . . . 19<br />
3.3 Time Course of Fixations . . . . . . . . . . . . . . . . . . . . 22<br />
3.4 Bootstrapping . . . . . . . . . . . . . . . . . . . . . . . . . . . 25<br />
4 Discussion 28<br />
References 30<br />
A Visual Stimuli 31<br />
B Auditory Stimuli 33<br />
C Fillers - Visual 36<br />
D Fillers - Auditory 39<br />
E Statistics 41<br />
F Complementary Figures 43<br />
G Consent Sheet 48<br />
3
1 Introduction<br />
“Linguistic theory [...] may inform a theory of language processing. And observations about language processing may inform linguistic theory, i.e. support or disconfirm its predictions.” (Bosch (2009))
In the last few decades, interest in neuroscientific methods for analyzing linguistic processing has been on the rise, and more and more research areas are developing that incorporate paradigms and methods from both theoretical linguistics and neuroscience.
In particular, the method of analyzing people's gaze, known as eye-tracking, has generated major interest in the linguistic community since the seminal paper of Cooper (1974), who showed that people fixate elements of a visual scene that are related to spoken language stimuli they are listening to at the same time. Many researchers have since focused on eye-tracking as a method to investigate ongoing linguistic processing.
As Michael K. Tanenhaus puts it, “eye movements provide a continuous measure of spoken-language processing in which the response is closely time locked to the input without interrupting the speech stream. [...] The presence of a visual world makes it possible to ask questions about real-time interpretation.” (Tanenhaus et al. (2000)) Eye-tracking has previously been used to investigate topics such as overt attention and its modulation, reading behaviour (in particular its implications for online lexical access and syntactic processing, e.g. garden-path sentences) and others; for an overview, see Rayner (1998).
Most important for understanding the findings of Tanenhaus, Rayner, Cooper, Chalmers and others are the visual world paradigm and the linking hypothesis. The visual world paradigm serves as a blueprint for psycholinguistic experiments: subjects' fixations are measured as they interact in some fashion with a visual scenery according to tasks set by the experimenter, thereby integrating linguistic and non-linguistic knowledge and action. The linking hypothesis proposes an intrinsic connection between eye movements and lexical access, making it possible to derive knowledge about linguistic processing from analyzing non-linguistic actions.
In his overview (Tanenhaus et al. (2000)), Tanenhaus shows that visual context and even real-world knowledge (see also Chambers et al. (1998)) can help to resolve apparent (or temporary) syntactic ambiguity and is rapidly integrated throughout the processing of linguistic utterances, and also that linguistic experience (such as relative frequencies of lexical competitors) can influence fixation behaviour.
A major topic in this field is the question of how referential expressions (e.g. “the cat on the tree”) are processed. As Chambers et al. (1998) show, even prepositions suffice in certain tasks to identify the referential object of an expression by restricting the domain of interpretation. Studies conducted at the University of Osnabrück show that determiner gender 1 and adjectival constructions ensuring referential uniqueness already give rise to a higher fixation probability on the referential object due to an anticipation effect, even before the onset of the noun itself (Hartmann (2006)). Kleemeyer (2007) and Bärnreuther (2007) showed that top-down influences had a much higher impact on attention modulation than bottom-up processes when both were presented in parallel. Karabanov (2006) showed which differences in fixation probabilities arise when processing full noun phrases compared to pronouns.
The last three studies mentioned used a more natural visual world than Tanenhaus and the others by providing Playmobil® sceneries as visual stimuli. Furthermore, subjects did not have to perform complex tasks while viewing the sceneries, as was the case in Chambers et al. (1998) and Hartmann (2006).
The aim of this study is to investigate a problem posed by Peter Bosch in Bosch (2009). The basic question is in which way the uniqueness constraint of the definite determiner contributes to the processing of potentially ambiguous referential expressions. For a sentence like:
(1) Put the red block on the block on the disk.<br />
which is syntactically ambiguous, one finds two constituent structures:<br />
(2) put [the [red block]] [on [the [block [on [the disk]]]]]
(3) put [the [red [block [on [the block]]]]] [on [the disk]]
If this sentence is presented while figure 1 is shown, which contains more than one red block (one of which is even on another block) and a third block on a disk, the uniqueness constraints of the first two definite determiners are not met when analyzing their corresponding constituents. But
1 Determiners in German have gender markers.
somehow, most people intuitively choose structure (3) as the correct meaning of the sentence. Bosch proposes two alternatives: either the constraints of the single constituents are collected during the incremental construction of the semantic representation of the determiner phrase, so that the meaning becomes clear after processing the second “block”-phrase, at which point it is evident that the DP describes a red block which is on another block. Or the violated uniqueness constraint leads to a modulation of processing resources: the dereference of said DP becomes the most important point on the agenda of the parser, which immediately uses the information obtained from the following preposition to decide which block is the referential object of the DP.
Figure 1: Example block world, taken from Bosch (2009)<br />
The hypothesis behind this experiment is that when, in such a sentence (or any other expression containing a definite determiner and a referentially ambiguous DP), a preposition can provide the information needed to resolve such an ambiguity, this should be visible in an earlier rise of the fixation probability on the referential object of that DP. If the preposition cannot provide such information 2 , then the fixation probability on the referential object should rise only after the onset of the prepositional NP-head.
2 In that case, picture the second block on a hat: the ambiguity cannot be resolved solely by the preposition, as both blocks are “on” something.
In order to test this hypothesis, several visual stimuli bearing exactly the characteristics mentioned above were constructed and shown to subjects while they listened to matching spoken stories.
2 Methods<br />
This part contains all important information about the participants of this study, the materials used, the experimental design, and the procedures used during the experiment and for subsequent analysis.
2.1 Participants<br />
Participants were contacted through personal contacts and the internal mail-<br />
ing lists of the student bodies of the cognitive science and psychology pro-<br />
grammes at the University of Osnabrück, Germany. The actual subjects of<br />
this study were almost equally distributed among those programmes. They<br />
had to be native German speakers, have normal or corrected-to-normal vi-<br />
sion and had to have no hearing deficits. For their participation, subjects<br />
were rewarded with either course credit or 5 Euros. All subjects partici-<br />
pated voluntarily and were naïve with regard to the purpose of this study.<br />
Fixations were recorded from 25 subjects. Of those data sets, four had to be rejected: for two subjects, the data files were corrupt and therefore not readable; the experiment for one subject ended prematurely, rendering the data set unusable; and one subject's performance was significantly different from the rest (see subject validity of subject 5), so that data set was disregarded. One subject had red-green colour blindness, but as this subject's fixation behaviour was the same as that of the other remaining subjects (see subject validity of subject 21 in table 6), the data set was used nevertheless. All in all, 21 data sets were used for subsequent analysis. The characteristics taken from the subject questionnaires are shown in table 1.
2.2 Experimental stimuli<br />
Subjects received multimodal stimuli composed of photographs of Playmobil® sceneries and auditory stimuli semantically related to them. See Karabanov (2006), Kleemeyer (2007) and Bärnreuther (2007) for similar designs.
Ten stimuli were assembled from stimulus and filler material from prior experiments, collectively used in Alexejenko et al. (2009). The pictures were edited with GIMP 2.6 in such a way as to conform to the constraints of the experimental design. Information about the construction of the original
Category Range Median Mean ± SD<br />
Age (yrs) 18-28 22 22.5 ± 2.26<br />
Height (cm) 154-193 174 172.8 ± 8.63<br />
Daily screen time (hours) 2-10 5 5 ± 2.53<br />
Language knowledge (no.) 1-5 2 2.56 ± 0.96<br />
Previous eye-tracking studies (no.) 0-6 1 1.24 ± 1.67<br />
Gender Number Percent<br />
Female 14 56%<br />
Male 11 44%<br />
Education Number Percent<br />
High school diploma 22 88%<br />
University degree 3 12%<br />
Occupation Number Percent<br />
Student 24 96%<br />
Unemployed 1 4%<br />
Vision aids Number Percent<br />
None 14 56%<br />
Glasses 6 24%<br />
Contact lenses 5 20%<br />
Ocular dominance Number Percent<br />
Left 10 40%<br />
Right 11 44%<br />
Unclear 4 16%<br />
Handedness Number Percent<br />
Left 1 4%<br />
Right 23 92%<br />
Unclear 1 4%<br />
Colour vision Number Percent<br />
Red-green colour blind 1 4%<br />
Normal 24 96%<br />
Table 1: Statistics of study participants, collected from subject questionnaires
stimuli and filler can be found in Kleemeyer (2007). As some of the original images reused here had a resolution of only 1024x768 pixels, all final images were downscaled to this resolution.
Auditory stimuli were constructed corresponding to the experimental question raised in Bosch (2009). In order to find out what role prepositions may play during the processing of referential ambiguities, sentences were constructed whose subject phrase (sentence head) consisted of a noun phrase modified by a prepositional phrase. The whole phrase uniquely identified an object of the visual stimulus matching the auditory stimulus. The head of the subject phrase matched two objects of the visual stimulus, as did the NP of the prepositional phrase. In one condition, the preposition was supposed to uniquely identify the referential object of the subject phrase 3 , whereas in the other condition, the ambiguity could only be resolved when processing the prepositional NP.
2.2.1 Visual stimuli<br />
Every stimulus and filler depicted a natural scenery constructed from Playmobil® objects. Those sceneries consisted of multiple objects referred to during the course of the corresponding auditory stimulus and also contained a large number of other objects serving as distractors, ensuring a higher probability that a fixation on an object of interest was related to the auditory stimulus and not to general browsing of the scenery.
In particular, every scenery had two identical objects (identical in the sense of being part of the same category, e.g. “owl”, “man”, “cat”) serving as target and competitor. In addition, two objects served as their “locationary” identifiers, i.e. objects identifying the location of the target/competitor in the scenery. 4 It is important to mention that there were matching distractors for the locationary identifiers as well. This was required in order to keep all references to the locationary identifiers in the auditory stimuli ambiguous.
3 E.g. “the cat in front of ...” uniquely identifies a cat if the other cat in the picture is not in front of something.
4 To give an example, in one picture two owls were amidst a woodland scenery, one in a tree, the other on a hill. The target here was the owl in the tree (the tree therefore being the locationary identifier of the target), the competitor the owl on the hill (the hill therefore being the locationary identifier of the competitor).
There was also a reference object in every picture to study the attention<br />
shift of participants to an easily identifiable, salient target and to compare<br />
those shifts to those elicited by the relevant part of the auditory stimulus.<br />
Figure 2: Exemplary visual stimulus. The target is circled in red, the competitor<br />
in green. The locationary object of the target and its distractor are<br />
circled in purple, the locationary object of the competitor and its distractor<br />
in blue. The reference object is circled in yellow.<br />
2.2.2 Auditory stimuli<br />
As already stated, there were two conditions for every stimulus, i.e. two stories were designed that differed solely in one preposition in the last sentence. The stimuli consisted of four to six sentences in four slots. The first sentence was an informal overview of the scenery, without any direct reference to any object in it.
1. In der Savanne. (In the savannah.)<br />
This sentence was introduced in order to measure participants' fixations on the stimuli while not guided by linguistic input. The next one to two sentences introduced (referred to) the locationary objects.
2. In der felsigen Landschaft traben zwei Elefanten. (Two elephants<br />
are trotting through the rocky countryside.)<br />
The one to two sentences in the third slot contained references to the tar-<br />
get/competitor, as well as distractors and the reference object.<br />
3. Die beiden Männer beobachten die vielen durstigen Tiere am einzigen Wasserloch 5 . (The two men are watching the many thirsty animals at the only watering hole.)
The only difference between the two conditions could be found in the fourth slot. As explained above, the sentence consisted of a subject NP composed of an NP and a prepositional phrase. In one condition, the preposition was a more general one, not capable of identifying the referential object by itself, i.e. a preposition able to convey more possible relations than others. The German prepositions auf, neben and bei were used in this condition (meaning “on”, “next to” and “near”, respectively). In the other condition, the ambiguity posed by the subject head could already be resolved by the preposition, due to the relation between subject head and prepositional NP conveyed by it. Here, the German prepositions in, vor, hinter, unter and an were used, meaning “in”, “in front of”, “behind”, “below/under” and “at”.
4. Der Mann vor dem grauen Felsen ist ein erfahrener Jäger. (The man<br />
in front of the grey rock is an experienced hunter.)<br />
See Figure 2 for an exemplary stimulus.<br />
All sentences were recorded 6 using a Trust HS-2100 headset, Audacity 1.2.6 7 and Cool Edit Pro 2.0 8 . Noise and pitch reduction procedures were carried out on all audio files. Furthermore, silent intervals were cut to ensure approximately equal length of all files (18.770 s - 19.962 s). The number of syllables differed slightly among the sentences (58-62 syllables). Manual alignment was performed to ensure that the onsets of the subject head NP, the preposition and the prepositional NP differed only on a small scale; see table 2 for details. By adding a non-disambiguating adjective to the PP, a time window of approximately 800 ms between preposition onset and the onset of the prepositional NP could be ensured for further analysis.
5 This is the reference object.<br />
6 Sentences were all spoken by the experimenter himself.<br />
7 (http://audacity.sourceforge.net/)<br />
8 (http://www.adobe.com/products/audition/)
Stimulus no. Onset subj-head NP (s) Onset prep. (s) Onset prep. NP (s)
1 15.348 15.946 16.786
2 15.362 15.911 16.766
3 15.330 15.940 16.704
4 15.319 15.909 16.842
5 15.357 15.919 16.757
6 15.338 15.980 16.736
7 15.358 15.960 16.777
8 15.336 15.930 16.778
9 15.353 15.970 16.792
10 15.328 15.964 16.721
11 15.343 15.877 16.757
12 15.333 15.946 16.780
13 15.325 15.928 16.719
14 15.330 15.944 16.765
15 15.334 15.925 16.732
16 15.348 15.887 16.771
17 15.345 15.948 16.744
18 15.328 15.970 16.739
19 15.329 15.901 16.623
20 15.319 15.968 16.698
mean 15.338 15.936 16.749
Table 2: Onsets of subject head NP, prepositions and prepositional NPs (in seconds). Even stimulus numbers correspond to the ambiguous condition, odd numbers to the unambiguous condition. The first two stimuli correspond to visual stimulus 1, the next two to visual stimulus 2, and so on.
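The alignment reported in table 2 can be double-checked numerically. The original analysis was done in MATLAB; purely as an illustration, the following Python sketch (all variable names are mine) recomputes the mean onsets from the table and the resulting preposition-to-NP window:

```python
# Illustrative sanity check of table 2 (not part of the original pipeline):
# verify that the mean onsets match the reported values and that roughly
# 800 ms separate preposition onset from prepositional-NP onset.

subj_head = [15.348, 15.362, 15.330, 15.319, 15.357, 15.338, 15.358,
             15.336, 15.353, 15.328, 15.343, 15.333, 15.325, 15.330,
             15.334, 15.348, 15.345, 15.328, 15.329, 15.319]
prep      = [15.946, 15.911, 15.940, 15.909, 15.919, 15.980, 15.960,
             15.930, 15.970, 15.964, 15.877, 15.946, 15.928, 15.944,
             15.925, 15.887, 15.948, 15.970, 15.901, 15.968]
prep_np   = [16.786, 16.766, 16.704, 16.842, 16.757, 16.736, 16.777,
             16.778, 16.792, 16.721, 16.757, 16.780, 16.719, 16.765,
             16.732, 16.771, 16.744, 16.739, 16.623, 16.698]

mean = lambda xs: sum(xs) / len(xs)

print(round(mean(subj_head), 3))   # 15.338
print(round(mean(prep), 3))        # 15.936
print(round(mean(prep_np), 3))     # 16.749

# mean window between preposition onset and prepositional-NP onset, in ms
window_ms = 1000 * (mean(prep_np) - mean(prep))
print(round(window_ms))            # 813, i.e. the ~800 ms reported above
```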
2.2.3 Filler<br />
Filler images were all those images from the material of Alexejenko et al. (2009) which had not been used to construct stimuli. For each of those filler images an auditory filler was recorded, which was of the same length as the auditory stimuli and consisted of 3-5 sentences.
2.3 Apparatus<br />
A head-mounted binocular eye-tracker (“Eye Link II”, SR Research, Mississauga, Ontario, Canada) was used to record subjects' eye movements. Two infrared cameras tracked the movements of the participants' pupils; another tracked the head position relative to the monitor. A Pentium 4 PC (Dell Inc., Round Rock, TX, USA) was used to control the eye-tracker; see figure 3 for an overview of the system 9 . A second PC (Powermac G4, 8000 MHz) controlled the stimulus presentation. Stimuli were presented on a 21” cathode ray tube monitor (SyncMaster 1100DF 2004, Samsung Electronics Co., Ltd, Korea), with the resolution set to 1024x768 and a refresh rate of 100 Hz. Pupil positions were tracked at a sampling rate of 500 Hz.
Figure 3: Eye Link II Head-Mounted Eye-Tracking System<br />
9 Image taken from Karabanov (2006)
2.4 Procedure<br />
The experiment was conducted in a dimly lit room. Prior to the experiment itself, subjects were welcomed and the experiment's procedure was explained to them. Subjects were informed that they could interrupt the experiment at any time. Subjects then had to fill out a consent sheet (see section G) and a standardized questionnaire (see table 1). Tests for ocular dominance and colour deficiency were performed. If subjects were able to follow the instructions up to this point, it was assumed that their hearing was also sufficient for the experiment.
Subjects were then seated 80 cm from the monitor and the eye-tracker<br />
was fitted on their head. Afterwards, a 13-point calibration and validation procedure was started. Participants were asked to fixate a small dot showing up in random order at thirteen different locations on the screen. During calibration, the raw eye data were mapped to gaze positions. During validation, the difference between computed fixation and target point was computed in order to obtain gaze accuracy. The procedure was repeated until the mean error for one eye was below 0.3°, with a maximum error below 1°; this eye was subsequently tracked during the whole experiment. Subjects
then were provided with headphones (WTS Philips AY3816), through which<br />
the auditory stimuli were presented. The headphones also served the pur-<br />
pose of blocking out background noise in order to ensure full concentration<br />
on the task.<br />
Subjects were told to carefully listen to the auditory stimuli and look at<br />
the visual stimuli. Before each stimulus, a small fixation spot in the middle<br />
of the screen was presented, so that drift correction could be performed<br />
and subjects had the chance to have a small break in between trials. If<br />
the difference between gaze and computed fixation position was too high,<br />
calibration and validation were repeated. The stimuli were presented in a<br />
random order, with the constraints that no more than two actual stimuli<br />
were presented in a row and that every subject was presented with exactly<br />
five stimuli conforming to the ambiguous condition and five stimuli of the unambiguous condition. Furthermore, for every subject there was another subject who was presented with the same order of stimuli, but with exactly the opposite conditions, so as to ensure that all stimuli and all conditions were
presented equally often without fully giving up randomization. The ten<br />
stimuli and 15 fillers were presented as one block. After the experiment,
participants were informed about the goal of this study.<br />
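The presentation constraints described above (no more than two stimuli in a row, five stimuli per condition per subject, and a paired subject with flipped conditions) can be sketched as follows. This is a hypothetical Python re-implementation for illustration, not the script actually used; all names are invented:

```python
# Sketch of the counterbalanced presentation order: 10 stimuli ("S") and
# 15 fillers ("F") in one block, with no three stimuli in a row; each
# subject gets 5 ambiguous + 5 unambiguous stimuli, and a paired subject
# sees the same order with the conditions flipped.
import random

def make_order(seed):
    rng = random.Random(seed)
    items = ["S"] * 10 + ["F"] * 15
    while True:  # rejection sampling until the sequence constraint holds
        rng.shuffle(items)
        if all("".join(items[i:i + 3]) != "SSS" for i in range(len(items) - 2)):
            return list(items)

def assign_conditions(seed):
    rng = random.Random(seed)
    conds = ["ambiguous"] * 5 + ["unambiguous"] * 5
    rng.shuffle(conds)
    return conds

order = make_order(seed=1)
conds_a = assign_conditions(seed=2)
# the paired subject receives the exact opposite condition per stimulus
conds_b = ["unambiguous" if c == "ambiguous" else "ambiguous" for c in conds_a]

print(order.count("S"), order.count("F"))   # 10 15
print(conds_a.count("ambiguous"))           # 5
print(conds_b.count("ambiguous"))           # 5
```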
2.5 Data Analysis<br />
It has already been shown extensively that measuring eye movements is an adequate tool for the investigation of attention and is especially useful when trying to understand the mechanisms behind language processing (Tanenhaus et al. (2000)). With the help of Playmobil® scenarios it has also been shown that top-down influences seem to at least partially override bottom-up influences on attention (Kleemeyer (2007), Bärnreuther (2007)). Eye-tracking therefore seems to be an adequate instrument to study the processing of prepositions and its influence on attention. A fixation is defined as the inverse of a saccade, i.e. whenever the eye-tracker does not measure a saccade, there is a steady fixation. The acceleration threshold for a saccade was 8000°/s², the velocity threshold 30°/s and the deflection threshold 0.1°. Fixation locations and durations were calculated online by the eye-tracking software and later converted into ASCII text. All further analysis was done with MATLAB 10 .
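The detection itself was performed online by the eye-tracking software. Purely as an illustration of the velocity criterion, a minimal Python sketch might look as follows; the synthetic gaze trace and function names are invented, and the acceleration and deflection thresholds are omitted for brevity:

```python
# Illustration of threshold-based fixation detection: fixations are
# maximal runs of samples whose velocity stays below 30°/s at the
# 500 Hz sampling rate used in the experiment.

SAMPLE_RATE = 500       # Hz
VEL_THRESHOLD = 30.0    # degrees per second

def fixations(trace):
    """Return (start, end) sample indices of fixation periods."""
    # per-sample velocity in deg/s from successive gaze positions (deg)
    vel = [abs(b - a) * SAMPLE_RATE for a, b in zip(trace, trace[1:])]
    runs, start = [], None
    for i, v in enumerate(vel):
        if v < VEL_THRESHOLD:
            start = i if start is None else start
        elif start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(vel)))
    return runs

# synthetic trace: 200 ms fixation, a fast 10-sample saccade, 200 ms fixation
trace = [0.0] * 100 + [0.5 * k for k in range(1, 11)] + [5.0] * 100
print(len(fixations(trace)))   # 2
```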
2.5.1 Regions of Interest<br />
In order to find out whether a subject fixated a referent of the discourse, regions of interest (ROIs) were manually chosen around each referent in every scene using MATLAB's built-in function roipoly. The borders of each ROI were chosen as close as possible around the actual figurine in the scene. As part of the fixations in question lay outside the manually chosen regions of interest, the ROIs were enlarged by 12 pixels along the horizontal axis (equivalent to 0.552° of visual angle) and 20 pixels along the vertical axis (equivalent to 0.76° of visual angle). For an example, see figure 4. The fixations outside all regions of interest are shown in figure 5.
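The actual ROIs were hand-drawn polygons created with roipoly. As a simplified illustration of the enlargement step, the following Python sketch uses a rectangular ROI; the coordinates and names are invented:

```python
# Sketch of the ROI enlargement: each ROI is grown by 12 px horizontally
# and 20 px vertically so that fixations landing just outside the tight
# figurine outline are still counted towards the referent.

H_MARGIN, V_MARGIN = 12, 20   # px, ~0.552° and 0.76° of visual angle

def enlarge(roi):
    x0, y0, x1, y1 = roi
    return (x0 - H_MARGIN, y0 - V_MARGIN, x1 + H_MARGIN, y1 + V_MARGIN)

def contains(roi, fixation):
    x0, y0, x1, y1 = roi
    fx, fy = fixation
    return x0 <= fx <= x1 and y0 <= fy <= y1

target_roi = (400, 300, 460, 380)   # hypothetical figurine bounds (px)
fix = (466, 295)                    # lands just outside the raw ROI

print(contains(target_roi, fix))            # False
print(contains(enlarge(target_roi), fix))   # True
```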
2.5.2 Statistics<br />
The time course of the probability of fixating a certain referent while viewing the scenery is the central part of the analysis. In order to interpret the rise and fall of fixation probabilities, 150 ms time windows were chosen within which all relevant statistical analyses were performed. This particular
10 (www.mathworks.com)
Figure 4: Example image for regions of interest. Left: target (woman) and<br />
target locationary object (car), right: competitor (woman sitting), competitor<br />
locationary object (tree), front: reference object (man)<br />
Figure 5: Example of fixations not belonging to any region of interest<br />
length was chosen as the data was somewhat scarce. In order to test stimu-<br />
lus validity, the first 2.5 seconds (in which no reference to any object in the
scenery had yet been made in the auditory stimulus) were analyzed by adding up all fixations on referents and comparing them among images. As this revealed some minor issues (see Results [3]), the time window between 2500 ms and 15000 ms (the window in which all referents were introduced) was analyzed in the same way. Subject validity was analyzed by summing up all fixations over the different images. For both validity analyses, MATLAB's lillietest function was used to test for normal distributions. The influence of prepositions on fixation probabilities (and therefore on attention) was then tested using bootstrapping. Both intra-conditional and inter-conditional testing was performed 11 . For all statistical tests, a significance level of α = .05 was used.
11 Intra-conditional meaning the comparison of fixation probabilities between different<br />
ROIs of the same condition, inter-conditional being the comparison of fixation probabilities<br />
for a specific ROI in the two different conditions.
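As a rough illustration of the inter-conditional bootstrap test described above (not the actual MATLAB implementation; the data values and function names are invented), one could proceed as follows:

```python
# Sketch of an inter-conditional bootstrap: resample per-subject fixation
# proportions for one ROI in both conditions and check whether the 95%
# bootstrap confidence interval of the difference in means excludes zero.
import random

rng = random.Random(42)

def boot_mean_diff(a, b, n_boot=2000):
    diffs = []
    for _ in range(n_boot):
        ra = [rng.choice(a) for _ in a]   # resample with replacement
        rb = [rng.choice(b) for _ in b]
        diffs.append(sum(ra) / len(ra) - sum(rb) / len(rb))
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# hypothetical fixation proportions on the target ROI per subject
ambiguous   = [0.58, 0.62, 0.55, 0.60, 0.64, 0.57, 0.61, 0.59]
unambiguous = [0.21, 0.25, 0.19, 0.23, 0.22, 0.24, 0.20, 0.18]

lo, hi = boot_mean_diff(ambiguous, unambiguous)
significant = not (lo <= 0.0 <= hi)   # CI excludes zero -> significant at α = .05
print(significant)                    # True
```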
3 Results<br />
3.1 Subject Validity<br />
The first statistical test was conducted to find out whether the fixations on the different ROIs over all subjects constituted normal distributions. For this, all fixations over the whole time course of the stimulus presentation were summed up and MATLAB's lillietest function was used as a test for normality. The findings are visualized in figure 6; an overview of the statistics can be found in table 3. As subject number 5 could easily be discerned to be a statistical outlier, all further statistical tests were conducted without the data of that subject.
The lillietest revealed that all fixation distributions were normal, except for that of the fixations on the locationary object of the competitor. This could be due to the fact that this object was mostly inanimate while most stimuli contained considerable numbers of animate distractors, so that fixations on those objects could be unstable: as Karabanov (2006) already pointed out, subjects prefer fixations on animate/human objects over inanimate ones. This did not pose a problem, however, as the data clearly show that all subjects fixated the object during the presentation (mean = 4.3536%, SD = 0.8902%), i.e. identified it either before or during the presentation of the relevant part of the stimulus.
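A simple way to formalize such an outlier screening is shown in the following invented Python sketch; it is not the procedure actually used (the study identified subject 5 from the distributions in figure 6 and table 3):

```python
# Illustrative subject-validity screening: sum each subject's fixations
# and flag subjects whose total deviates strongly from the group mean.
from statistics import mean, stdev

def outliers(totals, z_cut=2.5):
    m, s = mean(totals.values()), stdev(totals.values())
    return sorted(s_id for s_id, t in totals.items()
                  if abs(t - m) > z_cut * s)

# hypothetical total fixation counts per subject; subject 5 stands out
totals = {s_id: 400 + 5 * s_id for s_id in range(1, 11)}
totals[5] = 40

print(outliers(totals))   # [5]
```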
3.2 Stimulus Validity<br />
Following that, a series of normality tests was conducted to ensure stimulus validity. Contrary to previous studies, it could not be shown that fixation behaviour in the first part of the stimulus, in which no objects had yet been introduced, was a reliable baseline for test statistics concerning fixation behaviour mediated by the auditory stimuli.
As can be seen in figure 7 and in table 4, there was quite a large variance in fixation probabilities, especially on the target, the competitor and the distractor of the target's locationary object. This is due to the fact that those objects varied in size and that each stimulus contained a great number of distractor objects. But this also ensured that fixations on objects during their introduction via the auditory stimulus could be considered directly linked to the linguistic input, and not to attentional browsing of the picture 12 .
Figure 6: Subject validity, fixations over all images<br />
That browsing occurred nevertheless can be seen from the large number of fixations outside the ROIs. This was also partly due to the limited accuracy of the eye-tracker, which meant that a percentage of fixations that should have counted towards one of the ROIs was off by a few degrees; see also figure 5. As can be seen in table 4, fixation probabilities on target and competitor were nevertheless normally distributed. To ensure that the stimuli were really valid and appropriate for further statistical testing, the time interval between 2500 and 15000 ms was tested, under the hypothesis that the auditory stimuli introduced similar objects for all stimuli, so that fixation probabilities should be similar as well. The results are visualized in figure 8 and table 5. One can see quite clearly that in every picture all the relevant objects were fixated prior to the investigated stimulus part. It was thus ensured that all objects had been seen before and that subjects did not have to search for objects first; overt attention due to linguistic input should therefore be immediately visible.
12 With many objects in a stimulus, a fixation on one of them precisely at the point when it is named in the concomitant auditory stimulus becomes increasingly unlikely to be a coincidence as the number of objects grows.
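The beyond-ROI counts above depend on how fixations are assigned to regions of interest. A minimal sketch of rectangular ROI hit-testing with a small tolerance margin to absorb tracker inaccuracy; the ROI geometry and the margin value are illustrative assumptions, not the study's actual parameters:

```python
# ROIs as axis-aligned rectangles: name -> (x0, y0, x1, y1) in pixels.
ROIS = {
    "target": (100, 120, 220, 260),
    "competitor": (500, 130, 610, 250),
}

def classify_fixation(x, y, rois, margin_px=15):
    """Return the name of the ROI containing (x, y). Each ROI is grown by
    margin_px on every side to absorb eye-tracker inaccuracy; fixations
    matching no ROI are counted as 'beyond ROI'."""
    for name, (x0, y0, x1, y1) in rois.items():
        if x0 - margin_px <= x <= x1 + margin_px and \
           y0 - margin_px <= y <= y1 + margin_px:
            return name
    return "beyond ROI"

print(classify_fixation(95, 125, ROIS))   # just outside target, within margin
print(classify_fixation(400, 400, ROIS))  # matches no ROI
```

A larger margin trades fewer spuriously "beyond ROI" fixations against a higher risk of misassigning fixations near ROI borders.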
Figure 7: Stimulus validity, fixations over all subjects, between 0 and 2500<br />
ms<br />
Figure 8: Stimulus validity, fixations over all subjects, between 2500 and<br />
15000 ms
3.3 Time Course of Fixations<br />
The time courses of fixation probability over all images are shown in figures 9 and 10. As expected, the fixation probabilities on both the target and the competitor object rise twice during the presentation of the stimulus. A small peak beginning around 9000 ms can be distinguished, representing the time frame in which the target/competitor-compatible NP is introduced. This clearly shows that subjects shift their attention within the visual scene in line with the processing of incoming linguistic stimuli.
The second rise of the fixation probabilities (i.e. of the relative number of fixations) occurs concurrently with the second naming of said NP. Around the time of the onset of the prepositional head-NP, the fixation probabilities diverge and a considerable number of fixations is directed towards the target, implying that the subjects were focusing their attention on it, having understood that the subject-NP refers to it. Throughout the rest of the stimulus, most fixations stay on either the target or the target locationary object, shifting back and forth between them.

Figure 9: Time course of fixation probabilities, ambiguous condition. Yellow stripe: first introduction of target/competitor-NP. First line: mean onset subject-head-NP, second line: mean onset prepositional NP-head.

Figure 10: Time course of fixation probabilities, unambiguous condition. Yellow stripe: first introduction of target/competitor-NP. First line: mean onset subject-head-NP, second line: mean onset prepositional NP-head.

To better understand the stages of linguistic processing of ambiguous sentences and to compare them to the processing of unambiguous sentences, a closer visual inspection of the time frame in question was necessary. A visualization of the fixation probabilities in said time frame, for both the unambiguous and the ambiguous condition, can be found in figures 11 and 12.
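Time-course curves of this kind are, in essence, per-time-bin fixation proportions. A minimal sketch of how such a curve can be computed from fixation records; the record format, the data values and the bin width are assumptions for illustration, not the study's actual pipeline:

```python
from collections import defaultdict

# Hypothetical fixation records: (roi, start_ms, end_ms);
# ROI 1 = target, 2 = competitor, etc. Values are illustrative.
fixations = [
    (1, 15200, 15600), (2, 15300, 15500), (1, 15900, 16300),
    (3, 16000, 16450), (1, 16350, 16700),
]

def fixation_probability(fixations, t0, t1, bin_ms=150):
    """For each time bin, the share of recorded fixation time on each ROI,
    relative to all fixation time falling into that bin."""
    curve = {}
    for b in range(t0, t1, bin_ms):
        per_roi = defaultdict(float)
        total = 0.0
        for roi, s, e in fixations:
            overlap = max(0, min(e, b + bin_ms) - max(s, b))
            if overlap:
                per_roi[roi] += overlap
                total += overlap
        curve[b] = {roi: t / total for roi, t in per_roi.items()} if total else {}
    return curve

curve = fixation_probability(fixations, 15200, 16700)
print(curve[15200])  # → {1: 0.75, 2: 0.25}
```

Plotting these per-bin proportions over time yields curves of the shape shown in the figures.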
A few observations can be made. First and foremost, the differences in the time course of fixation probability are minimal at best. Second, there seems to be an early peak of fixations on the target in the ambiguous case around 16400 ms, which would have been expected in the unambiguous case if the integration of the preposition alone were enough to resolve the ambiguity of the subject-NP. Third, increased fixations on both target and target locationary object seem to last longer in the ambiguous case than in the unambiguous one (the last peak in the ambiguous case is at 19350 ms). All of these observations have to be treated carefully, as the dataset is small and statistical significance therefore cannot be guaranteed.
Figure 11: Time course of fixation probabilities, unambiguous condition,<br />
time span between subject head onset and end of stimulus<br />
Figure 12: Time course of fixation probabilities, ambiguous condition, time<br />
span between subject head onset and end of stimulus
3.4 Bootstrapping<br />
Bootstrapping analyses were conducted to find out whether there are any significant differences between fixation probabilities on target and competitor. Both differences between conditions and differences within conditions were analyzed. Bootstrapping algorithms were applied both over all images and over all subjects, to determine for how many images and for how many subjects, respectively, significant differences can be found. Bootstrapping was applied to time windows of 150 ms width, between 15200 ms (shortly before the onset of the subject-head-NP) and 22000 ms (the last recorded fixations). 1000 bootstrap samples were drawn from the vector of fixations on either ROI1 (target) or ROI2 (competitor), for both the ambiguous and the unambiguous condition.

As a test statistic, the difference of means was calculated and compared to the actual difference of means, both within and between conditions 13 . A difference was considered significant if it fell below the 2.5th percentile or above the 97.5th percentile. Figures 13, 14, 15 and 16 depict the results of the bootstrapping analyses over images. No graphs are given for the results of bootstrapping over subjects, as it did not yield a single significant difference.
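The test just described can be sketched as follows. This is a generic pooled-resampling percentile bootstrap on the difference of means with illustrative data; the study's actual fixation vectors and analysis code are not reproduced here:

```python
import random

def bootstrap_diff_test(a, b, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap for a difference of means. Both samples are
    resampled from the pooled data (the null hypothesis of no difference);
    the observed difference is significant if it falls below the alpha/2
    or above the 1 - alpha/2 percentile of the null distribution."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    diffs = []
    for _ in range(n_boot):
        ra = [rng.choice(pooled) for _ in range(len(a))]
        rb = [rng.choice(pooled) for _ in range(len(b))]
        diffs.append(sum(ra) / len(ra) - sum(rb) / len(rb))
    diffs.sort()
    lo = diffs[int(n_boot * alpha / 2)]
    hi = diffs[int(n_boot * (1 - alpha / 2)) - 1]
    return observed < lo or observed > hi

# Illustrative fixation counts per image in one 150 ms window (not study data):
amb =   [4, 6, 5, 7, 6, 5, 8, 6, 5, 7]
unamb = [2, 3, 2, 4, 3, 2, 3, 4, 2, 3]
print(bootstrap_diff_test(unamb, amb))  # → True (clearly separated groups)
```

Running such a test separately for every 150 ms window and every image (or subject) yields counts of significant windows like those plotted in figures 13-16.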
From figure 13 it can be concluded that fixation behaviour on the target does indeed differ between conditions. Further analysis confirmed the observation made earlier, namely that the target object receives significantly more fixations in the ambiguous case. For all four images for which the time window between 15950 ms and 16100 ms became significant, the difference between the unambiguous and the ambiguous case was negative. Interestingly, this is the time window right after the onset of the preposition. The other peaks seem to support the claim that in the ambiguous case, fixations stayed on the target more often and for a longer time; the differences here are also all negative. But as mostly only one or two pictures yield a significant difference, this hypothesis cannot be confirmed.

There are also significant differences in the fixation probabilities on the competitor object between conditions, though they are even less pronounced than those for the target object. The relevant time frames can be observed in figure 14.
13 I.e. mean(ROI1 unamb) − mean(ROI1 amb), mean(ROI1 unamb) − mean(ROI2 unamb), ...
Figure 13: Significant Differences after Bootstrapping - Fixations on Target<br />
unamb. vs amb. Condition<br />
Figure 14: Significant Differences after Bootstrapping - Fixations on Competitor<br />
Unamb. vs Amb. Condition, peaks at 16100, 17750 and 19700 ms
Figure 15: Significant Differences after Bootstrapping - Fixations on Target<br />
vs Competitor Ambiguous Condition<br />
Interestingly, the within-condition differences between fixations on target and competitor, visible in the time courses of both conditions, rarely reach significance. For the ambiguous case, there are 15 time windows in which the differences become significant for one image. For the unambiguous one, there are 17 time windows, two of which show two images with significant differences.
Figure 16: Significant Differences after Bootstrapping - Fixations on Target<br />
vs Competitor Unambiguous Condition<br />
4 Discussion<br />
This study of the linguistic processing of prepositions has some interesting implications, though due to the scarcity of the data 14 most of them are in need of future research. It seems that, contrary to e.g. Chambers et al. (1998), prepositions do not contribute as much information to the processing stages of natural language understanding as they do in experiments in which choices are limited and subjects rely heavily on them.

It seems that people process the prepositional NP-head fully when faced with a referentially ambiguous phrase and only then shift their attention to the referent. It could also be the case that the time window in which an influence of the preposition was suspected was too short. Therefore one proposal for future research would be to widen the gap between preposition and PP-head-NP even further. As far as this study is concerned, there are a few significant differences in fixation probabilities; oddly enough, there

14 As shown by the fact that no bootstrapping analysis over the subjects yielded significant results: in most time windows, a single subject did not look at either target or competitor, and only the average over subjects shows an effect.
seem to be more fixations on the target in the ambiguous case. This could be an artifact of this study, i.e. there could be a bias towards fixating the competitor (even though none of the earlier time windows shows such a discrepancy). Nevertheless, it should be a subject of future research. The results of this study seem to favor the theory that constraints from single constituents are collected during an incremental construction of semantic representations.
References<br />
Alexejenko, S., Brukamp, K., Cieschinger, M., and Deng, X. (2009). Meaning, vision and situation. Study project, University of Osnabrück.

Bärnreuther, B. (2007). Investigating the influence of visual and semantic saliency on overt attention. BSc thesis, Cognitive Science, University of Osnabrück.

Bosch, P. (2009). Processing definite determiners. Formal semantics meets experimental results. Lecture Notes in Computer Science.

Chambers, C. G., Tanenhaus, M. K., Eberhard, K. M., Carlson, G. N., and Filip, H. (1998). Words and worlds: The construction of context for definite references.

Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language. Cognitive Psychology, 6.

Hartmann, N. (2006). Processing grammatical gender in German: an eye-tracking study on spoken-word recognition. BSc thesis, Cognitive Science, University of Osnabrück.

Karabanov, A. N. (2006). Eye tracking as a tool for investigating the comprehension of referential expressions. BSc thesis, Cognitive Science, University of Osnabrück.

Kleemeyer, M. (2007). Contribution of visual and semantic information and their interaction on attention guidance: an eye-tracking study. BSc thesis, Cognitive Science, University of Osnabrück.

Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124(3).

Tanenhaus, M. K., Magnuson, J. S., Dahan, D., and Chambers, C. (2000). Eye movements and lexical access in spoken-language comprehension: Evaluating a linked hypothesis between fixations and linguistic processing.
A Visual Stimuli<br />
Figure 17: Visual Stimuli 1-6<br />
Figure 18: Visual Stimuli 7-10
B Auditory Stimuli<br />
(a) is the unambiguous condition, (b) is the ambiguous one.<br />
1. (a) Im Wald ist viel los. Ein paar Hügel säumen die kleine Lichtung.<br />
Bäume spenden Schatten. Zwei Eulen schauen sich um, Rehe<br />
spielen am Wasser und auch ein Fuchs traut sich dazu. Die Eule<br />
in dem kleinen Baum hält nach Beute Ausschau.<br />
(b) Im Wald ist viel los. Ein paar Hügel säumen die kleine Lichtung.<br />
Bäume spenden Schatten. Zwei Eulen schauen sich um, Rehe<br />
spielen am Wasser und auch ein Fuchs traut sich dazu. Die Eule<br />
auf dem kleinen Baum hält nach Beute Ausschau.<br />
2. (a) In der Savanne. In der felsigen Landschaft traben zwei Elefan-<br />
ten. Die beiden Männer beobachten die vielen durstigen Tiere<br />
am einzigen Wasserloch. Der Mann vor dem grauen Felsen ist<br />
ein erfahrener Jäger.<br />
(b) In der Savanne. In der felsigen Landschaft traben zwei Elefan-<br />
ten. Die beiden Männer beobachten die vielen durstigen Tiere<br />
am einzigen Wasserloch. Der Mann neben dem grauen Felsen ist<br />
ein erfahrener Jäger.<br />
3. (a) Im Wartezimmer. Die Kisten sind voller Spielzeug. Die Frauen<br />
warten schon lange. Die beiden Kinder langweilen sich trotz der<br />
vielen Spielsachen. Auf dem Tisch liegen Zeitschriften. Das Kind<br />
vor der einen Kiste wird gerade aufgerufen.<br />
(b) Im Wartezimmer. Die Kisten sind voller Spielzeug. Die Frauen<br />
warten schon lange. Die beiden Kinder langweilen sich trotz der<br />
vielen Spielsachen. Auf dem Tisch liegen Zeitschriften. Das Kind<br />
bei der einen Kiste wird gerade aufgerufen.<br />
4. (a) Der erste Frühlingstag. Die Kinder spielen vergnügt, nur mit den<br />
Eimern spielt gerade keins. Zwei Kätzchen schleichen herum, und<br />
Blumen blühen überall. Die Frau geniesst die Sonne. Die Katze<br />
vor dem kleinen Kind geht jetzt auf Erkundungstour.<br />
(b) Der erste Frühlingstag. Die Kinder spielen vergnügt, nur mit den<br />
Eimern spielt gerade keins. Zwei Kätzchen schleichen herum, und<br />
Blumen blühen überall. Die Frau geniesst die Sonne. Die Katze<br />
bei dem kleinen Kind geht jetzt auf Erkundungstour.<br />
5. (a) Nachmittags im Park. Bänke laden zum Ausruh’n ein. Zwei<br />
Frauen sind mit ihren Enkeln da. Zwei Picknickkörbe steh’n<br />
bereit, die Kinder spielen Fussball und ein Hund tollt freudig<br />
umher. Der Korb hinter der einen Frau ist voller Leckereien.<br />
(b) Nachmittags im Park. Bänke laden zum Ausruh’n ein. Zwei<br />
Frauen sind mit ihren Enkeln da. Zwei Picknickkörbe steh’n<br />
bereit, die Kinder spielen Fussball und ein Hund tollt freudig<br />
umher. Der Korb bei der einen Frau ist voller Leckereien.<br />
6. (a) Im vollen Wirtshaus. An den Tischen sitzen ein paar Männer<br />
und trinken etwas. Zwei Hunde schnüffeln neugierig, die Männer<br />
warten auf’s Essen und die Kellnerin serviert ein Bier. Der Hund<br />
unter dem einen Tisch bettelt um einen Knochen.<br />
(b) Im vollen Wirtshaus. An den Tischen sitzen ein paar Männer<br />
und trinken etwas. Zwei Hunde schnüffeln neugierig, die Männer<br />
warten auf’s Essen und die Kellnerin serviert ein Bier. Der Hund<br />
bei dem einen Tisch bettelt um einen Knochen.<br />
7. (a) Im Klassenzimmer. Es gibt ein paar Tische und Hocker für die<br />
Schüler. Die beiden Kinder setzen sich gerade, die Spielsachen<br />
sind weggeräumt und die Lehrerin beginnt die Stunde. Das Kind<br />
vor dem einen Tisch hört ihr noch nicht richtig zu.<br />
(b) Im Klassenzimmer. Es gibt ein paar Tische und Hocker für die<br />
Schüler. Die beiden Kinder setzen sich gerade, die Spielsachen<br />
sind weggeräumt und die Lehrerin beginnt die Stunde. Das Kind<br />
bei dem einen Tisch hört ihr noch nicht richtig zu.<br />
8. (a) Ein Grillfest im Sommer. Die Familie ist mit zwei Autos da. Bei<br />
den Bäumen spielt ein Hund. Die zwei Frauen sind schon hungrig,<br />
die Kinder sitzen am Feuer und der Vater passt aufs Essen auf.<br />
Die Frau hinter dem grossen Auto holt noch mehr Kohle.<br />
(b) Ein Grillfest im Sommer. Die Familie ist mit zwei Autos da. Bei<br />
den Bäumen spielt ein Hund. Die zwei Frauen sind schon hungrig,<br />
die Kinder sitzen am Feuer und der Vater passt aufs Essen auf.<br />
Die Frau neben dem grossen Auto holt noch mehr Kohle.
9. (a) Auf dem Bauernhof. Die Kinder beobachten die Enten und Gänse<br />
an den Teichen. Zwei Katzen streifen umher, und Hühner gackern<br />
um die Wette. Die Bäuerin hat viel zu tun. Die Katze an dem<br />
kleinen Teich hat grad einen Fisch entdeckt.<br />
(b) Auf dem Bauernhof. Die Kinder beobachten die Enten und Gänse<br />
an den Teichen. Zwei Katzen streifen umher, und Hühner gackern<br />
um die Wette. Die Bäuerin hat viel zu tun. Die Katze bei dem<br />
kleinen Teich hat grad einen Fisch entdeckt.<br />
10. (a) Mitten in der Prärie. Kakteen wachsen auf den Felsen. Zwei Cow-<br />
boys schlagen ein Lager auf. Zwei Geier suchen nach Nahrung<br />
und Pferde laufen herum. Ein schwarzer Hund schaut sich um.<br />
Der Geier vor dem einen Cowboy ist schon ganz abgemagert.<br />
(b) Mitten in der Prärie. Kakteen wachsen auf den Felsen. Zwei Cow-<br />
boys schlagen ein Lager auf. Zwei Geier suchen nach Nahrung<br />
und Pferde laufen herum. Ein schwarzer Hund schaut sich um.<br />
Der Geier bei dem einen Cowboy ist schon ganz abgemagert.
C Fillers - Visual<br />
Figure 19: Filler Images 1-6<br />
Figure 20: Filler Images 7-12
Figure 21: Filler Images 13-15
D Fillers - Auditory<br />
1. Beim Zahnarzt. Die Arzthelferin holt die nötigen Instrumente aus<br />
den Schränken. Der Zahnarzt steht noch hinter dem Trennschirm<br />
am Tisch und trinkt noch seinen Kaffee aus. Der Patient auf dem<br />
Behandlungsstuhl fühlt sich schon ein wenig unwohl.<br />
2. Im grossen Burghof. Der grosse goldene Ritter bringt dem kleinen<br />
gerade den Schwertkampf bei. Der Mann bei den Fässern betrinkt<br />
sich und die Marktfrau bietet ihre Waren feil. Der Ritter mit der<br />
Hellebarde bewacht das Stadttor.<br />
3. Nachmittags im Zoo. Zwei Löwen stehen an der Tränke und ein Elefant<br />
isst eine Portion Heu. Die Oma und ihr Enkel beobachten begeistert die
vielen Tiere. Der Tierpfleger will gleich das Elefantengehege sauber<br />
machen.<br />
4. Tief im Dschungel. Auf den Bäumen hocken Vögel und auf dem Boden<br />
streiten sich zwei Affen um Bananen. Die Schildkröte versucht die<br />
reifen Früchte zu erreichen. Der einzelne Affe versucht die anderen<br />
vor der Schlange zu warnen.<br />
5. In der Zirkusmanege. Die Affen und der Elefant rollen Fässer umher<br />
während ein Clown jongliert. Der Dompteur passt auf dass die Tiere<br />
alles richtig machen. Die Zuschauer auf den Rängen amüsieren sich<br />
prächtig.<br />
6. Auf einer Lichtung. Bei den Bäumen und an den Blumen tummeln
sich viele Tiere. Zwei Frischlinge halten sich nah bei ihrer Mutter auf,<br />
die kleinen Füchse trauen sich weiter weg. Das Eichhörnchen klettert<br />
lieber auf dem Baum umher.<br />
7. Auf dem Wochenmarkt. In den Körben und auf dem Tisch liegt<br />
frisches Gemüse. Der Mann ist mit dem Fahrrad gekommen um bei<br />
der Bäuerin seine Einkäufe zu erledigen. Die Bäuerin begrüsst ihn und<br />
seinen Hund gerade freundlich.<br />
8. Beim Familienausflug. Die Mutter und ihr Kind wollen gleich mit<br />
dem Kanu lospaddeln. Der Vogel beim Korb versucht etwas zu essen
zu ergattern und die Enten gehen schwimmen. Der Junge hat seinen<br />
Fussball zum spielen mitgenommen.<br />
9. Auf einer Ranch. Der Bulle frisst Stroh, das die Rancher gerade
zusammengeharkt haben. Das Gras hat der Rancher gebündelt um<br />
es später den Pferden zu geben. Die Frau vor dem Wagen wird gleich<br />
noch die Pferde striegeln.<br />
10. Beim Kinderarzt. Beim Bett stehen allerlei medizinische Gerätschaften<br />
und im Schrank liegt Spielzeug. Der Junge auf dem Stuhl hat sich beim<br />
Sportunterricht verletzt. Die Ärztin sagt ihm, dass er wahrscheinlich
auf Krücken nach Hause gehen muss.<br />
11. Ein Tag im Stadtpark. Ein paar Hasen und Rehe ruhen sich unter den<br />
Bäumen aus. Die Frau macht einen Spaziergang mit ihrem Hund. Sie<br />
unterhält sich gerade mit dem Mann. Die Ente am Teich schaut ihren<br />
Jungen beim Schwimmen zu.<br />
12. In einem kleinen Park. Die Blumen blühen und die vielen Bäume sind<br />
voller Blätter. Die Oma und ihr Enkel sind mit dem Hund zum Spielen<br />
in den Park gekommen. Das Fahrrad an dem einen Baum gehört den<br />
kleinen Jungen.<br />
13. Morgens in der Schule. Die Kleiderschränke sind noch leer und die<br />
Stühle noch nicht besetzt. Nur die Lehrerin und ein Schüler sind<br />
schon da. Sie fragt ihn wo die anderen bleiben. Die Aktentaschen im<br />
blauen Schrank gehören der Lehrerin.<br />
14. Auf dem Reiterhof. Beim Zaun liegt in einer Schubkarre Stroh für die<br />
Pferde. Auf dem Zaun hängen auch ein paar Sattel. Das kleine Kind<br />
will gleich einen Ausritt machen. Das Pferd neben der Tränke hat<br />
schon einen Sattel auf dem Rücken.<br />
15. Im Indianerdorf. Ein grosses Tipi ist aufgebaut und die Pferde haben<br />
Jagdbemalung. Der Häuptling redet mit dem Cowboy über die bevorste-<br />
hende Jagd. Das braune Pferd, das gerade am Fluss trinkt, gehört
dem Häuptling.
E Statistics<br />
ROI H mean std-dev.<br />
1 0 10.5542 2.1560<br />
2 0 5.8977 1.1071<br />
3 0 8.7508 1.5973<br />
4 0 5.3703 0.8303<br />
5 1 4.3536 0.8902<br />
6 0 6.0272 0.9649<br />
7 0 9.2993 2.2232<br />
8 0 49.7469 5.0927<br />
Table 3: Statistics of the Subject Validity - H: Outcome of the Lilliefors-test<br />
with α = 0.05, mean value of ROI, standard deviation (both in percent).<br />
Fixations on (from top to bottom): target object, competitor object, target<br />
locationary object, distractor for target locationary object, competitor<br />
locationary object, distractor for competitor locationary object, reference<br />
object, beyond ROI<br />
ROI H mean std-dev.<br />
1 0 7.1418 5.5750<br />
2 0 3.6015 2.5735<br />
3 1 8.2362 7.5051<br />
4 1 5.7861 7.8266<br />
5 0 3.3063 4.3488<br />
6 1 8.0087 13.1942<br />
7 0 12.7139 8.3210<br />
8 0 51.2055 10.8276<br />
Table 4: Statistics of the Stimulus Validity, for the first 2500 ms - H: Outcome<br />
of the Lilliefors-test with α = 0.05, mean value of ROI, standard<br />
deviation. ROIs like above.<br />
ROI H mean std-dev.<br />
1 0 6.9826 2.5666<br />
2 0 6.1760 2.8477<br />
3 0 6.1081 1.7871<br />
4 0 6.0384 3.5721<br />
5 1 5.2184 1.6377<br />
6 1 6.6296 7.2425<br />
7 0 9.7145 3.5632<br />
8 1 53.1324 10.8374<br />
Table 5: Statistics of the Stimulus Validity, for the timespan between 2500<br />
and 15000 ms - H: Outcome of the Lilliefors-test with α = 0.05, mean value<br />
of ROI, standard deviation. ROIs like above.<br />
ROI H mean std-dev.<br />
1 0 10.4496 3.4217<br />
2 0 5.9346 2.2071<br />
3 0 8.7613 2.6996<br />
4 0 5.3531 3.5624<br />
5 1 4.3345 1.8885<br />
6 1 6.0183 7.2006<br />
7 0 9.1997 3.6903<br />
8 0 49.9490 9.5045<br />
Table 6: Statistics of the Stimulus Validity, for the whole presentation of<br />
the stimulus - H: Outcome of the Lilliefors-test with α = 0.05, mean value<br />
of ROI, standard deviation. ROIs like above.
F Complementary Figures<br />
Figure 22: Timecourse of total fixations, ambiguous condition<br />
Figure 23: Timecourse of total fixations, unambiguous condition<br />
Figure 24: Timecourse of total fixations, unambiguous condition, timespan<br />
between subject head onset and end of stimulus
Figure 25: Timecourse of total fixations, ambiguous condition, timespan<br />
between subject head onset and end of stimulus<br />
List of Figures<br />
1 Example block world, taken from Bosch (2009) . . . . . . . . 6<br />
2 Exemplary visual stimulus . . . . . . . . . . . . . . . . . . . . 11<br />
3 Eye Link II Head-Mounted Eye-Tracking System . . . . . . . 14<br />
4 Example image for regions of interest . . . . . . . . . . . . . . 17<br />
5 Fixations not belonging to any region of interest . . . . . . . 17<br />
6 Subject validity, fixations over all images . . . . . . . . . . . . 20<br />
7 Stimulus validity, fixations over all subjects, between 0 and<br />
2500 ms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21<br />
8 Stimulus validity, fixations over all subjects, between 2500<br />
and 15000 ms . . . . . . . . . . . . . . . . . . . . . . . . . . . 21<br />
9 Time course of fixation probabilities, ambiguous condition.<br />
Yellow stripe: first introduction of target/competitor-NP. First<br />
line: mean onset subject-head-NP, second line: mean onset<br />
prepositional NP-head. . . . . . . . . . . . . . . . . . . . . . 22
10 Time course of fixation probabilities, unambiguous condi-<br />
tion. Yellow stripe: first introduction of target/competitor-<br />
NP. First line: mean onset subject-head-NP, second line:<br />
mean onset prepositional NP-head. . . . . . . . . . . . . . . . 23<br />
11 Time course of fixation probabilities, unambiguous condition,<br />
time span between subject head onset and end of stimulus . . 24<br />
12 Time course of fixation probabilities, ambiguous condition,<br />
time span between subject head onset and end of stimulus . . 24<br />
13 Significant Differences after Bootstrapping - Fixations on Tar-<br />
get unamb. vs amb. Condition . . . . . . . . . . . . . . . . . 26<br />
14 Significant Differences after Bootstrapping - Fixations on Com-<br />
petitor Unamb. vs Amb. Condition . . . . . . . . . . . . . . . 26<br />
15 Significant Differences after Bootstrapping - Fixations on Tar-<br />
get vs Competitor Ambiguous Condition . . . . . . . . . . . . 27<br />
16 Significant Differences after Bootstrapping - Fixations on Tar-<br />
get vs Competitor Unambiguous Condition . . . . . . . . . . 28<br />
17 Visual Stimuli 1-6 . . . . . . . . . . . . . . . . . . . . . . . . 31<br />
18 Visual Stimuli 7-10 . . . . . . . . . . . . . . . . . . . . . . . . 32<br />
19 Filler Images 1-6 . . . . . . . . . . . . . . . . . . . . . . . . . 36<br />
20 Filler Images 7-12 . . . . . . . . . . . . . . . . . . . . . . . . . 37<br />
21 Filler Images 13-15 . . . . . . . . . . . . . . . . . . . . . . . . 38<br />
22 Timecourse of total fixations, ambiguous condition . . . . . . 43<br />
23 Timecourse of total fixations, unambiguous condition . . . . . 44<br />
24 Timecourse of total fixations, unambiguous condition, times-<br />
pan between subject head onset and end of stimulus . . . . . 44<br />
25 Timecourse of total fixations, ambiguous condition, timespan<br />
between subject head onset and end of stimulus . . . . . . . . 45<br />
List of Tables<br />
1 Statistics of study participants, collected from subject ques-<br />
tionnaires . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9<br />
2 Onsets of subject head NP, prepositions and Prepositional NP 13<br />
3 Statistics Subject Validity . . . . . . . . . . . . . . . . . . . . 41<br />
4 Statistics Stimulus Validity 0-2500 ms . . . . . . . . . . . . . 41<br />
5 Statistics Stimulus Validity 2500-15000 ms . . . . . . . . . . . 42<br />
6 Statistics Stimulus Validity whole timecourse . . . . . . . . . 42<br />
G Consent Sheet<br />
Christian Hoffmann
Arbeitsgruppe Computerlinguistik<br />
Universität Osnabrück<br />
Albrechtstrasse 28<br />
49069 Osnabrück<br />
email: chrihoff@uos.de<br />
Aufklärung/Einwilligung<br />
Sehr geehrte Teilnehmerin, sehr geehrter Teilnehmer,<br />
Sie haben sich freiwillig zur Teilnahme an dieser Studie gemeldet. Hier erhalten
Sie nun einige Informationen zu Ihren Rechten und zum Ablauf des folgen-<br />
den Experiments. Bitte lesen Sie sich die folgenden Abschnitte sorgfältig<br />
durch.<br />
1) Zweck der Studie<br />
Ziel dieser Studie ist es, neue Erkenntnisse über das Satzverständnis anhand<br />
von Eye-Tracking-Daten zu erhalten.<br />
2) Ablauf der Studie<br />
In dieser Studie werden Ihnen 25 Bilder auf einem Computermonitor gezeigt.<br />
Bitte sehen Sie sich die Bilder sorgfältig an. Zugleich werden Sie einen kurzen
Text zu hören bekommen. Hören Sie aufmerksam zu.<br />
Um Ihre Blickposition zu errechnen, wird Ihnen ein ”Eye-Tracker” auf den<br />
Kopf geschnallt. Dieses Gerät erfasst die Position Ihres Auges mit Hilfe von<br />
kleinen Kameras und Infrarotsensoren. Dieses Verfahren ist ein psychome-<br />
trisches Standardverfahren, das in dieser Art bereits vielfach angewandt und<br />
getestet wurde. Bei unseren bisherigen Erfahrungen und Experimenten mit<br />
dem Gerät ist keine Versuchsperson zu Schaden gekommen.<br />
Zu Beginn der Untersuchung muss der ”Eye-Tracker” eingestellt werden,<br />
dieser Vorgang dauert etwa 10-15 Minuten. Das eigentliche Experiment<br />
dauert dann etwa 15 Minuten. Der Versuchsleiter wird während des ganzen<br />
Experiments mit Ihnen im Versuchsraum sein und steht Ihnen für Fragen<br />
jederzeit zur Verfügung. Nach der Studie erhalten Sie weitere Informationen<br />
zum Sinn und Zweck dieser Untersuchung. Bitte geben Sie diese Informatio-<br />
nen an niemanden weiter um die Objektivität eventueller Versuchspersonen<br />
zu wahren.<br />
3) Risiken und Nebenwirkungen<br />
Diese Studie ist nach derzeitigem Wissensstand des Versuchsleiters ungefährlich
und für die Teilnehmer schmerzfrei. Durch Ihre Teilnahme an dieser Studie<br />
setzen Sie sich keinen besonderen Risiken aus und es sind keine Neben-<br />
wirkungen bekannt. Da diese Studie in ihrer Gesamtheit neu ist, kann<br />
das Auftreten von noch unbekannten Nebenwirkungen allerdings nicht aus-<br />
geschlossen werden.<br />
Wichtig: Bitte informieren Sie den Versuchsleiter umgehend, wenn Sie unter<br />
Krankheiten leiden oder sich derzeit in medizinischer Behandlung befinden.<br />
Teilen Sie dem Versuchsleiter bitte umgehend mit, falls Sie schon einmal<br />
einen epileptischen Anfall hatten. Bei Fragen hierzu wenden Sie sich bitte<br />
an den Versuchsleiter.<br />
4) Abbruch des Experiments<br />
Sie haben das Recht, diese Studie zu jedem Zeitpunkt und ohne Angabe<br />
einer Begründung abzubrechen. Ihre Teilnahme ist vollkommen freiwillig<br />
und ohne Verpflichtungen. Es entstehen Ihnen keine Nachteile durch einen<br />
Abbruch der Untersuchung.<br />
Falls Sie eine Pause wünschen oder auf die Toilette müssen, ist dies jederzeit<br />
möglich. Sollten Sie zu irgendeinem Zeitpunkt während des Experiments<br />
Kopfschmerzen oder Unwohlsein anderer Art verspüren, dann informieren<br />
Sie bitte umgehend den Versuchsleiter.<br />
5) Vertraulichkeit<br />
Die Bestimmungen des Datenschutzes werden eingehalten. Personenbezo-<br />
gene Daten werden von uns nicht an Dritte weitergegeben. Die von Ihnen<br />
erfassten Daten werden von uns anonymisiert und nur in dieser Form weit-<br />
erverarbeitet oder veröffentlicht.
6) Einverständniserklärung<br />
Bitte bestätigen Sie durch Ihre Unterschrift die folgende Aussage:<br />
”Hiermit bestätige ich, dass ich durch den Versuchsleiter dieser Studie über<br />
die oben genannten Punkte aufgeklärt und informiert worden bin. Ich habe<br />
diese Erklärung gelesen und verstanden. Ich stimme jedem der Punkte zu.<br />
Ich ermächtige hiermit die von mir in dieser Untersuchung erworbenen Daten<br />
zu wissenschaftlichen Zwecken zu analysieren und in wissenschaftlichen Ar-<br />
beiten anonymisiert zu veröffentlichen.<br />
Ich wurde über meine Rechte als Versuchsperson informiert und erkläre mich<br />
zu der freiwilligen Teilnahme an dieser Studie bereit.”<br />
Ort, Datum Unterschrift<br />
Bei Minderjährigen, Unterschrift des Erziehungsberechtigten
Acknowledgments<br />
I want to thank Prof. Peter Bosch and Prof. Peter König for their con-<br />
stant support during the development of this thesis and the opportunity to<br />
conduct research of my own in such an exciting field. Furthermore, I want<br />
to thank Torsten Betz and Frank Schumann from the NBP-group for their<br />
open ear and advice when it was dearly needed. Lastly, I want to thank<br />
Vera Mönter for her moral support and permanent motivation.<br />
Confirmation<br />
Hereby I confirm that I wrote this thesis independently and that I have not<br />
made use of any other resources or means than those indicated.<br />
Hiermit bestätige ich, dass ich die vorliegende Arbeit selbständig verfasst<br />
und keine anderen als die angegebenen Quellen und Hilfsmittel verwendet<br />
habe.<br />
Christian Hoffmann, Nijmegen, September 29, 2009