
Here, the variable ‘number’ can be instantiated with the values ‘singular’ or ‘plural’, and all instances of the same variable in a rule have to “match” (or be “unified”, which is why such grammars are called unification grammars), that is, their values have to be the same in order for the production to be legitimate (note that, in the second rule, there are two different number variables, for the verb and the direct object respectively, which do not have to match!). While pure PSG has a certain appeal for isolating languages like English, not least due to its pedagogical simplicity, unification grammars are unavoidable where generative grammar is applied to inflexional languages like French or German.
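As a rough illustration of this matching requirement, the following Python sketch unifies a ‘number’ feature between a noun and a verb; the lexicon entries and feature names are invented for this example and are not taken from any particular unification grammar.

def unify(features_a, features_b):
    # Merge two feature sets; fail (return None) if any shared
    # feature, such as 'number', has clashing values.
    result = dict(features_a)
    for key, value in features_b.items():
        if key in result and result[key] != value:
            return None
        result[key] = value
    return result

# Invented lexicon entries carrying a 'number' feature
lexicon = {
    "dog":   {"cat": "n", "number": "singular"},
    "dogs":  {"cat": "n", "number": "plural"},
    "barks": {"cat": "v", "number": "singular"},
    "bark":  {"cat": "v", "number": "plural"},
}

def s_rule(noun, verb):
    # S -> NP(number=X) VP(number=X): both instances of the
    # variable X must receive the same value.
    return unify({"number": lexicon[noun]["number"]},
                 {"number": lexicon[verb]["number"]}) is not None

print(s_rule("dog", "barks"))   # True: singular + singular
print(s_rule("dogs", "barks"))  # False: plural vs. singular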

Higher level generative grammars, like HPSG, may incorporate other subcategorisation information, like valency and selection restrictions, into the lexicon, and thus build a more sophisticated rule set.
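By way of illustration only, a lexical entry might bundle a verb's valency frame and crude selection restrictions with its morphosyntactic features; the attribute names below are invented for this sketch and do not follow HPSG's actual attribute-value notation.

# Invented lexical entries with valency (subcategorisation) frames
# and simple selection restrictions; not actual HPSG notation.
lexicon = {
    "drinks": {
        "cat": "v",
        "number": "singular",
        "valency": ["subject", "direct_object"],     # transitive
        "selection": {"subject": "animate",          # who drinks
                      "direct_object": "liquid"},    # what is drunk
    },
    "sleeps": {
        "cat": "v",
        "number": "singular",
        "valency": ["subject"],                      # intransitive
        "selection": {"subject": "animate"},
    },
}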

Traditionally, four levels of descriptional power are distinguished for generative grammars:

Chomsky's hierarchy of grammar classes (Chomsky, 1959)
(low number: more powerful, high number: more restricted)

0 unrestricted PSG

1 context sensitive PSG
   x -> y [where y has at least as many symbols as x, e.g. A B -> C D E]
   or: x A z -> x y z [other notation with "visible" context]

2 context free PSG
   A -> x

3 regular PSG = finite state grammars
   left linear: A -> B t, A -> t
   right linear: A -> t B, A -> t

[where: T = terminal; N = non-terminal; A, B, C, D, E ∈ N; t ∈ T; x, y, z = sequences of T and/or N]
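To make the type 3 rule format concrete, here is a small invented right-linear grammar for simple noun phrase patterns, together with a naive derivation routine; the category names are illustrative only.

import random

# Toy right-linear grammar: every rule has the form A -> t B or A -> t,
# where t is a terminal (word class) and A, B are non-terminals.
rules = {
    "NP":   [("det", "NBAR")],                 # NP   -> det NBAR
    "NBAR": [("adj", "NBAR"), ("n", None)],    # NBAR -> adj NBAR | n
}

def derive(symbol="NP"):
    # Expand non-terminals left to right until only terminals remain.
    terminal, rest = random.choice(rules[symbol])
    if rest is None:
        return [terminal]
    return [terminal] + derive(rest)

print(derive())   # e.g. ['det', 'adj', 'n']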

The computationally most interesting grammars are the least powerful ones, finite state grammars, since they can be implemented as algorithmically very efficient transition networks (reminiscent of the above described Markov Models, without the transition probabilities). In such networks, the computer program starts from the start symbol and moves along possible transition paths (arcs) between non-terminal symbols. Every path is labelled with a terminal symbol (word or word class), and can only be taken if the word class or word in question is encountered linearly at the next position to the right (in right linear grammars 94 ). When it encounters a “dead end” (i.e. a non-terminal

94 In left linear grammars, the algorithm would have to work from right to left, in order to avoid infinite loops created by the possibility of reiterating non-terminal production of the type A -> A t, as in ADJP -> ADJP adj.
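A minimal sketch of such a transition network, reusing the invented noun phrase grammar from above: states correspond to non-terminals, arcs are labelled with word classes, and an arc can only be followed when its label matches the word class at the current input position; the input is accepted if a final state is reached at the end.

# Transition network for the toy right-linear NP grammar above;
# the states and arc labels are invented for illustration.
arcs = {
    "NP":   {"det": "NBAR"},
    "NBAR": {"adj": "NBAR", "n": "END"},
}
final_states = {"END"}

def recognise(word_classes, state="NP"):
    # Follow labelled arcs from left to right; reject on a dead end.
    for wc in word_classes:
        next_state = arcs.get(state, {}).get(wc)
        if next_state is None:      # no matching arc from this state
            return False
        state = next_state
    return state in final_states

print(recognise(["det", "adj", "adj", "n"]))   # True
print(recognise(["det", "n", "adj"]))          # False (dead end)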
