
Chapter 4 - Planning of Prehension

4.6.1 Planning a hand location

When the eyes look at where an object is located, exteroceptive retinal signals and proprioceptive eye muscle activity specify a unique encoding of the space around them. Kuperstein (1988) argued that this information could be associated with arm muscle activity to put the arm in a configuration that would locate the wrist where the eyes are looking. This is much like a baby learning to calibrate her visual system with her motor system (see also Held & Bauer, 1967, 1974). In this model, Kuperstein used an adaptive neural network to compute, in essence, an inverse kinematic arm configuration (see Figure 4.13), correlating visual sensations to arm muscle settings using a hetero-associative memory (see Appendix C). In other words, patterns of activations on the eye muscles and retina were associated with patterns of activations of arm muscles through a set of adaptable weights. To demonstrate the feasibility of such an algorithm, Kuperstein (1988) tested it on both a simulated robot arm and a real robot arm. In these tests, he placed a high-contrast marker on a cylinder and used the center of the visual contrast as the location to which the two cameras should orient.
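The chapter does not spell out the mapping itself, so the sketch below is only one plausible reading of this association: a single layer of adaptable weights projecting a combined retinal/eye-muscle activation pattern onto an arm-muscle activation pattern. All sizes and names are illustrative assumptions, not values from Kuperstein (1988).

import numpy as np

# Illustrative sizes only -- assumptions, not Kuperstein's actual layer sizes.
N_RETINA = 48       # exteroceptive retinal activations
N_EYE_MUSCLES = 12  # proprioceptive eye-muscle activations
N_ARM_MUSCLES = 10  # arm-muscle activation pattern to be produced

# Hetero-associative memory: one set of adaptable weights.
W = np.zeros((N_ARM_MUSCLES, N_RETINA + N_EYE_MUSCLES))

def computed_arm_pattern(retina, eye_muscles):
    """Project the combined visual/oculomotor pattern through the
    weights to obtain the associated arm-muscle activation pattern."""
    sensory = np.concatenate([retina, eye_muscles])
    return W @ sensory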

The algorithm involves two separate phases of computation. In the learning phase, self-produced motor signals are generated to place the arm so that it is holding the cylinder (how the arm gets to that location is not part of this algorithm). The eyes look at the cylinder, and sensory information is projected through the hetero-associative memory, producing computed arm muscle signals. The difference between the actual and computed configuration is determined and then used to change the weights in the hetero-associative memory. Initially, the associated configuration will be quite wrong, but as the weights are updated, the associated configuration improves. In the use phase, the eyes locate the cylinder free in space and, using the weights currently stored in the network, the exact (within a small error) goal arm configuration is generated. Presumably, this goal information is passed on to a trajectory planner that then places the arm at that location and configuration (see the discussion of the Grossberg VITE model in Chapter 5 for trajectory planning).
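The two phases read naturally as an error-correcting (delta-rule style) weight update followed by a pure read-out. The sketch below is an interpretation under that assumption; the learning rate, the number of trials, and the synthetic stand-in for what the eyes report when fixating the held cylinder are all hypothetical, not taken from Kuperstein (1988).

import numpy as np

rng = np.random.default_rng(0)
N_SENSORY, N_ARM = 60, 10
LEARNING_RATE = 0.05            # assumed value; not given in the text

# Synthetic stand-in for the eyes fixating the held cylinder: a fixed
# mapping (unknown to the learner) from arm posture to sensory pattern.
_WORLD = rng.normal(size=(N_SENSORY, N_ARM)) / np.sqrt(N_ARM)
def sensory_pattern_for(arm_config):
    return np.tanh(_WORLD @ arm_config)

W = np.zeros((N_ARM, N_SENSORY))    # hetero-associative weights

# Learning phase: self-produced motor signals place the arm on the cylinder.
for _ in range(20000):
    actual_arm = rng.uniform(-1.0, 1.0, N_ARM)   # self-produced motor signal
    sensory = sensory_pattern_for(actual_arm)    # eyes look at the cylinder
    computed_arm = W @ sensory                   # association through weights
    error = actual_arm - computed_arm            # actual vs. computed config
    W += LEARNING_RATE * np.outer(error, sensory)

# Use phase: the eyes locate the cylinder free in space, and the stored
# weights generate the goal arm configuration for a trajectory planner.
def goal_arm_configuration(sensory):
    return W @ sensory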

The artificial neural network used by Kuperstein is seen in Figure 4.13. Some of the layers are used to recode the inputs before they are presented to the hetero-associative memory. For example, the first layer, at the bottom of the figure, recodes exteroceptive retinal and proprioceptive eye inputs into internal representations. On the bottom left side of the figure, the visual sensation of the cylinder is registered
