
Session WedAT1 Pegaso A Wednesday, October 10, 2012 ... - Lirmm


<strong>Session</strong> WedFVT9 Fenix 1 <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 16:15–17:30<br />

Multi-modal Learning II<br />

Chair<br />

Co-Chair Shingo Shimoda, RIKEN<br />

16:15–16:30 WedFVT9.1<br />

Experimental study on haptic communication of a human in a shared human-robot collaborative task<br />

Julie Dumora, Franck Geffard, Catherine Bidard<br />

LIST, CEA, France<br />

Thibaut Brouillet, EPSYLON, Montpellier, France<br />

Philippe Fraisse, LIRMM, Montpellier, France<br />

• Robot assistance and operator intention detection are proposed to overcome the limitations of a backdrivable robot in long-object manipulation<br />

• A solution that analyses haptic cues to tackle the rotation/translation ambiguity is proposed<br />

• Relationships between the operator’s intention of motion and haptic measurements are highlighted<br />

• Wrench measurements are shown to be incomplete information for detecting the operator’s intention of motion<br />

Rotation/translation ambiguity in joint human-robot manipulation of a long object (view from top down)<br />

16:45–17:00 WedFVT9.3<br />

Maximally Informative Interaction Learning for Scene Exploration<br />

Herke van Hoof, Oliver Kroemer, Heni Ben Amor and Jan Peters<br />

FG Intelligent Autonomous Systems, TU Darmstadt, Germany<br />

Max Planck Institute for Intelligent Systems, Germany<br />

• In dynamic environments, robots need to handle novel objects.<br />

• As annotated data is absent, robots need to learn from the results of their own actions.<br />

• Exploratory actions that maximize information gain allow more efficient learning.<br />

• This method allows a robot to learn efficiently with minimal prior information.<br />
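The bullets above describe an information-gain criterion for choosing exploratory actions. As a minimal sketch of that idea (not the authors’ implementation; the discrete belief, observation model, and action names below are illustrative assumptions), an agent can pick the action whose expected entropy reduction over object hypotheses is largest:

```python
import math

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief over object hypotheses."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0.0)

def expected_info_gain(belief, obs_model, action):
    """Expected entropy reduction from executing `action`.

    obs_model[action][hypothesis] maps each possible observation
    to its probability under that hypothesis.
    """
    h_prior = entropy(belief)
    # Marginal probability of each observation under the current belief.
    obs_probs = {}
    for hyp, p_hyp in belief.items():
        for obs, p_obs in obs_model[action][hyp].items():
            obs_probs[obs] = obs_probs.get(obs, 0.0) + p_hyp * p_obs
    # Expected posterior entropy, averaged over observations.
    h_post = 0.0
    for obs, p_o in obs_probs.items():
        if p_o == 0.0:
            continue
        posterior = {hyp: belief[hyp] * obs_model[action][hyp].get(obs, 0.0) / p_o
                     for hyp in belief}
        h_post += p_o * entropy(posterior)
    return h_prior - h_post

def best_action(belief, obs_model, actions):
    """Greedily select the most informative exploratory action."""
    return max(actions, key=lambda a: expected_info_gain(belief, obs_model, a))
```

For instance, with two hypotheses (“rigid” vs. “articulated”), a “push” action whose outcome differs between them yields positive expected gain, while an uninformative “look” action yields zero, so `best_action` selects the push.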

17:15–17:20 WedFVT9.5<br />

Towards Robotic Calligraphy<br />

Nico Huebel, Elias Mueggler, Markus Waibel, and Raffaello D’Andrea<br />

Institute for Dynamic Systems and Control, ETH Zurich, Switzerland<br />

• We present a prototype of a robotic system that learns how to draw Chinese characters.<br />

• First, the context of the project is presented.<br />

• Then, the experimental setup and the overall approach are introduced.<br />

• Finally, experimental results are presented and discussed.<br />

This is the experimental setup of our prototype, consisting of the KUKA Light Weight Robot, a Prosilica GC 655C camera, and a brush.<br />

16:30–16:45 WedFVT9.2<br />

Robots Move: Bootstrapping the Development of Object Representations using Sensorimotor Coordination<br />

Arren Glover and Gordon Wyeth<br />

Queensland University of Technology, Australia<br />

• This paper is concerned with the unsupervised generation of object models by fusing appearance and action<br />

• A FAB-MAP-based approach is combined with a partially observable semi-Markov decision process<br />

• Results indicate stronger bag-of-word object representations are formed under sensorimotor constraints<br />

<strong>2012</strong> IEEE/RSJ International Conference on Intelligent Robots and Systems<br />


Representations are matched via self-motion prediction to become object specific<br />
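The abstract pairs a FAB-MAP-based appearance model with a partially observable semi-Markov decision process; as a toy illustration of the bag-of-word representation itself (not the paper’s pipeline; the codebook and feature vectors are made up), local features can be quantised against a codebook into a normalised histogram and two objects compared by cosine similarity:

```python
import math

def quantize(features, codebook):
    """Assign each feature vector to the index of its nearest codeword."""
    def nearest(f):
        return min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(f, codebook[i])))
    return [nearest(f) for f in features]

def bow_histogram(features, codebook):
    """L2-normalised bag-of-words histogram over the codebook."""
    hist = [0.0] * len(codebook)
    for idx in quantize(features, codebook):
        hist[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]

def similarity(h1, h2):
    """Cosine similarity between two normalised histograms."""
    return sum(a * b for a, b in zip(h1, h2))
```

Two views of the same object quantise to similar histograms (similarity near 1), while views of objects with disjoint features score near 0.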

17:00–17:15 WedFVT9.4<br />

Perceptual Development Triggered by its Self-Organization in Cognitive Learning<br />

Yuji Kawai, Yukie Nagai, and Minoru Asada<br />

Graduate School of Engineering, Osaka University, Japan<br />

• Our goal: To investigate the role of visual development triggered by self-organization in the visual space in the case of the mirror neuron system (MNS)<br />

• Key idea: The self-triggered visual development adaptively changes the developmental speed.<br />

• Result: The self-triggered development maintains a sufficiently long period of immature vision, which can inhibit self-other differences in observation. Thus the development enhances the acquisition of the association between self and other (i.e., the MNS).<br />

(a) Early stage of development; (b) Later stage of development<br />

17:20–17:25 WedFVT9.6<br />

Learning Throwing and Catching Skills<br />

Jens Kober, Katharina Muelling, and Jan Peters<br />

AGBS, MPI for Intelligent Systems, Germany<br />

IAS, TU Darmstadt, Germany<br />

• Learning hitting skills by imitation and reinforcement learning<br />

• Generalizing hitting skills to catching skills<br />

• Learning to throw at targets<br />

• Combining throwing and catching skills to play catch<br />

A BioRob and a Barrett WAM playing catch.
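The abstract does not name the learning algorithm, so the following is only a generic sketch of episodic policy search for parameterised motor skills, in the spirit of reward-weighted methods used for throwing and hitting: sample perturbed policy parameters, roll each perturbation out, and average the perturbations weighted by their (non-negative) returns. All function and parameter names are illustrative assumptions.

```python
import random

def reward_weighted_update(theta, rollout, sigma, n_samples=20):
    """One iteration of episodic policy search.

    theta   -- current policy parameters (list of floats)
    rollout -- function mapping a parameter vector to a scalar return
    sigma   -- exploration noise standard deviation
    """
    samples = []
    for _ in range(n_samples):
        eps = [random.gauss(0.0, sigma) for _ in theta]
        r = rollout([t + e for t, e in zip(theta, eps)])
        samples.append((r, eps))
    # Returns are assumed non-negative; clip as a safeguard.
    total = sum(max(r, 0.0) for r, _ in samples) or 1.0
    # Shift parameters by the reward-weighted average perturbation.
    return [t + sum(max(r, 0.0) * e[i] for r, e in samples) / total
            for i, t in enumerate(theta)]

def learn(theta, rollout, iters=50, sigma=0.5):
    """Run several reward-weighted updates from an initial parameter vector."""
    for _ in range(iters):
        theta = reward_weighted_update(theta, rollout, sigma)
    return theta
```

On a toy one-dimensional “throwing” task with return exp(-(θ - 3)²), the update drifts θ from 0 toward the optimum at 3 over a few dozen iterations, since perturbations landing closer to the target receive larger weights.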
