
Session WedCT7 Vega Wednesday, October 10, 2012, 11:00–12:30

Robot Interaction with the Environment and Humans

Chair: Li-Chen Fu, National Taiwan Univ.
Co-Chair: Jan Peters, Tech. Univ. Darmstadt

11:00–11:15 WedCT7.1

A Brain-Robot Interface for Studying Motor Learning after Stroke

Timm Meyer (1), Jan Peters (1,2), Doris Brötz (3), Thorsten O. Zander (1), Bernhard Schölkopf (1), Surjo R. Soekadar (3), Moritz Grosse-Wentrup (1)

(1) Max Planck Institute for Intelligent Systems, Germany
(2) Intelligent Autonomous Systems Group, Technische Universität Darmstadt, Germany
(3) Institute of Medical Psychology and Behavioural Neurobiology, University of Tübingen, Germany

• System: combining robotics and EEG to study neural correlates of motor learning after stroke
• Pilot study: virtual 3D reaching movements with stroke patients
• Results: pre-trial bandpower in contralesional sensorimotor areas may be a neural correlate of motor learning (a sketch of the bandpower measure follows below).

Figure: subject wearing an EEG cap while attached to the robot arm
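The bandpower measure referenced above can be illustrated with a short sketch. This is not the authors' pipeline: the sampling rate, the single-channel setup, and the 8–12 Hz band below are assumptions for illustration only.

import numpy as np
from scipy.signal import welch

def bandpower(segment, fs, band=(8.0, 12.0)):
    """Power of a 1-D EEG segment within `band` (Hz), via Welch's PSD."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    # Rectangle-rule integral of the PSD over the band.
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

fs = 500                              # Hz, assumed sampling rate
pre_trial = np.random.randn(2 * fs)   # placeholder for one channel's 2 s pre-trial window
mu_power = bandpower(pre_trial, fs)   # e.g. mu-band (8-12 Hz) power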

11:30–11:45 WedCT7.3

Haptic Classification and Recognition of Objects Using a Tactile Sensing Forearm

Tapomayukh Bhattacharjee, James M. Rehg, and Charles C. Kemp
Center for Robotics and Intelligent Machines, Georgia Institute of Technology, USA

• Method (sketched below):
- PCA on concatenated time series
- k-NN on top components
• Leave-one-out cross-validation accuracy:
- Fixed vs. Movable: 91%
- 4 categories, (Fixed, Movable) × (Soft, Rigid): 80%
- Recognize which of 18 objects: 72%
• Limitations:
- Stereotyped motion of the arm
- Single contact region
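The stated recipe (PCA on concatenated time series, k-NN on the top components, leave-one-out scoring) maps directly onto standard tooling. A minimal sketch with scikit-learn follows; the array shapes, number of components, and k are assumptions, not values from the paper.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

# X: one row per trial, each row the taxel time series concatenated into
# a single feature vector. y: object-category labels (placeholder data).
X = np.random.randn(60, 2000)
y = np.random.randint(0, 4, size=60)     # e.g. the 4 categories

clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=3))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")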

12:00–12:15 WedCT7.5

Using a Minimal Action Grammar for Activity Understanding in the Real World

Douglas Summers-Stay, Ching L. Teo, Yezhou Yang, Cornelia Fermüller, and Yiannis Aloimonos
Computer Science, University of Maryland College Park, USA

• We have built a system that automatically builds an activity tree structure from observations of an actor performing complex manipulation activities.
• We created a dataset of these activities using Kinect RGB-D and SR4000 time-of-flight cameras.
• The grammatical structure used to understand these actions may provide insight into a connection between action and language understanding.
• Activities recognized include assembling a machine, making a sandwich, creating a valentine card, etc.

By noting key moments when objects come together, we build a tree for activity recognition (a minimal sketch follows below).
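One way to picture that tree construction: treat each "objects come together" moment as a merge event and build the tree bottom-up. This is a minimal sketch under that assumption; the event format and the Node class are hypothetical, not the authors' data structures.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    label: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def build_activity_tree(objects, merge_events):
    """objects: object names; merge_events: chronological (a, b) contact pairs."""
    nodes = {name: Node(name) for name in objects}
    root = None
    for a, b in merge_events:
        parent = Node(f"({nodes[a].label}+{nodes[b].label})", nodes[a], nodes[b])
        nodes[a] = nodes[b] = parent   # both names now refer to the merged unit
        root = parent
    return root

# Example: making a sandwich.
tree = build_activity_tree(
    ["bread", "cheese", "ham"],
    [("bread", "cheese"), ("bread", "ham")],
)
print(tree.label)   # ((bread+cheese)+ham)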

11:15–11:30 WedCT7.2<br />

A brain-machine interface to navigate mobile<br />

robots along human-like paths amidst obstacles<br />

Abdullah Akce, James Norton<br />

University of Illinois at Urbana-Champaign, USA<br />

Timothy Bretl<br />

University of Illinois at Urbana-Champaign, USA<br />

• We present an interface that<br />

allows a human user to specify a<br />

desired path with noisy binary<br />

inputs obtained from EEG<br />

• Desired paths are assumed to be<br />

geodesics under a cost function,<br />

which is recovered from existing<br />

data using structured learning<br />

• An ordering between all (local)<br />

geodesics is defined so that users<br />

can specify paths optimally<br />

• Results from human trials<br />

demonstrate the efficacy of this<br />

approach when applied to a<br />

simulated robotic navigation task<br />


The interface provides feedback by displaying an estimate of the desired path. The user gives left/right inputs based on the "clockwise" ordering of the desired path relative to the estimated path.
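The selection mechanism can be pictured as a bisection over the ordered family of candidate geodesics: each left/right input halves the interval that contains the desired path. The sketch below is deterministic for clarity; the actual interface must cope with noisy EEG inputs, which this sketch does not model, and the candidate list and input source are placeholders.

def select_path(candidates, get_input):
    """candidates: paths sorted by the clockwise ordering.
    get_input(estimate): returns 'left' or 'right' relative to `estimate`."""
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        estimate = candidates[mid]          # displayed to the user as feedback
        if get_input(estimate) == "left":   # desired path lies clockwise of it
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo]

# Example with a simulated user whose desired path is candidate 5 of 16.
paths = list(range(16))
chosen = select_path(paths, lambda est: "left" if 5 <= est else "right")
print(chosen)   # 5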

11:45–12:00 WedCT7.4

Proactive premature intention estimation for intuitive human-robot collaboration

Muhammad Awais and Dominik Henrich
Chair for Applied Computer Science III, University of Bayreuth, Germany

• Proactive premature intention estimation by determining the earliest possible trigger state in a Finite State Machine representing the human intention
• Selecting the most probable intention prematurely when more than one human intention is ambiguous
• Selection of the trigger state is based on the common state-transition sequence
• Premature intention recognition by the weights of the transition conditions (a minimal sketch follows after the figure)

Figure: two finite state machines, FSM 1 and FSM 2 (states S1…Sn, transitions a1…a5). Proactive premature intention recognition. Top: earliest possible trigger state selection for proactive intention recognition. Bottom: premature intention recognition.
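A minimal sketch of the two ideas above, under the assumption that each intention is modeled as an ordered sequence of weighted transition conditions: the trigger state is the earliest point where the intentions' sequences diverge, and premature recognition scores each intention by the weights of the transitions matched so far. All names and weights are illustrative, not the paper's FSMs.

def trigger_index(sequences):
    """First position at which the intentions' transition sequences diverge."""
    for i, conditions in enumerate(zip(*sequences)):
        if len(set(conditions)) > 1:
            return i
    return min(len(s) for s in sequences)

def premature_intention(intentions, observed):
    """intentions: {name: (conditions, weights)}; observed: conditions seen so far.
    Returns the intention whose matched transition weights sum highest."""
    def score(conds, weights):
        return sum(w for c, w, o in zip(conds, weights, observed) if c == o)
    return max(intentions, key=lambda name: score(*intentions[name]))

fsms = {
    "hand_over": (["a1", "a2", "a3"], [1.0, 1.0, 2.0]),
    "put_down":  (["a1", "a2", "a4"], [1.0, 1.0, 2.0]),
}
print(trigger_index([seq for seq, _ in fsms.values()]))   # 2: the divergence point
print(premature_intention(fsms, ["a1", "a2", "a3"]))      # hand_over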

12:15–12:30 WedCT7.6

On-Line Human Action Recognition by Combining Joint Tracking and Key Pose Recognition

E-Jui Weng and Li-Chen Fu

National Taiwan University, Taiwan

• We propose a boosting approach that combines pose estimation and upper-body tracking to recognize human actions.
• Our method can recognize human poses and actions at the same time.
• We apply the action recognition results as feedback to the pose estimation process to increase its efficiency and accuracy.
• We present an on-line spotting scheme based on the gradients of the hidden Markov model probabilities (sketched below).

Figure: system overview
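The spotting scheme can be sketched as follows: track each action HMM's log-likelihood over a sliding window and flag a candidate onset when the gradient (here a finite difference) of the best model's score jumps. The score(X) interface mimics hmmlearn's models; the window size and threshold are assumptions, not the paper's values.

import numpy as np

def spot_actions(frames, models, win=15, grad_thresh=2.0):
    """frames: per-frame feature vectors; models: {name: HMM with .score(X)}."""
    history = {name: [] for name in models}
    detections = []
    for t in range(win, len(frames) + 1):
        window = np.asarray(frames[t - win:t])
        for name, m in models.items():
            history[name].append(m.score(window))    # log P(window | model)
        best = max(models, key=lambda n: history[n][-1])
        scores = history[best]
        if len(scores) >= 2 and scores[-1] - scores[-2] > grad_thresh:
            detections.append((t, best))              # candidate action onset
    return detections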
