Session WedAT1 Pegaso A Wednesday, October 10, 2012 ... - Lirmm


Session WedFT1, Pegaso A, Wednesday, October 10, 2012, 16:15–17:30
Estimation and Sensor Fusion
Chair: Li-Chen Fu, National Taiwan Univ.
Co-Chair:

16:15–16:30 WedFT1.1
Contactless deflection sensing of concave and convex shapes assisted by soft mirrors
Michal Karol Dobrzynski, Ionut Halasz, Ramon Pericet-Camara and Dario Floreano
Laboratory of Intelligent Systems, Ecole Polytechnique Federale de Lausanne, Switzerland
• Deflection sensor capable of concave and convex shape estimation with no impact on the softness of the deflected substrate.
• Dynamic range of 130° with 0.8° resolution and 400 Hz data acquisition.
• Analytical model in good agreement with measurements (average error of 8%).
• Novel quick manufacturing method for soft PDMS mirrors based on surface tension.
Top: Sensor in standard configuration perceives concave deflections only. Middle and bottom: attaching a customized mirror extends the range towards convex deflections.
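A quick back-of-envelope check on the figures quoted above (derived numbers only; the variable names are ours, not the paper's): a 130° range at 0.8° resolution gives about 162 distinguishable deflection levels, and 400 Hz acquisition means one sample every 2.5 ms.

```python
# Derived from the quoted specs; names are illustrative, not from the paper.
dynamic_range_deg = 130.0   # total deflection range
resolution_deg = 0.8        # smallest resolvable angular step
rate_hz = 400.0             # data acquisition rate

levels = int(dynamic_range_deg / resolution_deg)   # distinguishable deflection levels
period_ms = 1000.0 / rate_hz                       # time between samples
print(levels, period_ms)  # 162 2.5
```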

16:45–17:00 WedFT1.3
Manipulator State Estimation with Low Cost Accelerometers and Gyroscopes
Philip Roan and Nikhil Deshpande
Robert Bosch LLC, USA and North Carolina State University, USA
Yizhou Wang and Benjamin Pitzer
University of California, Berkeley, USA and Robert Bosch LLC, USA
• Estimate manipulator joint angles using triaxial accelerometers and uniaxial gyroscopes
• Comparing three different compensation strategies:
  • Complementary Filter
  • Time-Varying Complementary Filter
  • Extended Kalman Filter
• Mean error of 1.3° over the joints estimated, resulting in end-effector errors of 6.1 mm or less
A generic joint between two links showing how accelerometers and gyroscopes are mounted.
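For readers unfamiliar with the first strategy listed above: a complementary filter blends the integrated gyroscope rate (accurate over short intervals but drifting) with the accelerometer-derived angle (noisy but drift-free). The sketch below is a generic one-axis version, not the authors' implementation; the blend constant alpha and the simulated measurements are assumptions.

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One step: trust the gyro integration short-term, the accelerometer long-term."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Joint held at 30 deg: gyro reads ~0 deg/s, accelerometer angle is noisy around 30.
angle = 0.0
for accel_meas in [29.0, 31.0, 30.5, 29.5, 30.0] * 40:
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=accel_meas, dt=0.01)
print(round(angle, 1))  # settles near 30
```

The Extended Kalman Filter variant replaces the fixed blend constant with a gain computed each step from the modeled noise covariances.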

17:15–17:30 WedFT1.5
Sensor Fusion Based Human Detection and Tracking System for Human-Robot Interaction
Kai Siang Ong, Yuan Han Hsu, and Li Chen Fu, Fellow, IEEE
Department of Computer Science & Information Engineering, Department of Electrical Engineering, National Taiwan University, Taiwan, R.O.C.
• Integrating the information from the laser range finder with that from the vision sensor using the Covariance Intersection algorithm
• Propose a behavior-response system: (1) human behavior inference, (2) robot reaction
• For human behavior intention inference, the proxemics framework is taken into consideration.
Diagram: proxemics zones (Public Space, Interaction Space) with quantities v_lrf, d, f, (x_f, y_f), θ_ia, θ_fd.
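The Covariance Intersection rule referenced above fuses two estimates whose cross-correlation is unknown by taking a convex combination of their information (inverse-covariance) matrices, which stays consistent for any weight ω in [0, 1]. A minimal sketch; the toy laser and vision measurements below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, omega):
    """CI fusion: P^-1 = w*Pa^-1 + (1-w)*Pb^-1, x = P(w*Pa^-1 xa + (1-w)*Pb^-1 xb)."""
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    P = np.linalg.inv(omega * Pa_inv + (1.0 - omega) * Pb_inv)
    x = P @ (omega * Pa_inv @ xa + (1.0 - omega) * Pb_inv @ xb)
    return x, P

# Two 2D position estimates: the laser is precise along x, the camera along y.
x_laser, P_laser = np.array([1.0, 2.0]), np.diag([0.04, 0.50])
x_vision, P_vision = np.array([1.2, 1.9]), np.diag([0.50, 0.04])
x_fused, P_fused = covariance_intersection(x_laser, P_laser, x_vision, P_vision, omega=0.5)
print(np.round(x_fused, 3))  # [1.015 1.907] -- each axis pulled toward the better sensor
```

In practice ω is usually chosen to minimize the trace or determinant of the fused covariance rather than fixed at 0.5.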

16:30–16:45 WedFT1.2
Deformable Structure From Motion by Fusing Visual and Inertial Measurement Data
Stamatia Giannarou, Zhiqiang Zhang, Guang-Zhong Yang
Hamlyn Centre for Robotic Surgery, Imperial College London, UK
• 3D reconstruction of a deforming surgical environment in MIS is important for intraoperative guidance.
• A novel adaptive UKF parameterization scheme is proposed to fuse vision information with data from an Inertial Measurement Unit for accurate 3D reconstruction.
• A direct application of the proposed framework is free-form deformation recovery to enable adaptive motion stabilization and visual servoing in robotically assisted laparoscopic surgery.
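For context on the UKF mentioned above: instead of linearizing the model, the unscented transform pushes a small set of sigma points through the nonlinearity and recomputes the mean and covariance from the results. The sketch below shows only the generic transform, with an illustrative state and κ; it does not reproduce the paper's adaptive parameterization scheme.

```python
import numpy as np

def sigma_points(x, P, kappa):
    """2n+1 sigma points for mean x and covariance P (basic unscented transform)."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)  # matrix square root of the scaled covariance
    pts = [x]
    for i in range(n):
        pts.append(x + S[:, i])
        pts.append(x - S[:, i])
    return np.array(pts)

def unscented_transform(pts, f, kappa):
    """Propagate sigma points through f, then recover the mean and covariance."""
    n = pts.shape[1]
    w = np.full(len(pts), 0.5 / (n + kappa))  # weights for the symmetric points
    w[0] = kappa / (n + kappa)                # weight for the central point
    y = np.array([f(p) for p in pts])
    mean = w @ y
    diff = y - mean
    cov = (w[:, None] * diff).T @ diff
    return mean, cov

# Push a Gaussian through a mild nonlinearity (square the first component).
x, P = np.array([1.0, 0.5]), np.eye(2) * 0.01
mean, cov = unscented_transform(sigma_points(x, P, kappa=1.0),
                                lambda p: np.array([p[0] ** 2, p[1]]), kappa=1.0)
print(mean)  # mean[0] ~ 1.01 = mu^2 + sigma^2, exact for a quadratic function
```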

17:00–17:15 WedFT1.4
Vision-Aided Inertial Navigation Using Virtual Features
Chiara Troiani and Agostino Martinelli
INRIA Rhone Alpes, France
• MAV equipped with a monocular camera and a laser pointer mounted on a fixed baseline, and an IMU
• Unique point feature used: laser spot projected on a planar surface and observed by the monocular camera
• Analytical derivation of all the observable modes
• Local decomposition and recursive estimation of the observable modes performed using an Extended Kalman Filter
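The recursive estimation referred to above follows the standard EKF predict/update cycle. This is the textbook form, not the paper's MAV-specific model; the scalar process and measurement functions in the example are illustrative assumptions.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One Extended Kalman Filter iteration: nonlinear predict, linearized update."""
    x_pred = f(x, u)                          # propagate state through the process model
    Fk = F(x, u)                              # process Jacobian at the previous state
    P_pred = Fk @ P @ Fk.T + Q
    Hk = H(x_pred)                            # measurement Jacobian at the prediction
    y = z - h(x_pred)                         # innovation
    S = Hk @ P_pred @ Hk.T + R                # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S)      # Kalman gain
    return x_pred + K @ y, (np.eye(len(x)) - K @ Hk) @ P_pred

# Toy scalar example: state = position, process adds the input, sensor measures x^2.
f = lambda x, u: x + u
F = lambda x, u: np.eye(1)
h = lambda x: x ** 2
H = lambda x: np.array([[2.0 * x[0]]])
x, P = np.array([1.0]), np.eye(1)
x, P = ekf_step(x, P, u=np.array([0.1]), z=np.array([1.21]),
                f=f, F=F, h=h, H=H, Q=np.eye(1) * 1e-3, R=np.eye(1) * 1e-2)
print(x)  # stays at ~1.1, since the measurement agrees with the prediction
```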

2012 IEEE/RSJ International Conference on Intelligent Robots and Systems

