
Session WedAT1 Pegaso A Wednesday, October 10, 2012 ... - Lirmm


Session WedAT1 Pegaso A Wednesday, October 10, 2012, 08:30–09:30

Pose Estimation

Chair
Co-Chair

08:30–08:45 WedAT1.1
A Flexible 3D Object Localization System for Industrial Part Handling
Øystein Skotheim, Sigurd A. Fjerdingen
SINTEF ICT, Trondheim, Norway
Morten Lind, Pål Ystgaard
SINTEF Raufoss Manufacturing AS, Trondheim, Norway

• A flexible system is presented that can scan and localize work pieces in 3D for assembly and pick-and-place operations
• The system includes software that recognizes and localizes objects based on an acquired 3D point cloud and a CAD model of the objects to search for
• The method is based on oriented point pairs and a Hough-like voting method
• An industrial prototype work cell for recognizing, grasping, and packaging chair parts in cardboard boxes is presented as an example application of the system

Vision and scanning robot, and result of scanning and recognition
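For readers unfamiliar with oriented point pairs, the core quantity in this family of methods is a four-dimensional point pair feature (one distance, three angles) that is quantized and used as a hash key for voting. A minimal sketch, assuming the standard distance-plus-angles feature and illustrative bin sizes; the paper's exact feature definition and discretization may differ:

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """4D feature of an oriented point pair (unit normals assumed):
    distance, angle(n1, d), angle(n2, d), angle(n1, n2)."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    d_hat = d / dist
    ang = lambda a, b: np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return (dist, ang(n1, d_hat), ang(n2, d_hat), ang(n1, n2))

def quantize(f, d_step=0.005, a_step=np.deg2rad(12)):
    """Discretize the feature so similar pairs hash to the same voting bin."""
    return (int(f[0] / d_step), int(f[1] / a_step),
            int(f[2] / a_step), int(f[3] / a_step))
```

Model pairs are stored in a hash table keyed by the quantized feature; at recognition time, each scene pair votes for the model pairs in its bin, Hough-style.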

09:00–09:15 WedAT1.3
3D Pose Estimation of Daily Objects Using an RGB-D Camera
Changhyun Choi and Henrik I. Christensen
College of Computing, Georgia Institute of Technology, USA

• A 6-DOF object pose estimation exploiting both depth and color information
• Does not rely on the table-top assumption
• Estimates the pose of a target object in heavy clutter
• Defines a color point pair feature which is employed in a voting scheme
• Exploiting color information significantly enhances the performance of the voting process in terms of both time and accuracy
• Extensive quantitative results of comparative experiments between our approach and a state-of-the-art approach are shown

08:45–09:00 WedAT1.2
6D pose estimation of textureless shiny objects using random ferns for bin-picking
Jose Rodrigues, João Xavier, Pedro Aguiar
Institute for Systems and Robotics, Instituto Superior Tecnico, UTL, Portugal
Jun-Sik Kim, Takeo Kanade
Robotics Institute, CMU, USA
Makoto Furukawa
Honda Engineering Co., Ltd., Japan

• Multi-light imaging system: image color changes with surface normal, enabling efficient pose estimation from patches
• Data-driven method for 6D pose estimation: random ferns map patches into pose hypothesis votes
• No need for object-specific tuning of parameters for various objects
• Fast and robust: recognition runs within 0.5 sec with 99.6% accuracy; 100 sequential picking tests involve strong occlusions, shadows, and inter-reflections

Recognition of 2 distinct textureless shiny objects

2012 IEEE/RSJ International Conference on Intelligent Robots and Systems
–119–

09:15–09:30 WedAT1.4
Multi-Camera Based Real-Time Configuration Estimation of Continuum Robots
Bernhard Weber and Paul Zeller
Institute of Automatic Control Engineering, TU München, Germany
Kolja Kühnlenz
Institute for Advanced Study, TU München, Germany

• Novel concept for configuration estimation of a continuum robot using a camera array
• Robot parameters and scene coordinates are estimated optimally and simultaneously
• Real-time capability is provided by fast feature matching and special treatment of sparse structure
• Robot parameters have a known but arbitrary relation to camera parameters

Continuum robot with attached camera array


Session WedAT2 Fenix 2 Wednesday, October 10, 2012, 08:30–09:30

Physical Human-Robot Interaction I

Chair Yasuhisa Hirata, Tohoku Univ.
Co-Chair Dongheui Lee, Tech. Univ. of Munich

08:30–08:45 WedAT2.1
Wire-type Human Support System Controlled by Servo Brakes
Yasuhisa Hirata, Yuki Tozaki and Kazuhiro Kosuge
Department of Bioengineering and Robotics, Tohoku University, Japan

• This paper presents a wire-type motion support system controlled by servo brakes
• This paper focuses on the feasible braking force region of the system
• We propose conditions for realizing a system with a large feasible braking force region, even without using a large number of brake units
• We conduct path-following experiments considering table tennis as an example of sports training

Sports training using the wire-type motion support system

09:00–09:15 WedAT2.3
Elastic Strips: Implementation on a Physical Humanoid Robot
Jinsung Kwon and Oussama Khatib
Stanford University, USA
Taizo Yoshikawa
Honda Research Institute USA, Inc., USA

• The Elastic Strip framework is a reactive motion modification approach for high-DOF robot tasks under real-time conditions
• The approach is implemented and tested on a humanoid robot
• The humanoid robot is able to reach a goal position by following the elastic strip path, which is updated in real time

08:45–09:00 WedAT2.2
Real-time Estimate of Period Derivatives using Adaptive Oscillators: Application to Impedance-Based Walking Assistance
R. Ronsse 1, S.M.M. De Rossi 2, N. Vitiello 2, T. Lenzi 2, B. Koopman 3, H. van der Kooij 3, M.C. Carrozza 2, and A.J. Ijspeert 4
1 Institute of Mechanics, Materials and Civil Engineering, Université catholique de Louvain, Belgium
2 BioRobotics Institute, Scuola Superiore Sant'Anna, Italy
3 Biomechanical Engineering Laboratory, University of Twente, The Netherlands
4 Biorobotics Laboratory, EPFL, Switzerland

• New approach to infer velocity and acceleration from a noisy quasi-periodic position signal
• Using adaptive oscillators
• Measurement noise is filtered out AND estimates are delay-free with respect to the actual derivatives
• Validation with an impedance-based strategy for assisting human walking in the LOPES device
• Intrinsically stable method

Knee velocity and acceleration during walking. Black: actual kinematics; blue: Kalman filter; red: new approach.
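The adaptive-oscillator principle can be illustrated with an adaptive-frequency Hopf oscillator in the style of Righetti and Ijspeert, whose intrinsic frequency locks onto the frequency of a periodic input. This is a generic sketch with illustrative gains, not the authors' estimator:

```python
import math

def adapt_frequency(signal, dt, omega0, gamma=8.0, mu=1.0, eps=2.0):
    """Adaptive-frequency Hopf oscillator: the intrinsic frequency `omega`
    drifts toward the frequency of the (quasi-)periodic input signal."""
    x, y, omega = 1.0, 0.0, omega0
    history = []
    for f in signal:
        r2 = x * x + y * y
        r = math.sqrt(r2) if r2 > 0 else 1e-9
        dx = gamma * (mu - r2) * x - omega * y + eps * f
        dy = gamma * (mu - r2) * y + omega * x
        domega = -eps * f * y / r          # Hebbian-style frequency adaptation
        x += dx * dt
        y += dy * dt
        omega += domega * dt
        history.append(omega)
    return history

# Feed a 30 rad/s sinusoid; omega starts at 25 rad/s and locks onto 30.
dt = 1e-3
sig = [math.sin(30.0 * k * dt) for k in range(100_000)]
om = adapt_frequency(sig, dt, omega0=25.0)
```

Once locked, the oscillator's phase and frequency track the input without delay, which is what enables delay-free derivative estimates of the underlying periodic signal.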

09:15–09:30 WedAT2.4
6D Workspace Constraints for Physical Human-Robot Interaction using Invariance Control with Chattering Reduction
Melanie Kimmel, Martin Lawitzky, Sandra Hirche
Institute of Automatic Control Engineering, Technische Universität München, Germany



Session WedAT3 Pegaso B Wednesday, October 10, 2012, 08:30–09:30

Field Robotics I

Chair Urbano Nunes, Univ. de Coimbra
Co-Chair Marcel Bergerman, Carnegie Mellon Univ.

08:30–08:45 WedAT3.1
Natural Feature Based Localization in Forested Environments
Meng Song, Fengchi Sun
College of Software, Nankai University, China
Karl Iagnemma
Department of Mechanical Engineering, MIT, USA

• A new feature-based scan matching method for solving the full 6D localization problem in forested environments
• Tree trunks are directly utilized as high-level features for registration
• The registration result is independent of the initial poses of the scans

09:00–09:15 WedAT3.3
Electro-hydraulically actuated forestry manipulator: Modeling and Identification
Pedro La Hera
Forest Technology, SLU, Sweden
Bilal Ur Rehman and Daniel Ortiz Morales
Applied Physics, Umeå University, Sweden

• We consider the problem of modeling the dynamics of an electro-hydraulic forestry manipulator
• Results of simulation tests show a significant correspondence of the model to the recorded data
• Such models are to be used further for model-based design

Experimental setup

08:45–09:00 WedAT3.2
A Practical Obstacle Detection System for Autonomous Orchard Vehicles
Gustavo Freitas
Dept. of Electrical Eng., Federal University of Rio de Janeiro, Brazil
Bradley Hamner, Marcel Bergerman and Sanjiv Singh
Field Robotics Center, Robotics Institute, Carnegie Mellon University, USA

• Goal: an obstacle detection system for autonomous orchard vehicle navigation between rows of trees
• Key requirement: to be affordable to growers, the system should not add to the hardware cost of the vehicle
• Our approach: detect people and bins using a laser scanner, a low-cost inertial measurement unit, and steering and wheel encoders
• Results: field experiments in apple orchards show the system reliably detects the target obstacles and, to an extent, small and moving ones

Person detected during field tests. The perceived obstacle is marked with a black star.

09:15–09:30 WedAT3.4
Rocker-Pillar: Design of the Rough Terrain Mobile Robot Platform with Caterpillar and Rocker-Bogie Mechanism
Dongkyu Choi, Jeong R. Kim, Sunme Cho, Seungmin Jung, and Jongwon Kim
School of Mechanical Engineering, Seoul National University, Seoul, Korea

• Mobile robot platform with a caterpillar at the front of the rocker-bogie mechanism
• Highly maneuverable in urban environments by using caterpillars
• High stability on rough terrain at high speed with the rocker-bogie mechanism
• Experiments are performed on rough terrain (rugged terrain, holes, steps, and stairs)

Figure: traversal of rugged terrain, a hole, a step, and stairs


Session WedAT4 Fenix 3 Wednesday, October 10, 2012, 08:30–09:30

Humanoid Robots II

Chair Paul Y. Oh, Drexel Univ.
Co-Chair

08:30–08:45 WedAT4.1
Online Walking Pattern Generation for Push Recovery and Minimum Delay to Commanded Change of Direction and Speed
Junichi Urata 1, Koichi Nishiwaki 2, Yuto Nakanishi 1, Kei Okada 1, Satoshi Kagami 2 and Masayuki Inaba 1
1 Department of Mechano-Informatics, The University of Tokyo, Japan
2 National Institute of Advanced Industrial Science and Technology (AIST)

• New online walking pattern generation method
• Direction and speed change with minimum delay
• Push recovery while walking

Block diagram: reference ZMP modification, LIPM-based CoM generation, ZMP-CoM loop, full-body dynamics compensation, and external-force stabilizer against the real world.

09:00–09:15 WedAT4.3
Applying Human Motion Capture to Design Energy-efficient Trajectories for Miniature Humanoids
Kiwon Sohn and Paul Oh
Mechanical Engineering and Mechanics, Drexel University, USA

• Reinforcement-learning-based approach to optimize motions for humanoids
• Optimizes the trajectories with respect to energy consumption and similarity to a human's natural motion
• Energy cost is estimated by a dynamic model (Propac) and validated using system identification (SID)
• With motion capture, human motions were collected and used to produce another cost term for optimization

08:45–09:00 WedAT4.2
Humanoid Full-body Controller Adapting Constraints in Structured Objects through Updating Task-level Reference Force
Shunichi Nozawa, Iori Kumagai, Yohei Kakiuchi, Kei Okada and Masayuki Inaba
Department of Mechano-Informatics, The University of Tokyo, Japan

• Force-control-based humanoid manipulation of structured objects
• Update of the hands' reference forces based on the movable direction, to adapt to operational force change
• Experiments for five different structured objects

Opening a door and going through it

Block diagram (humanoid controller based on update of reference force): the object velocity command and updated reference force feed a force-based humanoid controller, which sends joint angles to the real robot interacting with the structured object; the reaction force is fed back to the reference-force update.

09:15–09:30 WedAT4.4
Trajectory Design and Control of Edge-landing Walking of a Humanoid for Higher Adaptability to Rough Terrain
Koichi Nishiwaki and Satoshi Kagami
Digital Human Research Center, AIST, Japan
JST, CREST, Japan

• Online decision of stepping position, landing edge, and step timing for the balance maintenance of walking is presented
• Unknown roughness along the forward direction is explicitly considered
• Inclined sole landing is used for estimating the decrease of the support region
• The effect of multi-body dynamics is also considered when deciding the stepping position


Session WedAT5 Gemini 2 Wednesday, October 10, 2012, 08:30–09:30

Kinematic Modeling

Chair
Co-Chair

08:30–08:45 WedAT5.1
Constant curvature continuum kinematics as fast approximate model for the Bionic Handling Assistant
Matthias Rolf and Jochen J. Steil
Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld University, Germany

• The Bionic Handling Assistant is a lightweight continuum robot actuated pneumatically
• We evaluate the use of a constant curvature approach in order to simulate its kinematics
• We provide a new, elegant and parameterless method to deal with the geometric singularity
• The model is compared to ground-truth motion data recorded on the robot
• Result: only 1% relative prediction error
• Computationally very fast method, with a measured 47,900 Hz on a single CPU core
• Open-source implementation of kinematics and 3D visualization: http://www.cor-lab.de/software-continuum-kinematics-simulation

Kinematic structure and model (background) of the Bionic Handling Assistant
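The constant-curvature idea, and the geometric singularity it entails at zero curvature, can be illustrated for a single planar section. This is a generic textbook sketch using a Taylor-series fallback near zero curvature, not necessarily the paper's parameterless method:

```python
import math

def cc_tip(kappa, s):
    """Tip pose of a planar constant-curvature arc of length s, starting at
    the origin with tangent along +x. The closed form divides by kappa, so
    the straight (kappa -> 0) case is handled by a series expansion."""
    if abs(kappa) < 1e-7:
        # Series of sin(ks)/k and (1-cos(ks))/k around kappa = 0.
        x = s - kappa**2 * s**3 / 6.0
        y = kappa * s**2 / 2.0 - kappa**3 * s**4 / 24.0
    else:
        x = math.sin(kappa * s) / kappa
        y = (1.0 - math.cos(kappa * s)) / kappa
    theta = kappa * s  # tip tangent angle
    return x, y, theta
```

The two branches agree to high order at the switching threshold, so the model stays smooth as a section passes through the straight configuration.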

09:00–09:15 WedAT5.3
Forward Kinematic Model for Continuum Robotic Surfaces
Jessica Merino and Ian D. Walker
Electrical & Computer Engineering, Clemson University, USA
Anthony L. Threatt and Keith E. Green
School of Architecture, Clemson University, USA

• Continuum robotic two-dimensional surfaces have received little attention in robotics research
• Such surfaces hold potential use in many unusual applications that rigid-link robots cannot afford
• We introduce novel kinematic models for these continuum robotic surfaces
• We then compare the kinematic models to physical continuum surfaces and validate their performance

08:45–09:00 WedAT5.2
Fast Inverse Kinematics Algorithm for Large DOF System with Decomposed Gradient Computation Based on Recursive Formulation of Equilibrium
Ko Ayusawa and Yoshihiko Nakamura
Department of Mechano-Informatics, The University of Tokyo, Japan

• A fast inverse kinematics method for large-DOF systems is proposed
• The method utilizes nonlinear programming techniques that require the gradient of the cost function
• The gradient is computed from static equilibrium by the recursive Newton-Euler algorithm
• The method is tested on a large-DOF manipulator and a human musculoskeletal model

Conceptual diagram of the proposed method

09:15–09:30 WedAT5.4
A Method for Measuring the Upper Limb Motion and Computing a Compatible Exoskeleton Trajectory
Nathanael Jarrassé, Vincent Crocher and Guillaume Morel
Pierre et Marie Curie University, Institute for Intelligent Systems and Robotics, CNRS - UMR 7222, Paris, France
Emails: {jarrasse, crocher, morel}@isir.upmc.fr

- This paper deals with the problem of computing trajectories for an exoskeleton that match a motion recorded on a given subject.
- Direct mapping of human joint posture to the exoskeleton joint space cannot give good reproduction results without complex models of the robot and the human limb.
- Thanks to passive fixation mechanisms, and the dual property of isostaticity in the coupling, we were able to compute kinematically compatible postures for a 4-DoF exoskeleton with several interaction points, with an acceptable error (10 mm) and without requiring any model of the subject's arm.

Subject wearing the two splints with optical markers and performing a pointing task without the robot, during the recording phase


Session WedAT6 Gemini 3 Wednesday, October 10, 2012, 08:30–09:30

Mapping I

Chair
Co-Chair

08:30–08:45 WedAT6.1
IPJC: The Incremental Posterior Joint Compatibility Test for Fast Feature Cloud Matching
Yangming Li
Institute of Intelligent Machines, Chinese Academy of Sciences, China
Edwin Olson
Computer Science and Engineering, University of Michigan, USA

• We propose a new probabilistic data association method for feature clouds
• Dramatically faster than JCBB, while mathematically equivalent in the linear case
• Better false positive/true positive performance than JCBB in the non-linear case

09:00–09:15 WedAT6.3
Patch Map: A Benchmark for Occupancy Grid Algorithm Evaluation
Rehman S. Merali and Timothy D. Barfoot
University of Toronto Institute for Aerospace Studies, Canada

• Traditional occupancy grid (OG) mapping makes two assumptions for computational efficiency
• We present the full Bayesian solution for OG mapping, which makes no assumptions
• The full solution cannot be computed for realistic 2D (or 3D) maps, so we introduce a novel patch map algorithm
• The patch map is shown to approximate the full solution in a simple 1D test case, whereas traditional OG mapping does not
• The patch map is shown to work on realistic 2D data, where the full solution cannot be computed
• The patch map is a suitable benchmark to quantify/optimize future online OG mapping algorithms

(a) Traditional occupancy grid mapping; (b) patch map algorithm. The patch map algorithm better approximates the true information in the map.
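The two assumptions of the traditional formulation are, in the standard textbook treatment, per-cell independence and a static world; together they reduce mapping to an independent log-odds update per cell. A sketch of that traditional baseline (not the paper's patch-map algorithm), with an illustrative 0.7/0.3 inverse sensor model:

```python
import math

# Illustrative inverse sensor model: p(occupied | hit) = 0.7.
L_OCC, L_FREE = math.log(0.7 / 0.3), math.log(0.3 / 0.7)

def update(logodds, hit):
    """Per-cell log-odds update under the independence + static-map assumptions."""
    return logodds + (L_OCC if hit else L_FREE)

def prob(logodds):
    """Recover the occupancy probability from the log-odds value."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

l = 0.0                      # prior p = 0.5
for hit in [True, True, True, False]:
    l = update(l, hit)
# Three hits and one miss leave the cell more likely occupied than not.
```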

08:45–09:00 WedAT6.2
Fast Incremental Clustering and Representation of a 3D Point Cloud Sequence with Planar Regions
Francesco Donnarumma
Istituto di Scienze e Tecnologie della Cognizione, CNR, Italy
Vincenzo Lippiello
Dipartimento di Informatica e Sistemistica, Università degli Studi di Napoli Federico II, Italy
Matteo Saveriano
Institute of Automatic Control Engineering, Technische Universität München, Germany

• An incremental clustering technique to partition 3D points into planar regions is presented
• Incremental PCA and a compact geometrical representation (concave hull) for computational efficiency
• The algorithm works in real time on unknown and noisy data
• Validated both on synthetic and real (interior of a building) datasets

09:15–09:30 WedAT6.4
Independent Markov Chain Occupancy Grid Maps for Representation of Dynamic Environments
Jari Saarinen
Automation and Systems Technology, Aalto University, Finland
Henrik Andreasson, Achim J. Lilienthal
Center of Applied Autonomous Sensor Systems, Örebro University, Sweden

• Each cell is an independent Markov chain (iMac)
• The state transition parameters are modeled as two Poisson processes
• Online learning of parameters
• The model estimates both the expected occupancy and the behavior of dynamics at the cell level (static, dynamic, and shades of semi-static)
• The approach is evaluated with a long-term dataset taken from an AGV in production use

Evolution of model parameters
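As a toy illustration of this model class (my reading of the summary, not the paper's estimator): each cell is a two-state chain whose entry and exit events occur at two rates, which can be learned online by counting transitions per unit time spent in each state, and the long-run occupancy follows from the two rates:

```python
def learn_rates(observations, dt=1.0):
    """Estimate entry/exit rates of a two-state cell from a 0/1 occupancy
    sequence: events per unit time spent in the source state."""
    enter = exit_ = t_free = t_occ = 0
    for prev, cur in zip(observations, observations[1:]):
        if prev == 0:
            t_free += dt
            enter += (cur == 1)
        else:
            t_occ += dt
            exit_ += (cur == 0)
    lam_in = enter / max(t_free, dt)    # free -> occupied rate
    lam_out = exit_ / max(t_occ, dt)    # occupied -> free rate
    return lam_in, lam_out

def stationary_occupancy(lam_in, lam_out):
    """Long-run occupancy probability of the two-state chain."""
    return lam_in / (lam_in + lam_out)
```

A cell with a low entry rate and high exit rate reads as mostly free but occasionally dynamic, which is the kind of per-cell dynamics classification the bullet points describe.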


Session WedAT7 Vega Wednesday, October 10, 2012, 08:30–09:30

Multiple Mobile Robot Planning I

Chair Ronald Arkin, Georgia Tech.
Co-Chair

08:30–08:45 WedAT7.1
Combining Classification and Regression for WiFi Localization of Heterogeneous Robot Teams in Unknown Environments
Benjamin Balaguer, Gorkem Erinc, and Stefano Carpin
School of Engineering, University of California, Merced, U.S.A.

• Solves the problem of robot localization with wireless signals using data-driven machine learning classification and regression techniques
• Implementation of six classification algorithms, compared and evaluated on two different datasets
• A novel regression algorithm builds upon the best classification algorithms
• The end-to-end algorithm exploits robots' odometry with Monte Carlo Localization
• The algorithm works in completely unknown environments, builds maps efficiently, and localizes in real time

Localization traces (SLAM, WiFi MCL, and Ground Truth) for an indoor exploration scenario
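The odometry-plus-Monte-Carlo-Localization combination can be made concrete with a minimal particle-filter step: odometry moves the particles and the WiFi-predicted position weights them. The Gaussian position likelihood and all parameters here are illustrative assumptions, not the authors' learned models:

```python
import math, random

def mcl_step(particles, move, meas_pos, sigma=2.0):
    """One Monte Carlo Localization step: propagate particles by noisy
    odometry, weight them by a Gaussian likelihood around the position
    suggested by the WiFi regression, then resample."""
    # Motion update with additive noise.
    moved = [(x + move[0] + random.gauss(0, 0.1),
              y + move[1] + random.gauss(0, 0.1)) for x, y in particles]
    # Measurement update: closer to the WiFi estimate -> larger weight.
    weights = [math.exp(-((x - meas_pos[0])**2 + (y - meas_pos[1])**2)
                        / (2 * sigma**2)) for x, y in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Multinomial resampling keeps the particle count constant.
    return random.choices(moved, weights=weights, k=len(moved))
```

Repeated steps concentrate the particle cloud around the positions consistent with both the odometry and the WiFi measurements.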

09:00–09:15 WedAT7.3
A Bio-Inspired Developmental Approach to Swarm Robots Self-Organization
Yan Meng
Department of Electrical and Computer Engineering, Stevens Institute of Technology, USA
Hongliang Guo
Almende Organizing Networks, Netherlands

• Inspired by biological morphogenesis, a developmental approach, i.e., a network-motifs-based gene regulatory network model (NM-GRN), is proposed for self-organization of swarm robots to autonomously generate dynamic patterns that adapt to uncertain environments
• First, a general GRN model is proposed with predefined network motifs as building blocks; then a covariance matrix adaptation evolution strategy is applied to evolve the structure and parameters of the general GRN model to build up the NM-GRN
• Experimental results demonstrate the efficiency and robustness of the NM-GRN model

Figure: swarm robots traverse uncertain environments (sequence of simulation snapshots)

08:45–09:00 WedAT7.2
Distributed Coordination of a Formation of Heterogeneous Agents with Individual Regrets and Asynchronous Communications
Nicolas Carlési
LIRMM, Univ. Montpellier 2, France
Pascal Bianchi
Institut Télécom / Télécom Paris-Tech, CNRS – LTCI, France

• Objective: a distributed algorithm able to coordinate heterogeneous agents to perform various missions
• Proposed approach: each agent minimizes a regret function which takes into account natural motion constraints and individual objectives in order to find its control variables
• Simulations: comparison of the agents' behavior for different communication scenarios

The trajectories of the agents

09:15–09:30 WedAT7.4
Real-time Optimization of Trajectories that Guarantee the Rendezvous of Mobile Robots
Sven Gowal and Alcherio Martinoli
DISAL, EPFL, Switzerland

• The decentralized rendezvous of differential-wheeled robots is investigated
• The individual trajectories are optimized according to a user-defined cost function using receding horizon control
• Mathematical guarantees on the convergence of the robots to a common rendezvous location are given

Trajectories of 4 real Khepera III robots performing the rendezvous


Session WedAT8 Gemini 1 Wednesday, October 10, 2012, 08:30–09:30

Dynamics and Control I

Chair
Co-Chair

08:30–08:45 WedAT8.1
Contribution to the Modeling of Cable-suspended Parallel Robot Hanged on the Four Points
Mirjana Filipovic, Mihajlo Pupin Institute, University of Belgrade, Serbia
Ana Djuric, Wayne State University, Detroit, MI 48202, U.S.A.
Ljubinko Kevac, School of Electrical Engineering, The University of Belgrade, Serbia

• The kinematic and dynamic model of the Cable-suspended Parallel Robot (CPR) system is generated via the Jacobian matrix
• The dynamic model is calculated by mapping, through the Jacobian matrix, the resultant forces acting on the shaft of each motor and the forces acting on the camera carrier
• The software package AIRCAMA has been used to verify the selection of motors for any size of workspace, any weight or desired velocity of the camera carrier, and so on

Figure: CPR, (a) in 3D, (b) top view (motorized winches 1–4 with motors, pulleys, fiber-optic cable, camera platform, wall anchors, cable angles θ1–θ4, and the contour of the workspace)

09:00–09:15 WedAT8.3
Planning Trajectories on Uneven Terrain using Optimization and Non-Linear Time Scaling Techniques
Arun K. Singh +, K. Madhava Krishna + and Srikanth Saripalli ++
+ Robotics Research Centre, IIIT-Hyderabad, India
++ ASTRIL, Arizona State University, U.S.A.

• A path is computed in terms of parametric functions of a variable u
• A transformation from the variable u to time t is done through a scaling function h(u)
• A novel scaling function of the form h(u) = a*exp(b*u) is proposed
• A framework for choosing appropriate a and b is proposed
• The resulting velocity and acceleration through the scaling function satisfy stability constraints

Vehicle evolving on uneven terrain along a stable path
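The chain rule behind the scaling-function idea: if t is related to u through the scaling, then time-domain velocity and acceleration of a path x(u) follow directly, and h(u) = a*exp(b*u) gives particularly simple expressions because h'(u) = b*h(u). Reading h as the time-stretch factor dt/du is my assumption about the paper's convention:

```python
import math

def scaled_kinematics(dx, ddx, u, a, b):
    """Time-domain velocity and acceleration of a path x(u), given its
    parametric derivatives dx = x'(u), ddx = x''(u), under the time
    scaling dt/du = h(u) = a*exp(b*u)."""
    h = a * math.exp(b * u)
    v = dx / h                      # dx/dt = x'(u) / h(u)
    acc = (ddx - b * dx) / h**2     # d2x/dt2 = (x'' h - x' h') / h^3, h' = b h
    return v, acc
```

Stretching time (larger h) scales velocity by 1/h and acceleration by 1/h², which is what lets the planner slow the vehicle down until stability constraints are met without replanning the path.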

08:45–09:00 WedAT8.2
Modeling and Control of a Flying Robot for Contact Inspection
Matteo Fumagalli, Raffaella Carloni and Stefano Stramigioli
Robotics And Mechatronics, University of Twente, The Netherlands
Roberto Naldi, Alessandro Macchelli and Lorenzo Marconi
CASY, University of Bologna, Italy

Analysis of the interaction of a quadrotor UAV, enhanced with a multi-DoF manipulator, with a remote environment

• Modeling of the flying robot
• Control of a floating-base manipulation system for physical interaction
• Experimental validation

09:15–09:30 WedAT8.4
Distributed Voronoi partitioning for multi-robot systems with limited range sensors
K.R. Guruprasad
Department of Mechanical Engineering, National Institute of Technology Karnataka, Surathkal, India
Prithviraj Dasgupta
Department of Computer Science, University of Nebraska, Omaha, USA

• Each robot constructs the corresponding range-constrained Voronoi cell in a distributed manner
• Only the positional information about the robots within communication range is used
• The communication range should be at least twice the sensor range
• Relative position in a polar coordinate system is used
• Structured and efficient computation of Voronoi cells


Range-constrained Voronoi cell constructed using the proposed algorithm
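The geometric test behind the bullets above can be sketched generically (a membership test, not the paper's polar-coordinate construction): a point belongs to robot i's range-constrained cell iff it lies within sensing range of i and no neighbor is closer to it. Neighbors farther than twice the sensing range can be ignored, because they cannot be closer than the sensing range to any such point, which is why the stated communication range suffices:

```python
import math

def in_range_constrained_cell(p, me, neighbors, r_sense):
    """True iff point p is in robot `me`'s range-constrained Voronoi cell:
    within sensing range of `me`, and at least as close to `me` as to any
    neighbor within communication range 2 * r_sense."""
    d_me = math.dist(p, me)
    if d_me > r_sense:
        return False
    # Robots beyond 2*r_sense cannot contend for points within r_sense of me:
    # dist(p, q) >= dist(me, q) - dist(p, me) > 2*r_sense - r_sense >= d_me.
    comm = [q for q in neighbors if math.dist(me, q) <= 2 * r_sense]
    return all(d_me <= math.dist(p, q) for q in comm)
```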


Session WedAT9 Fenix 1 Wednesday, October 10, 2012, 08:30–09:30

Medical Robotics I

Chair Fumihito Arai, Nagoya Univ.
Co-Chair Jake Abbott, Univ. of Utah

08:30–08:45 WedAT9.1
Control Strategies of an Assistive Robot Using a Brain-Machine Interface
Andrés Úbeda, Eduardo Iáñez, Javier Badesa, Ricardo Morales, José M. Azorín, Nicolás García
Biomedical Neuroengineering Group, Miguel Hernández University of Elche, Spain

• An assistive application has been designed combining a BMI and a pneumatic planar robot
• The application consists of performing movements of the robot to reach several targets
• The non-invasive spontaneous BMI is based on the correlation of EEG maps
• Two control protocols are compared: a hierarchical control protocol and a directional control protocol

09:00–09:15 WedAT9.3<br />

Optical Encoding of Catheter Motion Capture for<br />

Quantitative Evaluation in Endovascular Surgery<br />

Hirokatsu Kodama, Carlos Tercero, Katsutoshi Ooe,<br />

Chaoyang Shi, Seiichi Ikeda and Toshio Fukuda<br />

Micro-Nano Systems Engineering, Nagoya University, Japan<br />

Fumihito Arai<br />

Mechanical Science and Engineering, Nagoya University, Japan<br />

Makoto Negoro<br />

Neurosurgery, Fujita Health University, Japan<br />

Ikuo Takahashi<br />

Neurosurgery, Anjo Kosei Hospital, Japan<br />

Guiryong Kown<br />

Product Division, Terumo Clinical Supply, Japan<br />

• Optical Encoding of linear and<br />

rotational motion of the catheter<br />

• Quantitative evaluation of human skills<br />

with the Cyber-physical system<br />

• Comparison of catheter manipulation<br />

skills between novice users and<br />

medical doctor<br />

Cyber-Physical System for<br />

technical skill evaluation<br />

08:45–09:00 WedAT9.2<br />

Non-ideal Behaviors of Magnetically Driven<br />

Screws in Soft Tissue<br />

Arthur W. Mahoney¹, Nathan D. Nelson², Erin M. Parsons²<br />

and Jake J. Abbott²<br />

¹ School of Computing, ² Dept. of Mech. Engineering, University of Utah, USA<br />

• Untethered, magnetically driven<br />

screws have been studied for the<br />

delivery of biomedical payloads<br />

through human tissue.<br />

• We present magnetic<br />

phenomena that cause non-ideal<br />

screw behavior while operating in<br />

soft tissue.<br />

• The undesired phenomena harm screw stability and control during normal operation, and must be accounted for in closed-loop control.<br />

• We demonstrate our results with<br />

an artificial tissue phantom.<br />


While steering, magnetic torque will always<br />

cause a magnetic screw to yaw perpendicular<br />

to the desired direction. (Image sequence)<br />

09:15–09:30 WedAT9.4<br />

A Voice-Coil Actuated Ultrasound Micro-Scanner<br />

for Intraoral High Resolution Impression Taking<br />

Thorsten Vollborn, Daniel Habor, Simon Junk,<br />

Klaus Radermacher and Stefan Heger<br />

Chair of Medical Engineering, RWTH Aachen University, Germany<br />

• Silicone based impression-taking of<br />

prepared teeth is well-established but<br />

error-prone and inefficient for CAD/CAM of<br />

dental prosthetics<br />

• We designed an intraoral ultrasonic device<br />

for micrometer resolution scanning<br />

• We used a 2-DOF high-frequency concept<br />

based on voice-coil technology for precise<br />

& highly dynamic scanning<br />

• In this contribution, we describe the set-up and investigate the dependence of the lateral displacement of the micro-scanner's end-effector on the oscillation rate using laser triangulation


Session WedAT10 Lince Wednesday, October 10, 2012, 08:30–09:30<br />

Skill Learning – Dynamics<br />

Chair Rüdiger Dillmann, KIT Karlsruhe Inst. for Tech.<br />

Co-Chair<br />

08:30–08:45 WedAT10.1<br />

Autonomous Online Learning of Velocity<br />

Kinematics on the iCub: a Comparative Study<br />

Alain Droniou, Serena Ivaldi, Vincent Padois and Olivier Sigaud<br />

Institut des Systèmes Intelligents et de Robotique - CNRS UMR 7222,<br />

Université Pierre et Marie Curie<br />

• Incremental and autonomous online<br />

learning of velocity kinematics, from<br />

scratch and in a limited time<br />

• Visual servoing task with general target<br />

and end-effector (unknown tool)<br />

• Three contexts: reaching the target in one<br />

or two different workspaces; tracking a<br />

target moving unpredictably by a human<br />

• Comparison of three ML algorithms:<br />

LWPR, XCSF and ISSGPR<br />

• Testing generalization capabilities, velocity<br />

in learning and robustness of parameters<br />

• ISSGPR performs better in all studied<br />

criteria<br />

The learning contexts<br />

09:00–09:15 WedAT10.3<br />

Learning Concurrent Motor Skills in Versatile<br />

Solution Spaces<br />

Christian Daniel, Gerhard Neumann and Jan Peters<br />

FG Intelligent Autonomous Systems, TU Darmstadt, Germany<br />

Max Planck Institute for Intelligent Systems, Germany<br />

• Many interesting motor skill tasks<br />

have several distinct solutions.<br />

• Representing multiple solutions<br />

ensures operability of the robot<br />

even if the environment changes<br />

and can in addition lead to faster<br />

learning.<br />

• We present a hierarchical policy<br />

search method which can<br />

simultaneously learn multiple<br />

motor skills to solve complex<br />

tasks.<br />

08:45–09:00 WedAT10.2<br />

Online Learning of Inverse Dynamics via<br />

Gaussian Process Regression<br />

Joseph Sun de la Cruz<br />

National Instruments, USA<br />

Bill Owen Dana Kulić<br />

University of Waterloo, Canada<br />

• On-line learning of the inverse dynamics of a robot manipulator with<br />

Gaussian Process Regression<br />

• Model is trained on a sparse subset of the observed data, with incremental<br />

updates to both the model and the hyper-parameters<br />

• Investigate the impact of full or partial prior information on the convergence<br />

• Comparison to existing approaches shows improved accuracy and reduced<br />

computational requirements<br />

Computation Time Required for a Single Prediction<br />
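The sparse-subset idea described above can be illustrated with a toy GP regressor from (q, q̇, q̈) to joint torque. This is a hedged sketch, not the paper's implementation: the novelty test, kernel, and class names are all illustrative stand-ins for its incremental sparse updates and hyper-parameter adaptation.

```python
import numpy as np

def rbf(A, B, length=1.0):
    """Squared-exponential kernel between row-stacked input vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / length ** 2)

class SparseGPInverseDynamics:
    """Toy GP regressor that keeps only a novelty-filtered subset of
    the observed (state, torque) pairs, then predicts with the
    standard GP mean on that subset."""

    def __init__(self, max_points=50, novelty=0.1, noise=1e-2):
        self.X, self.y = [], []
        self.max_points, self.novelty, self.noise = max_points, novelty, noise

    def update(self, x, tau):
        x = np.asarray(x, dtype=float)
        # Admit a sample only if it is far enough from the stored subset.
        if not self.X or min(np.linalg.norm(x - p) for p in self.X) > self.novelty:
            self.X.append(x)
            self.y.append(float(tau))
            self.X, self.y = self.X[-self.max_points:], self.y[-self.max_points:]

    def predict(self, x):
        X, y = np.array(self.X), np.array(self.y)
        K = rbf(X, X) + self.noise * np.eye(len(X))   # noisy Gram matrix
        k = rbf(np.asarray(x, dtype=float)[None, :], X)[0]
        return float(k @ np.linalg.solve(K, y))       # GP posterior mean
```

Because predictions cost O(m²) in the subset size m rather than in the full data size, the sparse subset is what keeps single-prediction time bounded during online operation.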

09:15–09:30 WedAT10.4<br />

Learning Robot Dynamics with<br />

Kinematic Bézier Maps<br />

Stefan Ulbrich, Michael Bechtel,<br />

Tamim Asfour and Rüdiger Dillmann<br />

Humanoids and Intelligence Systems Lab<br />

Institute for Anthropomatics at the Karlsruhe Institute for Technology<br />

• Novel model-based machine<br />

learning algorithm<br />

• Tailored to efficiently learn the<br />

inverse dynamics<br />

• Based on the Kinematic Bézier<br />

Maps algorithm<br />

• Exact encoding of the equations of<br />

motion<br />

• Batch and incremental (online)<br />

learning<br />


Plot of the Coriolis and centripetal<br />

forces on a robot joint


Session WedBT1 Pegaso A Wednesday, October 10, 2012, 09:30–10:30<br />

Sensor Fusion<br />

Chair Reid Simmons, Carnegie Mellon Univ.<br />

Co-Chair Magnus Jansson, KTH Royal Inst. of Tech.<br />

09:30–09:45 WedBT1.1<br />

Ground Plane Feature Detection in Mobile<br />

Vision-Aided Inertial Navigation<br />

Ghazaleh Panahandeh, Nasser Mohammadiha, Magnus Jansson<br />

KTH Royal Institute of Technology, Sweden<br />

• The hardware of the mobile system consists of a monocular camera<br />

mounted on an inertial measurement unit (IMU).<br />

• Exploiting the complementary nature of the IMU-camera sensor fusion<br />

system for estimating the camera translation and rotation, the developed<br />

algorithm consists of two parts:<br />

I. Homography-based outlier rejection<br />

II. Normal-based outlier rejection<br />

10:00–10:15 WedBT1.3<br />


Gaussian Process for lens distortion modeling<br />

Pradeep Ranganathan and Edwin Olson<br />

Computer Science and Engineering, University of Michigan, USA<br />

Contributions:<br />

• Incorporate a GP model into a factor<br />

graph inference framework optimized for<br />

Gaussian factor potentials.<br />

• Model evaluation based on test image set.<br />

• GP distortion models achieve accuracy<br />

comparable to the best parametric model.<br />

• GP models provide an implicit but rigorous<br />

framework for automatically determining<br />

distortion model complexity.<br />

09:45–10:00 WedBT1.2<br />

Sensor Fusion for<br />

Human Safety in Industrial Workcells<br />

Paul Rybski, Peter Anderson-Sprecher, Daniel Huber,<br />

Chris Niessl, and Reid Simmons<br />

The Robotics Institute, Carnegie Mellon University, USA<br />

• We present a sensor-based approach<br />

for ensuring safety of people in<br />

proximity to robots.<br />

• Our approach fuses data from multiple<br />

3D sensors into an evidence grid.<br />

• People and robots are surrounded by a<br />

safety and danger zone, respectively.<br />

• Impending intersections between safety<br />

and danger zones are identified and the<br />

robot is stopped.<br />


The person’s position is registered by the<br />

3D sensors (shown in green) and is<br />

compared against the 3D volume filled by<br />

the robot (shown in red).<br />
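The evidence-grid fusion and zone-intersection check described above can be sketched as follows. This is an illustrative simplification of the approach (log-odds fusion under an independent-sensor assumption, and a plain distance test for zone intersection); the function names and the exact update rule are assumptions, not the authors' code.

```python
import numpy as np

def fuse_evidence(sensor_probs):
    """Fuse per-sensor occupancy probabilities into one evidence grid
    by summing log-odds, then mapping back to probability."""
    logodds = sum(np.log(p / (1.0 - p)) for p in sensor_probs)
    return 1.0 / (1.0 + np.exp(-logodds))

def must_stop(person_cells, robot_cells, margin):
    """Stop the robot when any occupied person cell comes within
    `margin` of any robot cell, i.e. when the person's safety zone
    and the robot's danger zone would intersect."""
    P = np.asarray(person_cells, dtype=float)
    R = np.asarray(robot_cells, dtype=float)
    d = np.linalg.norm(P[:, None, :] - R[None, :, :], axis=2)
    return bool((d < margin).any())
```

Fusing in log-odds space makes agreeing sensors reinforce each other (two 0.7 readings fuse to well above 0.8) while a non-committal 0.5 reading leaves the estimate unchanged.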

10:15–10:30 WedBT1.4<br />

Distributed Altitude and Attitude Estimation<br />

from Multiple Distance Measurements<br />

Maximilian Kriegleder, Raymond Oung and Raffaello D’Andrea<br />

Institute for Dynamic Systems and Control, ETH Zurich, Switzerland<br />

• Distance sensors are attached to<br />

known positions on a rigid body.<br />

• Attitude and altitude may be<br />

estimated directly when sensors are<br />

centrally measurable.<br />

• For distributed sensor networks, this<br />

approach is modified to a scalable,<br />

distributed scheme.<br />

• In the limit of sharing information<br />

between sensor nodes, the<br />

estimates approach those<br />

obtainable in a centralized system.<br />

Each module of the Distributed Flight Array<br />

obtains a distance measurement and<br />

communicates with its neighbours.
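The centralized estimate that the distributed scheme approaches in the limit can be sketched as a least-squares plane fit. This is a hedged baseline, not the paper's distributed algorithm: the model d = c + a·x + b·y, the small-angle reading of the slopes as attitude, and the function name are all illustrative assumptions.

```python
import numpy as np

def fit_ground_plane(sensor_xy, distances):
    """Fit d = c + a*x + b*y to downward distance readings of sensors
    at known body-frame positions (x, y).  `c` is the altitude at the
    body origin; `a` and `b` are the tilt slopes along the body axes,
    from which attitude follows under a small-angle assumption."""
    P = np.asarray(sensor_xy, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Design matrix [1, x, y]; least squares gives (c, a, b).
    A = np.column_stack([np.ones(len(P)), P[:, 0], P[:, 1]])
    c, a, b = np.linalg.lstsq(A, d, rcond=None)[0]
    return c, a, b
```

With at least three non-collinear sensors the fit is unique, which is why attitude and altitude are directly observable when all measurements are available centrally.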


Session WedBT2 Fenix 2 Wednesday, October 10, 2012, 09:30–10:30<br />

Physical Human-Robot Interaction II<br />

Chair Yasuhisa Hirata, Tohoku Univ.<br />

Co-Chair Dongheui Lee, Tech. Univ. of Munich<br />

09:30–09:45 WedBT2.1<br />

Human-Humanoid Haptic Joint Transportation<br />

Case Study<br />

Antoine Bussy André Crosnier<br />

Université Montpellier 2-CNRS LIRMM, France<br />

Abderrahmane Kheddar François Keith<br />

CNRS-AIST Joint Robotics Laboratory, Japan<br />

• Study of a Human-Human Joint<br />

Transportation Task<br />

• Task Decomposition in Motion Primitives<br />

to estimate the leader's intentions<br />

• Trajectory-based Impedance Control<br />

• Experiments with our humanoid robot<br />

HRP2 to assess our approach<br />

HRP2 carrying a table with a<br />

human partner<br />

10:00–10:15 WedBT2.3<br />

Feedback Motion Planning and Learning from<br />

Demonstration in Physical Robotic Assistance:<br />

Differences and Synergies<br />

Martin Lawitzky Jose Ramon Medina<br />

Dongheui Lee Sandra Hirche<br />

Institute of Automatic Control Engineering<br />

Technische Universität München, Germany<br />

• Goal-directed physical assistance behavior<br />

generated through<br />

• Feedback Motion Planning (SNG)<br />

• Learning from Demonstration (tHMM)<br />

• Is exploitation of complementary strengths<br />

possible through fusion?<br />

• Three fusion methods proposed:<br />

• Hierarchical multi-criterion optimization<br />

• Virtual demonstration from planning<br />

• Uncertainty-based blending<br />

• Evaluation in 2-DoF VR and in 6-DoF on<br />

highly integrated mobile manipulator<br />

• Fusion outperforms individual algorithms<br />

09:45–10:00 WedBT2.2<br />

Disagreement-Aware Physical Assistance<br />

Through Risk-Sensitive Optimal Feedback<br />

Control<br />

J.R. Medina, T. Lorenz, D. Lee and S. Hirche<br />

Institute of Automatic Control Engineering<br />

Technische Universität München, Germany<br />

• Goal: intuitive proactive physical robotic assistance, which requires a human haptic behavior model for anticipation<br />

• Challenge: robot predictions might<br />

disagree with real human intentions<br />

• Method: probabilistic model-based anticipation using risk-sensitive control with online disagreement estimation<br />

• Result: adaptive robot role allocation<br />

depending on estimated disagreement and<br />

prediction uncertainty. Psychological<br />

experiments indicate higher helpfulness<br />

and decreased human effort.<br />

10:15–10:30 WedBT2.4<br />


Paper Title in One or Two Lines<br />

Han Pang Huang*, Tzu-Hao Huang, Ching-An Cheng,<br />

Jiun-Yih Kuan, Po-Ting Lee, Shih-Yi Huang<br />

Department of Mechanical Engineering, National Taiwan University, Taiwan<br />

• Design concept of BTSA: backdrivable<br />

torsion spring actuator is constructed<br />

using a simple torsion spring, bevel gears,<br />

and an actuator.<br />

• A human-robot interaction model is<br />

proposed to investigate the dynamic<br />

properties of the system.<br />

• Hybrid control that switches between<br />

direct EMG biofeedback control and<br />

zero impedance control is proposed to<br />

provide a new rehabilitation training and<br />

walking assistance mechanism for<br />

rehabilitation.<br />

• Both simulations and experiments are<br />

conducted to show some desired<br />

properties of the proposed BTSA and<br />

hybrid control system.<br />


Design Concept of BTSA &<br />

Hybrid Control of direct EMG<br />

biofeedback control and zero<br />

impedance control


Session WedBT3 Pegaso B Wednesday, October 10, 2012, 09:30–10:30<br />

Field Robotics II<br />

Chair Urbano Nunes, Univ. de Coimbra<br />

Co-Chair Marcel Bergerman, Carnegie Mellon Univ.<br />

09:30–09:45 WedBT3.1<br />

Monocular Visual Navigation of an Autonomous<br />

Vehicle in Natural Scene Corridor-like Environments<br />

Ji Zhang, George Kantor,<br />

Marcel Bergerman, and Sanjiv Singh<br />

The Robotics Institute<br />

Carnegie Mellon University, USA<br />

• Goal: Autonomous vehicle row following in<br />

modern orchards using a monocular camera<br />

• Approach:<br />

I. Reconstruct tree rows based on<br />

structure from motion while<br />

minimizing the variance of the vehicle look-ahead point<br />

II. Motion estimation is obtained from<br />

monocular camera and wheel<br />

encoders only<br />

• Results: 1.7 km of successful autonomous<br />

driving in a variety of orchards with different<br />

environmental conditions<br />

10:00–10:15 WedBT3.3<br />

Piecewise Affine Control for Fast Unmanned<br />

Ground Vehicles<br />

André Benine-Neto and Christophe Grand<br />

Institute of Intelligent Systems and Robotics (ISIR),<br />

University Pierre et Marie Curie, France<br />

• Steering control of an Unmanned Ground Vehicle using steering angles and independent wheel torques<br />

• Nonlinear behavior of lateral forces taken<br />

into account in control synthesis through a<br />

Piecewise Affine approach.<br />

• Control synthesis through optimization<br />

problem involving Linear Matrix<br />

Inequalities. Stability ensured by<br />

Piecewise Quadratic Lyapunov function<br />

• Enhanced performance verified through<br />

simulations on nonlinear vehicle model<br />

Unmanned Ground Vehicle<br />

09:45–10:00 WedBT3.2<br />

The AmphiHex: a Novel Amphibious Robot with<br />

Transformable Leg-flipper Composite<br />

Propulsion Mechanism<br />

Xu Liang, Min Xu, Lichao Xu, Peng Liu, Xiaoshuang Ren,<br />

Ziwen Kong, Jie Yang, Shiwu Zhang<br />

Department of Precision Machinery and Precision Instrumentation,<br />

University of Science and Technology of China, China<br />

• The detailed structure design of the<br />

transformable leg-flipper propulsion<br />

mechanism and its drive module is<br />

introduced.<br />

• A preliminary theoretical analysis is<br />

conducted to study the interaction between<br />

the elliptic leg and transitional environment<br />

such as granular medium.<br />

• An orthogonal experiment is designed to<br />

study the leg locomotion in the sandy and<br />

muddy terrain with different water content.<br />

• Basic propulsion experiments of AmphiHex-I are conducted to verify the locomotion capability on land and underwater.<br />

10:15–10:30 WedBT3.4<br />

Tube-type Active Scope Camera<br />

with High Mobility and Practical Functionality<br />

Hiroaki Namari, Kazuhito Wakana,<br />

Michihisa Ishikura, Masashi Konyo and Satoshi Tadokoro<br />

Graduate School of Information Sciences, Tohoku University, Japan<br />

• A tube-type active scope camera with enhanced mobility and functionality for search and rescue was developed<br />

• A smart structure to mount vibration<br />

motors on a long tubular cable without<br />

any rigid projections was designed<br />

• Auditory communication and gravity-direction indication systems were mounted for practical use<br />

• High performance was confirmed at a<br />

training site for first responders<br />


Integrated system and a running experiment in the rubble<br />


Session WedBT4 Fenix 3 Wednesday, October 10, 2012, 09:30–10:30<br />

Humanoid Robots III<br />

Chair Paul Y. Oh, Drexel Univ.<br />

Co-Chair<br />

09:30–09:45 WedBT4.1<br />

Design Methodology for the Thorax and Shoulder of<br />

Human Mimetic Musculoskeletal Humanoid Kenshiro<br />

-A Thorax structure with Rib like Surface -<br />

Toyotaka Kozuki, Hironori Mizoguchi, Yuki Asano, Masahiko Osada,<br />

Takuma Shirai, Junichi Urata, Yuto Nakanishi, Kei Okada, Masayuki Inaba<br />

Univ. of Tokyo, Japan<br />

• Design concept of Detailed<br />

Musculoskeletal Humanoid<br />

Kenshiro’s Upper limb<br />

• Joint Structure<br />

• Muscle Arrangement<br />

• Range of Motion<br />

• Mechanical Key Points<br />

of the Upper Limb<br />

• Rib like Thorax<br />

• Planar muscle<br />

• Muscle Cushion<br />

• Open Sphere Joint<br />

Prototype of Human Mimetic<br />

Musculoskeletal Humanoid<br />

“Kenshiro” Upper Limbs<br />

10:00–10:15 WedBT4.3<br />

Dynamic motion imitation of two articulated systems<br />

using nonlinear time scaling of joint trajectories<br />

Karthick Munirathinam, Sophie Sakka, Christine Chevallereau<br />

IRCCyN, Robotics Team, Ecole Centrale de Nantes,<br />

Nantes, France<br />

• An approach for motion imitation of<br />

articulated systems with balance and<br />

physical constraints using optimization.<br />

• We modify the temporal evolution of joint<br />

motion rather than the traditional way of<br />

formulating on geometric evolution of joint<br />

motion of the imitating system.<br />

• Time scaling based imitation has an<br />

immanent advantage of tracking the joint<br />

trajectory of the reference system by<br />

compromising on joint velocity and<br />

acceleration.<br />

• Simulations are carried out to validate our<br />

proposed method.<br />

Model: Foot connected to n serial links (n = 4)<br />

09:45–10:00 WedBT4.2<br />

State Estimation of a Walking<br />

Humanoid Robot<br />

Xinjilefu and Christopher G. Atkeson<br />

Robotics Institute, Carnegie Mellon University, USA<br />

• We compare two approaches to designing<br />

Kalman filters for walking systems.<br />

• One design uses LIPM dynamics, the<br />

other uses more complete Planar<br />

dynamics.<br />

• The LIPM design is more robust to modeling error.<br />

• Planar design estimates COM height and<br />

joint velocities, and tracks horizontal COM<br />

translation more accurately.<br />

• We also investigate different ways of<br />

handling contact states and force sensing<br />

in state estimation.<br />


The Sarcos Humanoid Robot<br />
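The LIPM-based filter design compared above can be sketched as a standard Kalman filter on Linear Inverted Pendulum Model dynamics. This is a minimal illustrative sketch, not the authors' implementation: the 1-D state, Euler discretization, and all noise parameters are assumptions.

```python
import numpy as np

def lipm_kalman(zs, ps, dt=0.01, zc=0.8, g=9.81, q=1e-4, r=1e-2):
    """Kalman filter on LIPM dynamics x'' = (g/zc) * (x - p), given
    noisy COM-position measurements `zs` and the COP trajectory `ps`.
    Returns the filtered [position, velocity] estimates."""
    w2 = g / zc
    A = np.array([[1.0, dt], [w2 * dt, 1.0]])   # Euler-discretized dynamics
    B = np.array([0.0, -w2 * dt])               # COP enters as an input
    H = np.array([[1.0, 0.0]])                  # measure COM position only
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.zeros(2), np.eye(2)
    out = []
    for z, p in zip(zs, ps):
        x = A @ x + B * p                       # predict
        P = A @ P @ A.T + Q
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)     # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

The appeal of the LIPM design is visible here: the model has almost no parameters to get wrong, which is consistent with the robustness-to-modeling-error finding above.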

10:15–10:30 WedBT4.4<br />

The Anatomy of a Fall: Automated Real-time<br />

Analysis of Raw Force Sensor Data from<br />

Bipedal Walking Robots and Humans<br />

Petar Kormushev, Luca Colasanto,<br />

Nikolaos G. Tsagarakis, and Darwin G. Caldwell<br />

Advanced Robotics, Istituto Italiano di Tecnologia, Italy<br />

Barkan Ugurlu<br />

Control Systems, Toyota Technological Institute, Japan<br />

• Algorithms for automated analysis<br />

of ground reaction force data<br />

• Automatically detect single, double<br />

support, and swing phases, heel<br />

strikes, phase transitions, etc.<br />

• Detect early indications of<br />

instability that could lead to a fall.<br />

• Approach: generic, model-free,<br />

parameter-free, robust, efficient<br />

• Three experiments on: compliant<br />

humanoid robot COMAN, stiff robot<br />

HOAP-2, and a human subject


Session WedBT5 Gemini 2 Wednesday, October 10, 2012, 09:30–10:30<br />

Identification, Modeling and Motion Control<br />

Chair Kiminao Kogiso, Nara Inst. of Science and Tech.<br />

Co-Chair<br />

09:30–09:45 WedBT5.1<br />

Identification Procedure for<br />

McKibben Pneumatic Artificial Muscle Systems<br />

Kiminao Kogiso Kenta Sawano and Kenji Sugimoto<br />

Graduate School of Information Science, NAIST, Japan<br />

Takashi Itto<br />

Mitsui Chemicals, Inc., Japan<br />

• Present how to model a McKibben<br />

pneumatic artificial muscle (PAM) that<br />

vertically suspends mass, in hybrid form.<br />

• Analyze the hybrid PAM model to clarify<br />

dominant parameters in steady-state and<br />

transient behaviors.<br />

• Propose a novel procedure for identifying<br />

the parameters, which can be used to get<br />

mass-continuously-parameterized models.<br />

• The proposed procedure is experimentally validated using steady-state and transient responses.<br />

PAM system to be modeled and<br />

experimental validation results.<br />

10:00–10:15 WedBT5.3<br />

Dynamic Model of Three Wheeled Narrow Tilting<br />

Vehicle and Corresponding Experiment Verification<br />

Hiroki Furuichi and Toshio Fukuda<br />

Dept. Micro-Nano Systems Engineering, Nagoya University, Japan<br />

Jian Huang<br />

Dept. Control Science and Engineering, Huazhong University of Science and<br />

Technology, China<br />

Takayuki Matsuno<br />

Natural Science and Technology, Okayama University, Japan<br />

• A new switching dynamical model considering several Narrow Tilting Vehicle (NTV) running states is proposed.<br />

• A simulation platform is established to simulate the control of the NTV.<br />

• The effectiveness of the proposed model and simulation platform is verified by comparing simulations and experiments.<br />

Picture of NTV<br />

09:45–10:00 WedBT5.2<br />

The Cubli: A Cube That Can Jump up and Balance<br />

Mohanarajah Gajamohan, Michael Merz, Igor Thommen, and<br />

Raffaello D'Andrea<br />

Department of Mechanical and Process Engineering, ETH Zurich, Switzerland<br />

• The paper introduces the concept of the<br />

Cubli, a cube that will be able to jump up,<br />

using an impact based strategy, and<br />

balance on a corner.<br />

• Design and control of the 1D prototype<br />

(three of which will be combined in the<br />

final prototype) is presented with<br />

experimental results.<br />

(a) (b)<br />

The Cubli jump-up strategy: (a) Flat to Edge: Lying flat<br />

on its face, the Cubli jumps up to stand on its edge.<br />

(b) Edge to Corner: The Cubli goes from balancing about<br />

an edge to balancing on a corner.<br />

10:15–10:30 WedBT5.4<br />

An Energy-Based State Observer for Dynamical<br />

Subsystems with Inaccessible State Variables<br />

Islam S. M. Khalil * , Asif Sabanovic **<br />

and Sarthak Misra *<br />

* MIRA—Institute for Biomedical Technology and Technical Medicine<br />

University of Twente, The Netherlands<br />

** Faculty of Engineering and Natural Sciences, Sabanci University, Turkey<br />

• Effort and flow variables are<br />

considered as natural feedback<br />

from dynamical subsystems with<br />

inaccessible outputs<br />

• State variables of subsystems<br />

with inaccessible outputs can be<br />

estimated by an energy-based<br />

state observer<br />

• Stability margins of the energy-based state observer are investigated<br />


Lumped mass-spring system with 4 degrees-of-freedom. We presume that subsystem P has inaccessible outputs for measurement<br />


Session WedBT7 Vega Wednesday, October 10, 2012, 09:30–10:30<br />

Multiple Mobile Robot Planning II<br />

Chair Ronald Arkin, Georgia Tech.<br />

Co-Chair<br />

09:30–09:45 WedBT7.1<br />

Goal Assignment using Distance Cost<br />

in Multi-Robot Exploration<br />

Jan Faigl Miroslav Kulich Libor Přeučil<br />

Department of Cybernetics<br />

Czech Technical University in Prague, Czech Republic<br />

• Multi-robot exploration strategies<br />

• Performance evaluation of: (1) greedy<br />

assignment; (2) iterative assignment; (3)<br />

Hungarian algorithm; and (4) multiple traveling<br />

salesman (MTSP) approaches<br />

• MTSP based assignment with (cluster first,<br />

route second)<br />

• Clustering of the goal candidates preserving<br />

geodesic distances<br />
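The distance-cost goal assignment compared above can be illustrated with a small optimal-assignment sketch. This is a hedged illustration, not the authors' code: brute force over permutations stands in for the Hungarian algorithm (it yields the same optimum at the small team sizes typical in exploration), and Euclidean distance stands in for the geodesic distances used in the paper.

```python
import numpy as np
from itertools import permutations

def assign_goals(robots, goals):
    """Return the robot-to-goal assignment minimizing the total
    distance cost, plus that cost.  Assumes len(robots) <= len(goals)."""
    R = np.asarray(robots, dtype=float)
    G = np.asarray(goals, dtype=float)
    # Pairwise distance-cost matrix, cost[i, j] = dist(robot i, goal j).
    cost = np.linalg.norm(R[:, None, :] - G[None, :, :], axis=2)
    best_cost, best = float("inf"), None
    for perm in permutations(range(len(G)), len(R)):
        c = sum(cost[i, g] for i, g in enumerate(perm))
        if c < best_cost:
            best_cost, best = c, perm
    return dict(enumerate(best)), best_cost
```

A greedy strategy would instead pick each robot's nearest free goal in turn, which is cheaper to compute but can miss the globally optimal pairing — the kind of trade-off the evaluation above measures.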

10:00–10:15 WedBT7.3<br />

Finding Graph Topologies for Feasible<br />

Multirobot Motion Planning<br />

Pushkar Kolhe, Henrik I. Christensen<br />

• For a Kiva-like warehousing scenario:<br />

1. Where should I place n robots?<br />

2. Can we ensure deadlock- and collision-free motion planning from these n places?<br />

• An Integer Programming formulation to find a graph for solving multi-robot motion planning problems<br />

Goal Nodes<br />

Robot Nodes<br />

09:45–10:00 WedBT7.2<br />

Multi-Agent Generalized Probabilistic<br />

RoadMaps (MAGPRM)<br />

Sandip Kumar and Suman Chakravorty<br />

Texas A&M University, College Station, USA<br />

MAGPRM is a sampling based method for<br />

planning the motion of multiple agents under<br />

process uncertainty, workspace constraints,<br />

and non-trivial dynamics<br />

MAGPRM utilizes the GPRM, a sampling based<br />

method for planning the motion of a single agent<br />

under process uncertainty, and a Multiple<br />

Traveling Salesman Problem (MTSP) solver.<br />

MAGPRM guarantees performance in terms of<br />

a maximum allowable probability of failure for<br />

the agents<br />

10:15–10:30 WedBT7.4<br />

Dynamic Positioning of Beacon Vehicles for<br />

Cooperative Underwater Navigation<br />

Alexander Bahr and Alcherio Martinoli<br />

Distributed Intelligent Systems and Algorithms Lab, EPFL, Switzerland<br />

John J. Leonard<br />

Department of Mechanical Engineering, MIT, USA<br />

• Beacon vehicles serve as navigation aids<br />

for submerged AUVs<br />

• Our algorithm optimizes the beacon<br />

vehicles’ position to improve the AUVs’<br />

navigation accuracy<br />

• No a priori information, such as the AUVs' mission plan, is required<br />

• Distributed algorithm adapts to group size<br />

and connectivity<br />



Session WedBT8 Gemini 1 Wednesday, October 10, 2012, 09:30–10:30<br />

Dynamics and Control II<br />

Chair<br />

Co-Chair<br />

09:30–09:45 WedBT8.1<br />

Exploiting Redundancy in Cartesian Impedance<br />

Control of UAVs Equipped with a Robotic Arm<br />

Vincenzo Lippiello and Fabio Ruggiero<br />

Dipartimento di Informatica e Sistemistica, Università degli studi di Napoli<br />

Federico II, Italy<br />

• A Cartesian impedance control for UAVs<br />

equipped with a robotic arm is presented.<br />

• A dynamic relationship between external<br />

forces acting on the structure and the<br />

system motion, specified in terms of<br />

Cartesian space coordinates, is provided.<br />

• Through a suitable choice of such<br />

variables it is possible to exploit the<br />

redundancy of the system to perform<br />

some useful subtasks.<br />

• The hovering control of a quadrotor,<br />

equipped with a 3-DOF robotic arm and<br />

subject to contact forces and external<br />

disturbances is tested in a simulated case<br />

study.<br />

UAV/Arm system illustration with<br />

the related reference frames.<br />

10:00–10:15 WedBT8.3<br />

A hybrid particle/grid wind model for real-time<br />

small UAV flight simulation<br />

Adam Harmat and Inna Sharf<br />

Mechanical Engineering, McGill University, Canada<br />

Michael Trentini<br />

DRDC-Suffield, Canada<br />

• Fast vortex particle method for approximate wind dynamics<br />

• Integrated with Gazebo robot simulator as a plug-in<br />

• Qualitative comparison to high-fidelity CFD test cases<br />

• Small UAV flight over a building, comparison to simpler wind model<br />

09:45–10:00 WedBT8.2<br />

Modeling and Motion Analysis of Fixed-pitch Coaxial<br />

Rotor Unmanned Helicopter<br />

Satoshi Suzuki<br />

Young Researchers Empowerment Center , Shinshu University, Japan<br />

Takahiro Ishii<br />

Graduate School of Science and Technology, Shinshu University, Japan<br />

Gennai Yanagisawa and Yasutoshi Yokoyama<br />

GEN Corporation, Japan<br />

Kazuki Tomita<br />

Engineering System, Japan<br />

• Fixed-pitch co-axial rotor unmanned<br />

helicopter with specific mechanisms is<br />

proposed.<br />

• Precise mathematical model of the<br />

helicopter is derived using multi-body<br />

dynamics technique.<br />

• Motion analysis is performed to<br />

establish the relation between the<br />

motion and mechanical parameters.<br />

<strong>2012</strong> IEEE/RSJ International Conference on Intelligent Robots and Systems<br />

–135–<br />

Fixed-pitch co-axial rotor<br />

unmanned helicopter<br />

<strong>10</strong>:15–<strong>10</strong>:30 WedBT8.4<br />

Parallel Force-Position Control Mediated by<br />

Tactile Maps for Robot Contact Tasks<br />

Simone Denei, Fulvio Mastrogiovanni and Giorgio Cannata<br />

University of Genova, Italy<br />

• Tactile maps prove to be<br />

a good representation<br />

structure for defining<br />

contact trajectories.<br />

• This concept is further extended for<br />

embedding additional information that is<br />

local with respect to a specific robot<br />

body area.<br />

• Augmented maps provide a good way to<br />

associate references, such as forces or<br />

contact motion velocities, to areas on the<br />

robot skin.<br />

An example of augmented tactile<br />

map (on the right) of the skin<br />

placed on a robot forearm (on the<br />

left). A polygon divides it into<br />

an inner region (in yellow) and<br />

an outside region (in grey)<br />

associated to different force<br />

references.<br />

• The contact centroid is reported on the maps and used to extract the<br />

references to feed to the Parallel Force-Position Control that moves the<br />

robot in contact.
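The last bullet above extracts a force reference from the contact centroid on an augmented tactile map. As a minimal numerical sketch (all function names, values, and the two-region layout here are hypothetical illustrations, not the authors' code), the centroid is a pressure-weighted mean, and the augmented map reduces to a region mask with per-region force set-points:

```python
import numpy as np

def contact_centroid(pressure):
    """Pressure-weighted centroid (row, col) of a tactile map."""
    total = pressure.sum()
    if total == 0:
        return None
    rows, cols = np.indices(pressure.shape)
    return (float((rows * pressure).sum() / total),
            float((cols * pressure).sum() / total))

def force_reference(centroid, inner_mask, f_inner, f_outer):
    """Pick the force set-point of the region containing the centroid."""
    r, c = int(round(centroid[0])), int(round(centroid[1]))
    return f_inner if inner_mask[r, c] else f_outer

# Toy 5x5 tactile map with a single uniform contact patch.
p = np.zeros((5, 5))
p[1:3, 1:3] = 1.0            # uniform patch -> centroid at (1.5, 1.5)
inner = np.zeros((5, 5), bool)
inner[:3, :3] = True         # "inner region" of the augmented map

c = contact_centroid(p)                                   # (1.5, 1.5)
f = force_reference(c, inner, f_inner=2.0, f_outer=0.5)   # 2.0
```

The inner/outer masks here play the role of the yellow/grey polygon split described in the figure caption.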


<strong>Session</strong> WedBT9 Fenix 1 <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 09:30–<strong>10</strong>:30<br />

Medical Robotics II<br />

Chair Fumihito Arai, Nagoya Univ.<br />

Co-Chair Jake Abbott, Univ. of Utah<br />

09:30–09:45 WedBT9.1<br />

Space-Time Localization and Registration on the<br />

Beating Heart<br />

Nathan Wood, Kevin Waugh, Tian Yu Liu, Cameron Riviere<br />

The Robotics Institute, Carnegie Mellon University, USA<br />

Marco Zenati<br />

BHS Department of Cardiothoracic Surgery, Harvard Medical School, USA<br />

• The HeartLander Robot adheres to<br />

and moves over the surface of the<br />

heart.<br />

• The heart deforms periodically due to<br />

the physiological cycles of heartbeat<br />

and respiration.<br />

• Instead of rejecting the deformations<br />

as noise, the periodic motion can be<br />

used to aid in localization.<br />

• A particle filter framework is used to<br />

estimate the pose of the robot on the<br />

heart and the cardiac phase, as well<br />

as the pose of the heart.<br />
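As a toy illustration of the idea above — using periodic motion as a signal rather than rejecting it as noise — a particle filter can track the phase of a noisy periodic observation. This is a generic sketch under simplified assumptions (known frequency, sinusoidal observation model), not the HeartLander estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_phase(obs, dt, freq, n=500, sigma_obs=0.1, sigma_phase=0.01):
    """Track the phase of a periodic observation y = sin(2*pi*phase)."""
    particles = rng.uniform(0.0, 1.0, n)          # phase in [0, 1)
    estimates = []
    for y in obs:
        # Propagate: nominal advance at the known rate plus diffusion.
        particles = (particles + freq * dt
                     + rng.normal(0.0, sigma_phase, n)) % 1.0
        # Weight each particle by the Gaussian observation likelihood.
        w = np.exp(-0.5 * ((y - np.sin(2 * np.pi * particles)) / sigma_obs) ** 2)
        w /= w.sum()
        # Systematic resampling.
        pts = (np.arange(n) + rng.random()) / n
        idx = np.minimum(np.searchsorted(np.cumsum(w), pts), n - 1)
        particles = particles[idx]
        # Circular mean of the particle cloud as the phase estimate.
        ang = np.angle(np.mean(np.exp(2j * np.pi * particles))) / (2 * np.pi)
        estimates.append(ang % 1.0)
    return estimates

# Synthetic "cardiac" signal at 1.2 Hz sampled at 50 Hz.
dt, f = 0.02, 1.2
t = np.arange(0.0, 4.0, dt)
y = np.sin(2 * np.pi * f * t) + rng.normal(0.0, 0.05, t.size)
est = pf_phase(y, dt, f)
```

Propagating at the known rate disambiguates the two phases that share each observation value, which is what lets the periodic deformation aid localization rather than corrupt it.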

<strong>10</strong>:00–<strong>10</strong>:15 WedBT9.3<br />

Catheter Navigation Based on Probabilistic Fusion<br />

of ElectromagneticTracking and Physically-Based<br />

Simulation<br />

Alessio Dore 1 , Gabrijel Smoljkic 2 , Emmanuel Vander Poorten 2 ,<br />

Mauro Sette 2,3 , Jos Vander Sloten 2 , and Guang-Zhong Yang 1<br />

1 Hamlyn Centre, Imperial College London, UK<br />

2 Department of Mechanical Engineering, KU Leuven, Belgium<br />

3 Institute of Mechatronic Systems, ZHAW, Switzerland<br />

• Reduced use of fluoroscopy and better<br />

visualization are key points for improving the<br />

safety of catheter-based endovascular<br />

procedures.<br />

• A catheter navigation approach is<br />

proposed based on the use of<br />

electromagnetic tracking and physically-based<br />

catheter insertion simulation.<br />

• Catheter shape and position are estimated<br />

and registered to the pre-operative model<br />

to provide improved visualization.<br />

• The approach has been tested on a 2D<br />

mockup obtaining an average localization<br />

error of 2 mm.<br />

Example of 3D localization<br />

performed in an aortic arch<br />

silicone phantom<br />

09:45–<strong>10</strong>:00 WedBT9.2<br />

Reliable Planning and Execution of a Human-<br />

Robot Cooperative System Based on<br />

Noninvasive BCI with Uncertainty<br />

Wenchuan Jia, Huayan Pu, Xin Luo, Xuedong Chen<br />

State Key Laboratory of Digital Manufacturing Equipment and Technology,<br />

Huazhong University of Science and Technology, China.<br />

Dandan Huang, Ou Bai<br />

EEG&BCI Laboratory, Virginia Commonwealth University, US<br />

• Three cooperative modes are<br />

proposed to trade off the<br />

robot’s autonomy against the<br />

user’s flexibility.<br />

• Look-ahead visual feedback is<br />

applied to achieve continuous<br />

motion.<br />

• The user can adjust the<br />

intention and/or actively<br />

correct extraction error of the<br />

BCI before the robot reaches<br />

the current path node.<br />

<strong>10</strong>:15–<strong>10</strong>:30 WedBT9.4<br />

Organ-explanted Bionic Simulator (OBiS):<br />

Concurrent Microcardiovascular Anastomosis of Chick Embryo<br />

1 Hirofumi Owaki, 1 Taisuke Masuda, 2 Tomohiro Kawahara, 1 Natsuki Takei,<br />

1 Keiko Miwa-Kodama, 3 Kota Miyasaka, 3 Toshihiko Ogura, and 1 Fumihito Arai<br />

1 Graduate School of Engineering, Nagoya University, Japan<br />

2 Kyushu Institute of Technology, Japan, and Massachusetts Institute of Technology, USA<br />

3 Department of Developmental Neurobiology, Tohoku University, Japan<br />

• Organ-explanted Bionic Simulator<br />

(OBiS) is newly proposed.<br />

• A heart isolated from chick embryo is<br />

used for OBiS.<br />

• Suction-induced vascular fixation<br />

(SVF) method is proposed for<br />

concurrent microcardiovascular<br />

anastomosis.<br />

• Dual-arm micromanipulator system is<br />

developed for assisting the vascular<br />

anastomosis.<br />

• Four blood vessels leading to the chick’s heart<br />

can be concurrently connected to<br />

the alginate tubes by the SVF method.<br />


[Figure] The concept of the bionic simulator system: an explanted organ in a microfluidic chip under culture control (CO2/heater, pump), receiving mechanical, electrical and chemical stimuli, monitored by image and laser displacement sensors, patch-clamp and pH/O2/temperature sensing, all tied to a monitoring/control unit.<br />


<strong>Session</strong> WedBT<strong>10</strong> Lince <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 09:30–<strong>10</strong>:30<br />

Visual Learning I<br />

Chair<br />

Co-Chair<br />

09:30–09:45 WedBT<strong>10</strong>.1<br />

Bag of Multimodal Hierarchical Dirichlet Processes:<br />

Model of Complex Conceptual Structure<br />

for Intelligent Robots<br />

Tomoaki Nakamura and Takayuki Nagai<br />

The University of Electro-Communications, Japan<br />

Naoto Iwahashi<br />

NICT Knowledge Creating Communication Research Center, Japan<br />

• A novel framework for concept<br />

formation<br />

• Various models are formed by<br />

multimodal HDP-based categorization<br />

with varying parameters<br />

Bag of Multimodal HDP models<br />

• Word meanings are grounded in<br />

formed categories through the<br />

interaction between users and the<br />

robot<br />

• The interaction works as model<br />

selection for Bag of Multimodal HDP<br />

• Complex structure is visualized by<br />

multidimensional scaling<br />

[Figure] Flow from observation of objects, through categorization of multimodal information, to word grounding and model selection (e.g., “This is ‘yellow’ ‘maraca’.”).<br />

<strong>10</strong>:00–<strong>10</strong>:15 WedBT<strong>10</strong>.3<br />

Learning a Projective Mapping to Locate<br />

Animals in Video Using RFID<br />

Pipei Huang, Rahul Sawhney, Daniel Walker,<br />

Aaron Bobick and Tucker Balch<br />

Robotics and Intelligent Machines<br />

School of Interactive Computing<br />

Georgia Institute of Technology, USA<br />

• Goal is to annotate video with correct<br />

locations and IDs of multiple moving<br />

animals wearing active RFID tags.<br />

• Challenges include noisy position data and<br />

integration with camera calibration.<br />

• Approach:<br />

• Filtering and outlier removal improves<br />

RFID reported position.<br />

• Use Machine Learning-based<br />

approach to map from RFID (x,y,z) to<br />

image plane.<br />

• Learn mapping of offset from best-fit<br />

standard camera calibration model,<br />

which reduces the data needed.<br />

• Improves performance and reduces<br />

calibration effort.<br />

Kim Wallen<br />

Dept. of Psychology,<br />

Emory Univ., USA<br />

Shiyin Qin<br />

School of Automation<br />

Science and Electrical Engineering,<br />

Beihang Univ., China<br />

09:45–<strong>10</strong>:00 WedBT<strong>10</strong>.2<br />

Robust and Fast Visual Tracking Using Constrained<br />

Sparse Coding and Dictionary Learning<br />

Tianxiang Bai, Y.F. Li, and Xiaolong Zhou<br />

Department of Mechanical and Biomedical Engineering, City Univ. of Hong Kong,<br />

Hong Kong SAR, China<br />

• The visual appearance is represented<br />

and modeled by sparse<br />

representation and online dictionary<br />

learning.<br />

• A sparsity consistency constraint is<br />

defined to unify sparse representation<br />

and online dictionary learning.<br />

• An elastic-net constraint is enforced<br />

to capture the local appearances<br />

during the dictionary learning stage.<br />

• The proposed appearance model is<br />

integrated with a particle filter to form a<br />

robust tracking algorithm.<br />


Proposed Appearance Model<br />

<strong>10</strong>:15–<strong>10</strong>:30 WedBT<strong>10</strong>.4<br />

A Discriminative Approach for Appearance<br />

Based Loop Closing<br />

Thomas Ciarfuglia, Gabriele Costante, Paolo Valigi and Elisa Ricci<br />

Department of Information and Electronic Engineering, University of Perugia,<br />

Italy<br />

• Bag of Visual Words is a common<br />

paradigm for loop closing, but has<br />

limitations<br />

• We propose a novel optimization<br />

approach to compute visual word<br />

weights for loop closing<br />

• More discriminative words are<br />

emphasized, while less discriminative ones are<br />

de-emphasized<br />

• This Place Recognition approach<br />

yields competitive results with state-of-<br />

the-art approaches<br />

Learned weights enhance visual words<br />

that increase the similarity score<br />

between images of the same place while<br />

keeping images from different places<br />

apart.
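The weighting idea above can be sketched with a simpler stand-in: IDF-style weights (not the paper's learned, optimization-derived weights) that likewise emphasize discriminative visual words in a bag-of-words similarity. Everything here is an illustrative assumption:

```python
import numpy as np

def idf_weights(histograms):
    """IDF-style weights: rarer visual words are more discriminative."""
    H = np.asarray(histograms, float)
    df = (H > 0).sum(axis=0)          # in how many images each word appears
    return np.log(len(H) / np.maximum(df, 1))

def weighted_similarity(h1, h2, w):
    """Cosine similarity between weighted BoW histograms."""
    a, b = np.asarray(h1) * w, np.asarray(h2) * w
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Word 0 appears everywhere (uninformative); words 1-3 are distinctive.
db = [[5, 3, 0, 0], [5, 0, 4, 0], [5, 0, 0, 2]]
w = idf_weights(db)                   # w[0] == 0: the common word is ignored
query = [5, 3, 0, 0]                  # revisit of place 0
scores = [weighted_similarity(query, h, w) for h in db]
print(int(np.argmax(scores)))         # 0 — the matching place
```

With uniform weights the shared word 0 would inflate every score; down-weighting it is what separates same-place from different-place image pairs.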


<strong>Session</strong> WedBVT6 Gemini 3 <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 09:30–<strong>10</strong>:30<br />

Mapping II<br />

Chair Edwin Olson, Univ. of Michigan<br />

Co-Chair<br />

09:30–09:45 WedBVT6.1<br />

Variable reordering strategies for SLAM<br />

• We have evaluated existing<br />

reordering strategies on standard<br />

SLAM datasets.<br />

Pratik Agarwal and Edwin Olson<br />

Computer Science and Engineering,<br />

University of Michigan, USA<br />

• We propose an easy-to-implement<br />

reordering algorithm called BHAMD,<br />

which yields competitive performance.<br />

• We provide evidence showing that<br />

few gains remain with respect to<br />

variants of minimum degree ordering.<br />

Reorder and solve time for<br />

different reordering algorithms<br />
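To see why variable reordering matters for SLAM solvers, here is a toy pure-Python illustration of the symbolic elimination game (a generic sketch, not the BHAMD algorithm): fill-in counted for a fixed ordering versus a greedy minimum-degree ordering on a star-shaped constraint graph.

```python
def fill_in(adj, order):
    """Count fill edges created by symbolic elimination in the given order."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    fill = 0
    for v in order:
        nbrs = adj.pop(v)
        for u in adj:
            adj[u].discard(v)
        nbrs = [u for u in nbrs if u in adj]
        for i, a in enumerate(nbrs):                  # connect remaining neighbors
            for b in nbrs[i + 1:]:
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fill += 1
    return fill

def min_degree_order(adj):
    """Greedy minimum-degree ordering (no tie-breaking heuristics)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        nbrs = adj.pop(v)
        for u in adj:
            adj[u].discard(v)
        nbrs = [u for u in nbrs if u in adj]
        for i, a in enumerate(nbrs):
            for b in nbrs[i + 1:]:
                adj[a].add(b)
                adj[b].add(a)
        order.append(v)
    return order

# Star graph: one "hub" variable constrained to five others.
star = {0: {1, 2, 3, 4, 5}, 1: {0}, 2: {0}, 3: {0}, 4: {0}, 5: {0}}
print(fill_in(star, [0, 1, 2, 3, 4, 5]))      # hub first: 10 fill edges
print(fill_in(star, min_degree_order(star)))  # leaves first: 0 fill edges
```

Fill edges correspond directly to extra nonzeros in the Cholesky factor, which is what drives the reorder-and-solve times in the figure above.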

<strong>10</strong>:00–<strong>10</strong>:15 WedBVT6.3<br />

Planar Polygon Extraction and Merging from<br />

Depth Images<br />

Joydeep Biswas and Manuela Veloso<br />

School of Computer Science, Carnegie Mellon University, USA<br />

We introduce an approach to building 3D<br />

maps of indoor environments modelled as<br />

planar surfaces using depth cameras.<br />

• Neighborhoods of plane-filtered points are<br />

extracted from each observed depth image<br />

and fitted to convex polygons.<br />

• The polygons from each frame are<br />

matched to existing map polygons using<br />

OpenGL-accelerated ray casting.<br />

• Matched polygons are then merged over<br />

time using sequential update and merging of<br />

the scatter matrix of observed polygons.<br />

The polygon extraction and merging<br />

algorithms take on average 2.5 ms for<br />

each depth image of size 640×480 pixels.<br />

Plane-filtered and fitted convex polygons<br />

shown in blue from each frame (top) are<br />

merged across successive frames to<br />

generate the complete map (bottom).<br />
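The scatter-matrix merging mentioned above has a neat property worth sketching: a plane fit needs only sufficient statistics, so merging two polygons reduces to adding those statistics. This is an illustrative numpy reconstruction under that assumption, not the authors' code:

```python
import numpy as np

def scatter_stats(points):
    """Sufficient statistics (count, sum, outer-product sum) of a point set."""
    pts = np.asarray(points, float)
    return pts.shape[0], pts.sum(axis=0), pts.T @ pts

def merge(a, b):
    """Merging two observations only requires adding their statistics."""
    return a[0] + b[0], a[1] + b[1], a[2] + b[2]

def fit_plane(stats):
    """Least-squares plane: normal = eigenvector of the smallest eigenvalue."""
    n, s, ss = stats
    mean = s / n
    scatter = ss - np.outer(s, s) / n      # centered second-moment matrix
    w, v = np.linalg.eigh(scatter)         # eigenvalues ascending
    return v[:, 0], mean                   # (normal, point on plane)

# Two "frames" observing the same plane z = 0.
rng = np.random.default_rng(1)
f1 = np.c_[rng.random((50, 2)), np.zeros(50)]
f2 = np.c_[rng.random((50, 2)) + 1.0, np.zeros(50)]
normal, point = fit_plane(merge(scatter_stats(f1), scatter_stats(f2)))
# normal is ±(0, 0, 1); the plane passes through the merged points' mean.
```

Because the statistics are additive, polygons can be merged over arbitrarily many frames in constant memory, which is consistent with the per-frame timing reported above.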

<strong>10</strong>:20–<strong>10</strong>:25 WedBVT6.5<br />

2D PCA-based Localization for Mobile Robots<br />

in Unstructured Environments<br />

Fernando Carreira (1) , Camilo Christo (2) , Duarte Valério (2) ,<br />

Mário Ramalho (2) , Carlos Cardeira (2) , João Calado (1,2)<br />

and Paulo Oliveira (3)<br />

(1) ADEM / ISEL, Polytechnic Institute of Lisbon, Portugal<br />

(2) IDMEC / IST, Technical University of Lisbon, Portugal<br />

(3) ISR / IST, Technical University of Lisbon, Portugal<br />

• Self-localization system for mobile<br />

robots to operate in indoor<br />

environments, using only onboard<br />

sensors<br />

• The database of images stored<br />

onboard is of reduced size, when<br />

compared with acquired images<br />

• No hypothesis is made about specific<br />

features in the environment<br />

• The localization system estimates in<br />

real time the position and slippage<br />

with global stable error dynamics.<br />

Results of stability tests considering<br />

wrong initial position and attitude<br />
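The database-matching idea above can be sketched with ordinary PCA on flattened images (note the paper uses 2D PCA, which operates on image matrices directly; this simplified stand-in, with hypothetical names and toy data, only illustrates the compress-then-match principle):

```python
import numpy as np

def build_pca_db(images, k=8):
    """Compress a map-image database to k principal components."""
    X = np.stack([im.ravel() for im in images]).astype(float)
    mean = X.mean(axis=0)
    # Principal directions from the SVD of the centered data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]
    codes = (X - mean) @ basis.T          # each image stored as a k-vector
    return mean, basis, codes

def localize(query, mean, basis, codes, positions):
    """Nearest database image in PCA space gives the position estimate."""
    q = (query.ravel().astype(float) - mean) @ basis.T
    i = int(np.argmin(np.linalg.norm(codes - q, axis=1)))
    return positions[i]

# Toy map: images taken at 4 known positions, each a distinct pattern.
rng = np.random.default_rng(3)
imgs = [rng.random((16, 16)) for _ in range(4)]
pos = [(0, 0), (0, 1), (1, 0), (1, 1)]
mean, basis, codes = build_pca_db(imgs, k=3)

noisy_view = imgs[2] + rng.normal(0.0, 0.05, (16, 16))
print(localize(noisy_view, mean, basis, codes, pos))  # (1, 0)
```

Storing only the k-dimensional codes is what keeps the onboard database small relative to the raw acquired images.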

09:45–<strong>10</strong>:00 WedBVT6.2<br />

An Object-Based Semantic World Model for Long-<br />

Term Change Detection and Semantic Querying<br />

• RGB-D based mapping<br />

aboard a localized robot<br />

• Uniquely weak perceptual<br />

assumptions<br />

• Scales to large (1600 m²),<br />

long-term (six weeks)<br />

operation<br />

• Supports change detection<br />

and semantic querying<br />

• 326 GB dataset available!<br />

Julian Mason and Bhaskara Marthi<br />

Duke University and Willow Garage<br />


Example of semantic query for<br />

“medium-sized things in the cafeteria.”<br />

<strong>10</strong>:15–<strong>10</strong>:20 WedBVT6.4<br />

Reconfigurable Intelligent Space, R+iSpace,<br />

and Mobile Module, MoMo<br />

JongSeung Park<br />

Graduate School of Sci. and Eng, Ritsumeikan University, Japan<br />

Joo-Ho Lee<br />

College of Information Sci. and Eng, Ritsumeikan University, Japan<br />

• In this video and paper, the new concept<br />

of Intelligent space, ‘R+iSpace’ is<br />

introduced.<br />

• All devices in the R+iSpace can<br />

rearrange themselves automatically<br />

according to the situation.<br />

• To achieve R+iSpace in real environment,<br />

we propose a mobile module, ‘MoMo’.<br />

• The devices can move on the wall and the<br />

ceiling by mounting on a MoMo.<br />

The R+iSpace system<br />

and the MoMo<br />

<strong>10</strong>:25–<strong>10</strong>:30 WedBVT6.6<br />

Deformable Soft Wheel Robot using<br />

Hybrid Actuation<br />

Je-sung Koh, Dae-young Lee, Seung-won Kim<br />

and Kyu-jin Cho<br />

Dept. Mechanical and Aerospace Eng., Seoul National University, South Korea<br />

• Multimodal motion: three driving modes<br />

for negotiating obstacles.<br />

• Caterpillar motion: passing through<br />

the gap<br />

• Wheel driving: fast movement on the<br />

ground<br />

• Legged wheel motion: climbing<br />

stairs<br />

• Hybrid actuation system<br />

• Deformable wheel: shape memory<br />

alloy coil spring actuator<br />

• Wheel driving: DC motor<br />

Three driving modes of<br />

Deformable wheel robot


<strong>Session</strong> WedCT1 <strong>Pegaso</strong> A <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 11:00–12:30<br />

Object Detection and Tracking<br />

Chair<br />

Co-Chair<br />

11:00–11:15 WedCT1.1<br />

Reliable Object Detection<br />

and Segmentation using Inpainting<br />

Ji Hoon Joung 1 , M. S. Ryoo 2<br />

Sunglok Choi 3 and Sung-Rak Kim 1<br />

1 Robotics Research Department, Hyundai Heavy Industries, South Korea<br />

2 Mobility and Robotic Systems Section, Jet Propulsion Laboratory, USA<br />

3 Robot Research Department, ETRI, South Korea<br />

• This paper presents a novel object detection and segmentation method<br />

utilizing an inpainting algorithm. We use inpainting to judge<br />

whether an object candidate region contains the foreground object.<br />

• The key idea is that if we erase a certain region from an image, the<br />

inpainting algorithm is expected to recover the erased image only when it<br />

belongs to a background area.<br />

• We illustrate how our<br />

inpainting-based detection /<br />

segmentation approach<br />

benefits object detection<br />

using two different pedestrian<br />

datasets.<br />

Concept of the proposed method<br />
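The key idea above — an erased region is recoverable only if it is background — can be demonstrated with a naive diffusion inpainter standing in for the paper's (unspecified) inpainting algorithm. All data and thresholds here are illustrative assumptions:

```python
import numpy as np

def inpaint(img, mask, iters=200):
    """Naive diffusion inpainting: masked pixels relax to neighbor means."""
    out = img.copy()
    out[mask] = out[~mask].mean()          # crude initialization
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]
    return out

def recovery_error(img, mask):
    """How badly inpainting fails to reproduce the erased region."""
    return float(np.abs(inpaint(img, mask) - img)[mask].mean())

# Smooth background ramp with one bright square "object".
y, x = np.mgrid[0:64, 0:64]
img = (x + y) / 128.0
img[20:30, 20:30] = 5.0                    # the foreground object

obj_mask = np.zeros_like(img, bool); obj_mask[18:32, 18:32] = True
bg_mask = np.zeros_like(img, bool);  bg_mask[40:54, 40:54] = True

# Erasing the object cannot be undone from context; the background can.
print(recovery_error(img, obj_mask) > 5 * recovery_error(img, bg_mask))  # True
```

A large recovery error over a candidate region is then evidence that the region contains a foreground object rather than background.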

11:30–11:45 WedCT1.3<br />

Exploiting and modeling local 3D structure for<br />

predicting object locations<br />

Alper Aydemir and Patric Jensfelt<br />

Center for Autonomous Systems, KTH, Sweden<br />

• We propose the use of local 3D shape<br />

around objects in everyday scenes as a<br />

strong indicator of the placement of these<br />

objects. We call this the 3D context of an<br />

object.<br />

• We propose a conceptually simple and<br />

effective method to capture this<br />

information.<br />

• Our results show that 3D contextual<br />

information is a strong indicator of object<br />

placement in everyday scenes.<br />

• An RGB-D data set from five different<br />

countries in Europe was collected and<br />

used for evaluation.<br />

Top figure shows a kitchen<br />

scene, and the bottom figure the<br />

method’s response for the object<br />

cup.<br />

12:00–12:15 WedCT1.5<br />

Fast High Resolution 3D Laser Scanning<br />

by Real-Time Object Tracking and Segmentation<br />

Jens T. Thielemann, Asbjørn Berge, Øystein Skotheim<br />

and Trine Kirkhus<br />

SINTEF ICT, Norway<br />

E-mail: {jtt,trk,asbe,osk}@sintef.no<br />

• This paper presents a real-time contour tracking and object<br />

segmentation algorithm for 3D range images. The algorithm<br />

is used to control a novel micro-mirror based imaging laser<br />

scanner, which provides a dynamic trade-off between<br />

resolution and frame rate. The micro-mirrors are<br />

controllable, enabling us to speed up acquisition<br />

significantly by only sampling on the object that is tracked<br />

and of interest. As the hardware is under development, we<br />

benchmark our algorithms on data from a SICK LMS100-<br />

10000 laser scanner mounted on a tilting platform. We find<br />

that objects are tracked and segmented well on pixel-level;<br />

that frame rate/resolution can be increased 3-4 times<br />

through our approach compared to scanners having static<br />

scan trajectories, and that the algorithm runs in 30<br />

ms/image on an Intel Core i7 CPU using a single core.<br />

11:15–11:30 WedCT1.2<br />

3D Textureless Object Detection and Tracking:<br />

An Edge-based Approach<br />

Changhyun Choi and Henrik I. Christensen<br />

College of Computing, Georgia Institute of Technology, USA<br />

Example frames from our detection and tracking results<br />

• An approach to textureless object detection and tracking of the 3D pose<br />

• Detection and tracking schemes are coherently integrated in a particle<br />

filtering framework on the SE(3)<br />

• For object detection, an efficient chamfer matching is employed<br />

• A set of coarse poses is estimated from the chamfer matching results<br />

• Particles are initialized from the coarse pose hypotheses by randomly<br />

drawing based on costs of the matching<br />

• To ensure the initialized particles are at or close to the global optimum, an<br />

annealing process is performed after the initialization<br />

• Comparative results for several image sequences with clutter are shown<br />

to validate the effectiveness of our approach<br />
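As an aside on the chamfer matching step above, here is a generic sketch (not the authors' implementation) of the classic two-pass 3-4 chamfer distance transform, used to score a template contour at every image offset:

```python
import numpy as np

def chamfer_dt(edges):
    """3-4 chamfer distance transform of a binary edge map (two raster passes)."""
    INF = 10 ** 6
    h, w = edges.shape
    d = np.where(edges, 0, INF).astype(np.int64)
    for r in range(h):                      # forward pass
        for c in range(w):
            if r > 0:
                d[r, c] = min(d[r, c], d[r - 1, c] + 3)
                if c > 0:
                    d[r, c] = min(d[r, c], d[r - 1, c - 1] + 4)
                if c < w - 1:
                    d[r, c] = min(d[r, c], d[r - 1, c + 1] + 4)
            if c > 0:
                d[r, c] = min(d[r, c], d[r, c - 1] + 3)
    for r in range(h - 1, -1, -1):          # backward pass
        for c in range(w - 1, -1, -1):
            if r < h - 1:
                d[r, c] = min(d[r, c], d[r + 1, c] + 3)
                if c > 0:
                    d[r, c] = min(d[r, c], d[r + 1, c - 1] + 4)
                if c < w - 1:
                    d[r, c] = min(d[r, c], d[r + 1, c + 1] + 4)
            if c < w - 1:
                d[r, c] = min(d[r, c], d[r, c + 1] + 3)
    return d

def chamfer_cost(dt, template_pts, dy, dx):
    """Mean distance-map value under the template placed at offset (dy, dx)."""
    return np.mean([dt[y + dy, x + dx] for y, x in template_pts])

# Scene: an "L"-shaped edge contour placed at offset (5, 7).
scene = np.zeros((20, 20), bool)
tpl = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
for y, x in tpl:
    scene[y + 5, x + 7] = True

dt = chamfer_dt(scene)
best = min((chamfer_cost(dt, tpl, dy, dx), (dy, dx))
           for dy in range(18) for dx in range(18))
print(best[1])   # (5, 7) — the perfect match has cost 0
```

Low-cost offsets like this would seed the coarse pose hypotheses from which particles are drawn.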

11:45–12:00 WedCT1.4<br />

Birth Intensity Online Estimation in GM-PHD<br />

Filter for Multi-Target Visual Tracking<br />

Xiaolong Zhou, Y.F. Li, Tianxiang Bai and Yazhe Tang<br />

Dept. of MBE, City University of Hong Kong, Hong Kong, China<br />

Bingwei He<br />

Dept. of Mechanical Engineering and Automation, Fuzhou University, China<br />

• A multi-target visual tracking system that<br />

combines object detection and GM-PHD<br />

filter is developed<br />

• An improved measurement-dependent<br />

birth intensity online estimation method<br />

that is based on the entropy distribution and<br />

the coverage rate is proposed<br />

• Entropy distribution based birth intensity<br />

update is proposed to remove noise-like<br />

Gaussian components within<br />

the birth intensity that are irrelevant to<br />

the birth measurements<br />

• Coverage rate based birth intensity update<br />

is proposed to further eliminate noise<br />


Tracking results comparison for<br />

data sets PETS 2000 and<br />

CAVIAR. (a) Detection results. (b)<br />

Tracking results with birth process<br />

proposed in [13]. (c) Tracking<br />

results with birth process proposed<br />

in this paper.<br />

12:15–12:30 WedCT1.6<br />

A Heteroscedastic Approach to Independent<br />

Motion Detection for Actuated Visual Sensors<br />

Carlo Ciliberto, Sean Ryan Fanello, Lorenzo Natale<br />

and Giorgio Metta<br />

Robotics Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Italy<br />

• A compositional framework is presented to<br />

perform real-time independent motion<br />

detection for applications in Robotics.<br />

• The algorithm can be easily adapted to a<br />

wide range of robotic platforms thanks to<br />

the flexibility granted by its modular<br />

structure.<br />

• The proposed method overcomes the<br />

shortcomings of the current state of the art<br />

approaches by exploiting the known robot<br />

kinematics to predict egomotion rather<br />

than relying on vision alone.<br />

• A heteroscedastic learning layer is<br />

employed to tune the egomotion predictive<br />

capabilities of the system.<br />

Experiments were conducted on<br />

the iCub humanoid robot.


<strong>Session</strong> WedCT2 Fenix 2 <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 11:00–12:30<br />

Human Performance Augmentation<br />

Chair Yasuhisa Hasegawa, Univ. of Tsukuba<br />

Co-Chair<br />

11:00–11:15 WedCT2.1<br />

Full-body Exoskeleton Robot Control<br />

for Walking Assistance by<br />

Style-phase Adaptive Pattern Generation<br />

Takamitsu Matsubara 1,2 , Akimasa Uchikata 1,2<br />

and Jun Morimoto 1<br />

1. Department of Brain Robot Interface, ATR-CNS, Japan<br />

2. Graduate School of Information Science, NAIST, Japan<br />

• Propose an adaptive walking assistance<br />

strategy using a coupled oscillator<br />

model for full-body exoskeleton robot<br />

control.<br />

• Consider the diversity of user motions<br />

and the interactions among a user, a<br />

robot, and an environment.<br />

• Adapt to time-varying user walking<br />

spatiotemporally by style-phase<br />

adaptive pattern generation.<br />

• Demonstrate that the necessary torque<br />

for the simulated user walking was<br />

reduced by around 40% by using our<br />

method.<br />

Schematic diagram of our adaptive<br />

walking assistance strategy.<br />

11:30–11:45 WedCT2.3<br />

Synergy–based Optimal Design of Hand Pose<br />

Sensing<br />

Matteo Bianchi, Paolo Salaris and Antonio Bicchi<br />

Interdept. Research Center “Enrico Piaggio”, University of Pisa, Italy.<br />

Antonio Bicchi<br />

Department of Advanced Robotics, Istituto Italiano di Tecnologia, Italy.<br />

• This paper investigates the optimal design<br />

of low-cost gloves for hand pose sensing;<br />

• The cost constraints may limit both the<br />

number and the quality of sensors and<br />

technologies used;<br />

• We exploit the knowledge on how humans<br />

most frequently use their hands in<br />

grasping tasks;<br />

• We study how and where to place sensors<br />

on the glove in order to get the maximum<br />

information about the actual hand posture.<br />

• Experiments validate the proposed optimal<br />

design.<br />

12:00–12:15 WedCT2.5<br />

Pinching Force Accuracy Affected by Thumb<br />

Sensation in Human Force Augmentation<br />

Yasuhisa Hasegawa, Tetsuri Ariyama and Kiyotaka Kamibayashi<br />

Department of Intelligent Interaction Technologies,<br />

University of Tsukuba, Japan<br />

• Confirmed contribution of thumb sensation<br />

to pinching force accuracy when it is<br />

augmented by an exoskeleton.<br />

• Proposed an exoskeleton structure to<br />

achieve high precision force control based<br />

on human sensation.<br />

• The structure transmits an assistive force<br />

to a grasping object through two paths at<br />

fixed distribution ratio.<br />

Pinching with force assistance<br />

and exoskeleton structure for<br />

accurate force control<br />

11:15–11:30 WedCT2.2<br />

Development and Evaluation of Add-On<br />

End-Effector for Linear Power Assist Unit<br />

with Variable Assist Gain<br />

Marina Kaneko, Taishi Kitano, Takahiro Wakatabe,<br />

Norihiro Kamamichi and Jun Ishikawa<br />

Department of Robotics and Mechatronics, Tokyo Denki University, Japan<br />

• Linear power assist unit that is easy to<br />

design using an add-on end-effector<br />

• Works robustly against human and<br />

environmental perturbations, achieving an<br />

assisting bandwidth of 1 to 3 Hz<br />

• Online adjustment of the assist gain is<br />

available so that the system can be easy to use<br />

for various applications<br />

• Could be helpful in rehabilitation by<br />

tuning the load depending on the degree<br />

of recovery<br />


Proposed power assist system<br />

using add-on end-effector<br />

11:45–12:00 WedCT2.4<br />

Demonstration-Based Control of Supernumerary<br />

Robotic Limbs<br />

Baldin Llorens-Bonilla and Federico Parietti<br />

Mechanical Engineering, Massachusetts Institute of Technology, USA<br />

H. Harry Asada<br />

Mechanical Engineering, Massachusetts Institute of Technology, USA<br />

• This system, called the Supernumerary<br />

Robotic Limbs (SRL), consists of two<br />

additional robotic arms worn through a<br />

backpack-like harness.<br />

• The SRL performs movements closely<br />

coordinated with the user and exhibits<br />

human-like dynamics, thus extending the<br />

worker’s range of available skills and<br />

manipulation possibilities.<br />

• Demonstration data of two workers are<br />

analyzed and, through the use of system<br />

identification, a state estimation<br />

algorithm is extracted.<br />

12:15–12:30 WedCT2.6<br />

Implementation of Force Sensing in a Haptic<br />

Musical Instrument Without Additional Sensors<br />

Mark Havryliv and Fazel Naghdy<br />

Faculty of Informatics, University of Wollongong, Australia<br />

Greg Schiemer<br />

School of Sound and Music Design, University of Technology, Australia<br />

• A haptic system for simulating the force-feedback<br />

of a centuries-old carillon.<br />

• Our prototype device will allow performers<br />

to rehearse in private.<br />

• A single linear actuator for each key<br />

provides force-feedback in an admittance<br />

control scheme.<br />

• The control loop is closed by sensing force<br />

based on current being supplied to the<br />

actuator, therefore requiring no additional<br />

force sensors.<br />

• The current is filtered using Kalman<br />

estimation for precise and quick force-feedback.<br />

The keyboard of the National<br />

Carillon, Canberra, Australia.
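The sensorless force-sensing idea above — filtering the actuator current to estimate force — can be sketched with a one-dimensional Kalman filter. The random-walk force model and the torque-constant mapping i = F / k_t are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def kalman_force(currents, k_t, q=1e-4, r=0.05):
    """1-D Kalman filter estimating force from noisy motor current.

    Assumes a random-walk force model and i = F / k_t + noise,
    i.e. the actuator constant k_t maps current to force.
    """
    F, P = 0.0, 1.0
    estimates = []
    for i in currents:
        P += q                         # predict: random-walk process noise
        z = k_t * i                    # current converted to a force reading
        K = P / (P + r)                # Kalman gain
        F += K * (z - F)               # measurement update
        P *= (1.0 - K)
        estimates.append(F)
    return estimates

rng = np.random.default_rng(2)
k_t = 2.0                              # N per A (illustrative value)
true_force = 1.5                       # N, held constant by the key
currents = true_force / k_t + rng.normal(0.0, 0.1, 300)
est = kalman_force(currents, k_t)
# The estimate settles near 1.5 N with far less jitter than raw k_t * i.
```

The filtered estimate is what closes the admittance control loop without a dedicated force sensor.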


<strong>Session</strong> WedCT3 <strong>Pegaso</strong> B <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 11:00–12:30<br />

Sensors, Sensor Networks and Networked Robots<br />

Chair<br />

Co-Chair<br />

11:00–11:15 WedCT3.1<br />

Semi-Autonomous Visual Inspection of Vessels<br />

Assisted by an Unmanned Micro Aerial Vehicle<br />

Francisco Bonnin-Pascual, Emilio Garcia-Fidalgo<br />

and Alberto Ortiz<br />

Department of Mathematics and Computer Science,<br />

University of Balearic Islands, Spain<br />

• Semi-autonomous approach to<br />

the vessel inspection problem<br />

making use of an autonomous<br />

Micro Aerial Vehicle<br />

• The vehicle provides the<br />

surveyors with images of the<br />

areas of the hull to be inspected<br />

• Supplied images can be<br />

processed by corrosion and<br />

crack detection algorithms<br />

based on texture, colour and<br />

morphology<br />

• Experimental results are<br />

provided, which show that the<br />

approach fulfils the application<br />

requirements<br />

11:30–11:45 WedCT3.3<br />

Prioritized Multi-Task Motion Control of<br />

Redundant Robots under Hard Joint Constraints<br />

Fabrizio Flacco Alessandro De Luca<br />

DIAG, Università di Roma “La Sapienza”, Italy<br />

Oussama Khatib<br />

Artificial Intelligence Laboratory, Stanford University, USA<br />

• Extension to multiple prioritized tasks of<br />

our recent SNS (Saturation in the Null<br />

Space) algorithm for acceleration-level<br />

control of redundant robots<br />

• Hard bounds on joint range, velocity, and<br />

acceleration/torque are always satisfied<br />

• A multi-task least scaling strategy is<br />

integrated in the SNS, when some of the<br />

original tasks turn out to be infeasible<br />

• Efficient preemptive approach: a task of<br />

higher priority uses as much of the robot’s<br />

capabilities as it needs; lower-priority tasks<br />

exploit the residual capabilities, without<br />

interfering with higher priority tasks<br />

A KUKA LWR robot cycles through<br />

Cartesian points, with self-motion<br />

damping as the secondary task and<br />

while satisfying all joint constraints<br />

12:00–12:15 WedCT3.5<br />

Entropy-aware Cluster-based Object Tracking<br />

for Camera Wireless Sensor Networks<br />

Alberto De San Bernabé, Jose Ramiro Martinez-de Dios and<br />

Anibal Ollero<br />

Robotics, Vision and Control Group, University of Seville, Spain<br />

• Entropy-based mechanisms for energy<br />

efficiency and robustness to transmission<br />

errors in object tracking systems are<br />

presented.<br />

• Activation/deactivation of camera-nodes<br />

using an active sensing method based on<br />

cost-gain analyses to reduce energy<br />

consumption.<br />

• Method that dynamically selects the<br />

cluster head using entropies and<br />

transmission error rates.<br />

• The proposed methods have been tested<br />

in experiments carried out in the CONET<br />

Robot-WSN Testbed (http://conet.us.es)<br />

Picture of one experiment in the<br />

CONET Robot-WSN Testbed<br />

11:15–11:30 WedCT3.2<br />

Web Mining Driven Object Locality Knowledge<br />

Acquisition for Efficient Robot Behavior<br />

Kai Zhou, Michael Zillich and Markus Vincze<br />

Automation and Control Institute (ACIN), Vienna University of Technology, Austria<br />

Hendrik Zender<br />

Language Technology Lab, German Research Center for Artificial Intelligence (DFKI), Germany<br />

• Probabilistic conceptual knowledge that represents the relations between an object and its situated environments is obtained online.<br />

• More accurate quantification is achieved by fusing search-engine query data and a professional robotics database.<br />

• Diverse localities, including various supporting surfaces and room categories, have been investigated to find the dominant location of an object.<br />

• A multiple-object search task has been performed using the discovered probabilistic knowledge.<br />

• Extensive experimental results (200+ objects/furniture, 3 surfaces, 7 rooms) validate the value of discovering object-locality knowledge online.<br />

2012 IEEE/RSJ International Conference on Intelligent Robots and Systems<br />

–141–<br />

Example scenario and object<br />

search task<br />
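The fused quantification can be pictured as normalizing co-occurrence counts from the two sources and mixing them into P(location | object); the 50/50 weighting below is a placeholder, not the paper's tuned value:

```python
def fused_location_distribution(web_counts, db_counts, w_web=0.5):
    """Fuse search-engine co-occurrence counts with counts from a
    robotics database into one P(location | object). Each source is
    normalized separately, then mixed with weight w_web (assumed)."""
    locs = set(web_counts) | set(db_counts)
    def norm(counts):
        total = sum(counts.values()) or 1
        return {l: counts.get(l, 0) / total for l in locs}
    p_web, p_db = norm(web_counts), norm(db_counts)
    return {l: w_web * p_web[l] + (1 - w_web) * p_db[l] for l in locs}
```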

11:45–12:00 WedCT3.4<br />

Optical-Inertial Tracking with Active Markers and<br />

Changing Visibility<br />

Florian Steidle and Andreas Tobergte<br />

and Gerd Hirzinger<br />

Robotics and Mechatronics Center,<br />

German Aerospace Center, Germany<br />

• Extended Kalman Filter to fuse low latency measurements of inertial<br />

measurement unit with 2D marker measurements<br />

• Markers, identified by individual activation, are subsequently tracked locally in the image plane<br />

• Robust with respect<br />

to temporary marker<br />

occlusions<br />

• Real time<br />

implementation and<br />

verification with<br />

experiments<br />
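A toy 1-D version of such an optical-inertial filter: IMU acceleration drives a low-latency prediction, and a marker position measurement corrects it whenever the marker is visible. The paper's filter is a full 6-DOF EKF; this sketch only shows the predict/update pattern:

```python
import numpy as np

def kf_step(x, P, accel, z, dt, q=1e-3, r=1e-4):
    """One predict/update cycle of a 1-D constant-acceleration filter.
    x = [position, velocity]; accel is the IMU input; z is a marker
    position measurement or None when the marker is occluded."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    x = F @ x + B * accel                        # IMU-driven prediction
    P = F @ P @ F.T + q * np.eye(2)
    if z is not None:                            # skip update on occlusion
        H = np.array([[1.0, 0.0]])
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                        # Kalman gain
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P
```

Occlusion robustness falls out naturally: with `z=None` the state is propagated by the IMU alone until the marker reappears.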

12:15–12:30 WedCT3.6<br />

Intelligent Sensor-Scheduling for<br />

Multi-Kinect-Tracking<br />

Florian Faion, Simon Friedberger, Antonio Zea,<br />

and Uwe D. Hanebeck<br />

Institute for Anthropomatics, Karlsruhe Institute of Technology (KIT)<br />

• Scenario: target tracking with a Multi-<br />

Kinect-sensor-network<br />

• Challenge: high bandwidth, computational<br />

cost, interference<br />

• Idea: measuring the target exclusively with<br />

best available sensor<br />

• Contribution: uncertainty minimizing<br />

scheduling algorithm, stochastic Kinect<br />

sensor model, Kinect IR-projector<br />

modification
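A toy stand-in for an uncertainty-minimizing scheduler: greedily pick the sensor whose (linear) measurement most reduces the trace of the posterior covariance. The paper's stochastic Kinect sensor model is not reproduced here:

```python
import numpy as np

def schedule_sensor(P_prior, sensor_models):
    """sensor_models maps sensor id -> (H, R): measurement matrix and
    noise covariance. Returns the id of the sensor whose measurement
    yields the smallest posterior covariance trace."""
    def posterior_trace(H, R):
        S = H @ P_prior @ H.T + R
        K = P_prior @ H.T @ np.linalg.inv(S)
        return np.trace((np.eye(P_prior.shape[0]) - K @ H) @ P_prior)
    return min(sensor_models, key=lambda s: posterior_trace(*sensor_models[s]))
```

Measuring with only the best sensor, rather than all of them, is what cuts bandwidth and interference in the scenario above.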


Session WedCT4 Fenix 3 Wednesday, October 10, 2012, 11:00–12:30<br />

Humanoid Robots IV<br />

Chair Kyungshik Roh, Samsung Electronics Co., Ltd<br />

Co-Chair Christian Ott, German Aerospace Center (DLR)<br />

11:00–11:15 WedCT4.1<br />

Development of the Lower Limbs<br />

for a Humanoid Robot<br />

Joohyung Kim, Younbaek Lee, Sunggu Kwon, Keehong Seo,<br />

Hoseong Kwak, Heekuk Lee, and Kyungshik Roh<br />

Samsung Advanced Institute of Technology, South Korea<br />

• We present an overview of the development of a novel biped walking machine for the humanoid robot Roboray.<br />

• Torque sensors are integrated at each<br />

joint of 6-DOF legs.<br />

• A new tendon-type joint module, which is highly back-drivable, is used in the pitch-joint drive module.<br />

• Control system is decentralized using<br />

small controller boards named Smart<br />

Driver.<br />

Roboray and its lower limbs<br />

without cover.<br />

11:30–11:45 WedCT4.3<br />

Optimal Gait Primitives for Dynamic<br />

Bipedal Locomotion<br />

Bokman Lim, Jusuk Lee, Joohyung Kim, Minhyung Lee,<br />

Hoseong Kwak, Sunggu Kwon, Heekuk Lee,<br />

Woong Kwon, and Kyungshik Roh<br />

Samsung Advanced Institute of Technology, Korea<br />

• A movement generation framework for<br />

dynamic bipedal locomotion is proposed.<br />

• A set of parametric gait primitives is first<br />

constructed using the dynamics-based<br />

movement optimization algorithm.<br />

• Dynamic walking along an arbitrary path is then generated online by sequentially composing primitive motions.<br />

• Proposed method is applied to a torque<br />

controlled robot platform, Roboray.<br />

• Results show that the dynamic gaits are humanlike and efficient compared with conventional knee-bent walkers.<br />

Roboray<br />

12:00–12:15 WedCT4.5<br />

Robust Descriptors for 3D Point Clouds using<br />

Geometric and Photometric Local Feature<br />

Hyoseok Hwang, Seungyong Hyung, Sukjune Yoon and<br />

Kyungshik Roh<br />

Samsung Advanced Institute of Technology<br />

Samsung Electronics<br />

Republic of Korea<br />

• We propose robust descriptors called GPLF (Geometric and Photometric Local Feature) for object recognition and pose estimation<br />

• The proposed descriptors simultaneously use geometric and photometric features of point clouds from an RGB-D camera<br />

• GPLF has robust discriminative ability<br />

regardless of characteristics such as<br />

shapes or appearances of objects.<br />

• Experimental results show that the recognition accuracy of the proposed descriptors is higher than that of approaches using a single feature<br />

The proposed descriptors of<br />

3D point clouds<br />

11:15–11:30 WedCT4.2<br />

On-board Odometry Estimation<br />

for 3D Vision-based SLAM of Humanoid Robot<br />

SungHwan Ahn, Sukjune Yoon, Seungyong Hyung,<br />

Nosan Kwak and Kyung Shik Roh<br />

Samsung Advanced Institute of Technology (SAIT),<br />

Samsung Electronics, Korea<br />

• A vision-based 3D motion estimation method for dynamic walking robots, which involve large swaying motions and uncertainty in camera movement.<br />

• On-board odometry filter fuses kinematic<br />

odometry, visual odometry, and raw IMU<br />

data.<br />

• Vision-based SLAM utilizes the fused<br />

odometry, and it improves the SLAM<br />

estimates by compensating motion errors.<br />

• Experiments verify the method on the biped humanoid robot Roboray, designed by Samsung.<br />

11:45–12:00 WedCT4.4<br />

Towards Natural Bipedal Walking: Virtual Gravity<br />

Compensation and Capture Point Control<br />

Keehong Seo, Joohyung Kim, Kyungshik Roh<br />

Samsung Advanced Institute of Technology, South Korea<br />

• A pose controller is proposed that<br />

compensates Cartesian pose errors for a<br />

bipedal robot.<br />

• Virtual gravity compensation (VGC) is<br />

developed to convert 6-dof Cartesian force<br />

to joint torques in the legs.<br />

• A walking algorithm using VGC and<br />

capture point control produces natural<br />

and robust walking gait.<br />

• Tested on Roboray, a humanoid from Samsung Electronics: balancing on slopes, walking over bumps, and recovering from pushes while walking.<br />

• Resulting gait shows clear heel-landing<br />

and toe-off motions as humans do.<br />
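The capture point used in such controllers is the standard linear-inverted-pendulum quantity; a minimal evaluation of x_cp = x + x_dot * sqrt(z/g) (not the authors' implementation):

```python
import math

def capture_point(x, xdot, z_com, g=9.81):
    """Instantaneous capture point of a linear inverted pendulum:
    where the robot must step to come to rest, given CoM position x,
    velocity xdot, and CoM height z_com."""
    omega = math.sqrt(g / z_com)   # natural frequency of the pendulum
    return x + xdot / omega
```

A stationary CoM captures in place; forward velocity pushes the required step location forward.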



12:15–12:30 WedCT4.6<br />

Active Stabilization of a Humanoid Robot for<br />

Impact Motions with Unknown Reaction Forces<br />

Seung-Joon Yi and Daniel D. Lee<br />

GRASP Lab, University of Pennsylvania, USA<br />

Byoung-Tak Zhang<br />

BI Lab, Seoul National University, Korea<br />

Dennis Hong<br />

RoMeLa Lab, Virginia Tech, USA<br />

• Humans utilize whole-body impact motions to generate large forces<br />

• Uncertainty in the ensuing reaction forces can lead the robot to instability<br />

• A hierarchical push-recovery controller is used along with a simple robot model to reactively stabilize the robot against unknown reaction forces<br />

• Implemented and tested on the<br />

DARwIn-OP small humanoid robot


Session WedCT5 Gemini 2 Wednesday, October 10, 2012, 11:00–12:30<br />

Force Control<br />

Chair<br />

Co-Chair<br />

11:00–11:15 WedCT5.1<br />

“Open Sesame!” - Adaptive Force/Velocity<br />

Control for Opening Unknown Doors<br />

Yiannis Karayiannidis, Christian Smith, Francisco E. Viña,<br />

Petter Ögren, and Danica Kragic<br />

Centre for Autonomous Systems, KTH, Sweden<br />

• This paper proposes a method that can<br />

open doors in real time without prior<br />

knowledge of the door kinematics.<br />

• The method uses force measurements<br />

and estimates of the radial direction based<br />

on adaptive estimates of the position of<br />

the door hinge.<br />

• We supply theoretical proof for stability<br />

and convergence, along with simulation<br />

and experimental evaluation of the<br />

method.<br />
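The hinge estimate can be illustrated offline with a least-squares circle fit (Kasa method) of recorded end-effector positions; the paper instead updates its estimate adaptively online from force measurements, so this batch fit is only a sketch:

```python
import numpy as np

def estimate_hinge(points):
    """Fit a circle to planar end-effector positions recorded while
    the door moves: solves x^2 + y^2 = 2*cx*x + 2*cy*y + c in a
    least-squares sense, then recovers center and radius."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx**2 + cy**2)
    return np.array([cx, cy]), radius
```

Given the hinge center, the radial direction at any grip point follows directly, which is the quantity the adaptive controller needs.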

11:30–11:45 WedCT5.3<br />

A New Hybrid Actuator Approach for Force-<br />

Feedback Devices<br />

Carlos Rossa, José Lozada, and Alain Micaelli<br />

CEA, France<br />

• The contribution proposes a novel hybrid<br />

actuator approach for haptic devices,<br />

which is based on a MR brake, a DC<br />

motor and a freewheel mechanism.<br />

• Since the brake can exert a resistive<br />

force only in a defined direction, the<br />

system enables the brake and the motor<br />

to be engaged at the same time.<br />

• The system is able to combine a<br />

powerful brake with a small DC motor to<br />

provide stability and high force density.<br />

• The control laws allow this actuation<br />

approach to be adaptable to many<br />

different haptic applications.<br />

Hybrid haptic interface based on a unidirectional MR brake: the handle is linked to a DC motor, and its axis is linked to an MR brake through a freewheel mechanism.<br />

12:00–12:15 WedCT5.5<br />

On the role of load motion compensation in<br />

high-performance force control<br />

Thiago Boaventura, Michele Focchi, Marco Frigerio, Jonas Buchli,<br />

Claudio Semini, Gustavo A. Medrano-Cerda, Darwin G. Caldwell<br />

Department of Advanced Robotics, Istituto Italiano di Tecnologia (IIT), Italy<br />

• A high-performance torque source allows the use of model-based control techniques and also control of the robot/environment interaction<br />

• An intrinsic load motion feedback is<br />

present in the force dynamics<br />

• This feedback is independent of the<br />

actuation technology<br />

• Compensating this load motion feedback<br />

improves the force tracking performance<br />

• Both electric and hydraulic actuators of<br />

the HyQ robot demonstrate the<br />

effectiveness of this approach<br />

HyQ – A fully torque-controlled<br />

hydraulic quadruped robot<br />

11:15–11:30 WedCT5.2<br />

Control of Contact Forces: the Role of<br />

Tactile Feedback for Contact Localization<br />

Andrea Del Prete, Francesco Nori,<br />

Giorgio Metta and Lorenzo Natale<br />

Robotics Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Italy<br />

• This paper investigates the role of<br />

precise contact point estimation in<br />

force control<br />

• We derive the analytical expression for the error in contact force induced by a hypothetical error in the contact-point estimate<br />

• We see how errors in contact<br />

localization affect the performance<br />

of parallel force/position control on<br />

iCub<br />

• We use no model of the robot or environment; instead we exploit tactile sensors and force/torque sensors<br />


The iCub humanoid robot making contact<br />

with an external object using parallel<br />

force/position control<br />
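The underlying geometric relation is that an error dp in the estimated contact point induces a spurious moment dtau = dp x f at the contact; a one-line check of that cross-product relation (the paper's full analytical force-error expression is not reproduced here):

```python
import numpy as np

def moment_error(f, dp):
    """Spurious moment induced by an error dp in the estimated
    contact point under contact force f: delta_tau = dp x f."""
    return np.cross(dp, f)
```

For example, a 1 cm contact-point error under a 10 N force already produces a 0.1 Nm moment error, which is why precise tactile localization matters for force control.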

11:45–12:00 WedCT5.4<br />

A locally adaptive on-line grasp control strategy<br />

using array sensor force feedback<br />

Michael Stachowsky, Medhat Moussa and Hussein Abdullah<br />

School of Engineering, University of Guelph, Canada<br />

• Predicting grip force is critical for grasping<br />

unfamiliar objects<br />

• A novel strategy for grasping free-form<br />

objects without prior object knowledge is<br />

presented<br />

• It is capable of adapting the desired grip<br />

force on-line using biologically inspired<br />

algorithms<br />

• Experimental results show that adaptation can take place within 50 ms and results in a stable grasp<br />

The Titan prototype hand, used in<br />

the experiments<br />

12:15–12:30 WedCT5.6<br />

A Set-Point-Generator for Indirect-Force-Controlled<br />

Manipulators Operating Unknown Constrained<br />

Mechanisms<br />

Ewald Lutscher and Gordon Cheng<br />

Institute for Cognitive Systems, Technische Universität München, Germany<br />

• Joint space set-point selection based<br />

on local estimation of the<br />

constrained trajectory<br />

• Consideration of applied interaction<br />

forces<br />

• No model or external sensors<br />

required<br />

Experimental setup<br />

• Manipulating various mechanisms without parameter tuning


Session WedCT7 Vega Wednesday, October 10, 2012, 11:00–12:30<br />

Robot Interaction with the Environment and Humans<br />

Chair Li-Chen Fu, National Taiwan Univ.<br />

Co-Chair Jan Peters, Tech. Univ. Darmstadt<br />

11:00–11:15 WedCT7.1<br />

A Brain-Robot Interface for Studying Motor<br />

Learning after Stroke<br />

Timm Meyer 1 , Jan Peters 1,2 , Doris Brötz 3 ,<br />

Thorsten O. Zander 1 , Bernhard Schölkopf 1 ,<br />

Surjo R. Soekadar 3 , Moritz Grosse-Wentrup 1<br />

1 Max Planck Institute for Intelligent Systems, Germany<br />

2 Intelligent Autonomous Systems Group,<br />

Technische Universität Darmstadt, Germany<br />

3 Institute of Medical Psychology and Behavioural Neurobiology,<br />

University of Tübingen, Germany<br />

• System:<br />

Combining robotics and EEG to study<br />

neural correlates of motor learning after<br />

stroke<br />

• Pilot study:<br />

Virtual 3D reaching movements with<br />

stroke patients<br />

• Results:<br />

Pre-trial bandpower in contralesional<br />

sensorimotor areas may be a neural<br />

correlate of motor learning.<br />

Subject wearing an EEG-cap while<br />

being attached to the robot arm<br />

11:30–11:45 WedCT7.3<br />

Haptic Classification and Recognition of Objects<br />

Using a Tactile Sensing Forearm<br />

Tapomayukh Bhattacharjee, James M. Rehg, and<br />

Charles C. Kemp<br />

Center for Robotics and Intelligent Machines,<br />

Georgia Institute of Technology, USA<br />

• Method:<br />

- PCA on concatenated time series<br />

- k-NN on top components<br />

• Leave-one-out cross-validation accuracy<br />

- Fixed vs. Movable: 91%<br />

- 4 categories, (Fixed, Movable) × (Soft, Rigid): 80%<br />

- Recognize which of 18 objects: 72%<br />

• Limitations<br />

- Stereotyped motion of the arm<br />

- Single contact region<br />
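The stated pipeline, PCA on concatenated time series followed by k-NN on the top components, in miniature (parameter values are placeholders, not the paper's):

```python
import numpy as np

def pca_knn_classify(train_X, train_y, test_x, n_components=5, k=3):
    """train_X: (n_samples, n_features) concatenated tactile time
    series; train_y: integer labels. Projects onto the top principal
    components (via SVD) and votes among the k nearest neighbors."""
    mean = train_X.mean(axis=0)
    _, _, Vt = np.linalg.svd(train_X - mean, full_matrices=False)
    W = Vt[:n_components].T                 # projection to top components
    Z = (train_X - mean) @ W
    z = (test_x - mean) @ W
    nearest = np.argsort(np.linalg.norm(Z - z, axis=1))[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]        # majority vote
```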

12:00–12:15 WedCT7.5<br />

Using a Minimal Action Grammar for<br />

Activity Understanding in the Real World<br />

Douglas Summers-Stay, Ching L. Teo, Yezhou Yang, Cornelia<br />

Fermuller and Yiannis Aloimonos<br />

Computer Science, University of Maryland College Park, USA<br />

• We have built a system that automatically constructs an activity tree structure from observations of an actor performing complex manipulation activities<br />

• We created a dataset of these activities using Kinect RGB-D and SR4000 time-of-flight cameras.<br />

• The grammatical structure used to<br />

understand these actions may provide<br />

insight into a connection between<br />

action and language understanding<br />

• Activities recognized include<br />

assembling a machine, making a<br />

sandwich, creating a valentine card,<br />

etc…<br />

By noting key moments when<br />

objects come together, we build a<br />

tree for activity recognition<br />

11:15–11:30 WedCT7.2<br />

A brain-machine interface to navigate mobile<br />

robots along human-like paths amidst obstacles<br />

Abdullah Akce, James Norton<br />

University of Illinois at Urbana-Champaign, USA<br />

Timothy Bretl<br />

University of Illinois at Urbana-Champaign, USA<br />

• We present an interface that<br />

allows a human user to specify a<br />

desired path with noisy binary<br />

inputs obtained from EEG<br />

• Desired paths are assumed to be<br />

geodesics under a cost function,<br />

which is recovered from existing<br />

data using structured learning<br />

• An ordering between all (local)<br />

geodesics is defined so that users<br />

can specify paths optimally<br />

• Results from human trials<br />

demonstrate the efficacy of this<br />

approach when applied to a<br />

simulated robotic navigation task<br />


The interface provides feedback by displaying an estimate of the desired path. The user gives left/right inputs based on the “clockwise” ordering of the desired path relative to the estimated path.<br />

11:45–12:00 WedCT7.4<br />

Proactive premature intention estimation for<br />

intuitive human-robot collaboration<br />

Muhammad Awais and Dominik Henrich<br />

Chair for Applied Computer Science III,<br />

University of Bayreuth, Germany<br />

• Proactive premature intention estimation by determining the earliest possible trigger state in a finite state machine representing the human intention<br />

• Selecting the most probable intention prematurely when more than one human intention is ambiguous<br />

• Selection of the trigger state is based on the common state-transition sequence<br />

• Premature intention recognition uses the weights of the transition conditions<br />

Proactive premature intention recognition. Top: earliest possible trigger state selection for proactive intention recognition. Bottom: premature intention recognition.<br />

12:15–12:30 WedCT7.6<br />

On-Line Human Action Recognition by Combining<br />

Joint Tracking and Key Pose Recognition<br />

E-Jui Weng and Li-Chen Fu<br />


• We propose a boosting approach that combines pose estimation and upper-body tracking to recognize human actions.<br />

• Our method can recognize human poses<br />

and actions at the same time.<br />

• Apply the action recognition results as a<br />

feedback to the pose estimation process<br />

to increase its efficiency and accuracy.<br />

• We present an on-line spotting scheme based on the gradients of the hidden Markov model probabilities.<br />

System overview


Session WedCT9 Fenix 1 Wednesday, October 10, 2012, 11:00–12:30<br />

Sensing in Medical Robotics<br />

Chair M. Cenk Cavusoglu, Case Western Res. Univ.<br />

Co-Chair<br />

11:00–11:15 WedCT9.1<br />

Scanning the surface of soft tissues with a<br />

micrometer precision thanks to endomicroscopy<br />

based visual servoing<br />

Benoît Rosa, Mustapha Suphi Erden, Jérôme Szewczyk, and<br />

Guillaume Morel<br />

ISIR, Université Pierre et Marie Curie, Paris, France<br />

Tom Vercauteren<br />

Mauna Kea Technologies, Paris, France<br />

• Probe-based confocal endomicroscopy is a<br />

promising imaging modality for performing<br />

optical biopsies<br />

• Problem: tissue deformation while scanning to obtain wide-field-of-view mosaics<br />

• Solution proposed: visual servo control using<br />

the confocal images as a measurement of<br />

probe/tissue displacement<br />

• Ex vivo validation on different tissues and<br />

trajectories using a precision robot<br />

• Further work: in vivo trial with a dedicated<br />

laparoscopic instrument<br />

Mosaics from raster scans on liver tissue. Top: without visual servo control (the light line is the robot trajectory). Bottom: with visual servo control.<br />

11:30–11:45 WedCT9.3<br />

Internal Bleeding Detection Algorithm Based on<br />

Determination of Organ Boundary by Low-<br />

Brightness Set Analysis<br />

Keiichiro Ito, Shigeki Sugano, Fellow IEEE<br />

Creative Science and Engineering, Waseda University, Japan<br />

Hiroyasu Iwata, Member IEEE<br />

Waseda Institute for Advanced Study, Waseda University, Japan<br />

• This paper proposes an organ<br />

boundary determination method for<br />

detecting internal bleeding.<br />

• We developed a method for extracting low-brightness areas and an algorithm for determining organ boundaries by low-brightness set analysis, and we detect internal bleeding by combining the two.<br />

• Experimental results based on clinical US images of internal bleeding between the liver and kidney showed that the proposed algorithms had a sensitivity of 77.8% and a specificity of 95.7%.<br />

Internal bleeding detection algorithm: the gap between the liver and kidney indicates internal bleeding<br />

12:00–12:15 WedCT9.5<br />

Heart motion measurement with three dimensional<br />

sonomicrometry and acceleration sensing<br />

Tetsuya Horiuchi and Ken Masamune<br />

Graduate School of Information Science and Technology, University of Tokyo,<br />

Japan<br />

Eser Erdem Tuna and Murat Cenk Çavuşoğlu<br />

Department of Electrical Engineering and Computer Science, Case Western<br />

Reserve University, USA<br />

• Point-of-interest motion measurement for coronary artery bypass graft surgery.<br />

• Estimation by a particle filter using position sensors and an acceleration sensor whose inclination is uncertain.<br />

• A new estimation method, the “Differential Probability Method”, enhances the particle filter.<br />

• Reduces RMS error by 27.2% compared with the conventional method.<br />

Overview of the system<br />

11:15–11:30 WedCT9.2<br />

Preliminary Evaluation of a Micro-Force Sensing<br />

Handheld Robot for Vitreoretinal Surgery<br />

Berk Gonenc, Marcin A. Balicki<br />

Russell H. Taylor and Iulian Iordachita<br />

ERC for Computer Integrated Surgery, Johns Hopkins University, USA<br />

James Handa and Peter Gehlbach<br />

Wilmer Eye Institute, The Johns Hopkins School of Medicine, USA<br />

Cameron N. Riviere<br />

Robotics Institute, Carnegie Mellon University, USA<br />

• A 2-DOF force sensing hook is<br />

integrated with a handheld robot,<br />

Micron, for superior performance<br />

in membrane peeling operations.<br />

• The FBG-based force-sensing instrument can directly inform the surgeon of the extremely delicate peeling forces.<br />

• Preliminary tests were done on a bandage phantom and the inner shell membrane of raw chicken eggs.<br />

• The peeling forces were kept<br />

below 7 mN with a significant<br />

reduction in 2-20 Hz oscillations.<br />

11:45–12:00 WedCT9.4<br />

A Cyber-Physical System for Strain<br />

Measurements in the Cerebral Aneurysm Models<br />

Chaoyang Shi, Masahiro Kojima, Carlos Tercero, Seiichi Ikeda,<br />

Toshio Fukuda, Fumihito Arai<br />

Micro-Nano Systems Engineering, Nagoya University, Japan<br />

Makoto Negoro and Keiko Irie<br />

Department of Neurosurgery, Fujita Health University, Japan<br />

• Build a novel in-vitro experimental platform for dynamic deformation measurements on the aneurysm<br />

• Justify a link between robotic technologies and this cyber-physical system for aneurysm diagnosis and prognosis<br />

• Realize high-resolution analysis by observing an enlarged silicone-membrane aneurysm model under the microscope<br />

• Combine computational fluid dynamics (CFD) simulation with experiments for validation<br />


Experimental setup, including the blood-flow simulation pump, the cerebral aneurysm model, and the vision system<br />

12:15–12:30 WedCT9.6<br />

Surface Texture and Pseudo Tactile Sensation<br />

Displayed by a MEMS-Based Tactile Display<br />

Junpei Watanabe, Hiroaki Ishikawa, and Arouette Xavier<br />

Department of Mechanical Engineering, Keio University, Japan<br />

Norihisa Miki<br />

Department of Mechanical Engineering, Keio University, Japan<br />

and<br />

JST PRESTO, Japan<br />

• We demonstrate display of artificial tactile<br />

feeling using large displacement MEMS<br />

actuator arrays.<br />

• We investigated the artificial tactile feeling<br />

projected onto the fingertip in contact with<br />

the display.<br />

• The actuator arrays could successfully<br />

display “rough” and “smooth” tactile feeling<br />

distinctly.<br />

• We experimentally deduced the conditions<br />

when the pseudo tactile sensation was<br />

generated.<br />

Schematic view of a tactile display


Session WedCT10 Lince Wednesday, October 10, 2012, 11:00–12:30<br />

Visual Learning II<br />

Chair Edwin Olson, Univ. of Michigan<br />

Co-Chair<br />

11:00–11:15 WedCT10.1<br />

Clustering-based Discriminative Locality<br />

Alignment for Face Gender Recognition<br />

Duo Chen<br />

College of Communication Engineering, Chongqing University, China<br />

Jun Cheng<br />

Shenzhen Institutes of Advanced Technology, CAS, China<br />

The Chinese University of Hong Kong<br />

Dacheng Tao<br />

Faculty of Engineering and Information Technology, University of Technology<br />

Sydney, Australia<br />

• To facilitate human-robot interactions,<br />

human gender information is very<br />

important.<br />

• It is essential to develop a simple and fast way, based on dimensionality reduction, to recognize gender.<br />

• Both the global geometry and the local geometry of the data are essential for estimating the lower-dimensional projection.<br />

• CDLA exploits global geometry, local<br />

geometry and discriminative information.<br />


CDLA pulls the points connected in the k1-nearest-neighbor graph closer together. By k-means clustering (taking the global geometry into account), it avoids connecting faraway points.<br />

11:30–11:45 WedCT10.3<br />

A System of Automated Training Sample<br />

Generation for Visual-based Car Detection<br />

Chao Wang, Huijing Zhao and Hongbin Zha<br />

Key Lab of Machine Perception (MOE), Peking Univ., China<br />

Franck Davoine<br />

CNRS and LIAMA Sino French Laboratory, Beijing, China<br />

• This paper presents a system that automatically generates a car-image sample dataset.<br />

• The dataset contains multi-view car image<br />

samples with car’s pose information.<br />

• A system for detecting and tracking on-road vehicles using multiple single-layer lasers is developed.<br />

• Multi-view car samples are generated based on the tracking results and multi-view camera data.<br />

Car samples divided into 8 subcategories<br />

12:00–12:15 WedCT10.5<br />

On-line semantic perception using uncertainty<br />

Roderick de Nijs, Juan Sebastian Ramos Pachón, Kolja Kühnlenz<br />

Institute of Automatic Control Engineering, Technische Universität München,<br />

Germany<br />

Gemma Roig, Xavier Boix, Luc van Gool<br />

Computer Vision Laboratory, ETH Zurich, Switzerland<br />

Can a semantic labeling algorithm<br />

benefit from uncertainty?<br />

• Buffer of images for on-line semantic<br />

segmentation<br />

• Perturb-and-MAP random fields to<br />

compute uncertainty<br />

• Spend more computation time on<br />

uncertain regions<br />

Above: urban scene. Below: class uncertainty<br />

11:15–11:30 WedCT10.2<br />


Incorporating Geometric Information into Gaussian<br />

Process Terrain Models from Monocular Images<br />

Tariq Abuhashim and Salah Sukkarieh<br />

Australian Centre for Field Robotics<br />

The University of Sydney<br />

NSW 2006, Australia<br />

• This paper presents a novel approach to<br />

incorporate differential geometry into<br />

depth estimation from monocular images<br />

that is based on the Gaussian Process<br />

Derivative Observations (GDP)<br />

formulation.<br />

• Experimental results are presented using<br />

synthesized examples and real monocular<br />

images captured from an Unmanned<br />

Aerial Vehicle (UAV).<br />

• Results show improvement in depth<br />

estimation over standard Gaussian<br />

Process Regression (GPR).<br />


Ground and aerial robotics used<br />

to reconstruct 3D maps<br />

11:45–12:00 WedCT10.4<br />

Learning and Recognition of Objects Inspired by<br />

Early Cognition<br />

Maja Rudinac and Pieter Jonker<br />

Biorobotics Lab, Delft University of Technology, The Netherlands<br />

Gert Kootstra and Danica Kragic<br />

Computer Vision and Active Perception lab, KTH Royal Institute of Technology,<br />

Sweden<br />

• We present a unifying approach for learning<br />

and recognition of objects in unstructured<br />

environments through exploration. We<br />

establish 4 principles for object learning.<br />

• First, early object detection is based on an<br />

attention mechanism detecting salient parts in<br />

the scene.<br />

• Second, motion of the object allows more<br />

accurate object localization,<br />

• Next, acquiring multiple observations of the<br />

object through manipulation allows a more<br />

robust representation of the object.<br />

• And last, object recognition benefits from a<br />

multi-modal representation.<br />

• This approach shows significant improvement<br />

of the system when multiple observations are<br />

acquired from active object manipulation.<br />

Cognitive model for object<br />

learning and recognition<br />

12:15–12:30 WedCT10.6<br />

A High-Accuracy Visual Marker<br />

Based on a Microlens Array<br />

Hideyuki Tanaka, Yasushi Sumi, and Yoshio Matsumoto<br />

Intelligent Systems Research Institute, AIST, Japan<br />

• ArrayMark: A novel AR marker utilizing a 2-D moiré pattern<br />

based on a microlens array<br />

• Accurate pose estimation (


Session WedCVT6 Gemini 3 Wednesday, October 10, 2012, 11:00–12:30<br />

SLAM II<br />

Chair Seth Hutchinson, Univ. of Illinois<br />

Co-Chair<br />

11:00–11:15 WedCVT6.1<br />

CurveSLAM: An approach for Vision-based<br />

Navigation without Point Features<br />

Dushyant Rao, Soon-Jo Chung and Seth Hutchinson<br />

University of Illinois at Urbana-Champaign, IL, USA<br />

• Many existing SLAM methods use feature<br />

points without exploiting structure.<br />

• We perform stereo vision-based SLAM<br />

using cubic Bézier curves to represent<br />

landmarks.<br />

• Curve parameters are extracted without<br />

any point-based stereo matching.<br />

• The proposed algorithm can perform<br />

SLAM using only path edges as curve<br />

structures.<br />
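The curve parameterization used above can be illustrated with a small sketch: a least-squares cubic Bézier fit to sampled edge points, assuming chord-length parameterization. This shows only the landmark representation (four control points per curve); it is not the paper's stereo extraction pipeline.

```python
import numpy as np

# Sketch: represent a landmark as a cubic Bezier curve fitted to 2-D
# points.  Chord-length parameterization is an assumption for
# illustration.

def bernstein(t):
    """Cubic Bernstein basis, shape (len(t), 4)."""
    t = np.asarray(t)[:, None]
    return np.hstack([(1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3])

def fit_bezier(points):
    """Least-squares cubic Bezier fit; returns 4 control points (4, 2)."""
    # Chord-length parameter for each sample point.
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = d / d[-1]
    ctrl, *_ = np.linalg.lstsq(bernstein(t), points, rcond=None)
    return ctrl
```

A whole curve landmark is thus summarized by four control points, which is what makes curve-based state vectors compact compared with per-point features.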

11:30–11:45 WedCVT6.3<br />

Realizing, Reversing, Recovering:<br />

Incremental Robust Loop Closing<br />

over time using the iRRR algorithm<br />

Yasir Latif, César Cadena and José Neira<br />

University of Zaragoza, Spain<br />

• We consider the problem of false positive<br />

loop closures that any place recognition<br />

system will eventually provide.<br />

• We propose an incremental algorithm to<br />

realize that the place recognition system<br />

has generated wrong constraints, remove<br />

them if necessary, and recompute the<br />

state estimation.<br />

• We demonstrate the performance of our<br />

algorithm in multiple real cases, in multi-session<br />

experiments, and compared<br />

against the state of the art in robust back-ends.<br />

12:00–12:15 WedCVT6.5<br />

Towards Persistent Indoor Localization,<br />

Mapping and Navigation using CAT-Graph<br />

Will Maddern, Michael Milford and Gordon Wyeth<br />

School of EE&CS, Queensland University of Technology, Australia<br />

• We present CAT-Graph, an approach to<br />

topo-metric appearance-based SLAM with<br />

constant computational and memory<br />

requirements in a fixed-size environment.<br />

• Loop closures are calculated using a<br />

particle filter constrained to edges on the<br />

topological graph for fixed computation<br />

time.<br />

• Nodes are pruned using a local<br />

information content metric based on visual<br />

saliency to limit total map size.<br />

• We present results on a 7 day indoor<br />

experiment demonstrating constant<br />

update rate and map size, high recall with<br />

zero false positives and reliable<br />

topological path planning within 20% of the<br />

optimal metric path, which improves over<br />

time.<br />

Graphical representation of<br />

continuous topology. Nodes<br />

represent visual observations and<br />

edges store local odometry<br />

11:15–11:30 WedCVT6.2<br />

Seamless Aiding of Inertial-SLAM using Visual<br />

Directional Constraints from a Monocular Vision<br />

Usman Qayyum and Jonghyuk Kim<br />

Research School of Engineering, Australian National University, Australia<br />

• The concept of visual directional<br />

constraint is proposed to resolve<br />

the scale ambiguity problem in<br />

monocular visual-inertial systems<br />

• Direct integration of visual<br />

directional vectors to the inertial<br />

system, which enables aiding at high<br />

rates<br />

• The 3D map is still used to constrain<br />

drift, but in a relaxed way.<br />


Fig. 1: Multiple loop aiding architecture.<br />

11:45–12:00 WedCVT6.4<br />

Location and Orientation Estimation with an<br />

Electrosense Robot<br />

Yonatan Silverman, Yang Bai<br />

Mechanical Engineering, Northwestern University, USA<br />

James Snyder and Malcolm A. MacIver<br />

Biomedical Engineering, Northwestern University, USA<br />

• Model uses voltage perturbations as a sensor modality from a<br />

generated electric field<br />

• To solve this RO-SLAM problem, orientation must be estimated from<br />

only orientation dependent range metrics<br />

• We determined the correct state of a robot using experimental data<br />

from an electrosensing robot.<br />

• The estimate of the state improved greatly when the robot rotated as<br />

well as translated.<br />

Electric Field without objects<br />

Electric Field with lateral wall<br />

Electric Field with front wall<br />

12:15–12:20 WedCVT6.6<br />

Pedestrian Detection in Industrial Environments:<br />

Seeing around corners.<br />

Paulo Borges, Ash Tews, Dave Haddon<br />

Autonomous Systems Lab- ICT Centre - CSIRO<br />

• We propose a safety system which integrates a vision-based offboard<br />

pedestrian tracking subsystem with an onboard localisation and navigation<br />

subsystem.<br />

• This combination enables warnings to be communicated and effectively<br />

extends the vehicle controller’s field of view to include areas that would<br />

otherwise be blind spots.<br />

• A simple flashing light interface in the vehicle cabin provides a clear and<br />

intuitive interface to alert drivers of potential collisions.<br />

• We implemented and tested the proposed solution on an automated<br />

industrial vehicle to verify the applicability both with human drivers and under<br />

autonomous operation.



12:20–12:25 WedCVT6.7<br />

ISRobotCar: The Autonomous<br />

Electric Vehicle Project<br />

Marco Silva, Fernando Moita, Urbano Nunes, Luís Garrote,<br />

Hugo Faria, and João Ruivo Paulo<br />

Institute of Systems and Robotics, University of Coimbra, Portugal<br />

• Autonomous vehicles can be decisive in<br />

order to reduce road accidents.<br />

• The accompanying video shows an overview of<br />

the ISRobotCar, an experimental<br />

autonomous electric vehicle moving<br />

autonomously in a parking-like area as<br />

well as its main hardware and control<br />

modules.<br />

• The ISRobotCar main purpose is to serve<br />

as a platform for experimental testing of<br />

algorithms required for autonomous<br />

navigation and cooperative navigation in<br />

urban environments.<br />

The ISRobotCar<br />

12:25–12:30 WedCVT6.8<br />

Autonomy for Mobility on Demand Systems<br />

Z.J.Chong, B.Qin, M.H.Ang Jr., D.Hsu,<br />

NUS, Singapore<br />

T.Bandyopadhyay, T.Wongpiromsarn, B.Rebsamen, S.Kim, P.Dai<br />

S.M.A.R.T, Singapore<br />

E. Frazzoli, D. Rus<br />

MIT, Cambridge<br />

• Mobility on Demand (MoD) systems are<br />

becoming important in current urban<br />

transportation systems<br />

• We present a minimalistic autonomous<br />

platform for our MoD system<br />

• We show the challenges of urban driving<br />

encountered by our system and approach<br />

taken to address the issues.<br />

• We demonstrate a complete service cycle<br />

utilizing the system on the NUS campus under<br />

real conditions<br />


A minimalistic autonomous<br />

platform is designed for our<br />

mobility on demand system


Session WedCVT8 Gemini 1 Wednesday, October 10, 2012, 11:00–12:30<br />

Soft Robots<br />

Chair Sungchul Kang, Korea Inst. of Science & Tech.<br />

Co-Chair Barbara Mazzolai, Istituto Italiano di Tecnologia<br />

11:00–11:15 WedCVT8.1<br />

Innovative Soft Robots Based on Electro-<br />

Rheological Fluids<br />

Ali Sadeghi 1,2, Lucia Beccai 1 and Barbara Mazzolai 1<br />

1- Center for Micro-BioRobotics@SSSA, Istituto Italiano di Tecnologia, Italy<br />

2- BioRobotics Institute, Scuola Superiore Sant’Anna, Italy<br />

• Control over the flexibility of soft robot<br />

bodies by controlling the ER fluid flow in<br />

soft elements of the robot body<br />

• Electro-rheological (ER) fluids are smart<br />

fluids which can transform into a solid-like<br />

phase when an electric field is applied<br />

• Simplifying the hydraulic circuits in<br />

hydraulically driven soft robots due to the<br />

simple design of ER valves<br />

• Using ER based hydraulic actuators for<br />

soft robotics applications<br />

Figure labels: rubber bellows, ER valves, backward/forward ERF flow<br />

11:30–11:45 WedCVT8.3<br />


Design of a Tubular Snake-like Manipulator with<br />

Stiffening Capability by Layer Jamming<br />

Yong-Jae Kim<br />

Samsung Advanced Institute of Technology, Samsung Electronics Co., Korea<br />

Shanbao Cheng<br />

Direct Drive Systems, FMC Technologies, USA<br />

Sangbae Kim, and Karl Iagnemma<br />

Mechanical Engineering Dept., Massachusetts Institute of Technology, USA<br />

• Design of a hollow snake-like<br />

manipulator using a layer-jamming<br />

mechanism with tunable stiffness<br />

capability.<br />

• The proposed layer jamming<br />

mechanism is composed of multiple<br />

layers of thin film which make use of<br />

amplified friction between the films<br />

by applying vacuum pressure.<br />

• It has highly flexible and underactuated<br />

properties without vacuum;<br />

however, it becomes highly stiff<br />

when a vacuum is applied.<br />

Layer Jamming<br />

Manipulator having<br />

Stiffening Capability<br />

Actuation and<br />

Transmission System<br />

Hollow Snake-like Manipulator<br />

Close-up View of Tubular Shape<br />

12:00–12:15 WedCVT8.5<br />

Design of Soft Robotic Actuators Using Fluid-<br />

Filled Fiber-Reinforced Elastomeric Enclosures<br />

in Parallel Combinations<br />

Joshua Bishop-Moser, Girish Krishnan and Sridhar Kota<br />

Mechanical Engineering, University of Michigan, Ann Arbor, USA<br />

Charles Kim<br />

Mechanical Engineering, Bucknell University, USA<br />

• Fiber-Reinforced Elastomeric Enclosures<br />

(FREEs) that can perform translation,<br />

bending, rotation, and screw motions.<br />

• Mobility for all single and parallel<br />

combinations determined from geometry.<br />

• Experimental verification of predicted<br />

actuation directions.<br />

• Provides a key building block for soft<br />

robots and dexterous manipulators.<br />

Deformation of a parallel FREE under<br />

multiple actuation permutations<br />

11:15–11:30 WedCVT8.2<br />

Detailed Dynamics Modeling of BioBiped’s<br />

Monoarticular and Biarticular Tendon-Driven<br />

Actuation System<br />

Katayon Radkhah, Thomas Lens, and Oskar von Stryk<br />

Department of Computer Science, TU Darmstadt, Germany<br />

• Detailed mathematical models of the<br />

active and passive, mono- and biarticular<br />

structures of the BioBiped1 robot<br />

• Enable a systematic analysis of the design<br />

space and characteristic curves<br />

• Basis to study the effects of the<br />

musculoskeletal actuation system<br />

• Evaluation of actuator models by MBS<br />

dynamics simulations for 1D hopping with<br />

regard to various performance criteria<br />

11:45–12:00 WedCVT8.4<br />

Adaptive Bipedal Walking through Sensory-motor<br />

Coordination Yielded from Soft Deformable Feet<br />

Dai Owaki and Hiroki Fukuda<br />

Research Institute of Electrical Communication, Tohoku University, Japan<br />

Akio Ishiguro<br />

Research Institute of Electrical Communication, Tohoku University, Japan<br />

CREST, The Japan Science and Technology Agency, Japan<br />

• Adaptive bipedal walking control that<br />

exploits sensory information stemming<br />

from “soft deformable” feet.<br />

• An unconventional CPG-based control that<br />

exploits local force feedback generated<br />

from such deformation.<br />

• Remarkably adaptive walking ability in<br />

response to a change in walking velocity<br />

and external perturbations.<br />

• Deformation of robot’s body plays a pivotal<br />

role in the emergence of “sensory-motor<br />

coordination”.<br />


Bipedal robot with soft<br />

deformable feet<br />

12:15–12:20 WedCVT8.6<br />

Intrinsically Elastic Robots: The Key to Human-<br />

Like Performance<br />

S. Haddadin, F. Huber, K. Krieger, R. Weitschat, A. Albu-Schäffer, S.<br />

Wolf, W. Friedl, M. Grebenstein, F. Petit, J. Reinecke, R. Lampariello<br />

Robotics and Mechatronics Center<br />

• Exploiting inherent capabilities for<br />

dynamic motions of VSA robots based on<br />

temporary energy storage<br />

• Come closer to human-like performance<br />

in terms of speed, robustness, and safety<br />

• Model based approaches for optimal<br />

excitation and explicit use of elastic<br />

energy tanks<br />

• Framework for generating dynamic near-optimal<br />

motions in real-time<br />

• Framework can be used not only for<br />

explosive or cyclic motion, but also for<br />

classical tracking or reaching tasks.<br />

Safe interaction Cyclic manipulation<br />

Explosive motions<br />

Optimality in real-time



12:20–12:25 WedCVT8.7<br />

“Can ants inspire robots?”<br />

Self-organized decision making in robotic swarms<br />

Arne Brutschy, Eliseo Ferrante, Marco Dorigo, Mauro Birattari<br />

IRIDIA, CoDE, Université Libre de Bruxelles, Belgium<br />

Alexander Scheidler<br />

Fraunhofer Institute for Energy System Technology, Germany<br />

• The k-unanimity rule allows a group to find<br />

the shortest among several actions<br />

• The method is based solely on local<br />

interactions<br />

• Robots observe the opinion of other robots<br />

and change their own if necessary<br />

• The opinion of the group converges with<br />

high probability to the shortest action<br />

• No teams need to be formed and the<br />

accuracy of the decision can be adjusted<br />

• The video shows an experiment with a<br />

swarm of 10 robots<br />
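The k-unanimity rule described above can be sketched in a few lines: a robot switches to an observed opinion only after seeing it k times in a row. The swarm loop below (action durations, broadcast scheme) is an illustrative assumption, not the authors' exact protocol; shorter actions are advertised more often, which biases consensus toward the shortest.

```python
import random

# Sketch of the k-unanimity opinion rule.  Swarm details are assumed.

class Robot:
    def __init__(self, opinion, k):
        self.opinion = opinion
        self.k = k
        self.recent = []          # last opinions observed from peers

    def observe(self, peer_opinion):
        self.recent.append(peer_opinion)
        self.recent = self.recent[-self.k:]
        # Adopt an opinion once the last k observations all agree on it.
        if len(self.recent) == self.k and len(set(self.recent)) == 1:
            self.opinion = self.recent[0]

def simulate(n=10, k=3, durations=(1, 3), ticks=3000, seed=0):
    """Robots re-broadcast on finishing an action, so short actions recur more."""
    rng = random.Random(seed)
    robots = [Robot(rng.randrange(len(durations)), k) for _ in range(n)]
    for t in range(ticks):
        # Opinions of robots whose action completes at this tick.
        pool = [r.opinion for r in robots if t % durations[r.opinion] == 0]
        if pool:
            for r in robots:
                r.observe(rng.choice(pool))
    return [r.opinion for r in robots]
```

Raising k trades decision speed for accuracy, matching the adjustable-accuracy point in the abstract.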

12:25–12:30 WedCVT8.8<br />

A Single Motor Actuated Miniature Steerable<br />

Jumping Robot<br />

Jianguo Zhao, Ning Xi<br />

Dept. of Electrical and Computer Engineering, Michigan State University, USA<br />

Fernando J. Cintrón, Matt W. Mutka, and Li Xiao<br />

Dept. of Computer Science and Engineering, Michigan State University, USA<br />

• A miniature steerable jumping robot is<br />

designed and developed.<br />

• The robot can jump, steer, and self-right<br />

using a single motor;<br />

• The robot has a maximum length of 6.5 cm<br />

and weighs 23.5 grams with battery;<br />

• Experimental results show the robot can<br />

jump 90 cm in height;<br />

• The robot has wide applications ranging<br />

from search and rescue in disasters,<br />

environmental monitoring, to military<br />

surveillance.<br />


Jumping robot in action


Session WedDT1 Pegaso A Wednesday, October 10, 2012, 14:00–15:00<br />

Omnidirectional Vision and Aerial Robotics I<br />

Chair Friedrich Fraundorfer, ETH Zurich<br />

Co-Chair Vincenzo Lippiello, Univ. di Napoli Federico II<br />

14:00–14:15 WedDT1.1<br />

Full Scaled 3D Visual Odometry from a Single<br />

Wearable Omnidirectional Camera<br />

Daniel Gutiérrez-Gómez, Luis Puig and J.J. Guerrero<br />

Departamento de Informática e Ingeniería de Sistemas (DIIS) -<br />

Instituto de Investigación en Ingeniería de Aragón (I3A),<br />

Universidad de Zaragoza, Spain<br />

• Monocular SLAM presents a scale<br />

ambiguity due to depth unobservability.<br />

• In SLAM with a helmet-mounted<br />

omnidirectional camera, head oscillations<br />

during walking are visible.<br />

• By extracting the frequency of oscillation, the<br />

walking speed can be approximated.<br />

• Knowing the walking speed, the scale factor<br />

and a true-scale 3D motion estimate can<br />

be computed.<br />

Our approach is able to cope with<br />

scale drift.<br />
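The frequency-extraction step above is straightforward to sketch: head oscillations during walking show up as a periodic component in the up-to-scale vertical motion estimate, and its dominant frequency gives the step rate. The frame rate, signal, and linear gait model below are illustrative assumptions.

```python
import numpy as np

# Sketch: recover walking speed (hence metric scale) from the dominant
# oscillation frequency of the estimated vertical motion.

def dominant_frequency(signal, fs):
    """Strongest non-DC frequency (Hz) from the FFT magnitude."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spec)]

fs = 30.0                         # camera frame rate (assumed)
t = np.arange(0, 10, 1 / fs)
vertical = 0.01 * np.sin(2 * np.pi * 2.0 * t)   # ~2 steps/s oscillation
f_step = dominant_frequency(vertical, fs)

# Assumed linear gait model: speed ~ step_length * step_frequency.
step_length = 0.7                 # metres per step, assumed
walking_speed = step_length * f_step  # fixes the SLAM scale factor
```

Dividing the estimated (unitless) camera speed by `walking_speed` would then give the scale correction for the whole map.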

14:30–14:45 WedDT1.3<br />

Topological segmentation of indoors/outdoors<br />

sequences of spherical views<br />

Alexandre Chapoulie and Patrick Rives<br />

INRIA Sophia Antipolis, France<br />

David Filliat<br />

ENSTA-ParisTech, France<br />

• New approach for topological mapping of<br />

indoors/outdoors environments.<br />

• Online segmentation of sequences of<br />

spherical views.<br />

• Use of a global GIST descriptor for<br />

encoding the spherical view content.<br />

• Real-time change-point detection based<br />

algorithm<br />

• Experimental validation on indoors and<br />

outdoors datasets<br />

14:15–14:30 WedDT1.2<br />

3-line RANSAC for Orthogonal Vanishing<br />

Point Detection<br />

Jean-Charles Bazin and Marc Pollefeys<br />

Computer Vision and Geometry Laboratory<br />

ETH Zurich, Switzerland<br />

• Vanishing points (VPs) are useful for<br />

rotation estimation, robot stabilization,<br />

3D reconstruction, etc.<br />

• Orthogonal VPs provide important<br />

information, but the orthogonality<br />

constraint is hard to impose<br />

• A new approach to generate a model of<br />

orthogonal VPs from three lines<br />

• Procedure incorporated in RANSAC for<br />

robust and fast estimation<br />
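A three-line minimal solver of this kind can be sketched as follows, representing each calibrated image line by the unit normal of its interpretation plane, so a VP direction v is consistent with a line when n·v = 0. Two lines sharing a VP give the first direction, the third line constrains the second, and the cross product gives the third. Thresholds and the sampling scheme are assumptions for illustration.

```python
import numpy as np

# Sketch of a 3-line minimal solver for three orthogonal vanishing
# points, wrapped in RANSAC.  Parameters are illustrative assumptions.

def unit(v):
    return v / np.linalg.norm(v)

def solve_3lines(n1, n2, n3):
    """Lines 1 and 2 share VP v1; line 3 fixes v2 orthogonal to v1."""
    v1 = unit(np.cross(n1, n2))
    v2 = unit(np.cross(v1, n3))   # orthogonal to v1 and consistent with line 3
    v3 = np.cross(v1, v2)
    return np.stack([v1, v2, v3])

def ransac_vps(normals, iters=200, eps=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        i, j, k = rng.choice(len(normals), size=3, replace=False)
        vps = solve_3lines(normals[i], normals[j], normals[k])
        # A line is an inlier if it is consistent with any of the three VPs.
        inliers = np.sum(np.min(np.abs(normals @ vps.T), axis=1) < eps)
        if inliers > best_inliers:
            best, best_inliers = vps, inliers
    return best, best_inliers
```

Because the hypothesis is built from an orthogonal triad by construction, the orthogonality constraint never has to be imposed after the fact.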


Example of line clustering and<br />

vanishing point estimation<br />

14:45–15:00 WedDT1.4<br />

Wall Inspection Control of a VTOL Unmanned<br />

Aerial Vehicle Based on a Stereo Optical Flow<br />

Vincenzo Lippiello and Bruno Siciliano<br />

Dipartimento di Informatica e Sistemistica<br />

Università di Napoli Federico II – Italy<br />

• An autonomous wall inspection controller,<br />

which employs the information provided by<br />

a stereo camera system to generate a<br />

virtual stereo OF, is proposed.<br />

• A virtual spherical camera is considered at<br />

the center of gravity of the UAV.<br />

• An iterative algorithm is used to drive the<br />

acquisition of the stereo pair,<br />

generating both the stereo optical flow and<br />

the estimation of the 3D planar surface<br />

parameters (orientation and relative<br />

distance).<br />

• An average translational OF is employed<br />

to estimate the absolute vehicle velocity.<br />

Virtual spherical camera (top)<br />

and Stereo optical flow


Session WedDT2 Fenix 2 Wednesday, October 10, 2012, 14:00–15:00<br />

Physical Human-Robot Interaction III<br />

Chair Chris Melhuish, BRL<br />

Co-Chair Alessandro De Luca, Univ. di Roma<br />

14:00–14:15 WedDT2.1<br />

Kinematic synthesis, optimization and<br />

analysis of a non-anthropomorphic 2-DOFs<br />

wearable orthosis for gait assistance<br />

Fabrizio Sergi<br />

MEMS Department, Rice University, USA<br />

Dino Accoto, Nevio Luigi Tagliamonte, Giorgio Carpino,<br />

Simone Galzerano, Eugenio Guglielmelli<br />

CIR, Università Campus Bio-Medico di Roma, Italy<br />

• AIM: This paper describes the optimization of<br />

a planar wearable active orthosis for hip and<br />

knee assistance during walking<br />

• METHODS: A systematic enumeration<br />

algorithm is used to derive the whole set of<br />

admissible solutions and optimization is<br />

carried out to reduce actuators torque<br />

requirements.<br />

• RESULTS: The optimized design allows<br />

mechanical power to be conveniently redistributed<br />

across the actuated joints and the<br />

apparent inertia to be modulated, relative to<br />

anthropomorphic designs<br />

• CONCLUSIONS: This paper gives first<br />

preliminary evidence of the advantages of a<br />

non-anthropomorphic design in terms of<br />

actuation requirements.<br />

Figure: (A) Optimized torque profiles<br />

required of the robot actuators, vs. torques<br />

applied to human joints. (B) Validation<br />

of the position-control scheme for the<br />

optimized design, through both<br />

simulations and experiments<br />

14:30–14:45 WedDT2.3<br />

Counteracting Modeling Errors for Sensitive<br />

Observer-Based Manipulator Collision Detection<br />

Vahid Sotoudehnejad, Amir Takhmar,<br />

Mehrdad R. Kermani and Ilia G. Polushin<br />

Electrical and Computer Engineering, The University of Western Ontario,<br />

Canada<br />

• Modeling errors responsible for<br />

deficiencies in sensorless<br />

collision detection of robotic<br />

systems are studied.<br />

• A time-variant threshold for<br />

observer residues in joint space<br />

is presented for the purpose of<br />

collision detection.<br />

• Simulation results using real-life<br />

collision forces on a PUMA 560<br />

and experiments on the<br />

Phantom Omni device show that<br />

the time-variant threshold works<br />

better than constant thresholds.<br />
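The central idea above — compare an observer residual against a bound that grows where modeling errors are largest, instead of a constant worst-case bound — can be sketched as follows. The threshold shape and coefficients are illustrative assumptions, not the paper's identified model.

```python
import numpy as np

# Sketch: velocity-dependent residual threshold for sensorless
# collision detection.  Coefficients c0, c1 are assumed values.

def time_variant_threshold(qdot, c0=0.5, c1=2.0):
    """Per-joint residual bound that scales with joint speed."""
    return c0 + c1 * np.abs(qdot)

def collision_detected(residual, qdot):
    """Flag a collision when any joint residual exceeds its bound."""
    return np.any(np.abs(residual) > time_variant_threshold(qdot))
```

At rest the bound is tight, so a small contact residual is caught; at speed the bound widens to absorb model-error residuals, which is exactly where a constant threshold must be set conservatively.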


14:15–14:30 WedDT2.2<br />

Investigation of Safety in HRI for a Series<br />

Elastic, Tendon-Driven Robot Arm<br />

Thomas Lens and Oskar von Stryk<br />

Department of Computer Science, Technische Universität Darmstadt, Germany<br />

• Elastic tendon actuation in all four joints of<br />

the BioRob arm reduces link weights to a<br />

minimum<br />

• Design enables end-effector velocities up<br />

to 7 m/s<br />

• Analytic worst case safety estimation of<br />

dynamic impact peak forces and static<br />

clamping forces<br />

• Experimental validation of maximum peak<br />

forces, maximum clamping forces, and<br />

danger potential of energy stored in the<br />

springs<br />


Impact and clamping experiment<br />

with the BioRob-X4 arm.<br />

14:45–15:00 WedDT2.4<br />

When Shared Plans go Wrong: From Atomic- to<br />

Composite Actions and Back<br />

Alexander Lenz 1 , Stephane Lallee 2 , Sergey Skachek 1 ,<br />

Anthony G. Pipe 1 , Chris Melhuish 1 and Peter Ford Dominey 2<br />

1 Bristol Robotics Laboratory, Bristol, UK<br />

2 Stem Cell and Brain Research Institute, INSERM U846, Bron, France<br />

• HRI: a cognitive system with composite actions, each a sequence of atomic<br />

actions.<br />

• Shared plans between human and robot (BERT2) consisting of<br />

composite actions.<br />

• Graceful recovery from 'behavioural faults' during shared plan execution<br />

guided by the human using error codes.<br />

• Plan expansion into atomic actions allows the robot to skip or repeat an<br />

interrupted atomic action.<br />

Human behaviour stops the execution of the shared plan: (a) human stop gesture;<br />

(b) human turns away from robot (lack of attention); (c) close human-robot proximity<br />

during robot motion.


Session WedDT3 Pegaso B Wednesday, October 10, 2012, 14:00–15:00<br />

Outdoor, Search and Rescue Robotics I<br />

Chair Robin Murphy, Texas A&M<br />

Co-Chair Shigeo Hirose, Tokyo Inst. of Tech.<br />

14:00–14:15 WedDT3.1<br />

LineScout Power Line Robot: Characterization of<br />

a UTM-30LX LIDAR for Obstacle Detection<br />

Nicolas Pouliot, Pierre-Luc Richard, and Serge Montambault<br />

Hydro-Québec’s Research Institute (IREQ), Canada<br />

• Although popular, LIDAR obstacle<br />

detection had never before been applied<br />

to power line robot applications<br />

• Before its introduction onto the<br />

LineScout platform, a formal<br />

characterization of the Hokuyo<br />

UTM-30LX LIDAR is carried out<br />

• The LIDAR’s performance while<br />

scanning cylindrical objects and<br />

power line cable samples is also<br />

assessed.<br />

• Preliminary analysis and detection<br />

criteria appear promising<br />

14:30–14:45 WedDT3.3<br />

Vehicle Localization in Mountainous<br />

Gravelled Paths<br />

Yoichi Morales and Takashi Tsubouchi<br />

Systems and Information Engineering University of Tsukuba, Japan<br />

Shigeru Sarata<br />

Field Systems Research, Advanced Industrial Science and Technology, Japan<br />

• Vehicle localization on real mountainous<br />

unpaved roads of a quarry<br />

• A 3D elevation map is built using<br />

data from dead reckoning, RTK-GPS and<br />

a laser sensor<br />

• Terrain traversability analysis is performed<br />

and scan points are voted into an elevation<br />

map which is probabilistically updated<br />

• Localization experiments were evaluated<br />

against RTK-GPS used as ground truth.<br />

Localization in Mountainous<br />

Gravelled Paths<br />

14:15–14:30 WedDT3.2<br />

Mobile Robotic Fabrication on Construction<br />

Sites: dimRob<br />

Volker Helm, Selen Ercan, Fabio Gramazio, and Matthias Kohler<br />

Architecture and Digital Fabrication, ETH Zürich, Switzerland<br />

• In dimRob (ECHORD - European Clearing<br />

House for Open Robotics Development,<br />

7 th Framework Programme) viable<br />

applications for mobile robotic units on<br />

construction sites are explored.<br />

• The aim is also to build upon innovative<br />

man-machine interaction paradigms via<br />

the cooperation of machine precision<br />

with innate human cognitive<br />

skills.<br />

• dimRob is intended as a first step in the<br />

evolution of mobile robotics for<br />

architecture on construction sites.<br />


The fabrication unit of dimRob:<br />

An industrial robot, ABB IRB<br />

4600, mounted on a compact<br />

mobile track system that is<br />

designed to fit through a standard<br />

door frame on construction sites<br />

14:45–15:00 WedDT3.4<br />

Casting Device for Search and Rescue<br />

Aiming Higher and Faster Access in Disaster Site<br />

Hideyuki Tsukagoshi, Eyri Watari, Kazutaka Fuchigami,<br />

and Ato Kitagawa<br />

Tokyo Institute of Technology, Japan<br />

• As a new tool for rescue<br />

operation in dangerous<br />

buildings, a method for casting<br />

and fixing a tube to collect<br />

information is proposed.<br />

• To realize it, a deformable<br />

anchor ball, a retrieving<br />

device, and a gondola robot<br />

are introduced.


Session WedDT4 Fenix 3 Wednesday, October 10, 2012, 14:00–15:00<br />

Humanoid Robots V<br />

Chair<br />

Co-Chair<br />

14:00–14:15 WedDT4.1<br />

Humanoid Push Recovery<br />

with Robust Convex Synthesis<br />

Jiuguang Wang<br />

Robotics Institute, Carnegie Mellon University<br />

robot@cmu.edu<br />

• Humanoid full-body push recovery<br />

• Robust control design – model bounded<br />

external disturbances<br />

• Simultaneously search for a controller and<br />

the associated domain of attraction using<br />

convex optimization<br />

• The controller guarantees stabilization<br />

under bounded disturbances as well as<br />

physical constraints on the robot<br />

14:30–14:45 WedDT4.3<br />

Lower Thigh Design of<br />

Detailed Musculoskeletal Humanoid “Kenshiro”<br />

Yuki Asano*, Hironori Mizoguchi**, Toyotaka Kozuki**,<br />

Yotaro Motegi**, Masahiko Osada**, Junichi Urata**,<br />

Yuto Nakanishi**, Kei Okada** and Masayuki Inaba**<br />

*Graduate School of Interdisciplinary Information Studies, Univ.of Tokyo, Japan<br />

**Dept. of Mechano-Informatics, Univ.of Tokyo, Japan<br />

• Design concept of Detailed<br />

Musculoskeletal Humanoid “Kenshiro”<br />

• Body Configuration<br />

• Joint Structure<br />

• Muscle Arrangement<br />

• Biomimetic Design of the Knee Joint<br />

• Kneecap<br />

• Cruciate Ligament<br />

• Screw-Home Mechanism<br />

• Detailed Muscle Arrangement Imitating<br />

Human<br />

• Experiment of Knee Rotation on the<br />

Ground<br />

Detailed musculoskeletal humanoid<br />

“Kenshiro”. Knee rotation experiment<br />

and muscle arrangement of the leg<br />

14:15–14:30 WedDT4.2<br />

Appearance-Based Traversability<br />

Classification in Monocular Images Using<br />

Iterative Ground Plane Estimation<br />

Daniel Maier and Maren Bennewitz<br />

Department of Computer Science, University of Freiburg, Germany<br />

• Traversability estimation from monocular<br />

camera images for robot navigation<br />

• Learning appearance-based classifiers for<br />

fast and dense classification of images<br />

• Classifiers are updated online in a self-supervised<br />

fashion<br />

• Iterative detection and matching of sparse<br />

features on the ground plane under the<br />

homography constraint<br />

• Classified images are integrated into an<br />

occupancy grid map<br />

• Experiments with a real humanoid robot<br />

show high classification rates and robust<br />

obstacle detection<br />
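The homography constraint mentioned above has a compact core: points on the ground plane transfer between two views as x2 ~ H x1, so features with a large transfer error can be labelled non-ground. The H matrix, points, and threshold below are illustrative assumptions, not the paper's learned classifier.

```python
import numpy as np

# Sketch: label features as ground/non-ground by their consistency
# with a ground-plane homography.  Data and threshold are assumed.

def transfer_error(H, pts1, pts2):
    """One-way homography transfer error per point pair (pixels)."""
    p1 = np.hstack([pts1, np.ones((len(pts1), 1))])   # homogeneous coords
    proj = p1 @ H.T
    proj = proj[:, :2] / proj[:, 2:3]                 # back to inhomogeneous
    return np.linalg.norm(proj - pts2, axis=1)

def label_ground(H, pts1, pts2, thresh=2.0):
    """True where a feature moves as the ground plane predicts."""
    return transfer_error(H, pts1, pts2) < thresh
```

Features passing the test supply positive training samples for the appearance classifier; features failing it mark likely obstacles.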


Top: Sparse Floor Features<br />

Bottom: Dense Classification<br />

14:45–15:00 WedDT4.4<br />

Optimization-based generation and experimental<br />

validation of walking trajectories for biped robots<br />

Alexander Werner, Roberto Lampariello and Christian Ott<br />

Institute of Robotics and Mechatronics, Deutsches Zentrum für Luft- und<br />

Raumfahrt (DLR), Germany<br />

• Generation of energy-optimal step<br />

trajectories through non-linear<br />

programming with full rigid-body robot<br />

model<br />

• Stability (ZMP), collision and joint-limit<br />

constraints are respected<br />

• Efficient fixed-based calculation of the<br />

constraints and the cost function<br />

• Analysis and avoidance of local minima<br />

• Experimental testing and evaluation of the<br />

trajectories<br />

• Significant gain (55%) in the cost function<br />

with respect to capture-point based<br />

controller<br />

Optimal Walking Trajectories


Session WedDT5 Gemini 2 Wednesday, October 10, 2012, 14:00–15:00<br />

Human-Machine Interfaces<br />

Chair Fabio Paolo Bonsignorio, Heron Robots srl and Univ. Carlos III de Madrid<br />

Co-Chair<br />

14:00–14:15 WedDT5.1<br />

Novel Equilibrium-Point Control of Agonist-<br />

Antagonist System with Pneumatic Artificial Muscles:<br />

II. Application to EMG-based Human-machine<br />

Interface for an Elbow-joint System<br />

Yohei Ariga, Daisuke Maeda, Hang T. T. Pham,<br />

Mitsunori Uemura, Hiroaki Hirai and Fumio Miyazaki<br />

Graduate School of Engineering Science, Osaka University, Japan<br />

• An EMG-based HMI for the agonist-antagonist system with<br />

pneumatic artificial muscles (PAMs) is proposed.<br />

• We introduce the novel concepts of agonist-antagonist<br />

muscle-pair ratio (A-A ratio) and agonist-antagonist<br />

muscle-pair activity (A-A activity).<br />

• These concepts enable us to linearly<br />

translate the equilibrium point of<br />

the human muscle system into that<br />

of the PAM system, linking these two<br />

systems in a simple way.<br />

EMG Signal<br />
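The two concepts can be illustrated with a minimal sketch: for a pair of agonist/antagonist activation levels, the A-A activity is their sum (the co-contraction level) and the A-A ratio is the agonist's share of that sum, which is assumed here to map linearly onto the joint equilibrium point. The function names and the joint range are illustrative assumptions, not the paper's notation.<br />

```python
def aa_activity(a_agonist, a_antagonist):
    """A-A activity: total co-contraction level of the muscle pair."""
    return a_agonist + a_antagonist

def aa_ratio(a_agonist, a_antagonist):
    """A-A ratio: share of the agonist in the total activation."""
    total = aa_activity(a_agonist, a_antagonist)
    return a_agonist / total if total > 0.0 else 0.5

def equilibrium_angle(ratio, q_min=0.0, q_max=2.0):
    """Assumed linear map from the A-A ratio to the joint equilibrium
    point, over a hypothetical joint range [q_min, q_max] in radians."""
    return q_min + ratio * (q_max - q_min)
```

Equal activations give a ratio of 0.5 and hence the mid-range equilibrium angle, which is the linearity the abstract highlights.<br />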

14:30–14:45 WedDT5.3<br />

I’ll Keep You in Sight: Finding a Good Position<br />

to Observe a Person<br />

Jens Kessler and Daniel Iser and Horst-Michael Gross<br />

Neuroinformatics and Cognitive Robotics Lab, Ilmenau University of<br />

Technology, Germany<br />

• Hard and soft criteria are<br />

combined to find a good position<br />

(e.g. visibility, sensor distance)<br />

• Particle swarm optimization is<br />

used to find the position<br />

• A very computationally expensive<br />

3D approach is reduced to an<br />

efficient 2D approach<br />

• First experiments show stable<br />

results in both cases<br />

Valid search space from hard criteria and<br />

the person occupancy distribution<br />
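The particle-swarm step can be sketched generically: particles explore the 2D search space and converge on a position minimizing a soft-criterion cost. The optimizer below is a textbook PSO, and the example cost (stand at a preferred sensor distance from the person) is an illustrative stand-in for the paper's combined criteria.<br />

```python
import random

random.seed(0)  # deterministic for reproducibility of the sketch

def pso(cost, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over a bounded search space."""
    dim = len(bounds)
    pos = [[random.uniform(*bounds[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    gbest = min(zip(pbest_cost, pbest))[1][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp to the valid search space (hard criteria).
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < cost(gbest):
                    gbest = pos[i][:]
    return gbest

# Illustrative soft criterion: stand 1.5 m from a person at the origin.
def observation_cost(p):
    dist = (p[0] ** 2 + p[1] ** 2) ** 0.5
    return (dist - 1.5) ** 2

best = pso(observation_cost, bounds=[(-3.0, 3.0), (-3.0, 3.0)])
```

In the paper the cost additionally encodes visibility and the person's occupancy distribution, and hard criteria prune the search space before optimization.<br />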

14:15–14:30 WedDT5.2<br />

Benchmarking Shared Control for Assistive<br />

Manipulators: From Controllability to the Speed-<br />

Accuracy Trade-Off<br />

Martin F. Stoelen, Virginia F. Tejada,<br />

Alberto Jardón Huete and Carlos Balaguer<br />

Roboticslab, Universidad Carlos III de Madrid (UC3M), Spain<br />

Fabio Bonsignorio<br />

UC3M and Heron Robots, Genova, Italy<br />

• Shared control for assistive manipulators<br />

• Predict intent and assist disabled user<br />

• So far mainly wheelchairs/mobile platforms<br />

• Benchmarking/experiments important<br />

• High-DOF human-robot systems<br />

• Sharing and replication of results<br />

• System models and metrics proposed<br />

• Controllability from user’s perspective<br />

• Speed-accuracy trade-off<br />

• Case study on adaptive shared control<br />

showed improvement in both metrics<br />


Experimental setup used for case<br />

study on adaptive shared control<br />

14:45–15:00 WedDT5.4<br />

Embedding Imperceptible Codes into Video<br />

Projection and Applications in Robotics<br />

Jingwen Dai and Ronald Chung<br />

Department of Mechanical and Automation Engineering,<br />

The Chinese University of Hong Kong, Hong Kong<br />

• A novel system of embedding<br />

imperceptible structured codes<br />

into normal projection.<br />

• In coding end: noise-tolerant<br />

schemes (specifically designed<br />

shapes and large hamming<br />

distance) are employed.<br />

• In decoding end: pre-trained<br />

primitive shape detectors are used<br />

to detect and identify the weakly<br />

embedded codes.<br />

• Some potential applications to a<br />

robotic system are demonstrated.


Session WedDT6 Gemini 3 Wednesday, October 10, 2012, 14:00–15:00<br />

Mapping III<br />

Chair<br />

Co-Chair<br />

14:00–14:15 WedDT6.1<br />

Sensor Fusion for Flexible Human-Portable<br />

Building-Scale Mapping<br />

Maurice F. Fallon, Hordur Johannsson,<br />

Jonathan Brookshire, Seth Teller, John J. Leonard<br />

Computer Science and Artificial Intelligence Laboratory, MIT, USA<br />

• Man-portable sensor rig for<br />

Biohazard Site Assessment teams<br />

• LIDAR based multi-floor mapping<br />

algorithm using iSAM<br />

• Re-localization using visual<br />

appearance<br />

• Floor tracking using a pressure<br />

sensor<br />

14:30–14:45 WedDT6.3<br />

Efficient Map Merging Using a Probabilistic<br />

Generalized Voronoi Diagram<br />

Sajad Saeedi ♦ , Liam Paull ♦ , Michael Trentini ♦♦ , Mae Seto ♦♦ and<br />

Howard Li ♦<br />

♦ Electrical and Computer Engineering, University of New Brunswick, Canada<br />

♦♦ Defence Research and Development Canada, Canada<br />

• One of the problems for multi-robot SLAM<br />

is that the robots only know their positions<br />

in their own local coordinate frames, so<br />

fusing map data can be challenging.<br />

• In this research, a probabilistic version of<br />

the Generalized Voronoi Diagram (GVD),<br />

called the PGVD, is used to determine the<br />

relative transformation between maps and<br />

fuse them.<br />

• The new method is effective for finding<br />

relative transformations quickly and<br />

reliably. In addition, the novel approach<br />

accounts for all map uncertainties in the<br />

fusion process.<br />

Probabilistic GVD of two partial<br />

maps which are used for map<br />

fusion<br />

14:15–14:30 WedDT6.2<br />

Fast Voxel Maps with Counting Bloom Filters<br />

Julian Ryde and Jason J. Corso<br />

Computer Science and Engineering, University at Buffalo, USA<br />

• Bloom filters applied to accelerate look up<br />

speed of voxel occupancy in maps for<br />

mobile robots<br />

• Probabilistic data structure<br />

• Small probability of false positives<br />

• No false negatives: negative answers are always correct<br />

• Fast sparse voxel occupancy lookup<br />

• 3 times faster than an efficient hash table<br />

• Within 10% of the speed of a dense array<br />

• Tested for 3D SLAM with point cloud data<br />

and no impact on mapping accuracy<br />

observed<br />

• Works with very large maps that do not fit<br />

in computer RAM<br />
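A counting Bloom filter for voxel occupancy can be sketched as follows. This is a sketch of the general data structure, not the paper's implementation; the table size, counter width, and hashing scheme are illustrative choices.<br />

```python
import hashlib

class CountingBloomVoxelMap:
    """Counting Bloom filter over integer voxel coordinates.

    Queries may return false positives with small probability, but a
    negative answer is always correct, and the counters (unlike a plain
    Bloom filter's bits) allow voxels to be removed again.
    """

    def __init__(self, size=1 << 20, n_hashes=4):
        self.counts = bytearray(size)  # 8-bit saturating counters
        self.size = size
        self.n_hashes = n_hashes

    def _indexes(self, voxel):
        """n_hashes independent table indexes for one voxel key."""
        key = "{}:{}:{}".format(*voxel).encode()
        for i in range(self.n_hashes):
            h = hashlib.blake2b(key, person=str(i).encode()).digest()
            yield int.from_bytes(h[:8], "little") % self.size

    def add(self, voxel):
        for idx in self._indexes(voxel):
            if self.counts[idx] < 255:  # saturate instead of overflowing
                self.counts[idx] += 1

    def remove(self, voxel):
        for idx in self._indexes(voxel):
            if self.counts[idx] > 0:
                self.counts[idx] -= 1

    def occupied(self, voxel):
        return all(self.counts[idx] > 0 for idx in self._indexes(voxel))
```

Because the counter array has fixed size regardless of map extent, lookups stay fast even for maps whose dense representation would not fit in RAM, which is the trade-off the abstract highlights.<br />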

14:45–15:00 WedDT6.4<br />

A Pipeline for Structured Light Bathymetric<br />

Mapping<br />

Gabrielle Inglis, Clara Smart, Ian Vaughn and Chris Roman<br />

Department of Ocean Engineering, University of Rhode Island, USA<br />

• A method for creating micro-bathymetric<br />

maps using structured light imaging is<br />

presented<br />

• Algorithms for segmentation of the laser<br />

image and in-situ calibration of the<br />

imaging sensor are developed<br />

• Sub-map based simultaneous localization<br />

and mapping (SLAM) is adapted to solve<br />

for navigation<br />

• High resolution maps meet or exceed<br />

standards of state of the art acoustic<br />

methods<br />


Archaeological structured light<br />

survey gridded at 1 cm.


Session WedDT7 Vega Wednesday, October 10, 2012, 14:00–15:00<br />

Motion and Path Planning VI<br />

Chair Songhwai Oh, Seoul National Univ.<br />

Co-Chair<br />

14:00–14:15 WedDT7.1<br />

Sampling-based Nonholonomic Motion Planning in Belief<br />

Space via Dynamic Feedback Linearization-based FIRM<br />

Ali-akbar Agha-mohammadi 1 , Suman Chakravorty 2 ,<br />

Nancy M. Amato 1<br />

1 Dept. of Computer Science and Engineering, Texas A&M University, USA<br />

2 Dept. of Aerospace Engineering, Texas A&M University, USA<br />

• Sampling-based motion planning<br />

in belief space for nonholonomic<br />

systems with FIRM (Feedback-based<br />

Information RoadMap)<br />

• Using a Dynamic Feedback<br />

Linearization-based controller<br />

along with a stationary Kalman<br />

filter to perform belief stabilization<br />

• Robust feedback motion planning<br />

in belief space with real-time<br />

replanning capabilities<br />

Feedback solution in belief<br />

space obtained by DFL-based<br />

FIRM in a simple environment<br />

Unicycle<br />

14:30–14:45 WedDT7.3<br />

Task-oriented Design of Concentric Tube Robots<br />

using Mechanics-based Models<br />

Luis G. Torres and Ron Alterovitz<br />

Department of Computer Science, UNC-Chapel Hill, USA<br />

Robert J. Webster III<br />

Department of Mechanical Engineering, Vanderbilt University, USA<br />

• New task-oriented approach for designing<br />

concentric tube robots on a surgery- and<br />

patient-specific basis<br />

• Uses mechanics-based kinematic model<br />

for more accuracy than prior design<br />

methods<br />

• Combines search in design space with<br />

motion planning in configuration space for<br />

probabilistic completeness in design space<br />

• Leverages design coherence to accelerate<br />

design process<br />

• Applied design method to medically<br />

motivated bronchial surgery scenario<br />

A concentric tube robot designed<br />

by our method reaching two<br />

surgical targets in the lung<br />

14:15–14:30 WedDT7.2<br />

Local Randomization in Neighbor Selection<br />

Improves PRM Roadmap Quality<br />

Troy McMahon, Sam Jacobs, Bryan Boyd and Nancy M. Amato<br />

Parasol Lab, Dept of Computer Science and Engineering,<br />

Texas A&M University, USA<br />

Lydia Tapia<br />

Dept of Computer Science, University of New Mexico, USA<br />

• Proposes a candidate neighbor<br />

selection policy, LocalRand(k,k’),<br />

which identifies k’ local nodes then<br />

selects k of those nodes at random.<br />

• LocalRand yields many benefits<br />

associated with randomized methods<br />

while maintaining the advantages<br />

inherent to a localized method like k-closest.<br />

• Experimental evaluation shows that<br />

LocalRand produces better roadmaps<br />

than k-closest at a comparable cost.<br />
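The LocalRand(k, k') policy itself is simple to state in code: gather the k' nearest candidate nodes, then connect to k of them chosen at random. The sketch below assumes Euclidean configurations; the paper's version operates on roadmap nodes under an arbitrary distance metric.<br />

```python
import math
import random

def local_rand(nodes, query, k, k_prime):
    """LocalRand(k, k'): take the k' nearest nodes, then pick k at random.

    With k == k_prime this degenerates to plain k-closest; larger k_prime
    injects more randomization while staying local to the query.
    """
    assert k <= k_prime <= len(nodes)
    local = sorted(nodes, key=lambda n: math.dist(n, query))[:k_prime]
    return random.sample(local, k)
```

The randomization helps the roadmap escape the systematic connection failures of pure k-closest while keeping candidate edges short.<br />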

14:45–15:00 WedDT7.4<br />

Sampling-Based Sweep Planning to Exploit<br />

Local Planarity in the Inspection of Complex 3D<br />

Structures<br />

Brendan Englot and Franz S. Hover<br />

Department of Mechanical Engineering,<br />

Massachusetts Institute of Technology,<br />

USA<br />

• Hybrid algorithm for planning a full-coverage<br />

inspection of complex 3D<br />

structures<br />

• Rectangular, back-and-forth sweep paths<br />

cover the open, planar areas<br />

• Randomized configurations cover the<br />

confined, occluded areas<br />

• Used to plan ship hull inspection routes for<br />

an autonomous underwater vehicle<br />

• Back-and-forth sweep paths are seeded<br />

through random sampling<br />

• We show probabilistic completeness and<br />

fast algorithm convergence<br />


A full-coverage AUV inspection<br />

route for a ship’s stern using both<br />

regularized and randomized<br />

configurations


Session WedDT8 Gemini 1 Wednesday, October 10, 2012, 14:00–15:00<br />

Space Robotics<br />

Chair Paolo Fiorini, Univ. of Verona<br />

Co-Chair Kazuya Yoshida, Tohoku Univ.<br />

14:00–14:15 WedDT8.1<br />

Emulating Self-reconfigurable Robots<br />

- Design of the SMORES System<br />

Jay Davey and Ngai Ming Kwok<br />

School of Mechanical and Manufacturing Engineering,<br />

University of New South Wales, Australia<br />

Mark Yim<br />

Mechanical Engineering and Applied Science,<br />

University of Pennsylvania, USA<br />

• S.M.O.R.E.S. - Self-assembling<br />

MOdular Robot for Extreme<br />

Shape-shifting<br />

• Development of a Universal robot<br />

• Emulation of existing self-reconfigurable<br />

modular robots<br />

• Utilizing chain, lattice and mobile<br />

self-reconfiguration strategies<br />

• Self-assembly<br />

SMORES<br />

14:30–14:45 WedDT8.3<br />

Impedance-Based Contact Control of a Free-<br />

Flying Space Robot with a Compliant Wrist for<br />

Non-cooperative Satellite Capture<br />

Naohiro Uyama, Kenji Nagaoka, and Kazuya Yoshida<br />

Department of Aerospace Engineering, Tohoku University, Japan<br />

Hiroki Nakanishi<br />

Innovative Technology Research Center, JAXA, Japan<br />

• An impedance-based contact control method<br />

based on the coefficient of restitution<br />

is presented.<br />

• The dominant contact dynamics<br />

parameters are approximated by wrist’s<br />

spring and damper elements.<br />

• An impedance parameter tuning method is<br />

proposed utilizing the coefficient of<br />

restitution.<br />

• Experimental verification confirms the<br />

validity and effectiveness of the proposed<br />

method.<br />

Proposed impedance parameter<br />

tuning method from desired<br />

coefficient of restitution<br />
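One way such a tuning rule can look — using the textbook relation for a linear mass-spring-damper contact, not necessarily the authors' exact formulation: an underdamped linear contact rebounds with restitution e = exp(-πζ/√(1-ζ²)), which can be inverted for the damping ratio ζ and hence a damper value realizing a desired e.<br />

```python
import math

def damping_ratio_from_restitution(e):
    """Damping ratio zeta of a linear mass-spring-damper contact whose
    rebound gives restitution e = exp(-pi*zeta/sqrt(1-zeta^2))."""
    if not 0.0 < e < 1.0:
        raise ValueError("restitution must be in (0, 1)")
    ln_e = math.log(e)
    # Inverting the relation: zeta = |ln e| / sqrt(pi^2 + ln^2 e)
    return -ln_e / math.sqrt(math.pi ** 2 + ln_e ** 2)

def impedance_damping(e, stiffness, mass):
    """Damper value d = 2*zeta*sqrt(k*m) realizing restitution e for
    an equivalent contact mass and wrist stiffness (illustrative)."""
    zeta = damping_ratio_from_restitution(e)
    return 2.0 * zeta * math.sqrt(stiffness * mass)
```

A small desired e (nearly plastic contact, little rebound of the target satellite) thus maps to a large damping ratio, which is the direction the tuning method exploits.<br />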

14:15–14:30 WedDT8.2<br />

Slope Traversability Analysis of<br />

Reconfigurable Planetary Rovers<br />

Hiroaki Inotsume, Masataku Sutoh,<br />

Kenji Nagaoka, Keiji Nagatani, and Kazuya Yoshida<br />

Department of Aerospace Engineering, Tohoku University, Japan<br />

• Effect of attitude changes<br />

of a reconfigurable rover<br />

on its slippages over<br />

sandy slopes is analyzed<br />

• Wheel-soil interaction is<br />

modeled based on<br />

terramechanics<br />

• Slope-traversing<br />

experiments and<br />

numerical simulations are<br />

conducted<br />

14:45–15:00 WedDT8.4<br />

Tracking complex targets for space rendezvous<br />

and debris removal applications<br />

Antoine Petit<br />

Lagadic Team, INRIA Rennes, France<br />

Eric Marchand<br />

Lagadic Team, IRISA, Université Rennes 1, France<br />

Keyvan Kanani<br />

Astrium, Toulouse, France<br />

• A generic 3D model-based tracking<br />

method to fully localize space targets.<br />

• It processes in real time the complete 3D<br />

model of complex objects, avoiding any<br />

manual redesign of the model.<br />

• Model projection performed through GPU<br />

rendering.<br />

• Extraction of both salient geometrical<br />

edges and texture edges from the<br />

rendered scene.<br />

• Classical edge-based tracking and pose<br />

estimation techniques.<br />

• Better robustness with a multiple<br />

hypothesis tracking solution.<br />


Some experimental results, on<br />

real and synthetic images


Session WedDT10 Lince Wednesday, October 10, 2012, 14:00–15:00<br />

Tactile Exploration<br />

Chair Shinichi Hirai, Ritsumeikan Univ.<br />

Co-Chair<br />

14:00–14:15 WedDT10.1<br />

Online Spatio-Temporal Gaussian Process<br />

Experts with Application to Tactile Classification<br />

Harold Soh, Yanyu Su and Yiannis Demiris<br />

Personal Robotics Laboratory,<br />

Imperial College London, United Kingdom<br />

• Problem: Learning and Predicting<br />

Multivariate Time-series (e.g. sensor data).<br />

• Proposed Solution: STORK-GP, Sparse<br />

Online GP with novel Recursive Kernel<br />

based on relevance detection.<br />

• Application: Tactile classification.<br />

• Benefits: Method creates new models<br />

“on-the-fly” and refines existing models.<br />

• Experiments: High Accuracy comparable<br />

to extensively-optimised offline classifiers.<br />

• Download STORK-GP:<br />

www.haroldsoh.com<br />

Online Tactile Classifier using<br />

STORK-GP Online Experts<br />

14:30–14:45 WedDT10.3<br />

3D Surface Reconstruction for Robotic Body<br />

Parts with Artificial Skins<br />

Philipp Mittendorfer and Gordon Cheng<br />

Institute for Cognitive Systems, Technische Universität München, Germany<br />

www.ics.ei.tum.de<br />

• We only utilize a-priori knowledge given by the elemental skin unit cell<br />

• We calculate relative positions and orientations of all unit cells in a patch<br />

• We utilize network relationships and gravity measurements in ≥ 2 poses<br />

• We only depend on sensor skin features ⇒ transferable between robots<br />

14:15–14:30 WedDT10.2<br />

Experimental Investigation of Surface<br />

Identification Ability of a Low-Profile Fabric<br />

Tactile Sensor<br />

Van Anh Ho, Masaaki Makikawa and Shinichi Hirai<br />

Department of Robotics, Ritsumeikan University, Japan<br />

Takahiro Araki<br />

Research and Development Department, Okamoto Corp., Japan<br />

• Design a fabric sensor with loops on the<br />

surface to enhance the slip detection, as<br />

well as to capture stick-slip events during<br />

sliding motion.<br />

• Three methods have been employed to<br />

evaluate recognition ability of the sensor<br />

over several typical textures.<br />

• Results show that ANN-based<br />

classification using Discrete Wavelet<br />

Transformation (DWT) of sensor’s<br />

signal outperformed the others.<br />


Sensor’s construction and DWT<br />

signals over textures<br />

14:45–15:00 WedDT10.4<br />

A Novel Dynamic Slip Prediction and Compensation<br />

Approach Based on Haptic Surface Exploration<br />

Xiaojing Song, Hongbin Liu, Joao Bimbo, Kaspar Althoefer<br />

and Lakmal D Seneviratne<br />

Department of Informatics, King’s College London, UK<br />

• Efficient haptic surface<br />

exploration to identify friction<br />

properties of object surfaces.<br />

• Slip threshold is predicted<br />

online based on identified<br />

friction property.<br />

• Slip compensator is<br />

implemented to prevent<br />

slippage during a dynamic<br />

grasping.


Session WedDVT9 Fenix 1 Wednesday, October 10, 2012, 14:00–15:00<br />

Range Sensing<br />

Chair Kazunori Umeda, Chuo Univ.<br />

Co-Chair<br />

14:00–14:15 WedDVT9.1<br />

Construction of a Compact Range Image Sensor<br />

Using a Multi-slit Laser Projector<br />

Suitable for a Robot Hand<br />

Kazuya Iwasaki<br />

Course of Precision Engineering, Chuo University, Japan<br />

Kenji Terabayashi and Kazunori Umeda<br />

Department of Precision Mechanics, Chuo University, Japan<br />

• A compact range image sensor used for<br />

short-range (100–300 mm) measurements<br />

is constructed<br />

• The sensor consists of a multi-slit laser<br />

projector and a small CMOS camera<br />

• 1800 measurement points, 15fps,<br />

measurement in a dynamic environment<br />

(experiments for 300mm/s moving object)<br />

• The sensor is compact enough (weight:<br />

40g) to be attached to a robot's hand<br />

A compact range image sensor<br />

using a multi-slit laser projector<br />

14:30–14:45 WedDVT9.3<br />

Fast Incremental 3D Plane Extraction from a<br />

Collection of 2D Line Segments for 3D Mapping<br />

Su-Yong An, Lae-Kyoung Lee, and Se-Young Oh<br />

Electrical Engineering, POSTECH, Korea<br />

• 2D line segments<br />

are extracted from<br />

every scan slice<br />

and then clustered<br />

according to their<br />

orientation<br />

• A 3D plane is<br />

modeled by a set<br />

of these line<br />

segments<br />

Plane extraction results<br />

• Reduced the number of scan points that are actually accessed<br />

• Compared with state of the art methods<br />

14:50–14:55 WedDVT9.5<br />

An Autonomous 9-DOF Mobile-manipulator<br />

System for in situ 3D Object Modeling<br />

Liila Torabi and Kamal Gupta<br />

Engineering Science, Simon Fraser University, Canada<br />

• The system consists of a mobile base<br />

with a 6-DOF arm mounted on it.<br />

Both the arm, and the mobile base are<br />

equipped with a line-scan range sensor.<br />

• The task is to autonomously build<br />

a 3D model of an object in situ.<br />

• The system assumes no knowledge<br />

of either the object or the rest of the<br />

workspace of the robot.<br />

• The overall planner integrates two next best view<br />

(NBV) algorithms, one for modeling and the other<br />

for exploration, along with a sensor-based path planner.<br />

14:15–14:30 WedDVT9.2<br />

Fast Nearest Neighbor Search using<br />

Approximate Cached k-d Tree<br />

Won-Seok Choi and Se-Young Oh<br />

Electrical Engineering, Pohang University of Science and<br />

Technology (POSTECH), Korea<br />

• Key idea: ‘cache’ and ‘approximate’<br />

• The search starts from the cached leaf<br />

node, not the root node.<br />

• Suitable for low dimensional data<br />

• The indexing sequence provides a clue to<br />

the cached node<br />

• Property I : If the indexes of two<br />

query points are close, these are<br />

likely close to each other<br />

• Property II: If the indexes of a query<br />

point and a model point are close,<br />

these are likely close to each other<br />


Figure: search traversal paths in (a) the standard k-d tree and (b) the proposed algorithm<br />
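The "cache" idea can be sketched with grid buckets standing in for k-d tree leaves — an illustrative simplification of the paper's approximate cached k-d tree, exploiting the property that successive queries tend to be close: the search starts from the bucket visited last, accepting a possibly approximate neighbour, and only falls back to a full scan on a cache miss.<br />

```python
import math

class CachedBucketNN:
    """Approximate nearest-neighbour search with a cached bucket.

    Points are partitioned into grid buckets (standing in for k-d tree
    leaves). A query that falls in the cached bucket is answered from
    that bucket alone — fast, but possibly approximate; any other query
    scans all points and updates the cache.
    """

    def __init__(self, points, cell=1.0):
        self.cell = cell
        self.buckets = {}
        for p in points:
            self.buckets.setdefault(self._key(p), []).append(p)
        self.cached_key = None

    def _key(self, p):
        return tuple(int(math.floor(c / self.cell)) for c in p)

    def query(self, q):
        key = self._key(q)
        if key == self.cached_key and key in self.buckets:
            # Fast path: answer from the cached bucket only.
            candidates = self.buckets[key]
        else:
            # Slow path: scan everything, then cache this bucket.
            candidates = [p for b in self.buckets.values() for p in b]
            self.cached_key = key
        return min(candidates, key=lambda p: math.dist(p, q))
```

The real method starts the descent from a cached k-d tree leaf rather than a grid bucket, but the trade-off is the same: near-constant lookup time for spatially coherent query sequences in low-dimensional data, at the cost of occasionally approximate answers.<br />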

14:45–14:50 WedDVT9.4<br />

Thermal 3D Modeling of Indoor Environments<br />

For Saving Energy<br />

Dorit Borrmann, Hassan Afzal, Jan Elseberg,<br />

and Andreas Nüchter<br />

School of Engineering and Science, Jacobs University Bremen, Germany<br />

• Heat and air conditioning<br />

losses lead to a large amount<br />

of wasted energy<br />

• A complete 3D model of heat<br />

distribution enables one to<br />

detect sources of wasted<br />

energy<br />

• Architects can use this model<br />

to modify buildings to reach<br />

energy savings<br />

• This video presents our approach to<br />

automatically generate 3D models from<br />

laser range images, thermal images, and<br />

photos<br />

3D model enhanced with thermal,<br />

color, and reflectance information<br />

14:55–15:00 WedDVT9.6<br />

Collision Avoidance of Industrial Robot Arms<br />

using an Invisible Sensitive Skin<br />

Tin Lun Lam, Hoi Wut Yip, Huihuan Qian and Yangsheng Xu<br />

Mechanical and Automation Engineering, The Chinese University of Hong<br />

Kong, Hong Kong<br />

• Cost-effective invisible sensitive<br />

skin<br />

• Cover a large area without utilizing<br />

a large number of sensors<br />

• Built inside the robot arm<br />

• Collision avoidance of a 6-DOF<br />

industrial robot arm by 5<br />

contactless capacitive sensors and<br />

specially designed antennas


Session WedET1 Pegaso A Wednesday, October 10, 2012, 15:00–16:00<br />

Omnidirectional Vision and Aerial Robotics II<br />

Chair Friedrich Fraundorfer, ETH Zurich<br />

Co-Chair Vincenzo Lippiello, Univ. di Napoli Federico II<br />

15:00–15:15 WedET1.1<br />

Vision-only estimation of wind field strength and<br />

direction from an aerial platform<br />

Richard J. D. Moore, Saul Thurrowgood<br />

and Mandyam V. Srinivasan<br />

Queensland Brain Institute, University of Queensland, Australia<br />

• Novel method for estimating wind<br />

field strength and direction from a<br />

moving airborne platform using only<br />

visual information.<br />

• Iterative optimisation allows wind<br />

field properties to be determined<br />

from successive measurements of<br />

aircraft ground track and heading<br />

direction.<br />

• Results from simulated and real-world<br />

flight tests demonstrate<br />

accuracy and robustness of<br />

proposed approach and practicality<br />

for measuring wind in real-world<br />

environments.<br />

Attitude, heading, and ground track are<br />

estimated from omnidirectional visual<br />

input. Relationship between ground track<br />

and heading direction is used to compute<br />

wind field strength and direction.<br />
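The underlying relationship is the wind triangle, v_ground = v_air + v_wind: given ground-track velocities and heading directions, the residual between ground velocity and the air-velocity vector along the heading estimates the wind. The sketch below assumes a known, constant airspeed; the paper's iterative optimisation recovers the wind field without that assumption.<br />

```python
import math

def estimate_wind(ground_velocities, headings, airspeed):
    """Least-squares wind from the wind triangle v_ground = v_air + v_wind.

    ground_velocities: (vx, vy) ground-track velocity samples [m/s]
    headings: aircraft heading angles [rad]
    airspeed: assumed known and constant [m/s]

    Averaging the per-sample residuals v_ground - airspeed * heading_unit
    is the least-squares estimate of a constant wind vector.
    """
    n = len(headings)
    wx = sum(vg[0] - airspeed * math.cos(h)
             for vg, h in zip(ground_velocities, headings)) / n
    wy = sum(vg[1] - airspeed * math.sin(h)
             for vg, h in zip(ground_velocities, headings)) / n
    return math.hypot(wx, wy), math.atan2(wy, wx)
```

Heading and ground track are exactly the quantities the abstract says are extracted from the omnidirectional visual input, so wind falls out of their disagreement.<br />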

15:30–15:45 WedET1.3<br />

Vision-Based Autonomous Mapping and<br />

Exploration Using a Quadrotor MAV<br />

Friedrich Fraundorfer, Lionel Heng, Dominik Honegger,<br />

Gim Hee Lee, Lorenz Meier, Petri Tanskanen, Marc Pollefeys<br />

Computer Vision and Geometry Lab, ETH Zürich, Switzerland<br />

• We show vision-based autonomous<br />

mapping and exploration with our MAV in<br />

unknown environments.<br />

• A downward looking optical flow camera<br />

and a front looking stereo camera are the<br />

main sensors.<br />

• All algorithms necessary for autonomous<br />

mapping and exploration run on-board the<br />

MAV.<br />

• Off-board large scale pose-graph SLAM<br />

and loop closure with images transmitted<br />

via Wi-Fi to ground-station.<br />

Visualization of obstacle map and path<br />

planning along a corridor<br />

15:15–15:30 WedET1.2<br />

Predicting Micro Air Vehicle Landing Behaviour<br />

from Visual Texture<br />

John Bartholomew, Andrew Calway and Walterio Mayol-Cuevas<br />

Computer Science, University of Bristol, UK<br />

• Motivation: Predicting landing<br />

behaviour enables autonomous<br />

choice of landing site.<br />

• Characteristics of motion during<br />

touch-down on different surfaces are<br />

found experimentally.<br />

• General Regression is used to<br />

predict motion for new surfaces, from<br />

training data.<br />

• We test a known texture descriptor<br />

on challenging imagery from the<br />

MAV.<br />

15:45–16:00 WedET1.4<br />

A Geometrical Approach For Vision Based Attitude And<br />

Altitude Estimation For UAVs In Dark Environments<br />

Ashutosh Natraj 1,3<br />

1 MIS Lab & Le2i Lab, University of Picardie Jules Verne<br />

Peter Sturm 2 , Cedric Demonceaux 3 & Pascal Vasseur 4<br />

2 INRIA Rhone Alpes, Grenoble, France,<br />

3 Le2i Lab, University of Bourgogne, France,<br />

4 Litis Lab, University of Rouen, France.<br />

• We present a single fish-eye<br />

camera-laser projector system on a<br />

fixed baseline to estimate altitude &<br />

attitude of UAVs as in fig 1.<br />

• Our system is cheap, lightweight,<br />

and computationally less expensive<br />

than commercial sensors.<br />

Applications:<br />

• Altitude and attitude estimation of<br />

UAVs for vertical take off and landing<br />

(VTOL) and maneuvering in the low<br />

light-to-dark indoor/outdoor,<br />

GPS-deficient unknown environment with<br />

no prebuilt map as shown in figure 2.<br />


Fig 1. Our setup on the UAV-Pelican.<br />

Fig 2. An application in low light-dark<br />

indoor GPS insufficient environment.


Session WedET2 Fenix 2 Wednesday, October 10, 2012, 15:00–16:00<br />

Emotion Detection and Expression<br />

Chair C. S. George Lee, Purdue Univ.<br />

Co-Chair Ren Luo, National Taiwan Univ.<br />

15:00–15:15 WedET2.1<br />

An NARX-Based Approach for Human Emotion<br />

Identification<br />

Rami Alazrai and C.S. George Lee<br />

School of Electrical and Computer Engineering, Purdue University, U.S.A<br />

• Propose an NARX-based approach to<br />

capture the spatial and temporal dynamics<br />

of facial expressions.<br />

• Temporal phases of facial expressions are<br />

identified using the proposed MIBDIA<br />

algorithm.<br />

• The proposed human-emotion recognition<br />

system achieved 91.5% average<br />

recognition rate over the CK+ dataset.<br />

15:30–15:45 WedET2.3<br />

Development of Expressive Robotic Head<br />

for Bipedal Humanoid<br />

Robot<br />

Tatsuhiro Kishi, Takuya Otani, Nobutsuna Endo,<br />

Przmyslaw Kryczka, Kenji Hashimoto, Kei Nakata<br />

and Atsuo Takanishi<br />

Faculty of Science and Engineering, Waseda University, Japan<br />

• A robotic head was developed to increase<br />

the facial expression ability of a bipedal<br />

humanoid robot.<br />

• Representative facial expressions for 6<br />

basic emotions were designed by<br />

cartoonists.<br />

• Compact mechanisms were developed to<br />

make the robotic head as small as a<br />

Japanese female’s head.<br />

• Evaluations with pictures and videos show<br />

that the robotic head has extensive facial<br />

expression ability.<br />

15:15–15:30 WedET2.2<br />

A Design Methodology for<br />

Expressing Emotion on Robot Faces<br />

Mohammad Shayganfar, Charles Rich and Candace L. Sidner<br />

Computer Science Dept<br />

Worcester Polytechnic Institute, USA<br />

• Methodology is grounded in the<br />

psychological literature (Ekman FACS)<br />

• Four steps: (i) assign action units to robot<br />

DOF, (ii) apply mapping from basic action<br />

units to emotions, (iii) predict confusions,<br />

(iv) add optional action units and cartoon<br />

ideas to reduce confusion<br />

• Demonstrated and evaluated by applying<br />

methodology to a recent humanoid robot<br />

(see figure)<br />

• The experimentally observed emotion<br />

confusion matrix agrees qualitatively with<br />

the predictions of the methodology<br />

15:45–16:00 WedET2.4<br />

Confidence Fusion Based Emotion Recognition<br />

of Multiple Persons for Human-Robot Interaction<br />

Ren C. Luo, Pei Hsien Lin, Li Wen Chang<br />

Center for Intelligent Robotics and Automation Research, National Taiwan University<br />

• We propose an integrated system which<br />

has the ability to track multiple users at one<br />

time, to recognize their facial expressions,<br />

and to identify the indoor ambient<br />

atmosphere.<br />

• In our facial expression recognition<br />

scheme, we fuse Feature Vectors based<br />

Approach (FVA) and Differential-Active<br />

Appearance Model Features based<br />

Approach (DAFA) to obtain not only<br />

apposite positions of feature points, but<br />

also more information about texture and<br />

appearance.<br />

• With our system, our intelligent robot with<br />

vision systems is able to acquire the<br />

ambient atmosphere information and<br />

interact with people properly.<br />


A surprise situation of human-robot<br />

interaction


Session WedET3 Pegaso B Wednesday, October 10, 2012, 15:00–16:00<br />

Outdoor, Search and Rescue Robotics II<br />

Chair Robin Murphy, Texas A&M<br />

Co-Chair Shigeo Hirose, Tokyo Inst. of Tech.<br />

15:00–15:15 WedET3.1<br />

Design and Calibration of Large Microphone<br />

Arrays for Robotic Applications<br />

Florian Perrodin, Janosch Nikolic, Joël Busset and<br />

Roland Siegwart<br />

Autonomous System Lab, ETH Zürich, Switzerland<br />

• Presentation of a modular design<br />

for large embedded microphone<br />

arrays, example of a 64-elements<br />

microphone array.<br />

• Automatic shape calibration<br />

algorithm for 2D or 3D arrays in<br />

non-controlled (reverberant)<br />

conditions.<br />

• Application in precise acoustic<br />

imaging as well as high-gain<br />

sound amplification.<br />

• Robotics applications include<br />

search and rescue, complex<br />

acoustic scenes understanding<br />

and sound-based multi-user<br />

interface.<br />

Design of 64-elements<br />

planar microphone array<br />

15:30–15:45 WedET3.3<br />

Crank-wheel:<br />

A Brand New Mobile Base for Field Robots<br />

Hisami Nakano and Shigeo Hirose<br />

Dept. of Mechano-Aerospace Engineering,<br />

Tokyo Institute of Technology, Japan<br />

• We propose the crank-wheel mechanism,<br />

which is simple and has high mobility on<br />

rough terrain.<br />

• The coupler link is connected to the front<br />

and rear wheels by revolute joints.<br />

• We developed the prototype<br />

• Basic performance of the prototype is<br />

discussed<br />

Crank-wheel robot<br />

15:15–15:30 WedET3.2<br />

Development of Multi-wheeled Snake-like Rescue Robots with Active Elastic Trunk<br />

Kousuke Suzuki, Atsushi Nakano, Gen Endo and Shigeo Hirose<br />

Dept. of Mechano-Aerospace Engineering, Tokyo Institute of Technology, Japan<br />

• We propose multi-wheeled snake-like rescue robots with an active elastic trunk.<br />

• The cable-driven trunk provides both active bending motion and passive compliance.<br />

• We developed three prototypes: “Souryu-VII”, “Souryu-VIII” and “Souryu-IX”.<br />

• The basic performance of each prototype is compared and discussed.<br />


Souryu-VII, VIII and IX<br />

15:45–16:00 WedET3.4<br />

Initial Deployment of a Robotic Team: A Hierarchical Approach Under Communication Constraints Verified on Low-Cost Platforms<br />

Micael S. Couceiro, David Portugal and Rui P. Rocha<br />

Institute of Systems and Robotics, University of Coimbra, Portugal<br />

Carlos M. Figueiredo and Nuno M. F. Ferreira<br />

Dep. of Electrical Engineering, Engineering Institute of Coimbra, Portugal<br />

• Systematic method for hierarchically deploying swarm agents in an unknown scenario under communication constraints<br />

• Extension of the TraxBot platforms to support the transportation of 5 eSwarBots<br />

• Despite odometry errors, the eSwarBots turn out to be uniformly deployed within the test scenario<br />

Experimental setup with 1 TraxBot and 5 eSwarBots


Session WedET4 Fenix 3 Wednesday, October 10, 2012, 15:00–16:00<br />

Control of Bio-Inspired Robots II<br />

Chair Giorgio Metta, Istituto Italiano di Tecnologia (IIT)<br />

Co-Chair<br />

15:00–15:15 WedET4.1<br />

Iterative Learning Control for a Musculoskeletal Arm:<br />

Utilizing Multiple Space Variables to Improve the Robustness<br />

Kenji Tahara, Yuta Kuboyama and Ryo Kurazume<br />

Kyushu University, Japan<br />

• Proposing a new iterative learning control method composed of multiple space variables<br />

• Conducting numerical simulations to show the theoretical validity of the proposed method<br />

• Performing experiments to demonstrate the practical usefulness of the controller<br />

Experimental setup of the two-link, six-muscle wire-driven planar arm system<br />

15:30–15:45 WedET4.3<br />

Biologically Inspired Reactive Climbing Behavior<br />

of Hexapod Robots<br />

Dennis Goldschmidt, Frank Hesse,<br />

Florentin Wörgötter and Poramate Manoonpong<br />

III Physikalisches Institut - Biophysik, Georg-August-Universität Göttingen,<br />

Germany<br />

• A biologically inspired reactive climbing controller is presented. It is composed of three neural modules: Backbone Joint Control (BJC), Leg Reflex Control (LRC), and Neural Locomotion Control (NLC).<br />

• The BJC and LRC control key climbing behaviors, while basic walking behavior, including omnidirectional walking, is achieved by the NLC.<br />

• Experimental results show that the developed controller allows the robot to surmount obstacles with a maximum height of 13 cm, which equals 75% of its leg length.<br />

Control architecture of the robot AMOS II (top) and the comparison of the climbing behavior of a cockroach and the robot (bottom)<br />

15:15–15:30 WedET4.2<br />

A Generic Software Architecture for Control of<br />

Parallel Kinematics Designed for Reduced<br />

Computing Hardware<br />

Franz Dietrich, Sven Grüner and Annika Raatz<br />

Institute of Machine Tools and Production Technology (IWF),<br />

TU Braunschweig, Germany<br />

• An object-oriented software architecture dedicated to lean microcontrollers, highly scalable, versatile and powerful enough to control parallel kinematics<br />

• Accommodates a variety of kinematics, actuators, sensors and communication interfaces, as well as advanced functionalities and control concepts<br />

• A case study demonstrates the architecture’s deployment for a miniaturized five-bar robot designed for biotech lab automation<br />

15:45–16:00 WedET4.4<br />

Embodied Hyperacuity from Bayesian Perception: Shape and Position Discrimination with an iCub Fingertip Sensor<br />

N. Lepora¹, U. Martinez¹, H. Barron¹, M. Evans¹, G. Metta², T. Prescott¹<br />

1) University of Sheffield, UK; 2) Italian Institute of Technology, Italy<br />

• First demonstration of hyperacuity with a tactile sensor, in that the accuracy is finer than the taxel spacing<br />

• Simultaneous classification of shape and position, which are useful percepts for grasping and manipulation<br />

• Fingertip–object relative position to sub-millimeter resolution (over a 16 mm range), compared with 4 mm taxel spacing<br />

• Rod diameter to less than 2 mm resolution (over a 4–12 mm range)<br />

• Bayesian perception methodology based on models of animal perception in neuroscience<br />

• Novel testing rig using a Cartesian robot for systematic testing of sensing capabilities<br />


Test objects; fingertip geometry


Session WedET5 Gemini 2 Wednesday, October 10, 2012, 15:00–16:00<br />

Personal Robots II<br />

Chair Adriana Tapus, ENSTA-ParisTech<br />

Co-Chair<br />

15:00–15:15 WedET5.1<br />

Semantic Object Maps for Robotic Housework –<br />

Representation, Acquisition and Use<br />

Dejan Pangercic¹, Moritz Tenorth², Benjamin Pitzer¹, Michael Beetz³<br />

Robert Bosch LLC-USA¹, IRC ATR-Japan², University of Bremen-Germany³<br />

• End-to-end system for autonomous building of Semantic Object Maps (SOMs) for service robots operating in household environments<br />

• SOMs store functional properties of objects, articulation models and textured 3D meshes<br />

• SOMs are represented as symbolic knowledge bases with facts about the objects in the environment<br />

• SOMs are an expressive and powerful knowledge resource for service robots<br />

• Tested on a PR2 robot + Kinect<br />

15:30–15:45 WedET5.3<br />

Playmate Robots That Can Act<br />

According to a Child's Mental State<br />

Kasumi Abe, Akiko Iwasaki, Tomoaki Nakamura<br />

and Takayuki Nagai<br />

Department of Mechanical Engineering and Intelligent Systems,<br />

The University of Electro-Communications, Japan<br />

Ayami Yokoyama, Takayuki Shimotomai,<br />

Hiroyuki Okada and Takashi Omori<br />

Department of Electrical Engineering, Tamagawa University, Japan<br />

• We propose a playmate robot that can play with a child.<br />

• The playmate robot should sustain a child’s interest in play and forge a good relationship between the robot and the child.<br />

• We observed children’s play with a kindergarten teacher.<br />

• The robot estimates the child’s inner state and selects an appropriate action according to the model.<br />

Action selection model<br />

15:15–15:30 WedET5.2<br />

A framework for the design of person following<br />

behaviors for social mobile robots<br />

Consuelo Granata and Philippe Bidaud<br />

Institut des Systèmes Intelligents et de Robotique, UPMC-CNRS UMR 7222<br />

• Framework architecture combining three layers: perception, decision and control.<br />

• Laser- and camera-based detector and estimation of the person’s state by an EKF<br />

• Decision-making engine exploiting fuzzy rules to define robot behavior when dealing with uncertainties<br />

• System tested in everyday-life environments to evaluate performance in different contexts<br />

15:45–16:00 WedET5.4<br />

Planar Segmentation from Depth Images using<br />

Gradient of Depth Feature<br />

Bashar Enjarini and Axel Gräser<br />

IAT Institute, Bremen University, Germany<br />

• Introducing the Gradient of Depth (GoD) feature, used for segmenting planar regions<br />

• The GoD feature is parameter-less<br />

• Implementing the GoD feature in a robust algorithm for segmenting depth images<br />

• The proposed algorithm is robust to parameter changes<br />

• The proposed algorithm can also segment planar regions from non-planar objects (bottles, cylinders)<br />

• The proposed algorithm matches the segmentation quality of other state-of-the-art algorithms<br />


Example of segmenting a cluttered scene using the proposed algorithm
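The gradient-of-depth idea can be illustrated with a minimal sketch (not the authors' algorithm): on a planar surface the depth gradient is locally constant, so pixels where the second depth derivatives are near zero are plane candidates. The function name `planar_mask` and the tolerance `tol` are hypothetical, introduced only for this illustration.

```python
import numpy as np

def planar_mask(depth, tol=1e-3):
    """Flag pixels whose depth gradient is locally constant (second
    derivative ~ 0), i.e. likely to lie on a planar surface.
    A rough gradient-of-depth style cue, not the paper's algorithm."""
    gy, gx = np.gradient(depth)        # first derivatives of depth
    gyy, _ = np.gradient(gy)           # second derivative along rows
    _, gxx = np.gradient(gx)           # second derivative along columns
    curvature = np.abs(gxx) + np.abs(gyy)
    return curvature < tol

# A tilted plane is flagged planar everywhere; a Gaussian bump is not.
x, y = np.meshgrid(np.arange(100), np.arange(100))
plane = 0.01 * x + 0.02 * y
bumpy = plane + np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 200.0)
```

On the synthetic tilted plane the mask is true everywhere, while the bump center is rejected because its curvature exceeds the tolerance.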


Session WedET6 Gemini 3 Wednesday, October 10, 2012, 15:00–16:00<br />

Mapping IV<br />

Chair<br />

Co-Chair<br />

15:00–15:15 WedET6.1<br />

What can we learn from 38,000 rooms?<br />

Reasoning about unexplored space in indoor<br />

environments<br />

Alper Aydemir, Patric Jensfelt and John Folkesson<br />

Center for Autonomous Systems, KTH, Sweden<br />

• Reasoning about unexplored space is a key part of spatial understanding<br />

• We report statistical properties of indoor environments by investigating two floor plan data sets from different parts of the world, namely the KTH and MIT datasets.<br />

• We present two methods for predicting indoor topologies given a partial map of the environment.<br />

• We make the KTH campus data set, our annotation tool and the software library developed during this work publicly available at http://www.cas.kth.se/floorplans<br />

Topological representation of a building floor<br />

15:30–15:45 WedET6.3<br />

Creating and Using Probabilistic Costmaps from<br />

Vehicle Experience<br />

Liz Murphy, Steven Martin and Peter Corke<br />

CyPhy Lab, Queensland University of Technology, Australia<br />

• Probabilistic costmaps, unlike the predominant assumptive costmaps, allow a representation of the uncertainty in the robot's environment model to be used in path planning<br />

• We show how probabilistic costmaps can be learned in a self-supervised manner by robots navigating outdoors<br />

• Traversability estimates are garnered from onboard sensing<br />

• Gaussian processes are used to extrapolate these sparse traversability estimates and account for heteroscedastic noise<br />


15:15–15:30 WedET6.2<br />

Map Merging Using Hough Peak Matching<br />

Sajad Saeedi♦, Liam Paull♦, Michael Trentini♦♦, Mae Seto♦♦ and Howard Li♦<br />

♦ Electrical and Computer Engineering, University of New Brunswick, Canada<br />

♦♦ Defence Research and Development Canada, Canada<br />

• One of the major problems in multi-robot SLAM is that the robots only know their positions in their own local coordinate frames, so fusing map data can be challenging.<br />

• In this research, map fusion is achieved by transforming the individual maps into the Hough space, where they are represented in an abstract form.<br />

• Properties of the Hough transform are used to find the common regions in the maps, which are then used to calculate the unknown transformation between the maps.<br />


Three partial maps (a, b and c) are fused to generate a global map (d) using Hough peak matching<br />
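The use of Hough-space structure for map alignment can be sketched in a toy form (a generic illustration, not the authors' pipeline): the angular Hough spectrum of an occupancy grid is invariant to translation, so the circular cross-correlation of two spectra recovers the relative rotation between maps. `hough_spectrum` and `estimate_rotation` are illustrative names.

```python
import numpy as np

def hough_spectrum(grid, n_theta=180):
    """Angular Hough spectrum of a binary occupancy grid: for each angle,
    the energy (sum of squared bin counts) of the rho-histogram of the
    occupied cells. Translation of the grid barely changes this spectrum."""
    ys, xs = np.nonzero(grid)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*grid.shape)))
    spectrum = np.empty(n_theta)
    for i, t in enumerate(thetas):
        rho = xs * np.cos(t) + ys * np.sin(t)          # signed line offsets
        hist, _ = np.histogram(rho, bins=2 * diag, range=(-diag, diag))
        spectrum[i] = np.sum(hist.astype(float) ** 2)
    return spectrum

def estimate_rotation(grid_a, grid_b, n_theta=180):
    """Relative rotation (radians, modulo pi) that maximizes the circular
    cross-correlation of the two angular Hough spectra."""
    sa = hough_spectrum(grid_a, n_theta)
    sb = hough_spectrum(grid_b, n_theta)
    corr = [np.dot(sa, np.roll(sb, k)) for k in range(n_theta)]
    return np.argmax(corr) * np.pi / n_theta
```

For example, a grid containing a horizontal wall and the same grid rotated by 90° yield spectra whose correlation peaks at pi/2.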

15:45–16:00 WedET6.4<br />

Dynamic Visual Understanding of the Local<br />

Environment for an Indoor Navigating Robot<br />

Grace Tsai and Benjamin Kuipers<br />

Electrical Engineering and Computer Science,<br />

University of Michigan, Ann Arbor<br />

• Represent the indoor environment by a set of meaningful planes – ground and walls.<br />

• Generate qualitatively distinct 3D structural hypotheses from image features incrementally.<br />

• Evaluate a set of qualitatively distinct hypotheses through a Bayesian filter while refining the quantitative precision of each hypothesis based on current observations.<br />

• Runs in real time, without the need for prior training data or the Manhattan-world assumption.


Session WedET8 Gemini 1 Wednesday, October 10, 2012, 15:00–16:00<br />

Search and Rescue – Modeling<br />

Chair Pedro Lima, Inst. Superior Técnico - Inst. for Systems and Robotics<br />

Co-Chair<br />

15:00–15:15 WedET8.1<br />

Multi-sensor ATTenuation Estimation (MATTE):<br />

Signal-strength prediction for teams of robots<br />

Johannes Strom and Edwin Olson<br />

Computer Science and Engineering<br />

University of Michigan USA<br />

Attenuation estimate after an exploration mission<br />

15:30–15:45 WedET8.3<br />

A Markov semi-supervised clustering approach<br />

and its application in topological map extraction<br />

Ming Liu, Francis Colas, Francois Pomerleau and Roland Siegwart<br />

Autonomous Systems Lab, ETH Zurich, Switzerland<br />

• Propose a semi-supervised clustering approach based on k-NN reasoning<br />

• Compared with related semi-supervised algorithms on a two-ring dataset<br />

• Topological segmentation from sparse human input using the proposed clustering model<br />

15:15–15:30 WedET8.2<br />

Robust Acoustic Source Localization of<br />

Emergency Signals from Micro Air Vehicles<br />

Meysam Basiri¹,², Felix Schill¹, Pedro Lima² and Dario Floreano¹<br />

1. Laboratory of Intelligent Systems, Ecole Polytechnique Federale de Lausanne, Switzerland<br />

2. Institute for Systems and Robotics, Instituto Superior Técnico, Lisboa, Portugal<br />

• Sound source localization system for micro air vehicles, capable of localizing narrow-band sound sources on the ground.<br />

• Two types of acoustic information (time delay of arrival and Doppler shift) and the vehicle dynamics are used in a particle filter.<br />

• The potential application in search and rescue missions, locating people with emergency whistles or personal alarms, is experimentally demonstrated to be feasible and effective.<br />
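The particle-filter idea can be sketched for time-difference-of-arrival measurements alone (a simplified illustration; the paper additionally fuses Doppler shift and vehicle dynamics). All names, noise levels and geometry below are assumptions made for the sketch.

```python
import numpy as np

C = 343.0  # speed of sound in air (m/s)

def tdoa(source, mic_a, mic_b):
    """Time-difference-of-arrival of a sound between two microphones (s)."""
    return (np.linalg.norm(source - mic_a) - np.linalg.norm(source - mic_b)) / C

def particle_filter_tdoa(mic_pairs, measurements, n=5000, area=50.0,
                         sigma=2e-3, jitter=0.2, seed=0):
    """Bootstrap particle filter: estimate a 2D ground source position from
    noisy TDOA measurements taken over several microphone pairs."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(-area, area, size=(n, 2))
    for (ma, mb), z in zip(mic_pairs, measurements):
        pred = (np.linalg.norm(particles - ma, axis=1)
                - np.linalg.norm(particles - mb, axis=1)) / C
        w = np.exp(-0.5 * ((z - pred) / sigma) ** 2) + 1e-12  # likelihood
        w /= w.sum()
        particles = particles[rng.choice(n, size=n, p=w)]     # resample
        particles += rng.normal(0.0, jitter, size=(n, 2))     # roughening
    return particles.mean(axis=0)
```

Each TDOA measurement constrains the source to a hyperbola; weighting and resampling over several pairs concentrates the particles near the intersection of those hyperbolas.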

15:45–16:00 WedET8.4<br />

Search-theoretic and Ocean Models for<br />

Localizing Drifting Objects<br />

Joses Yau<br />

Oceanography Department, Naval Postgraduate School, USA<br />

Timothy H. Chung<br />

Systems Engineering Department, Naval Postgraduate School, USA<br />

• Improve the probability of detecting a near-surface drifting object over time using a UAV<br />

• Integrate ocean models (idealized current flows) and search models (expanding-area and discrete myopic search)<br />

• Conduct an experimental design for extensive simulation studies to identify significant factors for improved search<br />

<strong>2012</strong> IEEE/RSJ International Conference on Intelligent Robots and Systems<br />

–168–<br />

Simulated expanding area for the search of a drifting object in idealized surface current flow


Session WedET10 Lince Wednesday, October 10, 2012, 15:00–16:00<br />

Multifingered Hands<br />

Chair Jianwei Zhang, Univ. of Hamburg<br />

Co-Chair<br />

15:00–15:15 WedET10.1<br />

Tactile Sensor Based Varying Contact Point<br />

Manipulation Strategy for Dexterous Robot Hand<br />

Manipulating Unknown Objects<br />

Yuan-Fei Zhang<br />

State Key Laboratory of Robotics and System, HIT, China<br />

Hong Liu<br />

Institute of Robotics and Mechatronics, DLR, Germany<br />

• A tactile-sensor-based varying contact point manipulation strategy is proposed.<br />

• The strategy utilizes simple mathematical models based on the assumption of fixed contact points.<br />

• The strategy is designed to let a dexterous robot hand manipulate unknown objects with rolling contact.<br />

• Experimental results show that the strategy effectively improves manipulation performance.<br />

Experimental results of a dexterous robot hand manipulating unknown objects<br />
unknown objects<br />

15:30–15:45 WedET10.3<br />

Action Gist Based Automatic Segmentation for<br />

Periodic In-hand Manipulation Movement Learning<br />

Gang Cheng, Norman Hendrich and Jianwei Zhang<br />

Department of Informatics, University of Hamburg, Germany<br />

• A short introduction to the techniques of in-hand manipulation action gist<br />

• Based on action gist extraction, a segmentation algorithm is proposed<br />

• The possibility of fusing the segmentation result with tactile information is discussed<br />

• Experiments are conducted on several practical cases, followed by the corresponding analysis<br />

Periodic in-hand manipulation movement<br />

15:15–15:30 WedET10.2<br />

Card Manipulation using a High-speed Robot System<br />

with High-speed Visual Feedback<br />

Yuji Yamakawa and Masatoshi Ishikawa<br />

Dept. of Creative Informatics, Univ. of Tokyo, Japan<br />

Akio Namiki<br />

Dept. of Mechanical Engineering, Chiba Univ., Japan<br />

• We performed card flicking and card catching using a high-speed robot system.<br />

• Card flicking was carried out using the strain energy of card deformation, and we proposed a strategy (utilizing finger vibration) for flicking the card.<br />

• We proposed a high-speed visual feedback control method to catch the card flicked by the robot hand.<br />

• We analyzed the slip condition and the transition from strain energy to kinetic energy during card flicking.<br />

<strong>2012</strong> IEEE/RSJ International Conference on Intelligent Robots and Systems<br />

–169–<br />

Card flicking<br />

15:45–16:00 WedET10.4<br />

Development of a Low Cost Anthropomorphic<br />

Robot Hand with High Capability<br />

Ji-Hun Bae, Sung-Woo Park, Jae-Han Park, Moon-Hong Baeg<br />

Robot Convergence R&D Group, KITECH, Korea<br />

Doik Kim and Sang-Rok Oh<br />

Interaction and Robotics Research Center, KIST, Korea<br />

• Portable, adaptive four-fingered robot hand<br />

• Light-weight anthropomorphic design<br />

• Back-drivable joints for compliant contact with objects<br />

• Capable of handling objects up to 1.5 kg<br />

• Modular structure for easy repair and modification<br />

• Low cost: less than 10 thousand dollars<br />

KITECH Robotic Hand


Session WedEVT9 Fenix 1 Wednesday, October 10, 2012, 15:00–16:00<br />

Vision for Segmentation and Servoing<br />

Chair Aaron Bobick, Georgia Tech.<br />

Co-Chair Dongheui Lee, Tech. Univ. of Munich<br />

15:00–15:15 WedEVT9.1<br />

Guided Pushing for Object Singulation<br />

Tucker Hermans James M. Rehg<br />

Aaron F. Bobick<br />

Robotics and Intelligent Machines<br />

School of Interactive Computing<br />

Georgia Institute of Technology, USA<br />

• Desire to separate and segment all objects in the scene<br />

• Accomplished through pushing<br />

• Object boundaries lie on image edges<br />

• Objects split at the vertical plane through the edge<br />

• Push parallel to this edge through the predicted object centroid<br />

• Quantize candidate boundary orientations for each point cloud cluster<br />

• Stop pushing when all non-empty directions have been tested<br />

15:30–15:45 WedEVT9.3<br />

Robust Visual Servoing for Object Manipulation<br />

with Large Time-Delays of Visual Information<br />

Akihiro Kawamura, Kenji Tahara, Ryo Kurazume and Tsutomu Hasegawa<br />

Kyushu University, Japan<br />

• A new visual servoing method for object manipulation is proposed that is robust to:<br />

– time-delay caused by a low sampling rate<br />

– time-delay caused by image processing and data transmission latency<br />

• A virtual object frame is utilized effectively<br />

• Usefulness is demonstrated by numerical simulations and experiments.<br />

Actual frame and virtual frame<br />

15:50–15:55 WedEVT9.5<br />

Ascending Stairway Modeling: A First Step<br />

Toward Autonomous Multi-Floor Exploration<br />

Jeffrey Delmerico, Jason Corso and Julian Ryde<br />

Dept. of Computer Science and Engineering, SUNY Buffalo, USA<br />

David Baran and Philip David<br />

US Army Research Laboratory, USA<br />

• Goal: autonomous multi-floor exploration by ground robots.<br />

• Discover stairways in the environment during mapping and assess their traversability.<br />

• The system will permit path planning to consider stairways among traversable terrain.<br />

• Robust detector and accurate modeling procedure, sufficient to determine step dimensions to within 2 cm.<br />

Overview of the stairway modeling system: stair detection in depth imagery, aggregation of extracted step-edge point clouds, and generative model fitting.<br />

15:15–15:30 WedEVT9.2<br />

Segmentation of Unknown Objects<br />

in Indoor Environments<br />

A. Richtsfeld, T. Mörwald, J. Prankl, M. Zillich, M. Vincze<br />

Automation and Control Institute (ACIN), Vienna University of Technology, Austria<br />

• Learning segmentation of unknown objects from RGB-D images in a hierarchical data abstraction framework<br />

• Data abstraction of RGB-D images to parametric surface patches<br />

• Learning of perceptual grouping rules with support vector machines (SVMs)<br />

• Globally optimal segmentation using graph cut on the SVM predictions<br />

15:45–15:50 WedEVT9.4<br />

Tire Mounting on a Car Using the Real-Time<br />

Control Architecture ARCADE<br />

Thomas Nierhoff, Lei Lou, Vasiliki Koropouli, Martin Eggers,<br />

Timo Fritzsch, Omiros Kourakos, Kolja Kühnlenz<br />

Dongheui Lee, Bernd Radig, Martin Buss, Sandra Hirche<br />

Technische Universität München, Germany<br />

• ARCADE architecture: hard real-time scheduling, asynchronous remote procedure calls, hardware-in-the-loop development<br />

• Increasing functional modules: autonomous navigation, haptic interaction/perception, human-robot/multi-robot cooperation<br />

• Challenges for control: stability of closed kinematic chains, transfer of human motions to robots<br />

• Challenges for perception: robust object detection in large rooms, whole-environment perception<br />

15:55–16:00 WedEVT9.6<br />

Low cost MAV platform AR-drone in experimental verifications<br />

of methods for vision based autonomous navigation<br />

Martin Saska, Tomáš Krajník, Jan Faigl, Vojtěch Vonásek and Libor Přeučil<br />

Department of Cybernetics, Faculty of Electrical Engineering,<br />

Czech Technical University in Prague, Czech Republic.<br />

• Video: a sequence of various Micro Aerial Vehicle (MAV) applications and research experiments with the AR-Drone playing the main role. http://imr.felk.cvut.cz/demos/videos/drone/<br />

• The presented methods rely on visual navigation and localization using the on-board cameras of the AR-Drone employed in the control feedback.<br />

• The aim is to demonstrate the flight performance of this platform in real-world tasks.<br />

• Applications: 1) MAV-UGV teams for inspection of inaccessible areas; 2) autonomous flight in outdoor environments with human-MAV interaction; 3) AR-Drone as an ad-hoc external localization unit for multi-robot applications; 4) MAV for verification of localization-uncertainty decrease in an autonomous inspection<br />


Autonomous inspection of inaccessible areas by MAV-UGV teams


Session WedFT1 Pegaso A Wednesday, October 10, 2012, 16:15–17:30<br />

Estimation and Sensor Fusion<br />

Chair Li-Chen Fu, National Taiwan Univ.<br />

Co-Chair<br />

16:15–16:30 WedFT1.1<br />

Contactless deflection sensing of concave and<br />

convex shapes assisted by soft mirrors<br />

Michal Karol Dobrzynski, Ionut Halasz, Ramon Pericet-Camara<br />

and Dario Floreano<br />

Laboratory of Intelligent Systems,<br />

Ecole Polytechnique Federale de Lausanne, Switzerland<br />

• Deflection sensor capable of concave and convex shape estimation with no impact on the softness of the deflected substrate.<br />

• Dynamic range of 130° with 0.8° resolution and 400 Hz data acquisition.<br />

• Analytical model in good agreement with measurements (average error of 8%)<br />

• Novel quick manufacturing method for soft PDMS mirrors based on surface tension.<br />

Top: the sensor in its standard configuration perceives concave deflections only. Middle and bottom: by attaching a customized mirror, the range can be extended towards convex deflections.<br />

16:45–17:00 WedFT1.3<br />

Manipulator State Estimation with Low Cost<br />

Accelerometers and Gyroscopes<br />

Philip Roan and Nikhil Deshpande<br />

Robert Bosch LLC, USA and North Carolina State University, USA<br />

Yizhou Wang and Benjamin Pitzer<br />

University of California, Berkeley, USA and Robert Bosch LLC, USA<br />

• Estimate manipulator joint angles using triaxial accelerometers and uniaxial gyroscopes<br />

• Comparison of three different compensation strategies: a complementary filter, a time-varying complementary filter, and an extended Kalman filter<br />

• Mean error of 1.3° over the estimated joints, resulting in end-effector errors of 6.1 mm or less<br />

A generic joint between two links, showing how the accelerometers and gyroscopes are mounted.<br />
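The first of the compared strategies, a time-invariant complementary filter, can be sketched in a few lines: accelerometer-derived angles are noisy but drift-free, while integrated gyro rates are smooth but drift, so blending the two suppresses both defects. The blend factor `alpha` and the signal model below are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def complementary_filter(acc_angles, gyro_rates, dt, alpha=0.98):
    """Fuse accelerometer-derived joint angles with gyroscope rates:
    angle[k] = alpha * (angle[k-1] + gyro[k] * dt) + (1 - alpha) * acc[k].
    High alpha trusts the gyro short-term and the accelerometer long-term."""
    angle = acc_angles[0]
    out = [angle]
    for acc, rate in zip(acc_angles[1:], gyro_rates[1:]):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        out.append(angle)
    return np.array(out)
```

On a synthetic joint trajectory with biased gyro rates and vibration-corrupted accelerometer angles, the fused estimate tracks the true angle far more closely than the raw accelerometer signal.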

17:15–17:30 WedFT1.5<br />

Sensor Fusion Based Human Detection and<br />

Tracking System for Human-Robot Interaction<br />

Kai Siang Ong, Yuan Han Hsu, and Li Chen Fu, Fellow, IEEE<br />

Department of Computer Science & Information Engineering,<br />

Department of Electrical Engineering<br />

National Taiwan University, Taiwan, R.O.C.<br />

• Integrating information from a laser range finder and a vision sensor using the Covariance Intersection algorithm<br />

• Propose a behavior-response system: (1) inference of human behavior, (2) robot reaction<br />

• For human behavior intention inference, we take the proxemics framework into consideration.<br />

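Covariance Intersection, used above for laser-vision fusion, admits a compact sketch (a generic implementation of the standard CI equations, not the authors' code): two estimates with unknown cross-correlation are fused as a convex combination of their information matrices, which guarantees a consistent result.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, w=None):
    """Fuse estimates (x1, P1) and (x2, P2) with unknown cross-correlation:
        P^-1 = w * P1^-1 + (1 - w) * P2^-1
        x    = P (w * P1^-1 x1 + (1 - w) * P2^-1 x2)
    If w is None, pick the weight minimizing trace(P) by a coarse scan."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)

    def fuse(u):
        P = np.linalg.inv(u * I1 + (1.0 - u) * I2)
        x = P @ (u * I1 @ x1 + (1.0 - u) * I2 @ x2)
        return x, P

    if w is None:
        candidates = np.linspace(0.01, 0.99, 99)
        w = min(candidates, key=lambda u: np.trace(fuse(u)[1]))
    return fuse(w)
```

For two estimates that are each confident along a different axis, the fused covariance has a smaller trace than either input while remaining conservative.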

16:30–16:45 WedFT1.2<br />

Deformable Structure From Motion by Fusing<br />

Visual and Inertial Measurement Data<br />

Stamatia Giannarou, Zhiqiang Zhang, Guang-Zhong Yang<br />

Hamlyn Centre for Robotic Surgery, Imperial College London, UK<br />

• 3D reconstruction of a deforming surgical environment in MIS is important for intraoperative guidance.<br />

• A novel adaptive UKF parameterization scheme is proposed to fuse vision information with data from an inertial measurement unit for accurate 3D reconstruction.<br />

• A direct application of the proposed framework is free-form deformation recovery, to enable adaptive motion stabilization and visual servoing in robotically assisted laparoscopic surgery.<br />

17:00–17:15 WedFT1.4<br />

Vision-Aided Inertial Navigation<br />

Using Virtual Features<br />

Chiara Troiani and Agostino Martinelli<br />

INRIA Rhone Alpes, France<br />

• MAV equipped with an IMU and a monocular camera with a laser pointer mounted on a fixed baseline<br />

• A single point feature is used: the laser spot projected on a planar surface and observed by the monocular camera<br />

• Analytical derivation of all the observable modes<br />

• Local decomposition and recursive estimation of the observable modes performed using an extended Kalman filter<br />



Session WedFT3 Pegaso B Wednesday, October 10, 2012, 16:15–17:30<br />

Intelligent Transportation Systems<br />

Chair Christian Laugier, INRIA Rhône-Alpes<br />

Co-Chair<br />

16:15–16:30 WedFT3.1<br />

Evaluating Risk at Road Intersections by<br />

Detecting Conflicting Intentions<br />

Stéphanie Lefèvre, Christian Laugier<br />

Inria Grenoble Rhône-Alpes, France<br />

Javier Ibañez-Guzmán<br />

Renault S.A.S., France<br />

• Risk assessment at road intersections by comparing driver intention with driver expectation: no need to predict the future trajectories of vehicles.<br />

• Intention and expectation are inferred from a probabilistic motion model incorporating contextual information.<br />

• Evaluation in simulation: analysis of the collision prediction horizon and of the potential impact on accident avoidance.<br />

Dangerous situation detected: priority violation at a road intersection<br />

16:45–17:00 WedFT3.3<br />

Driver Assistance System for Backward<br />

Maneuvers in Passive Multi-trailer Vehicles<br />

Jesús Morales, Anthony Mandow,<br />

Jorge L. Martínez and Alfonso J. García Cerezo<br />

Dpto. Ingeniería de Sistemas y Automática,<br />

Universidad de Málaga, Spain<br />

• Advanced driver assistance system (ADAS).<br />

• Avoids jackknifing and lack of visibility in backward multi-trailer driving.<br />

• In reverse, the vehicle is driven as if from the back of the last trailer, aided by a rear camera.<br />

• Application of a steering limitation method on the virtual tractor.<br />

• Curvature limitations are felt through force feedback in the steering wheel.<br />

• Drive-by-wire controls and hitch angle sensors.<br />

• Case study: a 2-trailer robotic vehicle.<br />

17:15–17:30 WedFT3.5<br />

Communication Coverage for Independently<br />

Moving Robots<br />

Stephanie Gil and Dan Feldman and Daniela Rus<br />

CSAIL, MIT, USA<br />

• We provide communication coverage over a<br />

group of sensing vehicles via placement of<br />

mobile base stations where<br />

• Sensors move over unknown trajectories<br />

• No assumed cooperation from sensors<br />

• Only sensor–base station or base station–base station communication over a distance < R is reliable<br />

• We develop provable exact and approximate<br />

(faster) algorithms for finding optimal base<br />

station locations<br />

Communication network for<br />

mobile agents moving over<br />

unknown trajectories with<br />

changing network topology<br />

16:30–16:45 WedFT3.2<br />

Contextual Scene Segmentation of Driving<br />

Behavior based on Double Articulation Analyzer<br />

Kazuhito Takenaka and Takashi Bando<br />

Corporate R&D Div.3, DENSO CORPORATION, Japan<br />

Shogo Nagasaka and Tadahiro Taniguchi<br />

College of Information Science and Engineering, Ritsumeikan Univ., Japan<br />

Kentarou Hitomi<br />

Technical Research Division, Toyota InfoTechnology Center Co.,Ltd., Japan<br />

• Segment driving behavior into meaningful chunks for driving scene<br />

recognition<br />

• Double articulation analyzer is used in a similar manner to natural<br />

language processing<br />

• The result of the segmentation is closer to the driving scene produced by<br />

human recognition<br />

Overview of double articulation analyzer (A), and Nested Pitman-Yor Language Model (B)<br />

17:00–17:15 WedFT3.4<br />

Investigation of Personal Mobility Vehicle<br />

Stability and Maneuverability under Various<br />

Road Scenarios<br />

Jawad Masood and Matteo Zoppi<br />

PMARlab DIME, University of Genoa, Italy<br />

Rezia Molfino<br />

PMARlab DIME, University of Genoa, Italy<br />

• This paper investigates the stability and maneuverability of a personal mobility vehicle.<br />

• The suspension design concept of the personal mobility vehicle is based on accommodating different road types in city centers.<br />

• We have performed a passive dynamic analysis of the hydraulic suspensions in order to study the inertial behaviour of the full-scale vehicle.<br />

• Multi-body dynamic simulations are<br />

created to study angular velocity, angular<br />

acceleration, pitch, yaw and roll variations.<br />



<strong>Session</strong> WedFT4 Fenix 3 <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 16:15–17:30<br />

Mechanism Design for Bio-Inspired Robots<br />

Chair Shigeki Sugano, Waseda Univ.<br />

Co-Chair Pål Liljebäck, SINTEF IKT<br />

16:15–16:30 WedFT4.1<br />

Reconsidering Inter- and Intra-limb Coordination<br />

Mechanisms in Quadruped Locomotion<br />

Takeshi Kano, Dai Owaki, and Akio Ishiguro<br />

Research Institute of Electrical Communication, Tohoku University<br />

Akio Ishiguro<br />

Japan Science and Technology Agency, CREST<br />

• We present an autonomous<br />

decentralized control scheme<br />

for quadruped locomotion<br />

wherein inter- and intra-limb<br />

coordination mechanisms are<br />

well coupled.<br />

• Simulation results show that the<br />

quadruped exhibits transitioning<br />

between walking and running<br />

and the ability to adapt to<br />

changes in body properties.<br />

Gait diagrams for the hind and fore limbs over time at (a) ω = 4 and (b) ω = 12, with flight phases indicated<br />

16:45–17:00 WedFT4.3<br />

Harp plucking robotic finger<br />

Delphine Chadefaux, Jean-Loïc Le Carrou, Sylvère Billout and<br />

Laurent Quartier<br />

UPMC Univ Paris 06, UMR CNRS 7190, d'Alembert, Paris, France<br />

Marie-Aude Vitrani<br />

UPMC Univ Paris 06, UMR CNRS 7222, ISIR, Paris, France<br />

• Design of a configurable robotic finger to<br />

pluck a harp string<br />

• Various silicone fingertips tested<br />

• Comparison with a real harpist<br />

performance<br />

• Validation through high-speed camera and<br />

vibrational measurement descriptors<br />

Robotic finger enhanced with a<br />

silicone fingertip while plucking a<br />

harp string.<br />

17:15–17:30 WedFT4.5<br />

A Modular and Waterproof Snake Robot Joint<br />

Mechanism with a Novel Force/Torque Sensor<br />

Pål Liljebäck* , **, Øyvind Stavdahl*, Kristin Y. Pettersen*, and<br />

Jan Tommy Gravdahl*<br />

* Dept. of Engineering Cybernetics, Norwegian University of Science and<br />

Technology, Norway<br />

** Dept. of Applied Cybernetics, SINTEF ICT, Norway<br />

• We present a waterproof and<br />

mechanically robust joint module for<br />

a snake robot.<br />

• The module contains a custom-designed force/torque sensor based on strain gauges.<br />

• The sensor will enable the snake<br />

robot to measure external contact<br />

forces from its environment.<br />

• Experimental results illustrate the<br />

performance of the force/torque<br />

sensor.<br />

16:30–16:45 WedFT4.2<br />

Materials and Mechanisms for<br />

Amorphous Robotic Construction<br />

Nils Napp and Jessica Wu and Radhika Nagpal<br />

Harvard University, MA, USA<br />

Olive R. Rappoli<br />

Worcester Polytechnic Institute, MA, USA<br />

• Amorphous materials conform to<br />

arbitrary obstacles and can be used<br />

to build in unstructured terrain<br />

• Biological systems successfully use<br />

many different types of amorphous<br />

materials to build in nature<br />

• Robotic mechanisms to deposit<br />

various types of amorphous materials<br />

are presented and compared<br />

• We compare material properties and<br />

evaluate them for use in research for<br />

autonomous robotic construction with<br />

amorphous materials<br />


Small remote controlled robot<br />

building a foam ramp<br />

17:00–17:15 WedFT4.4<br />

Humanlike Shoulder Complex for<br />

Musculoskeletal Robot Arms<br />

Shuhei Ikemoto 1) , Fumiya Kannou 2) , and Koh Hosoda 1)<br />

1) Department of Multimedia Engineering, Osaka University, Japan<br />

2) Department of Adaptive Machine Systems, Osaka University, Japan<br />

• The approach of mimicking<br />

musculoskeletal systems of living<br />

organisms has attracted considerable<br />

attention in robotics.<br />

• The superior limb girdle is fundamental<br />

for the arm movements of humans.<br />

• In particular, the shapes and functionalities of the glenohumeral and scapulothoracic joints are difficult to mimic.<br />

• We propose new designs of the two<br />

joints to develop a musculoskeletal<br />

robot arm.<br />

Developed musculoskeletal robot arm<br />

with humanlike shoulder complex


<strong>Session</strong> WedFT5 Gemini 2 <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 16:15–17:30<br />

Contact Modeling<br />

Chair Rui Cortesao, Univ. of Coimbra<br />

Co-Chair<br />

16:15–16:30 WedFT5.1<br />

Synthesis and Stabilization of Complex Behaviors<br />

through Online Trajectory Optimization<br />

Yuval Tassa, Tom Erez and Emanuel Todorov<br />

Computer Science & Engineering, University of Washington, USA<br />

• Online trajectory optimization<br />

method and software platform.<br />

• iterative-LQG trajectory optimizer.<br />

• Fast, in-house physics engine with<br />

novel contact models.<br />

• Full humanoid behavior is generated at 7× slower than real time; simpler problems already run in real time.<br />

• Nothing is precomputed, no heuristic<br />

approximations to the Value function<br />

are used.<br />

Humanoid getting up<br />

16:45–17:00 WedFT5.3<br />

Robust Sensing of Contact Information for<br />

Detection of the Physical Properties of an Object<br />

Takashi Takuma, Ken Takamine, and Tatsuya Masuda<br />

Department of Electrical and Electronic Systems Engineering,<br />

Osaka Institute of Technology, Japan<br />

• Robust sensing is proposed that estimates the contact information, namely the reaction force from the object and the consequent joint displacement while pushing the object<br />

• Using kinetic and kinematic relationships of a 1-DoF joint mechanism and the physical properties of the McKibben pneumatic actuator, relationships among the magnitude of the force at the contact point, the joint angle, and the inner pressure of the actuator are derived.<br />

• The relationships are evaluated on a physical 1-DoF joint mechanism, and the elasticity of the object is estimated using them.<br />

• Experiments show that robust sensing of the contact information can be achieved without attaching sensors at the contact point or the joint, and that the physical properties of the object can be estimated.<br />

17:15–17:30 WedFT5.5<br />

Comparison of Position and Force-Based<br />

Techniques for Environment Stiffness<br />

Estimation in Robotic Tasks<br />

Fernanda Coutinho and Rui Cortesão<br />

Institute of Systems and Robotics, University of Coimbra, Portugal<br />

• In this paper, we compare the results of position-based stiffness<br />

estimation algorithms with those obtained by COBA.<br />

• COBA is an online stiffness estimation algorithm based on force data<br />

only.<br />

• COBA avoids some problems that can negatively affect the<br />

performance of position-based algorithms.<br />
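As a rough illustration of the position-based baseline such comparisons start from (not COBA itself, which works from force data only), a least-squares stiffness fit under an assumed linear contact model F = k·x might look like:

```python
def stiffness_ls(positions, forces):
    """Least-squares estimate of a linear environment stiffness k (F = k * x).

    A minimal position-based baseline; the paper's algorithms and COBA
    are more elaborate than this sketch.
    positions : penetration depths (m)
    forces    : measured contact forces (N)
    """
    num = sum(f * x for x, f in zip(positions, forces))
    den = sum(x * x for x in positions)
    return num / den
```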

16:30–16:45 WedFT5.2<br />

Trajectory optimization for domains with contacts<br />

using inverse dynamics<br />

Tom Erez and Emanuel Todorov<br />

Computer Science, UW-Seattle, USA<br />

• Trajectory optimization in domains with contact (e.g., ground reaction<br />

forces) is notoriously hard due to the discontinuities at impact.<br />

• We use a soft contact model that can be used with both forward and<br />

inverse dynamics, thereby formulating a continuous optimization problem.<br />

• The algorithm was applied to a 3D simulated humanoid running domain<br />

with 31 degrees of freedom. After ~2500 Newton steps (~<strong>10</strong> minutes on a<br />

standard desktop) a complex running gait emerges.<br />

• Almost all computation time is spent on finite-differencing the dynamics.<br />

Since this bottleneck is trivially parallelizable, our approach stands to<br />

benefit from Moore’s law and any other improvements in parallel CPU<br />

architecture.<br />
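The finite-differencing bottleneck the authors mention is independent across state dimensions, which is what makes it trivially parallelizable. A sketch of a parallelized finite-difference Jacobian (with a placeholder dynamics function) is:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def fd_jacobian(dyn, x, eps=1e-6):
    """Finite-difference Jacobian of a dynamics function dyn: R^n -> R^m.

    Each column depends only on one perturbed evaluation of dyn, so the
    columns can be computed in parallel; dyn is a hypothetical stand-in
    for a physics-engine dynamics call.
    """
    f0 = np.asarray(dyn(x))

    def column(i):
        xp = np.array(x, dtype=float)
        xp[i] += eps
        return (np.asarray(dyn(xp)) - f0) / eps

    # map() preserves ordering, so columns land in the right place
    with ThreadPoolExecutor() as pool:
        cols = list(pool.map(column, range(len(x))))
    return np.stack(cols, axis=1)
```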

17:00–17:15 WedFT5.4<br />

Modeling and Simulation of Friction Forces during<br />

Needle Insertion Using Local Constraint Method<br />

Lijuan Wang, Zhongkui Wang, and Shinichi Hirai<br />

Department of Robotics, Ritsumeikan University, Japan<br />

• In modern clinical practice, accurate needle orientation inside soft tissue is difficult to achieve because of complicated tissue deformations and interaction forces.<br />

• A dynamic model of needle insertion with<br />

friction forces is proposed based on the<br />

Finite Element Method (FEM).<br />

• The relative velocity and contact length are<br />

considered as the main factors of friction<br />

forces during needle insertion.<br />

• Simulations using Local Constraint Method<br />

(LCM) are proposed. Local constraints and<br />

friction forces are calculated and applied<br />

onto the tissue frame to avoid remeshing.<br />


A series of Local Regions along<br />

the needle insertion path.


<strong>Session</strong> WedFT6 Gemini 3 <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 16:15–17:30<br />

Navigation III<br />

Chair Rüdiger Dillmann, KIT Karlsruhe Inst. for Tech.<br />

Co-Chair Christian Pascal Connette, Fraunhofer IPA<br />

16:15–16:30 WedFT6.1<br />

On-line Road Boundary Estimation by Switching<br />

Multiple Road Models using Visual Features<br />

from a Stereo Camera<br />

Takeshi Chiku and Jun Miura<br />

Department of Computer Science and Engineering,<br />

Toyohashi University of Technology, Japan<br />

• On-line road boundary estimation for<br />

mobile robots.<br />

• Multiple road models to cope with a variety<br />

of road shapes.<br />

• Road model switching at the state<br />

transition step of particle filter-based<br />

estimation.<br />

• Use of multiple visual features (color,<br />

edge, height) from a stereo camera for<br />

robust estimation.<br />

• Applied to various road scenes in a<br />

campus environment.<br />

Estimated boundaries and<br />

road regions<br />

16:45–17:00 WedFT6.3<br />

Non-metric Navigation for Mobile Robot Using<br />

Optical Flow<br />

Yung Siang Liau,<br />

Department of ECE, National University of Singapore, Singapore<br />

Qun Zhang, Yanan Li<br />

Department of NGS, National University of Singapore, Singapore<br />

and Shuzhi Sam Ge<br />

IDMI and Department of ECE, National University of Singapore, Singapore<br />

• Mobile robot navigation using a single<br />

camera as the sole sensory device.<br />

• Time-to-collision measure for qualitative<br />

depth information extraction based on<br />

optical flow divergence.<br />

• A heading decision-making framework based on the proposed “Qualitative Visually Admissible Regions”, which takes goal-oriented navigation into account.<br />

• Behaviour-based design for increased robustness under different obstacle configurations.<br />
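The time-to-collision measure from optical flow divergence can be sketched as follows, under the standard assumption of pure translation toward a frontoparallel surface (the flow field is then (x, y)/τ and its divergence equals 2/τ):

```python
import numpy as np

def time_to_collision(u, v, dx=1.0):
    """Estimate time-to-collision from the divergence of an optical flow field.

    u, v : 2-D arrays of horizontal/vertical flow (pixels per frame)
    dx   : grid spacing for the finite differences
    Assumes pure translation toward a frontoparallel surface, so
    div(flow) = 2 / tau and tau = 2 / div.
    """
    du_dx = np.gradient(u, dx, axis=1)
    dv_dy = np.gradient(v, dx, axis=0)
    div = np.mean(du_dx + dv_dy)
    return 2.0 / div
```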


17:15–17:30 WedFT6.5<br />

Shop floor based programming of assembly<br />

assistants for pick-and-place applications<br />

Sven Dose<br />

Robert Bosch GmbH, CR/APA3, Germany<br />

Rüdiger Dillmann<br />

Institute of Anthropomatics, Karlsruhe Institute of Technology, Germany<br />

• Fast programming and commissioning of<br />

flexible assembly assistants without<br />

requiring expert knowledge<br />

• Intuitive operator guidance for leading<br />

inexperienced users efficiently through the<br />

whole programming process<br />

• Easy parameterization of pick-and-place<br />

actions including complex work piece<br />

detection sequences<br />

• Fast and precise teaching of robot and<br />

gripper movements for industrial<br />

manipulation applications<br />

Operational chain for<br />

programming assembly<br />

assistants<br />

16:30–16:45 WedFT6.2<br />

Robot Navigation with<br />

Model Predictive Equilibrium Point Control<br />

Jong Jin Park<br />

Mechanical Engineering, University of Michigan, USA<br />

Collin Johnson and Benjamin Kuipers<br />

Computer Science and Engineering, University of Michigan, USA<br />

• An autonomous vehicle intended<br />

to carry passengers must be able<br />

to generate trajectories on-line<br />

that are safe, smooth and<br />

comfortable.<br />

• We formulate local navigation in<br />

dynamic environments as a<br />

continuous, low-dimensional and<br />

unconstrained optimization<br />

problem which is easy to solve.<br />

• The proposed MPEPC framework depends on a compact parameterization of closed-loop trajectories and the use of expected values in the cost definition.<br />


MPEPC-based Navigation<br />

Top: A non-holonomic wheelchair robot<br />

and local trajectory evaluations.<br />

Bottom: Example navigation run in an<br />

open hall with multiple pedestrians<br />

17:00–17:15 WedFT6.4<br />

Singularity-Free State-Space Representation for<br />

Non-Holonomic, Omnidirectional Undercarriages<br />

by Means of Coordinate Switching<br />

Christian Connette, Martin Hägele, Alexander Verl<br />

Fraunhofer IPA, Stuttgart, Germany<br />

• Non-holonomic omnidirectional undercarriages promise high flexibility and robustness at the same time<br />

• Such drive-chain kinematics are<br />

characterized by singular configurations<br />

and actuator concurrency<br />

• A state-space representation is proposed that allows these hindrances to be circumvented by means of controller switching<br />

• With respect to this state space, a switching-based controller is implemented, validated, and comparatively evaluated<br />

Prototype of the undercarriage and switching borders in Cartesian and spherical state space<br />


<strong>Session</strong> WedFT7 Vega <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 16:15–17:30<br />

Motion Planning for Aerial Robotics<br />

Chair Paolo Robuffo Giordano, Max Planck Inst. for Biological Cybernetics<br />

Co-Chair<br />

16:15–16:30 WedFT7.1<br />

Cooperative Quadrocopter Ball Throwing and<br />

Catching<br />

Robin Ritz, Mark W. Müller,<br />

Markus Hehn and Raffaello D’Andrea<br />

Institute of Dynamic Systems and Control, ETH Zurich, Switzerland<br />

• Method to throw and catch balls<br />

using a net attached to a fleet of<br />

quadrocopters.<br />

• Dynamics and nominal inputs for<br />

all attached vehicles are derived.<br />

• Nonlinear trajectory generation<br />

for catching and throwing,<br />

respectively, is introduced.<br />

• Experimental results show validity<br />

of presented methods.<br />

Three quadrocopters throwing a ball<br />

16:45–17:00 WedFT7.3<br />

Aerial Grasping of a Moving Target with a<br />

Quadrotor UAV<br />

Riccardo Spica 1 , Antonio Franchi 1 , Giuseppe Oriolo 2<br />

Heinrich H. Bülthoff 1,3 , and Paolo Robuffo Giordano 1<br />

1 Max Planck Institute for Biological Cybernetics, Germany<br />

2 Università di Roma La Sapienza, Italy<br />

3 Department of Brain and Cognitive Engineering, Korea University, Korea<br />

• Complete physical model in 6D<br />

(position/orientation)<br />

• Canonical maneuvers for a generic<br />

grasping (also non-hovering) taking<br />

into account the finite time needed<br />

for closing the gripper<br />

• Time-optimal concatenation of<br />

canonical maneuvers with spline<br />

trajectories under limited actuation<br />

for the UAV<br />

• Illustration of multiple<br />

pick&place operations<br />

Validation in a physically realistic<br />

simulation scenario<br />

17:15–17:30 WedFT7.5<br />

A New Utility Function for Smooth Transition<br />

Between Exploration and Exploitation of a Wind<br />

Energy Field<br />

Jen Jen Chung and Salah Sukkarieh<br />

Australian Centre for Field Robotics, The University of Sydney, Australia<br />

Miguel Angel Trujillo Soto<br />

Centre for Advanced Aerospace Technologies, Spain<br />

• Long endurance autonomous flight<br />

requires real-time energy capture.<br />

• In an unknown wind field this becomes an<br />

exploration-exploitation problem.<br />

• Our proposed utility function provides a<br />

continuous scale between exploration and<br />

exploitation.<br />

• Flight tests show a 47.7% reduction in<br />

loitering time compared to a pure<br />

information gain approach.<br />

The agent, Quad1, performs<br />

energy capture by circling above<br />

the energy source, Quad 2.<br />
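The continuous scale between exploration and exploitation can be illustrated with a generic convex blend (the function names and weighting below are illustrative only; the paper's actual utility function is specific to wind-energy-field estimation):

```python
def blended_utility(expected_energy, information_gain, alpha):
    """Generic smooth exploration-exploitation trade-off for a candidate action.

    expected_energy  : exploitation term (hypothetical energy-capture estimate)
    information_gain : exploration term (hypothetical field-information estimate)
    alpha in [0, 1]  : 0 = pure exploitation, 1 = pure exploration
    """
    return (1.0 - alpha) * expected_energy + alpha * information_gain
```

Sweeping alpha from 0 to 1 moves the chosen action continuously from pure energy capture toward pure information gathering, rather than switching abruptly between the two behaviors.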

16:30–16:45 WedFT7.2<br />

Real-Time Trajectory Generation<br />

for Interception Maneuvers with Quadrocopters<br />

Markus Hehn and Raffaello D’Andrea<br />

Institute for Dynamic Systems and Control, ETH Zurich, Switzerland<br />

• Optimality conditions for the interception maneuver that<br />

minimizes the time to rest after interception<br />

• The optimal interception maneuver is identical to the time-optimal maneuver to the position at which the vehicle comes to rest after interception<br />

• Computationally efficient<br />

trajectory generation permits<br />

use as implicit feedback law<br />

• Experimental validation by<br />

intercepting balls mid-flight<br />

17:00–17:15 WedFT7.4<br />

Visual Tracking and Following of a<br />

Quadrocopter by another Quadrocopter<br />

Karl E. Wenzel, Andreas Masselli and Andreas Zell<br />

Chair of Cognitive Systems, University of Tübingen, Germany<br />

• Two autonomous quadrocopters of<br />

different types and configurations<br />

• Parrot AR.Drone as leader,<br />

controlled by an iPad<br />

• AscTec Hummingbird as follower,<br />

with low-cost<br />

onboard hardware<br />

• Our efficient solution of<br />

the perspective-3-point<br />

problem estimating 6DOF<br />

on a microcontroller<br />


Leader (right) and follower (left) at a<br />

desired relative position of 2m


<strong>Session</strong> WedFT8 Gemini 1 <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 16:15–17:30<br />

Tools for Robot Control Design<br />

Chair Sadao Kawamura, Ritsumeikan Univ.<br />

Co-Chair<br />

16:15–16:30 WedFT8.1<br />

Minimum Angular Acceleration Control for<br />

Atrticulated Body Dynamics<br />

16:45–17:00 WedFT8.3<br />

A Framework for Realistic Simulation of<br />

Networked Multi-Robot Systems<br />

Michal Kudelski, Marco Cinus, Luca Gambardella<br />

and Gianni A. Di Caro<br />

Dalle Molle Institute for Artificial Intelligence (IDSIA), Lugano,Switzerland<br />

• We propose an integrated simulation<br />

framework that allows for realistic<br />

simulation of networked robotic systems<br />

• The proposed framework integrates two<br />

simulators: a network simulator and a<br />

multi-robot simulator<br />

• Two implementations are presented,<br />

combining ARGoS with ns-2 and ns-3<br />

• An extensive evaluation and validation of<br />

the integrated simulation framework has<br />

been performed<br />

• We demonstrate a case study that<br />

compares realistic network simulation with<br />

simplified communication models<br />

The architecture of the integrated<br />

simulation framework<br />

17:15–17:30 WedFT8.5<br />

Extensive Analysis of Linear Complementarity Problem<br />

(LCP) Solvers Performance on Randomly Generated<br />

Rigid Body Contact Problems<br />

Evan Drumwright<br />

Computer Science Department, George Washington University, USA<br />

Dylan A. Shell<br />

Computer Science and Engineering, Texas A&M University, USA<br />

• We devised a method to randomly generate multi-rigid-body contact problems<br />

• We evaluate solver performance on these problems with Lemke’s<br />

Algorithm, a primal-dual interior point method, an iterative SOR method,<br />

and PATH<br />

• We exhaustively search for best parameters for each solver<br />

• Which one fares best? Come to the talk and find out!<br />
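For readers unfamiliar with the solver families being compared, the iterative SOR method can be sketched as a projected SOR iteration for the LCP w = Mz + q, z ≥ 0, w ≥ 0, zᵀw = 0 (a minimal version, not the authors' tuned implementation):

```python
import numpy as np

def lcp_psor(M, q, iters=1000, omega=1.0):
    """Projected SOR for the LCP: find z >= 0 with w = M z + q >= 0, z'w = 0.

    Converges for suitable M (e.g. symmetric positive definite); omega is
    the relaxation factor (omega = 1 gives projected Gauss-Seidel).
    """
    n = len(q)
    z = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            w_i = q[i] + M[i] @ z          # current residual for row i
            z[i] = max(0.0, z[i] - omega * w_i / M[i, i])
    return z
```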

16:30–16:45 WedFT8.2<br />

A New Feedback Robot Control Method<br />

based on Position/Image Sensor Integration<br />

Ryosuke Nishida and Sadao Kawamura<br />

Department of Robotics College of Science and Engineering,<br />

Ritsumeikan University, Japan<br />

• This paper proposes a new feedback control method<br />

based on simultaneous use of position/image sensors.<br />

• The proposed control method does not need parameter<br />

calibration of cameras and a robot.<br />

• The motion stability is mathematically proven.<br />

• Experimental results demonstrate the usefulness and<br />

the robustness of the proposed method.<br />

17:00–17:15 WedFT8.4<br />

MuJoCo:<br />

A physics engine for model-based control<br />

Emanuel Todorov 1,2 , Tom Erez 1 and Yuval Tassa 1<br />

Computer Science and Engineering 1 , Applied Mathematics 2<br />

University of Washington, USA<br />

• 500 times faster than real-time in a single<br />

thread (18-DOF humanoid with 6 contacts)<br />

• dynamics can be evaluated in parallel for<br />

different states and controls, facilitating<br />

sampling and finite differences<br />

• multi-joint dynamics represented in joint<br />

coordinates<br />

• hard contacts simulated with new contact<br />

solvers; multiple solvers provided<br />

• both forward and inverse dynamics can be<br />

computed, even in the presence of<br />

contacts and equality constraints<br />

• modeling of tendons, muscles, slider-cranks, pneumatic cylinders<br />


applications of MuJoCo<br />

in trajectory optimization


<strong>Session</strong> WedFT<strong>10</strong> Lince <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 16:15–17:30<br />

Manipulation and Navigation in Space Applications<br />

Chair<br />

Co-Chair<br />

16:15–16:30 WedFT<strong>10</strong>.1<br />

Experimental Results for Image-based<br />

Geometrical Reconstruction for Spacecraft<br />

Rendezvous Navigation with Unknown and<br />

Uncooperative Target Spacecraft<br />

Frank Schnitzer and Klaus Janschek<br />

Institute of Automation, Technische Universität Dresden, Germany<br />

Georg Willich<br />

Astrium GmbH, Germany<br />

• unknown and uncooperative target objects observed with a camera-only vision system<br />

• the target's 3D structure is reconstructed<br />

from a sparse point cloud extracted from<br />

a rendezvous-SLAM algorithm<br />

• 3D model can be used in a feedback<br />

manner for enhancing visual navigation<br />

processing tasks<br />

• demonstrated by experiments with real<br />

image data from in-house laboratory<br />

spacecraft rendezvous simulator<br />

16:45–17:00 WedFT<strong>10</strong>.3<br />

Launching Penetrator<br />

By Casting Manipulator System<br />

Hitoshi Arisumi<br />

Intelligent Systems Research Institute, AIST, Japan<br />

Masatsugu Otsuki and Shinichiro Nishida<br />

Institute of Space and Astronautical Science, JAXA, Japan<br />

• Proposal of a launching planner that<br />

compensates for the timing error of<br />

releasing a penetrator<br />

• Development of a system to launch the penetrator while keeping its orientation constant<br />

• Realization of the target motion with the<br />

hardware<br />

• Verification of the effectiveness of the<br />

proposed method by showing that the<br />

error of the landing position is less than<br />

3.6% of the launching distance through<br />

experiments<br />

Casting manipulator system<br />

to launch the penetrator<br />

17:15–17:30 WedFT<strong>10</strong>.5<br />

A Grouser Spacing Equation for Determining<br />

Appropriate Geometry of Planetary Rover Wheels<br />

Krzysztof Skonieczny, Scott J. Moreland,<br />

and David S. Wettergreen<br />

Robotics Institute, Carnegie Mellon University, USA<br />

• Wheel geometric and operating parameters are related to predict the minimum number of grousers<br />

• Wheels without enough grousers<br />

periodically induce forward soil<br />

flow ahead of the wheel<br />

• Forward soil flow is indicative of<br />

rolling resistance that reduces<br />

traction<br />

• Soil flow is observed through a<br />

glass sidewall and analyzed using<br />

computer vision<br />

Soil flow magnitude (top) and direction<br />

(bottom), showing periodic forward flow<br />
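Once a maximum admissible grouser spacing is known, the minimum grouser count follows from simple wheel geometry (the paper's contribution is the spacing equation itself, which is not reproduced here; max_spacing is taken as a given input):

```python
import math

def grouser_count(diameter, max_spacing):
    """Minimum number of evenly spaced grousers so that the circumferential
    spacing does not exceed max_spacing.

    diameter    : wheel diameter (m)
    max_spacing : maximum admissible grouser spacing (m), assumed to come
                  from an operating-condition analysis such as the paper's
    """
    return math.ceil(math.pi * diameter / max_spacing)
```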

16:30–16:45 WedFT<strong>10</strong>.2<br />

Accuracy Improvement of<br />

Delay Time Compensation Based on<br />

the Coefficient of Restitution for a Hybrid Simulator<br />

Y. Satake, S. Abiko, X. Jiang, and M. Uchiyama<br />

Department of Mechanical Systems and Design, Tohoku University, Japan<br />

A. Konno<br />

Division of System Science and Informatics, Hokkaido University, Japan<br />

• A hybrid simulator is an effective method for testing space robotics on the ground<br />

• This simulator suffers from a problem of<br />

energy increase due to delay times<br />

• The aim of this paper is to improve<br />

accuracy of delay time compensation<br />

• A collision experiment is carried out to<br />

validate the accuracy of the proposed<br />

compensation<br />


Overview of the Hybrid Simulator<br />

17:00–17:15 WedFT<strong>10</strong>.4<br />

Augmented Reality Environment with Virtual<br />

Fixtures for Robotic Telemanipulation in Space<br />

Tian Xia, Anton Deguet, Louis Whitcomb and Peter Kazanzides<br />

Laboratory for Computational Sensing and Robotics,<br />

Johns Hopkins University, USA<br />

Simon Leonard<br />

Children’s National Medical Center (Washington DC), USA<br />

• Developed an augmented reality framework that enables the operator to design and implement assistive virtual fixtures for teleoperation with significant time delay<br />

• Validated technical approach for on-orbit<br />

satellite servicing tasks on ground<br />

simulation robotic platform<br />

• The approach reduces task completion time and eliminates manipulation errors<br />

• Approach provides improved virtual<br />

telepresence for operator<br />

Figure: Virtual fixture teleoperation system architecture, spanning the master side (master controller, task-specific virtual fixture models, constraint generation) and the remote side (remote slave controller, remote sensing, closed-loop control) over bi-directional communication with model update delay<br />


<strong>Session</strong> WedFT11 Hidra <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 16:15–17:30<br />

Robots with Variable Impedance Actuation<br />

Chair Raffaella Carloni, Univ. of Twente<br />

Co-Chair Alin Albu-Schäffer, DLR - German Aerospace Center<br />

16:15–16:30 WedFT11.1<br />

A Simple Controller for a Variable Stiffness Joint<br />

with Uncertain Dynamics and Prescribed<br />

Performance Guarantees<br />

Efi Psomopoulou, Zoe Doulgeri and George A. Rovithakis<br />

Dept of Electrical & Computer Eng., Aristotle University of Thessaloniki, Greece<br />

Nikos G. Tsagarakis<br />

Department of Advanced Robotics, Istituto Italiano di Tecnologia, Italy<br />

• The CompAct-VSA joint is considered.<br />

• Transient and steady state performance<br />

for link q and stiffness motor position<br />

θ k is a priori specified and guaranteed.<br />

• A full state feedback tracking controller<br />

is proposed without requesting any<br />

knowledge of system nonlinearities.<br />

• Simulation results in a trajectory<br />

following task (figure) satisfy preset<br />

performance with reasonable control<br />

effort (max voltage 24 V)<br />

• The stiffness motor desired trajectory<br />

corresponds to a stiffness variation<br />

between the values of 170-582 Nm/rad.<br />

16:45–17:00 WedFT11.3<br />

Limit Cycles and Stiffness Control with Variable<br />

Stiffness Actuators<br />

Raffaella Carloni<br />

Dept. of Electrical Engineering, University of Twente, The Netherlands<br />

Lorenzo Marconi<br />

Dept. Electronics, Computer Science and Systems, University of Bologna, Italy<br />

• The inherent compliance of variable<br />

stiffness actuators is exploited to<br />

obtain a robust and energy-efficient<br />

behavior<br />

• The proposed control strategy<br />

guarantees a robust tracking of a limit<br />

cycle trajectory and of a desired<br />

stiffness for the actuator’s load<br />

• Experimental tests on the vsaUT-II<br />

validate the control design<br />

17:15–17:30 WedFT11.5<br />

Rigid vs. Elastic Actuation: Requirements &<br />

Performance<br />

S. Haddadin, N. Mansfeld, A. Albu-Schäffer<br />

Robotics and Mechatronics Center<br />

This paper answers the following questions:<br />
1) How does geometric scaling (i.e., systematic mass variation) influence the performance of a rigid joint?<br />

2) What are the requirements for an elastic joint consisting of a smaller motor and an elastic transmission in order to reach at least the maximum velocity of the rigid manipulator with equivalent overall mass?<br />

16:30–16:45 WedFT11.2<br />

On the Control of Redundant Robots<br />

with Variable Stiffness Actuation<br />

Gianluca Palli and Claudio Melchiorri<br />

Dipartimento di Elettronica, Informatica e Sistemistica, Università di Bologna<br />

Viale Risorgimento 2, 40136 Bologna, Italy<br />

email: {gianluca.palli, claudio.melchiorri}@unibo.it<br />

• The control of redundant manipulator<br />

with variable stiffness actuation is<br />

discussed<br />

• An output-based control approach is<br />

adopted<br />

• The actuator dynamics are decoupled from the arm dynamics by means of a singular perturbation approach<br />

• The designed controller presents<br />

several advantages with respect to<br />

previously proposed state-feedback<br />

controllers<br />

• The theoretical results are validated by simulation of a three-DOF planar manipulator<br />

2012 IEEE/RSJ International Conference on Intelligent Robots and Systems<br />

Working principle scheme of a VSA.<br />

17:00–17:15 WedFT11.4<br />

On Impact Decoupling Properties of Elastic<br />

Robots and Time Optimal Velocity Maximization<br />

on Joint Level<br />

S. Haddadin, K. Krieger, N. Mansfeld, A. Albu-Schäffer<br />

Robotics and Mechatronics Center<br />

This paper answers the following questions:<br />
1) What are suitable reflected stiffnesses and inertias for achieving impact decoupling, and how is the maximum collision force affected by these?<br />

2) How does the most important real-world<br />

state constraint for elastic joints, the<br />

maximum deflection, affect the optimal<br />

excitation (execution of explosive motion)<br />

of the mechanism?


Session WedFVT2 Fenix 2 Wednesday, October 10, 2012, 16:15–17:30<br />

Telerobotics & Brain-Machine Interfaces<br />

Chair Susumu Tachi, Keio Univ.<br />

Co-Chair<br />

16:15–16:30 WedFVT2.1<br />

A Collaborative Control System for<br />

Telepresence Robots<br />

Douglas G. Macharet<br />

VeRLab, DCC, UFMG, Brazil<br />

Dinei Florêncio<br />

Microsoft Research, USA<br />

• A potential field based framework to<br />

facilitate the control of telepresence<br />

robots<br />

• Adjustable level of autonomy, giving the robot full control over mid-range navigation<br />

• Users had significantly fewer hits (none<br />

in most cases) and took less time to<br />

complete a given task<br />

• Most of the users agreed with the<br />

possible directions pointed out by the<br />

methodology<br />
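A potential-field framework of this kind usually blends the operator's command with repulsive terms from nearby obstacles, so the robot handles local collision avoidance on its own. The sketch below is a generic illustration of that blend; the gains, influence range, and combination rule are assumptions of this example, not values from the paper.

```python
# Hypothetical potential-field blend for collaborative telepresence
# control: the user's 2D velocity command is the attractive term, and
# each nearby obstacle contributes a repulsive term that grows as the
# obstacle gets closer. Gains/ranges are illustrative only.

def collaborative_command(user_cmd, obstacles, d_influence=1.0, k_rep=0.5):
    """Blend a user command with repulsive obstacle forces.

    user_cmd:  (vx, vy) desired velocity from the operator
    obstacles: list of (dx, dy) vectors from robot to obstacle
    """
    fx, fy = user_cmd
    for dx, dy in obstacles:
        d = (dx * dx + dy * dy) ** 0.5
        if 1e-9 < d < d_influence:
            # Classic FIRAS-style magnitude: stronger when closer.
            mag = k_rep * (1.0 / d - 1.0 / d_influence) / (d * d)
            fx -= mag * dx
            fy -= mag * dy
    return fx, fy
```

With no obstacles in range, the operator's command passes through unchanged; an obstacle directly ahead attenuates or reverses the forward component, which is the "fewer hits" behavior the study measures.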

16:45–17:00 WedFVT2.3<br />

Armrest Joystick<br />

-Mechanism Design and Basic Experiments-<br />

Hiroaki Ishida Tetsuo Hagiwara<br />

Koji Ueda and Shigeo Hirose<br />

Department of Mechanical and Aerospace Engineering,<br />

Tokyo Institute of Technology, Japan<br />

• We propose a robot arm’s controller,<br />

“Armrest Joystick,” with high portability<br />

and operability.<br />

• The Armrest Joystick can command 3-DOF position, 3-DOF posture, and a gripper, with force feedback.<br />

• We study a design of the Armrest<br />

Joystick in this paper.<br />

• We conducted some experiments to<br />

verify operability.<br />

Armrest Joystick<br />

17:15–17:20 WedFVT2.5<br />

Towards Robotic Re-Embodiment using<br />

a Brain-and-Body-Computer Interface<br />

Nikolas Martens, Robert Jenke, Mohammad Abu-Alqumsan,<br />

Angelika Peer, and Martin Buss<br />

Institute of Automatic Control Engineering, TUM, Germany<br />

Christoph Kapeller, Christoph Hintermüller, Christoph Guger<br />

Guger Technologies, Austria<br />

• 3 basic scenarios of a BBCI controlled<br />

robot avatar: pick&place, door-opening,<br />

and social interaction<br />

• Development of task-adapted interfaces<br />

for P300 and SSVEP paradigms<br />

• High-level intentions determine what the<br />

robot should execute<br />

• Low-level intentions describe how those<br />

commands are executed<br />

Task-adapted BCI for (a) pick-and-place<br />

and (b) door-opening<br />

16:30–16:45 WedFVT2.2<br />

Design of TELESAR V for Transferring Bodily<br />

Consciousness in Telexistence<br />

Charith Lasantha Fernando, Masahiro Furukawa, Tadatoshi<br />

Kurogi, Sho Kamuro, Katsunari Sato, Kouta Minamizawa and<br />

Susumu Tachi<br />

Graduate School of Media Design,<br />

Keio University. Japan<br />

• A 52 DOF robot for performing<br />

telexistence operations.<br />

• Un-grounded Master cockpit.<br />

• Independent Spinal, Head, Arm, Hand<br />

movements<br />

• Wide Angle HD Head Mounted Display.<br />

• Fingertip Force (Shearing, Vertical)<br />

Sensation<br />

• Fingertip Thermal Sensation<br />

• Extends the bodily border up to the robot<br />

17:00–17:15 WedFVT2.4<br />

Networked Teleoperation with Non-Passive<br />

Environment: Application to Tele-Rehabilitation<br />

S. Farokh Atashzar, Ilia G. Polushin<br />

Dept. of Electrical and Computer Eng., University of Western Ontario, Canada<br />

Rajni V. Patel<br />

Dept. Electrical and Computer Engineering and Dept. of Surgery , University of<br />

Western Ontario , Canada<br />

• The problem of design of a master-slave<br />

tele-rehabilitation system for<br />

assistive/resistive therapy is addressed.<br />

• During assistive therapy, the therapist<br />

supplies the power to the teleoperator<br />

system, thus behaving as an active (non-passive) network.<br />

• Dissipation of the power generated by the<br />

therapist would defeat the purpose of the<br />

assistive therapy.<br />

• A small-gain approach is used to analyze and maintain stability in both assistive and resistive modes.<br />

Top: Velocity of the patient’s hand vs.<br />

therapist’s hand<br />

Bottom: Energy generated by the therapist<br />

17:20–17:25 WedFVT2.6<br />

Rock-Paper-Scissors Prediction Experiments<br />

using Muscle Activations<br />

Giho Jang and Youngjin Choi*<br />

Electronic Systems Engineering, Hanyang University, South Korea<br />

Zhihua Qu<br />

EECS, University of Central Florida, USA<br />

• Initial experimental results for hand posture prediction are presented in the video.<br />
• Property: the initial burst of muscle activation (EMG) precedes the onset of actual movement by tens to hundreds of milliseconds.<br />

• Using this property, the proposed method makes the ternary choice among rock, paper, and scissors as soon as 10% motion variation of any finger is detected.<br />

• It is shown experimentally that the success rate of the proposed<br />

prediction method is over 95%.<br />
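The idea of committing to a prediction at the 10% motion threshold, using only the early EMG burst, can be sketched as below. The two-channel features, class templates, and nearest-template rule are invented for this example; the authors' actual classifier is not described in the abstract.

```python
# Illustrative early-prediction sketch (not the authors' classifier):
# wait until any finger exceeds a small motion threshold, then classify
# the initial EMG burst against hypothetical per-class templates.

ROCK, PAPER, SCISSORS = "rock", "paper", "scissors"

# Hypothetical mean activation of two forearm EMG channels per class.
TEMPLATES = {ROCK: (0.9, 0.2), PAPER: (0.1, 0.8), SCISSORS: (0.5, 0.5)}

def predict(emg, finger_motion, trigger=0.10):
    """Return a class once finger motion passes `trigger`, else None."""
    if finger_motion < trigger:
        return None  # too early to commit to a prediction
    # Nearest-template classification on the burst features.
    return min(TEMPLATES,
               key=lambda c: sum((e - t) ** 2
                                 for e, t in zip(emg, TEMPLATES[c])))
```

Because the EMG burst leads the visible movement, such a predictor can answer before the opponent's hand shape is apparent, which is what makes the experiment interesting.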




17:25–17:30 WedFVT2.7<br />

RobChair: Experiments Evaluating Brain-<br />

Computer Interface to Steer<br />

a Semi-autonomous Wheelchair<br />

Ana C. Lopes, Gabriel Pires and Urbano Nunes<br />

Institute for Systems and Robotics<br />

University of Coimbra, Portugal<br />

• Experiments with a semi-autonomous<br />

wheelchair controlled by means of a<br />

Brain-Computer Interface (BCI);<br />

• Assistive navigation system based on<br />

a collaborative controller;<br />

• User intents are decoded from<br />

electroencephalographic signals<br />

evoked by a visual P300-based<br />

paradigm;<br />

• Experiments carried out with 10 able-bodied participants and a participant with cerebral palsy and motor disabilities showed the effectiveness of the proposed approaches.<br />

Snapshot taken during an experiment<br />

with a participant with motor disabilities<br />



Session WedFVT9 Fenix 1 Wednesday, October 10, 2012, 16:15–17:30<br />

Multi-modal Learning II<br />

Chair<br />

Co-Chair Shingo Shimoda, RIKEN<br />

16:15–16:30 WedFVT9.1<br />

Experimental study on haptic communication of<br />

a human in a shared human-robot collaborative<br />

task<br />

Julie Dumora, Franck Geffard, Catherine Bidard<br />

LIST, CEA, France<br />

Thibaut Brouillet Philippe Fraisse<br />

EPSYLON, Montpellier, France LIRMM, Montpellier, France<br />

• Robot assistance and operator intention detection are proposed to overcome the limitations of a backdrivable robot in long-object manipulation<br />

• The solution of analysing haptic cues to<br />

tackle the rotation/translation ambiguity is<br />

proposed<br />

• Relationships between operator intention<br />

of motion and haptic measurements are<br />

highlighted<br />

• Wrench measurements are shown to provide incomplete information for detecting the operator's intended motion<br />

Rotation/translation ambiguity<br />

in joint human-robot manipulation<br />

of a long object<br />

(view from top down)<br />

16:45–17:00 WedFVT9.3<br />

Maximally Informative Interaction<br />

Learning for Scene Exploration<br />

Herke van Hoof, Oliver Kroemer, Heni Ben Amor and Jan Peters<br />

FG Intelligent Autonomous Systems, TU Darmstadt, Germany<br />

Max Planck Institute for Intelligent Systems, Germany<br />

• In dynamic environments, robots<br />

need to handle novel objects.<br />

• As annotated data is absent, robots need to learn from the results of their actions.<br />

• Exploratory actions that maximize<br />

information gain allow more efficient<br />

learning.<br />

• This method allows a robot to<br />

efficiently learn with minimal prior<br />

information.<br />
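Information-gain action selection of the kind described here is usually framed as: pick the action whose expected posterior entropy is lowest. A minimal sketch, assuming a categorical belief over object hypotheses and a hand-written outcome model (both invented for the example):

```python
# Minimal expected-information-gain action selection. The belief is a
# categorical distribution over hypotheses; each action has a list of
# (outcome probability, posterior belief) pairs. All numbers in the
# usage below are illustrative assumptions.
from math import log2

def entropy(p):
    return -sum(x * log2(x) for x in p if x > 0)

def expected_info_gain(belief, outcome_model):
    """Prior entropy minus expected posterior entropy."""
    h_prior = entropy(belief)
    h_post = sum(p * entropy(post) for p, post in outcome_model)
    return h_prior - h_post

def best_action(belief, actions):
    """actions: dict name -> outcome model; pick the most informative."""
    return max(actions, key=lambda a: expected_info_gain(belief, actions[a]))
```

An action that fully disambiguates two equally likely hypotheses gains one bit; an action whose outcome is uninformative gains zero, so the greedy rule prefers the former.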

17:15–17:20 WedFVT9.5<br />

Towards Robotic Calligraphy<br />

Nico Huebel, Elias Mueggler, Markus Waibel,<br />

and Raffaello D’Andrea<br />

Institute for Dynamic Systems and Control,<br />

ETH Zurich, Switzerland<br />

• We present a prototype of a robotic system that learns how to draw<br />

Chinese characters.<br />

• First, the context of the project is presented.<br />

• Then, the experimental setup and the overall approach is introduced.<br />

• Finally, experimental results are presented and discussed.<br />

This is the experimental setup of our prototype consisting of the<br />

KUKA Light Weight Robot, a Prosilica GC 655C camera, and a brush.<br />

16:30–16:45 WedFVT9.2<br />

Robots Move: Bootstrapping the Development of<br />

Object Representations using Sensorimotor<br />

Coordination<br />

Arren Glover and Gordon Wyeth<br />

Queensland University of Technology, Australia<br />

• This paper is concerned with the<br />

unsupervised generation of object<br />

models by fusing appearance and<br />

action<br />

• A FAB-MAP-based approach is<br />

combined with a partially<br />

observable semi-Markov decision<br />

process<br />

• Results indicate that stronger bag-of-words object representations are formed under sensorimotor constraints<br />


Representations are matched via<br />

self-motion prediction to become<br />

object specific<br />

17:00–17:15 WedFVT9.4<br />

Perceptual Development Triggered<br />

by its Self-Organization in Cognitive Learning<br />

Yuji Kawai, Yukie Nagai, and Minoru Asada<br />

Graduate School of Engineering, Osaka University, Japan<br />

• Our goal: to investigate the role of visual development triggered by self-organization in the visual space in the case of the mirror neuron system (MNS)<br />

• Key idea: The self-triggered visual<br />

development changes adaptively the<br />

developmental speed.<br />

• Result: the self-triggered development maintains a sufficiently long period of immature vision, which can suppress self-other differences in observation. The development thus enhances the acquisition of the association between self and other (i.e., the MNS).<br />

(a) Early stage<br />

of development<br />

(b) Later stage<br />

of development<br />

17:20–17:25 WedFVT9.6<br />

Learning Throwing and Catching Skills<br />

Jens Kober, Katharina Muelling, and Jan Peters<br />

AGBS, MPI for Intelligent Systems, Germany<br />

IAS, TU Darmstadt, Germany<br />

• Learning hitting skills by imitation and reinforcement<br />

learning<br />

• Generalizing hitting skills to catching skills<br />

• Learning to throw at targets<br />

• Combining throwing and catching skills to play catch<br />

A BioRob and a Barrett WAM playing catch.



17:25–17:30 WedFVT9.7<br />

NAO Walking Down a Ramp Autonomously<br />

Christian Lutz, Felix Atmanspacher,<br />

Armin Hornung, and Maren Bennewitz<br />

Dept. of Computer Science, University of Freiburg, Germany<br />

• We present a method to traverse<br />

ramps with humanoids using onboard<br />

vision and IMU sensors<br />

• Our NAO humanoid autonomously<br />

walks down a steep ramp of 2 m<br />

• Single statically stable steps on the<br />

ramp are learned with kinesthetic<br />

teaching<br />

• Start and end of the ramp are<br />

detected from straight lines in the<br />

camera image<br />

• The robot's orientation is inferred from the ramp inclination using the roll and pitch angles of the IMU<br />

NAO on the ramp (top) and on-board camera<br />

images with detected lines (bottom)<br />



Session WedGT1 Pegaso A Wednesday, October 10, 2012, 17:30–18:30<br />

Stereo Vision<br />

Chair Il Hong Suh, Hanyang Univ.<br />

Co-Chair Andreas Zell, Univ. of Tübingen<br />

17:30–17:45 WedGT1.1<br />

A New Feature Detector and Stereo Matching<br />

Method for Accurate High-Performance Sparse<br />

Stereo Matching<br />

Konstantin Schauwecker and Andreas Zell<br />

Department Cognitive Systems, University of Tübingen, Germany<br />

Reinhard Klette<br />

Computer Science Department, The University of Auckland, New Zealand<br />

• Computationally efficient sparse stereo<br />

matching system, achieving processing<br />

rates above 200 frames per second on a<br />

commodity dual-core CPU.<br />

• Although features are matched sparsely, a<br />

dense consistency check is applied, which<br />

drastically decreases the number of false<br />

matches.<br />

• A new FAST-based feature detector is<br />

used, which has a less clustered feature<br />

distribution and leads to an improved<br />

matching performance.<br />
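The consistency check described above generalizes the standard left-right check: a disparity is accepted only if matching back from the right image lands near the original left pixel. The sketch below shows that generic idea on one scanline; it is a simplified stand-in, not the paper's dense variant.

```python
# Generic left-right disparity consistency check on one image row.
# A disparity d at left pixel x is kept only if the right image's
# disparity at x - d agrees within `tol`; otherwise the match is
# rejected as likely false. Simplified illustration only.

def consistent(d_left, d_right, tol=1):
    """Filter a row of left disparities; rejected pixels become None."""
    out = []
    for x, d in enumerate(d_left):
        xr = x - d  # corresponding pixel column in the right image
        if 0 <= xr < len(d_right) and abs(d_right[xr] - d) <= tol:
            out.append(d)
        else:
            out.append(None)  # inconsistent -> likely false match
    return out
```

Running the check densely (over every pixel, not just the sparse features) is what lets the system reject false matches aggressively while keeping the matching itself sparse and fast.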

18:00–18:15 WedGT1.3<br />

Can Stereo Vision replace a Laser Rangefinder?<br />

M. Antunes, J.P. Barreto, C. Premebida and U. Nunes<br />

Department of Electrical and Computer Engineering<br />

Institute of Systems and Robotics<br />

University of Coimbra, Portugal<br />

• We propose Stereo Rangefinding (SRF)<br />

for estimating depth along virtual scan<br />

planes<br />

• The SymStereo framework is used for<br />

quantifying the likelihood of pixel<br />

correspondences using induced symmetry<br />

• The depth estimates of SRF are compared<br />

against the data provided by a LRF<br />

• We show that passive stereo can be an<br />

alternative to LRF in certain robotic<br />

applications<br />

Our paper shows that it is<br />

possible to recover the profile cut<br />

(green) directly from two cameras<br />

17:45–18:00 WedGT1.2<br />

Real-time Velocity Estimation Based on<br />

Optical Flow and Disparity Matching<br />

Dominik Honegger, Pierre Greisen, Lorenz Meier,<br />

Petri Tanskanen and Marc Pollefeys<br />

ETH Zürich Switzerland<br />

• We present an image-based real-time<br />

metric velocity sensor for mobile robot<br />

navigation.<br />

• An FPGA-based stereo camera platform<br />

combines optical flow and disparity values<br />

at 127 fps and 376*240 resolution.<br />

• Radial undistortion, image rectification,<br />

disparity estimation and optical flow are<br />

performed on a single FPGA.<br />

• Suited for MAVs due to low weight, low power, and low latency.<br />


System Setup, Rectified Input<br />

Image, Disparity Map,<br />

Flow Field<br />

18:15–18:30 WedGT1.4<br />

Dependable Dense Stereo Matching by Both<br />

Two-layer Recurrent Process and Chaining<br />

Search<br />

Sehyung Lee, Youngbin Park and Il Hong Suh<br />

Department of Electronics and Computer Engineering, Hanyang University,<br />

Korea<br />

• We propose a recurrent two-layer process<br />

and chaining search for dense stereo<br />

matching.<br />

• The disparity map is calculated through<br />

the iterative integration of pixel and region<br />

layers.<br />

• To estimate the precise disparities in<br />

occluded regions, reliable disparities are<br />

propagated by the chaining search.<br />

• To test our algorithm, we compared it with two leading algorithms from the Middlebury benchmark using images with Gaussian noise.<br />

The first row shows the images with<br />

varying PSNR. The second, third, and<br />

fourth rows show the results of the CVF,<br />

the DBP, and the proposed method.


Session WedGT2 Fenix 2 Wednesday, October 10, 2012, 17:30–18:30<br />

Telerobotics<br />

Chair Jordi Artigas, DLR - German Aerospace Center<br />

Co-Chair<br />

17:30–17:45 WedGT2.1<br />

Multi-Objective Optimization for Telerobotic<br />

Operations via the Internet<br />

Yunyi Jia 1 , Ning Xi 1,2 , Shuang Liu 2 , Huatao Zhang 1 and Sheng Bi 2<br />

1 Electrical and Computer Engineering Department,<br />

Michigan State University, USA<br />

2 Department of Mechanical and Biomedical Engineering,<br />

City University of Hong Kong, Hong Kong, China<br />

• Study the influence of the<br />

teleoperation condition variables<br />

on the telerobotic operations,<br />

including the quality of<br />

teleoperator, task dexterity and<br />

network quality<br />

• Investigate a method to identify these condition variables online and employ them to enhance telerobotic operations with multiple objectives.<br />

• Implemented on a mobile<br />

manipulator and verified its<br />

effectiveness.<br />

System framework<br />

Experiment implementation<br />

18:00–18:15 WedGT2.3<br />

Network Unfoldment and Application to Wave<br />

Variables using measured Forces<br />

J. Artigas<br />

Institute of Robotics and Mechatronics,<br />

DLR - German Aerospace Center, Germany<br />

• General network based<br />

representation, analysis and design.<br />

• A solution to the ambiguity of the<br />

channel causality.<br />

• Time Delay Power Networks: A new<br />

haptic channel representation<br />

paradigm.<br />

• Energy consistent application of the<br />

wave variables framework using<br />

sensed forces at the environment.<br />

17:45–18:00 WedGT2.2<br />

A master-slave robotic simulator based on<br />

GPUDirect<br />

Jianying Li, Yu Guo, Heye Zhang and Yongming Xie<br />

Shenzhen Institutes of Advanced Technology,<br />

Chinese Academy of Sciences, China<br />

• Surgery robotic training simulation<br />

• CUDA and GPUDirect version 1<br />

• Virtual surgery data transfer using<br />

GPUDirect between three<br />

computers by InfiniBand card<br />

• 247% performance improvement in<br />

data transmission speed in our<br />

master-slave robotic simulator<br />

18:15–18:30 WedGT2.4<br />

Unimodal Asymmetric Interface for<br />

Teleoperation of Mobile Manipulators:<br />

A User Study<br />

Alejandro Hernandez Herdocia, Azad Shademan,<br />

and Martin Jägersand<br />

Department of Computing Science, University of Alberta, Canada<br />

• Different methods to command a one-arm mobile manipulator were studied:<br />

• Workspace Clutching (manipulator)<br />

• Differential End-Zone (manipulator)<br />

• Position/Rate Switching (manipulator)<br />

• Rate Control (mobile base)<br />

• Evaluated pick & place performance with<br />

7 subjects solving a Towers of Hanoi<br />

puzzle<br />

• Two case studies were presented:<br />

• Large-displacement pick & place<br />

• Opening a door and exiting a room<br />


Mobile manipulator<br />

(SEGWAY+WAM) with<br />

asymmetric and haptics-enabled<br />

master-slave configuration.


Session WedGT3 Pegaso B Wednesday, October 10, 2012, 17:30–18:30<br />

Home Automation and Personal Robots<br />

Chair Stefano Mazzoleni, Scuola Superiore Sant'Anna<br />

Co-Chair<br />

17:30–17:45 WedGT3.1<br />

Acquisition and Use of Transferable,<br />

Spatio-Temporal Plan Representations for<br />

Human-Robot Interaction<br />

Michael Karg<br />

Institute for Advanced Study, Technische Universität München, Germany<br />

Alexandra Kirsch<br />

Department of Computer Science, University of Tübingen, Germany<br />

• Generation of a semantically annotated spatial model by combining motion-tracking data with information from semantic maps<br />

• Automatic segmentation of motion-tracking data using the spatial model<br />

• Generation of transferable,<br />

general, spatio-temporal plan<br />

representations for different<br />

tasks<br />

• Application: Passive plan<br />

supervision in different<br />

environments based on plan<br />

patterns and durations at<br />

semantically annotated locations<br />

18:00–18:15 WedGT3.3<br />

Context-aware Home Energy Saving based on<br />

Energy-Prone Context<br />

Mao-Yung Weng, Chao-Lin Wu, Ching-Hu Lu, Hui-Wen Yeh<br />

and Li-Chen Fu<br />

Department of Computer Science & Information Engineering,<br />

National Taiwan University, Taiwan, R.O.C.<br />

• An energy-prone context (EPC) is an activity with its associated energy consumption.<br />
• An EPC captures how necessary each energy consumption is to the activity.<br />
• We propose a systematic method to determine energy-saving (ES) services based on EPCs.<br />
• The EPC-based ES system is potentially 25% more effective than a location-based one.<br />

Figure: An energy-prone context for the activity "Watch TV" in the living room. Explicit power consumption: TV (on, 120 W, necessity 1.0). Implicit power consumption: A/C (on, 3 kW, 0.75), water heater (on, 4 kW, 0.56), Xbox (standby, 2 W, 0.98), living-room lamp (off, 0 W, 0.96), and further lights off in the hallway, kitchen, and bedroom.<br />
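One way to read the EPC idea operationally: each device in the context carries a power draw and a necessity score for the current activity, and the energy-saving service switches off devices whose necessity is low. The device names and scores below echo the WatchTV example; the threshold rule is an assumption of this sketch, not the paper's decision method.

```python
# Illustrative EPC-based energy-saving decision: turn off any device
# that is drawing power but whose necessity score for the current
# activity falls below a threshold. Threshold is an assumption here.

def energy_saving_actions(epc, necessity_threshold=0.6):
    """Return (devices_to_turn_off, watts_saved) for one activity's EPC.

    epc: dict device -> (state, watts, necessity in [0, 1])
    """
    off, saved = [], 0
    for device, (state, watts, necessity) in sorted(epc.items()):
        if state != "off" and necessity < necessity_threshold:
            off.append(device)
            saved += watts
    return off, saved
```

On the WatchTV example, only the water heater (necessity 0.56) falls below a 0.6 threshold, so it alone is switched off, saving its full 4 kW draw while the TV and A/C stay on.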

17:45–18:00 WedGT3.2<br />

Hierarchical Generalized Context Inference<br />

for Context-aware Smart Homes<br />

Chao-Lin Wu, Mao-Yuan Weng, Ching-Hu Lu and Li-Chen Fu<br />

Department of Computer Science & Information Engineering,<br />

National Taiwan University, Taiwan, R.O.C.<br />

• Hierarchical generalized context<br />

inference helps improve the<br />

performance of multi-user<br />

activity recognition.<br />

• A generalized context (GC) is an<br />

abstracted context composed of<br />

several contexts with common<br />

features.<br />

• This mechanism treats multiple users as an aggregated entity and hierarchically groups contexts into GCs.<br />
• Context-aware smart homes based on this method can provide as many appropriate services as possible.<br />

Figure: Hierarchical generalized context inference. In the training phase, contexts C(i,j) are hierarchically generalized into generalized contexts GC(0), GC(1), …, GC(i), and a DBN model is constructed for each; in the testing phase, observed features feed the hierarchical generalized context inference engine, which infers context labels.<br />

18:15–18:30 WedGT3.4<br />

Complex Task Learning from<br />

Unstructured Demonstrations<br />


Scott Niekum, Sarah Osentoski, George Konidaris, and Andrew G. Barto<br />

• We present a novel method for segmenting<br />

demonstrations, recognizing repeated skills,<br />

and generalizing complex tasks from<br />

unstructured demonstrations.<br />

• This method combines many of the<br />

advantages of recent automatic segmentation<br />

methods for learning from demonstration into<br />

a single principled, integrated framework.<br />

• Specifically, we use the Beta Process<br />

Autoregressive Hidden Markov Model and<br />

Dynamic Movement Primitives to learn and<br />

generalize a multi-step task on the PR2<br />

mobile manipulator and to demonstrate the<br />

potential of our framework to learn a large<br />

library of skills over time.


Session WedGT5 Gemini 2 Wednesday, October 10, 2012, 17:30–18:30<br />

Cooperating Robots<br />

Chair Mike Stilman, Georgia Tech.<br />

Co-Chair Pedro Lima, Inst. Superior Técnico - Inst. for Systems and Robotics<br />

17:30–17:45 WedGT5.1<br />

Weighted Synergy Graphs for Role Assignment<br />

in Ad Hoc Heterogeneous Robot Teams<br />

Somchaya Liemhetcharat and Manuela Veloso<br />

School of Computer Science, Carnegie Mellon University, USA<br />

• In ad hoc scenarios, robot capabilities and<br />

interactions are initially unknown.<br />

• The Weighted Synergy Graph for Role<br />

Assignment (WeSGRA) models:<br />

• capabilities of robots at different roles;<br />

• interactions between robots using a<br />

weighted graph structure.<br />

• We learn the WeSGRA from training<br />

examples and use it to approximate the<br />

optimal role assignment.<br />

• We extensively evaluate the WeSGRA<br />

model and algorithms with the RoboCup<br />

Rescue simulator and with real robots.<br />

An example WeSGRA with 3<br />

agent types and 2 roles.<br />
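The weighted-synergy-graph idea can be caricatured as: each robot has a capability value per role, and pairs of robots contribute a synergy term that weakens with their weighted graph distance. The combination rule below (capability sum plus inverse-distance pair terms) is a simplification for illustration, not the paper's exact WeSGRA model.

```python
# Hedged sketch of scoring a role assignment on a weighted synergy
# graph: individual capabilities plus pairwise synergy that decays
# with graph distance. The scoring rule is an illustrative assumption.

def team_value(assignment, capability, distance):
    """Score a role assignment.

    assignment: dict role -> robot
    capability: dict (robot, role) -> value
    distance:   dict frozenset({a, b}) -> graph distance between robots
    """
    robots = list(assignment.items())
    value = sum(capability[(r, role)] for role, r in robots)
    # Pairwise synergy terms, weaker for robots far apart in the graph.
    for i in range(len(robots)):
        for j in range(i + 1, len(robots)):
            a, b = robots[i][1], robots[j][1]
            value += 1.0 / distance[frozenset({a, b})]
    return value
```

Learning then amounts to fitting the capability values and edge weights from observed team performances, after which candidate role assignments can be ranked by this score.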

18:00–18:15 WedGT5.3<br />

On Mission-Dependent Coordination of Multiple<br />

Vehicles under Spatial and Temporal Constraints<br />

Federico Pecora, Marcello Cirillo and Dimitar Dimitrov<br />

Center for Applied Autonomous Sensor Systems,<br />

Örebro University, Sweden<br />

• We propose a constraint-based, least-commitment approach for coordinating multiple non-holonomic autonomous ground vehicles<br />
• All decisions on vehicle trajectories are expressed as temporal or spatial constraints on those trajectories<br />
• Trajectories are not committed to until execution time<br />
• Vehicles account for the tracking performance of other vehicles and for posted constraints during execution<br />

Temporal and spatial constraints on a vehicle's trajectory<br />

17:45–18:00 WedGT5.2<br />

Multi-Robot Multi-Object Rearrangement in<br />

Assignment Space<br />

Martin Levihn and Takeo Igarashi<br />

JST ERATO IGARASHI Design Interface Project, Japan<br />

Mike Stilman<br />

Interactive Computing, Georgia Institute of Technology, USA<br />

• We present Assignment Space Planning, a new, efficient multi-agent robot coordination algorithm<br />

• It yields optimal solutions for simple<br />

problems and novel emergent<br />

behavior for complex scenarios<br />

• Computation time is within seconds<br />

• We demonstrate results not just in<br />

simulation but also on real robot<br />

systems<br />


Multiple robots push multiple<br />

objects to designated goals.<br />

18:15–18:30 WedGT5.4<br />

Multi-Robot Exploration and Rendezvous on<br />

Graphs<br />

Malika Meghjani and Gregory Dudek<br />

School of Computer Science, McGill University, Canada<br />

• We address the problem of multi-robot rendezvous in an unknown bounded environment, starting at unknown locations, without any communication.<br />

• The goal is to meet in minimum<br />

time such that the robots can<br />

share resources to speed up<br />

global exploration.<br />

• We propose an energy efficient<br />

combination of exploration and<br />

rendezvous processes.<br />

• Our simulation results suggest that energy-efficient methods are needed to achieve much faster rendezvous times.<br />

Randomly generated<br />

environment and its<br />

performance with noisy<br />

robot sensors


Session WedGT6 Gemini 3 Wednesday, October 10, 2012, 17:30–18:30<br />

Localization and Mapping III<br />

Chair Jun Miura, Toyohashi Univ. of Tech.<br />

Co-Chair<br />

17:30–17:45 WedGT6.1<br />

Efficient Search for Correct and Useful<br />

Topological Maps<br />

Collin Johnson and Benjamin Kuipers<br />

EECS, University of Michigan, USA<br />

• We present an algorithm for probabilistic<br />

topological mapping.<br />

• Perform a heuristic search of a tree of<br />

maps.<br />

• Runs online.<br />

• Never prunes consistent topological map<br />

hypotheses so correct map can always be<br />

found.<br />

Top: Topological map built by our<br />

algorithm<br />

Bottom: Hypotheses expanded<br />

by our algorithm vs. brute force<br />

18:00–18:15 WedGT6.3<br />

Accurate 3D maps from depth images and<br />

motion sensors via nonlinear Kalman filtering<br />

Thibault Hervier, Silvère Bonnabel, François Goulette<br />

Centre de Robotique - CAOR, MINES ParisTech, France<br />

• Use of depth images as localization<br />

sensors<br />

• Combined with ICP<br />

• Analysis of ICP results<br />

• Data fusion with non-linear filtering:<br />

Invariant Extended Kalman Filter<br />

• Natural, robust, handles SE(3)<br />

• Experiments with a Kinect sensor and gyros show improved accuracy in localization and map building<br />

Pipeline: depth images → ICP → localization & covariance → nonlinear Kalman filtering (IEKF) → 3D maps, fused with motion data (gyros); experimental setup shown<br />

17:45–18:00 WedGT6.2<br />

Accurate On-Line 3D Occupancy Grids<br />

Using Manhattan World Constraints<br />

Brian Peasley and Stan Birchfield<br />

Electrical and Computer Engineering Dept, Clemson University, USA<br />

Alex Cunningham and Frank Dellaert<br />

School of Interactive Computing, Georgia Institute of Technology, USA<br />

• Large dense 3D occupancy grids<br />

are constructed from RGB-D data<br />

• Factor graphs are used to combine<br />

odometry and visual data<br />

constrained by a Manhattan World<br />

assumption<br />

• Manhattan World assumption<br />

removes rotational drift – no need<br />

for loop closure<br />

• Large 3D maps of environments<br />

efficiently stored using an octree<br />


3D and 2D reconstruction of<br />

a large building environment<br />

18:15–18:30 WedGT6.4<br />

Fourier-based Registrations for Two-Dimensional<br />

Forward-Looking Sonar Image Mosaicing<br />

Natalia Hurtos, Xavier Cufí and Joaquim Salvi<br />

Computer Vision and Robotics Group, University of Girona, Spain<br />

Yvan Petillot<br />

Ocean Systems Laboratory, Heriot-Watt University, U.K.<br />

• Phase correlation method is used to<br />

address the registration of forward-looking<br />

sonar images.<br />

• Registrations from loop-closing situations<br />

and areas without abundant features are<br />

feasible.<br />

• By integrating the result of pairwise<br />

registrations into a pose-based graph<br />

optimization, a consistent sonar mosaic is<br />

generated.<br />

• The vehicle motion in x, y, and heading can also be estimated from the registrations.
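The phase-correlation principle behind this kind of registration can be sketched for the pure-translation case. This is a generic toy example recovering an integer image shift, not the authors' sonar mosaicing implementation, which handles rotation and pose-graph optimization as well:

```python
import numpy as np

def phase_correlation(a, b):
    """Return the integer (dy, dx) such that b ~ np.roll(a, (dy, dx), axis=(0, 1)).

    The normalized cross-power spectrum keeps only the phase difference
    between the two images; its inverse FFT peaks at the translation.
    """
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12            # whiten: keep phase only
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                           # map wrapped peaks to negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# toy check: shift a random "image" and recover the offset
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlation(img, shifted))        # (5, -3)
```

Because the spectrum is whitened, the peak stays sharp even for low-texture images, which is one reason phase correlation suits feature-poor sonar data.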


<strong>Session</strong> WedGT7 Vega <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 17:30–18:30<br />

Mobile Manipulation<br />

Chair Masato Ishikawa, Osaka Univ.<br />

Co-Chair<br />

17:30–17:45 WedGT7.1<br />

Path Planning for Image-based Control of<br />

Wheeled Mobile Manipulators<br />

Moslem Kazemi, Kamal Gupta, Mehran Mehrandezh<br />

Carnegie Mellon University, Pittsburgh, USA<br />

Simon Fraser University, Burnaby, Canada<br />

University of Regina, Regina, Canada<br />

• Proposes a randomized kinodynamic<br />

planning approach for non-holonomic<br />

mobile manipulators<br />

• Accounts for visibility constraints<br />

(occlusion and field of view limits)<br />

• Coordinates the motion of the mobile base<br />

and on-board arm through weighted<br />

pseudo-inverse solutions and a null-space<br />

optimization technique<br />

• Proposes a decoupled trajectory tracking<br />

strategy: state feedback control of the<br />

mobile base + image-based control of the<br />

on-board arm<br />

18:00–18:15 WedGT7.3<br />

Modeling and Control<br />

of Cylindrical Mobile Robot<br />

Tetsurou Hirano, Masato Ishikawa and Koichi Osuka<br />

Dept. of Mech. Eng., Osaka Univ., Japan<br />

• Cylindrical mobile robot, driven by one<br />

eccentric rotor with single actuator<br />

• Two fundamental locomotion modes: edge<br />

rolling and lateral-side rolling<br />

• Modeled the robot using Lagrange’s<br />

E.O.M. and proposed a control algorithm for<br />

both modes<br />

• Achieved position control experiments<br />

using only internal sensors on a real cylindrical<br />

mobile robot<br />

Cylindrical Body<br />

Eccentric Rotor<br />

Cylindrical Mobile Robot<br />

17:45–18:00 WedGT7.2<br />

Mobile Manipulation Through<br />

An Assistive Home Robot<br />

Matei Ciocarlie, Kaijen Hsiao, and David Gossow<br />

Willow Garage Inc., USA<br />

Adam Leeper<br />

Willow Garage Inc., USA, and Stanford University, USA<br />

• We present a mobile manipulator operated by<br />

a motor-impaired person to perform varied<br />

and unscripted manipulation tasks<br />

• We describe the complete set of tools that<br />

enable the execution of complex tasks, and<br />

share the lessons learned from testing them<br />

in a real user’s home<br />

• In the context of grasping, we show how the<br />

use of autonomous sub-modules improves<br />

performance in complex, cluttered<br />

environments<br />


A PR2 robot, operated by a motor-impaired<br />

user, performing a manipulation task in a<br />

real home<br />

18:15–18:30 WedGT7.4<br />

Sensor-based Redundancy Resolution for a<br />

Nonholonomic Mobile Manipulator<br />

Huatao Zhang, Yunyi Jia and Ning Xi<br />

Electrical and Computer Engineering Department,<br />

Michigan State University, USA<br />

• Provide a redundancy<br />

resolution method for online<br />

trajectory generation by using<br />

real-time sensor information.<br />

• Employs multi-objective<br />

functions to meet the<br />

constraints and requirements<br />

simultaneously.<br />

• Apply this method to a high<br />

DOF mobile manipulator, and<br />

the effectiveness is<br />

demonstrated by simulation<br />

results.<br />

System structure<br />

Simulation results


<strong>Session</strong> WedGT8 Gemini 1 <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 17:30–18:30<br />

Control Design<br />

Chair Darwin G. Caldwell, Istituto Italiano di Tecnologia<br />

Co-Chair Carme Torras, CSIC - UPC<br />

17:30–17:45 WedGT8.1<br />

Redundant Inverse Kinematics: Experimental<br />

Comparative Review and Two Enhancements<br />

Adrià Colomé and Carme Torras<br />

Institut de Robòtica Industrial, UPC-CSIC, Barcelona, Spain<br />

• Review of Closed-Loop<br />

Inverse Kinematics<br />

algorithms (CLIK), pointing<br />

out their strengths and<br />

weaknesses.<br />

• New filtering of the Jacobian<br />

matrix that guarantees a<br />

good conditioning of the<br />

pseudoinverse.<br />

• New options to make CLIK algorithms smoother and more robust, efficiently<br />

avoiding joint limits.<br />
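The conditioning issue such Jacobian filtering addresses can be illustrated with the standard damped least-squares (DLS) pseudoinverse. This is a generic sketch of the problem, not the paper's own filtering scheme; the 2-link planar arm is a hypothetical example:

```python
import numpy as np

def dls_pinv(J, lam=0.1):
    """Damped least-squares pseudoinverse J^T (J J^T + lam^2 I)^-1.

    Near a singularity the plain pseudoinverse blows up; damping bounds
    the gain on small singular values, trading accuracy for stability.
    """
    m = J.shape[0]
    return J.T @ np.linalg.inv(J @ J.T + lam**2 * np.eye(m))

# hypothetical 2-link planar arm, almost fully stretched (near-singular)
l1 = l2 = 1.0
q = np.array([0.0, 1e-4])
J = np.array([
    [-l1*np.sin(q[0]) - l2*np.sin(q[0]+q[1]), -l2*np.sin(q[0]+q[1])],
    [ l1*np.cos(q[0]) + l2*np.cos(q[0]+q[1]),  l2*np.cos(q[0]+q[1])],
])
dx = np.array([0.01, 0.0])                 # small step along the hard direction
dq_plain = np.linalg.pinv(J) @ dx          # enormous joint velocities
dq_damped = dls_pinv(J) @ dx               # bounded joint velocities
print(np.linalg.norm(dq_plain), np.linalg.norm(dq_damped))
```

Selecting or scheduling the damping factor (here a fixed `lam`) is exactly where methods differ; the CLIK review compares such choices experimentally.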

18:00–18:15 WedGT8.3<br />

Internal Model Control for Improving the Gait<br />

Tracking of a Compliant Humanoid Robot<br />

Luca Colasanto, Nikos G. Tsagarakis,<br />

Zhibin Li and Darwin G. Caldwell<br />

Department of Advanced Robotics,<br />

Istituto Italiano di Tecnologia, Italy<br />

• A 3-dimensional compliant model at<br />

the level of the Center of Mass of a<br />

compliant humanoid is presented and<br />

experimentally validated.<br />

• An Internal Model Controller is designed<br />

and implemented on the COMAN<br />

robot.<br />

• Experimental results of dynamic<br />

walking performed using the control<br />

strategy are presented.<br />

COMAN robot<br />

17:45–18:00 WedGT8.2<br />

Hierarchical strategy for dynamic coverage<br />

C. Franco, G. Lopez-Nicolas and C. Sagues<br />

DIIS - I3A, Universidad de Zaragoza, Spain<br />

D. Paesa and S. Llorente<br />

BSH Electrodomésticos España, Spain<br />

• Hierarchical global strategy to perform<br />

dynamic coverage<br />

• Ordered coverage to reduce the path<br />

length<br />

• Combination of local gradient strategy with<br />

global hierarchical strategy to increase the<br />

efficiency while avoiding local minima<br />

• Bounded actions imposed<br />

18:15–18:30 WedGT8.4<br />

Approximate Steering of a Plate-Ball System<br />

Under Bounded Model Perturbation Using<br />

Ensemble Control<br />

Aaron Becker 1 and Timothy Bretl 2<br />

University of Illinois at Urbana-Champaign, USA<br />

1 Electrical and Computer Engineering, 2 Aerospace Engineering<br />

• Revisit the classical plate-ball system and prove it remains open-loop controllable when the sphere radius is unknown<br />

• Algorithm for approximate<br />

steering to arbitrary orientations<br />

and positions<br />

• Hardware validation with new<br />

plate-ball system based on<br />

magnetic actuation; the system is<br />

easy to implement<br />

• Enables simultaneous<br />

manipulation of multiple balls with<br />

different radii<br />


(a) ensemble plate-ball system; (b) underlying mechanism<br />

Hardware platform capable of rolling 15<br />

different-sized spheres to arbitrary orientations


<strong>Session</strong> WedGT9 Fenix 1 <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 17:30–18:30<br />

Control of Wheeled Robots II<br />

Chair Franz Dietrich, Tech. Univ. Braunschweig<br />

Co-Chair<br />

17:30–17:45 WedGT9.1<br />

A Novel Approach For Steering Wheel<br />

Synchronization With Velocity/Acceleration<br />

Limits And Mechanical Constraints<br />

Ulrich Schwesinger, Cédric Pradalier and Roland Siegwart<br />

Autonomous Systems Lab, ETH Zurich, Switzerland<br />

• An algorithm for steering-wheel synchronization of over-actuated pseudo-omnidirectional rovers is presented.<br />

• Constraints on velocity and acceleration of<br />

the steering units are taken into account.<br />

• The constraints are satisfied via a<br />

compliant control of the instantaneous<br />

center of rotation.<br />

• The performance of the synchronization<br />

algorithm is evaluated on a breadboard for<br />

the ExoMars mission.<br />

ExoMars rover - phase B1<br />

concept, source: ESA/Cluster<br />

18:00–18:15 WedGT9.3<br />

Disturbance Compensation in Pushing, Pulling,<br />

and Lifting for Load Transporting Control of a<br />

Wheeled Inverted Pendulum Type Assistant<br />

Robot Using The Extended State Observer<br />

Luis Canete and Takayuki Takahashi<br />

Graduate School of Symbiotic Systems Science,<br />

Fukushima University, Japan<br />

• The system is an Inverted<br />

PENdulum Type Assistant Robot<br />

(I-PENTAR).<br />

• The system is designed to use its<br />

balance to apply large torques and<br />

forces.<br />

• Uses the Extended State Observer<br />

to compensate for disturbances<br />

during performance of tasks.<br />

• Tests for impulse and step<br />

disturbances were applied to test<br />

the system robustness.<br />

• The robot is able to push and pull<br />

14kg loads up a ramp and lift up to<br />

7.5kg loads.<br />

I-PENTAR and the proposed<br />

pushing/pulling and lifting tasks<br />
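The extended state observer idea can be sketched for a double-integrator load model. This is a generic textbook-style ESO with bandwidth-parameterized gains, not I-PENTAR's actual controller; the plant, gains, and disturbance value are illustrative assumptions:

```python
def simulate_eso(d=2.0, w0=20.0, dt=1e-3, T=2.0):
    """Extended state observer (ESO) for the load model x1'' = u + d.

    The unknown disturbance d is appended as an extra state; observer
    gains place all error poles at -w0 (bandwidth parameterization).
    """
    b1, b2, b3 = 3 * w0, 3 * w0**2, w0**3
    x1 = x2 = 0.0            # true plant state
    z1 = z2 = z3 = 0.0       # observer state; z3 estimates d
    u = 0.0                  # no control input here, pure estimation
    for _ in range(int(T / dt)):
        x1, x2 = x1 + dt * x2, x2 + dt * (u + d)   # plant, Euler step
        e = x1 - z1                                # measurement residual
        z1, z2, z3 = (z1 + dt * (z2 + b1 * e),
                      z2 + dt * (z3 + u + b2 * e),
                      z3 + dt * (b3 * e))
    return z3

print(simulate_eso())   # settles near the true disturbance 2.0
```

Once the observer tracks, the estimated disturbance can simply be subtracted from the control input, which is how an ESO compensates unknown pushing/pulling/lifting loads without a load sensor.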

17:45–18:00 WedGT9.2<br />

Wheeled Inverted-Pendulum-Type Personal Mobility Robot<br />

with Collaborative Control of Seat Slider and Leg Wheels<br />

Nobuyasu Tomokuni<br />

Department of Intelligent Mechanical Engineering,<br />

Faculty of Engineering, Kinki University, Japan.<br />

Motoki Shino<br />

Department of Mechanical Engineering, The University of Tokyo, Japan.<br />

• This paper describes a motion control that<br />

improves stability and ride comfort<br />

for a personal mobility robot (PMR).<br />

• The PMR has a unique mechanism that<br />

consists of two independent leg wheels<br />

and a seat slider for inverted pendulum<br />

type mobility.<br />

• These mechanical features achieve greater compactness and support both indoor and outdoor mobility.<br />

• We propose a whole-body collaborative<br />

controller based on a linear-quadratic<br />

regulator from a three-dimensional<br />

kinematics model of the PMR.<br />


Personal mobility robot (PMR)<br />

18:15–18:30 WedGT9.4<br />

A 3D Dynamic Model of a Spherical Wheeled<br />

Self-Balancing Robot<br />

Ali Nail İnal and Ömer Morgül<br />

Dept. of Electrical & Electronics Eng., Bilkent University, Turkey<br />

Uluç Saranlı<br />

Dept. of Computer Eng., Middle East Technical University, Turkey<br />

• A new coupled 3D Ballbot model capable<br />

of capturing significant yaw rotations is<br />

introduced<br />

• Equations of motion for the new model are<br />

derived, incorporating Ballbot specific<br />

constraints<br />

• New inverse-dynamics controllers for<br />

accurately controlling attitude variables are<br />

investigated in simulation<br />

• Relations between circular motions in<br />

attitude variables and associated motions<br />

in positional variables are investigated,<br />

exposing increased expressivity of the<br />

new model.<br />

The coupled 3D Ballbot model


<strong>Session</strong> WedGT10 Lince <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 17:30–18:30<br />

Safety, Failure Handling and Recovery II<br />

Chair Erwin Prassler, Bonn-Rhein-Sieg Univ. of Applied Sciences<br />

Co-Chair<br />

17:30–17:45 WedGT10.1<br />

Dual back-stepping observer to anticipate the rollover<br />

risk in under/over-steering situations. Application to<br />

ATVs in off-road context.<br />

Mathieu Richier, Roland Lenain and Christophe Debain<br />

IRSTEA, France<br />

Benoit Thuilot,<br />

Institut Pascal, France<br />

• Rollover prevention thanks to the<br />

computation of a stability metric and a<br />

predictive algorithm<br />

• Observers estimating on-line the grip<br />

conditions and the slope: application to<br />

off-road conditions<br />

• Limited to low-cost sensing<br />

equipment, allowing practical<br />

applicability<br />

MF400H, Massey<br />

Ferguson quad bike used<br />

for experiments.<br />

18:00–18:15 WedGT10.3<br />

Psychological Experiments on Avoidance Action<br />

Characteristics for Estimating Avoidability<br />

of Harm to Eyes from Robots<br />

Takamasa Hattori 1 , Yoji Yamada 1 , Shuji Mori 2 ,<br />

Shogo Okamoto 1 , and Susumu Hara 1<br />

1 Graduate School of Engineering, Nagoya University, Japan<br />

2 Faculty of Information Science and Electrical Engineering,<br />

Kyushu University, Japan<br />

• Psychological experiments are conducted<br />

to investigate harm-avoidance action<br />

characteristics in humans in close contact<br />

with robotic devices.<br />

• A situation is created in which the sharp<br />

end-effector tip of a robot suddenly<br />

approaches the eyes of a facing participant.<br />

• The results suggest that the reaction time<br />

on avoidance actions does not depend on<br />

the type of work being performed but on the<br />

initial distance between the human’s eyes<br />

and the approaching object.<br />

Experimental setup: participant, motion capture system, bearing, end effector, and robot<br />

17:45–18:00 WedGT10.2<br />

Towards Learning of Safety Knowledge from<br />

Human Demonstrations<br />

P. Ertle 1 , M. Tokic 2,3 , R. Cubek 2 , H. Voos 4 , D. Söffker 1<br />

1 University of Duisburg-Essen<br />

2 University of Applied Sciences of Ravensburg-Weingarten<br />

3 University of Ulm<br />

4 University of Luxembourg<br />

• Future autonomous service robots shall<br />

operate in open and complex<br />

environments which implies complications<br />

ensuring safe operation.<br />

• Hazardous environmental object<br />

interactions can occur.<br />

• A safety procedure is described, learning<br />

safety knowledge from human<br />

demonstration.<br />

• Several supervised learning techniques<br />

are evaluated.<br />

• Results indicate that Decision Trees offer<br />

promising opportunities.<br />


Learned decision tree for safely<br />

handling an iron in an ironing task<br />

18:15–18:30 WedGT10.4<br />

A truly safe robot has to know what injury it may<br />

cause<br />

S. Haddadin, S. Haddadin, A. Khoury, T. Rokahr, S. Parusel,<br />

R. Burgkart, A. Bicchi, and A. Albu-Schäffer<br />

Robotics and Mechatronics Center, DLR<br />

• Introduce soft-tissue injury classification<br />

to robotics<br />

• Generate missing medical data<br />

(mass, velocity, curvature) → injury via<br />

drop-testing<br />

• Associate primitive based decomposition<br />

of robot structure with drop-testing<br />

results<br />

• Take into account robot dynamics and<br />

curvature for biomechanically safe<br />

velocity control<br />

• Effect: even in the case of an unwanted collision<br />

with potentially dangerous curvatures, the<br />

robot is not able to cause any harm<br />

• Relevance not only to robotics but also to<br />

medicine and biomechanics


<strong>Session</strong> WedGVT4 Fenix 3 <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 17:30–18:30<br />

Haptics, Force Sensing and Manipulation<br />

Chair Toshiaki Tsuji, Saitama Univ.<br />

Co-Chair<br />

17:30–17:45 WedGVT4.1<br />

A supervisory control system for a multi-fingered<br />

robotic hand using datagloves and a haptic device<br />

Youtaro Yoshimura and Ryuta Ozawa<br />

Department of Robotics, Ritsumeikan University, JAPAN<br />

• This paper proposes a supervisory control<br />

system for a multi-fingered robotic hand<br />

for grasping an object in a remote<br />

environment in several ways, manipulating<br />

it, and mimicking several non-grasping<br />

motions.<br />

• The proposed control system consists of a<br />

grasping selector in the master system<br />

and motion controllers and a controller<br />

selector in the slave system.<br />

• The grasping selector learns to detect<br />

motions using datagloves.<br />

• The controller selector determines the<br />

current command and awaits a transition,<br />

while the motion controllers stably realize<br />

the currently commanded motion.<br />

Object manipulation<br />

18:00–18:15 WedGVT4.3<br />

Whole-body Force Sensation by Force Sensor<br />

with End-effector of Arbitrary Shape<br />

Naoyuki Kurita, Toshiaki Tsuji<br />

Graduate School of Science and Engineering<br />

Saitama University, Japan<br />

• The contact location can be calculated by<br />

a force sensor<br />

• A method for estimating the contact point<br />

on an end-effector of arbitrary shape is<br />

proposed<br />

• The method utilizes the property that the<br />

external force direction changes when the<br />

end-effector has contact<br />

• Experimental results demonstrate applicability<br />

to a non-convex end-effector<br />

Experimental image<br />

18:20–18:25 WedGVT4.5<br />

Autonomous Construction of a Roofed<br />

Structure: Synthesizing Planning and Stigmergy<br />

on a Mobile Robot<br />

Stefan Wismer, Gregory Hitz, Stéphane Magnenat<br />

Autonomous Systems Lab, ETH Zürich, Switzerland<br />

Michael Bonani<br />

Mobsya, Lausanne, Switzerland<br />

Alexey Gribovskiy<br />

Mobots group, LSRO, EPFL, Switzerland<br />

• A mobile robot, according to a plan, builds a structure that it can enter.<br />

• The robot interacts with the construction using local sensing.<br />

• This synthesis of planning and stigmergy opens the way to new<br />

construction techniques using mobile robots.<br />

17:45–18:00 WedGVT4.2<br />

Experiments in Quasi-Static Manipulation<br />

of a Planar Elastic Rod<br />

Dennis Matthews 1 and Timothy Bretl 2<br />

1 Department of Electrical and Computer Engineering<br />

2 Department of Aerospace Engineering<br />

University of Illinois at Urbana-Champaign<br />

• Equilibrium configurations of a planar<br />

elastic rod are solutions to a geometric<br />

optimal control problem.<br />

• We prove that the set of all solutions to<br />

this problem is a smooth 3-manifold that<br />

can be parameterized by a single chart.<br />

• This result leads to an algorithm for quasi-static<br />

manipulation planning that works<br />

well and is easy to implement.<br />

• Hardware experiments validate our<br />

approach when the “rod” is a thin, flexible<br />

strip of metal that has a fixed base and<br />

that is held at the other end by a robot.<br />


Equilibrium configuration of a planar<br />

elastic rod, and its coordinates in a<br />

slice of the 3-manifold we derive<br />

18:15–18:20 WedGVT4.4<br />

Robots for Humanity: User-Assisted Design for<br />

Assistive Mobile Manipulation<br />

T. Chen, P. Grice, K. Hawkins, C. Kemp, C. King, H. Nguyen<br />

Dept. Of Biomedical Engineering, Georgia Tech, USA<br />

M. Ciocarlie, S. Cousins, K. Hsiao, A. Leeper, A. Paepcke,<br />

C. Pantofaru, L. Takayama<br />

Willow Garage Inc., USA<br />

D. Lazewatsky, W. Smart<br />

Oregon State University<br />

• We aim to enable people with motor impairments to interact with the world<br />

through mobile manipulators<br />

• The video shows our collaborator and pilot tester, Henry Evans, who is<br />

quadriplegic, using a PR2 to interact physically and socially<br />

• The user interfaces developed allowed<br />

Henry to shave, retrieve objects, open<br />

drawers, and give out candy at<br />

Halloween<br />

• These results illustrate the potential of<br />

robots to increase the independence of<br />

people with motor impairments<br />

Henry scratches his cheek with a PR2<br />

robot.<br />

18:25–18:30 WedGVT4.6<br />

Additional Manipulating Function for Limited<br />

Narrow Space with Omnidirectional Driving Gear<br />

Kenjiro TADAKUMA 1), Riichiro TADAKUMA 2),<br />

Kyohei IOKA 2), Takeshi KUDO 2),<br />

Minoru TAKAGI 2), Yuichi TSUMAKI 2),<br />

Mitsuru HIGASHIMORI 1) and Makoto KANEKO 1)<br />

1)Department of Mechanical Engineering, Osaka University, Japan<br />

2)Department of Mechanical Systems Engineering, Yamagata University,<br />

Japan<br />

• Manipulating function is added<br />

to the end effector with<br />

omnidirectional driving gear.<br />

• This manipulating function can<br />

be useful especially for limited<br />

narrow space.<br />

• The input gear mechanisms<br />

with passive rollers for<br />

smooth power transmission<br />

were examined.<br />

Parallel gripper with omnidirectional gear


<strong>Session</strong> WedGJT11 Hidra <strong>Wednesday</strong>, <strong>October</strong> <strong>10</strong>, <strong>2012</strong>, 17:30–18:30<br />

Jubilee Videos I<br />

Chair T. J. Tarn, Washington Univ.<br />

Co-Chair Hong Zhang, Univ. of Alberta<br />

17:30–17:40 WedGJT11.1<br />

Telexistence — from 1980 to 2012<br />

Susumu Tachi, Kouta Minamizawa,<br />

Masahiro Furukawa and Charith Fernando<br />

Graduate School of Media Design, Keio University, Japan<br />

• Telexistence allows a human<br />

being to experience a real-time<br />

sensation of being in a<br />

place other than his/her<br />

actual location and to interact<br />

with the remote environment,<br />

which may be real, virtual, or<br />

a combination of both.<br />

• Telexistence in the real<br />

environment through a virtual<br />

environment is possible.<br />

• 32 years of telexistence<br />

development are historically<br />

reviewed in this jubilee video.<br />

17:50–18:00 WedGJT11.3<br />

The Birth of the Brain-Controlled Wheelchair<br />

Tom Carlson, Robert Leeb, Ricardo Chavarriaga<br />

and José del R. Millán<br />

Chair in Non-Invasive Brain Machine Interface,<br />

École polytechnique fédérale de Lausanne, Switzerland<br />

• We have striven to push BCI technology<br />

out of the lab, into the real world<br />

• We describe the evolution of BCI<br />

applications from cursors to games,<br />

telepresence robots and wheelchairs<br />

• Coupling BCIs with shared control<br />

reduces workload and results in safe<br />

and reliable control<br />

• We culminate with the first patient trial<br />

of a motor-imagery based BCI<br />

wheelchair<br />

18:10–18:20 WedGJT11.5<br />

A Decade of Rescue Robots<br />

Robin R. Murphy<br />

Center for Robot-Assisted Search and Rescue, Texas A&M, USA<br />

• Land, marine and aerial robotics have<br />

been reported at 26 disasters, starting with<br />

2001 World Trade Center through the<br />

Tohoku Tsunami and Fukushima Nuclear<br />

Event<br />

• Used for: searching for victims,<br />

reconnaissance and mapping, inspection<br />

of buildings and bridges<br />

• Very successful, though no living<br />

survivors have been found<br />

• Open research questions in: human-robot<br />

interaction, mobile manipulation, reliable<br />

wireless networks, and obstacle avoidance<br />

for small UAVs and UMVs<br />

One of the 26 deployments: Use<br />

of UMVs for Tohoku Tsunami<br />

17:40–17:50 WedGJT11.2<br />

The Dynamo Project:<br />

The World’s First Robot Soccer Players<br />

Alan K. Mackworth<br />

Computer Science, University of British Columbia, Canada<br />

• Video showing the world’s first autonomous robot soccer players<br />

• Outcomes of experiments using small scale radio controlled trucks and<br />

cars to play robot soccer<br />

• Pioneering work carried out over the period 1992-1994 in the Dynamo<br />

project at the UBC Laboratory for Computational Intelligence<br />

• Precursors to the RoboCup robot soccer competitions that started in 1997<br />

18:00–18:10 WedGJT11.4<br />

CoBots: Collaborative Robots<br />

Servicing Multi-Floor Buildings<br />

Manuela Veloso, Joydeep Biswas, Brian Coltin,<br />

Stephanie Rosenthal, Tom Kollar, Cetin Mericli,<br />

Mehdi Samadi, and Susana Brandao<br />

School of Computer Science, Carnegie Mellon University, United States<br />

Rodrigo Ventura<br />

ECE Department, Instituto Superior Tecnico, Portugal<br />

• CoBots (Collaborative Robots) perform<br />

service tasks for humans indoors<br />

• Building on the past 25 years of robotics<br />

research, this video showcases research<br />

with CoBot, including:<br />

• A visitor-companion robot & tour guide<br />

• WiFi and Kinect localization<br />

• Navigation and obstacle avoidance<br />

• Telepresence and simulation<br />

• Symbiotic autonomy and using the web<br />

• Sending messages and making<br />

deliveries for humans<br />

• Riding the elevator with human help<br />

