
Session WedBVT6 Gemini 3, Wednesday, October 10, 2012, 09:30–10:30

Mapping II

Chair: Edwin Olson, Univ. of Michigan
Co-Chair:

09:30–09:45 WedBVT6.1

Variable reordering strategies for SLAM

Pratik Agarwal and Edwin Olson
Computer Science and Engineering, University of Michigan, USA

• We have evaluated existing reordering strategies on standard SLAM datasets.
• We propose an easy-to-implement reordering algorithm, called BHAMD, which yields competitive performance.
• We provide evidence that few gains remain over variants of minimum degree ordering (an illustrative ordering sketch follows below).

Reorder and solve times for the different reordering algorithms.
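To illustrate why variable ordering matters for sparse factorization in SLAM, here is a minimal sketch. It does not implement the paper's BHAMD algorithm; it only compares the fill-in produced by the orderings that ship with SciPy's SuperLU interface on a toy, SLAM-like information matrix (a pose chain plus random loop closures).

```python
# Illustrative only: fill-in for different fill-reducing orderings on a toy
# SLAM-like information matrix. This is NOT the paper's BHAMD algorithm; it
# uses the column orderings built into SciPy's SuperLU wrapper.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def toy_information_matrix(n_poses=300, n_loops=60, seed=0):
    """Odometry chain plus random loop closures (toy structure only)."""
    rng = np.random.default_rng(seed)
    rows, cols = [], []

    def add_edge(i, j):
        rows.extend([i, j])
        cols.extend([j, i])

    for i in range(n_poses - 1):
        add_edge(i, i + 1)                       # odometry chain
    for _ in range(n_loops):
        i, j = rng.choice(n_poses, size=2, replace=False)
        add_edge(int(i), int(j))                 # loop closure

    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)),
                      shape=(n_poses, n_poses)).tocsc()
    # Add a dominant diagonal so the toy matrix is well conditioned.
    return (A + float(n_poses) * sp.identity(n_poses, format="csc")).tocsc()

A = toy_information_matrix()
b = np.ones(A.shape[0])
for ordering in ("NATURAL", "MMD_AT_PLUS_A", "COLAMD"):
    lu = splu(A, permc_spec=ordering)            # reorder, then factor
    x = lu.solve(b)
    print(f"{ordering:14s} nonzeros in L+U: {lu.nnz}")
```

With the natural (identity) ordering the loop closures typically cause far more fill-in than with a minimum-degree or COLAMD ordering, which is the effect the paper's comparison quantifies on real SLAM datasets.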

10:00–10:15 WedBVT6.3

Planar Polygon Extraction and Merging from Depth Images

Joydeep Biswas and Manuela Veloso
School of Computer Science, Carnegie Mellon University, USA

We introduce an approach to building 3D maps of indoor environments, modelled as planar surfaces, using depth cameras.
• Neighborhoods of plane-filtered points are extracted from each observed depth image and fitted to convex polygons.
• The polygons from each frame are matched to existing map polygons using OpenGL-accelerated ray casting.
• Matched polygons are then merged over time by sequentially updating and merging the scatter matrices of the observed polygons (a merging sketch follows below).
The polygon extraction and merging algorithms take 2.5 ms on average per 640x480-pixel depth image.

Plane-filtered points and fitted convex polygons (shown in blue) from each frame (top) are merged across successive frames to generate the complete map (bottom).
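The scatter-matrix merging step can be sketched as follows. The class below keeps only the sufficient statistics of a planar patch (point count, centroid, 3x3 scatter matrix) and combines two patches with the standard pooled-scatter formula; the plane normal is then the eigenvector of the smallest scatter eigenvalue. The class and method names are hypothetical and not taken from the authors' implementation.

```python
# Sketch of merging planar patches via their scatter matrices. The statistics
# and the pooled-scatter formula are standard; the class/API is hypothetical.
import numpy as np

class PlanePatch:
    """Sufficient statistics of a planar patch: count, centroid, 3x3 scatter."""

    def __init__(self, points):
        pts = np.asarray(points, dtype=float)
        self.n = len(pts)
        self.mean = pts.mean(axis=0)
        d = pts - self.mean
        self.scatter = d.T @ d

    def merge(self, other):
        """Combine two patches without revisiting the raw depth points."""
        n = self.n + other.n
        mean = (self.n * self.mean + other.n * other.mean) / n
        dm = (self.mean - other.mean).reshape(3, 1)
        scatter = (self.scatter + other.scatter
                   + (self.n * other.n / n) * (dm @ dm.T))
        merged = object.__new__(PlanePatch)   # build from statistics only
        merged.n, merged.mean, merged.scatter = n, mean, scatter
        return merged

    def normal(self):
        """Plane normal: eigenvector of the smallest scatter eigenvalue."""
        eigvals, eigvecs = np.linalg.eigh(self.scatter)
        return eigvecs[:, 0]

# Two noisy observations of the same (z = 0) plane merge into one estimate.
rng = np.random.default_rng(1)
obs = lambda: np.c_[rng.uniform(-1, 1, (200, 2)), 0.01 * rng.normal(size=200)]
merged = PlanePatch(obs()).merge(PlanePatch(obs()))
print(merged.normal())   # close to (0, 0, +/-1)
```

Because only the count, mean, and scatter are stored per map polygon, merging a newly observed polygon is constant time regardless of how many depth points contributed to it, which is consistent with the reported 2.5 ms per-frame cost.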

10:20–10:25 WedBVT6.5

2D PCA-based Localization for Mobile Robots in Unstructured Environments

Fernando Carreira (1), Camilo Christo (2), Duarte Valério (2), Mário Ramalho (2), Carlos Cardeira (2), João Calado (1,2) and Paulo Oliveira (3)
(1) ADEM / ISEL, Polytechnic Institute of Lisbon, Portugal
(2) IDMEC / IST, Technical University of Lisbon, Portugal
(3) ISR / IST, Technical University of Lisbon, Portugal

• A self-localization system for mobile robots operating in indoor environments using only onboard sensors.
• The database of images stored onboard is of reduced size compared with the acquired images.
• No hypothesis is made about specific features in the environment.
• The localization system estimates the position and slippage in real time, with globally stable error dynamics (a PCA-matching sketch follows below).

Results of stability tests considering a wrong initial position and attitude.
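The general idea of matching against a PCA-compressed image database can be sketched as below. This is a generic illustration, not the authors' 2D PCA formulation: the projection and the nearest-neighbour matching are assumptions, and the function names are hypothetical.

```python
# Generic sketch of PCA-compressed image matching for localization; the
# projection/matching details are assumptions, not the authors' exact method.
import numpy as np

def build_pca_database(images, k=16):
    """images: (N, H*W) flattened map images. Keep only k principal components."""
    mean = images.mean(axis=0)
    centered = images - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                      # (k, H*W) principal directions
    codes = centered @ basis.T          # (N, k) compressed database
    return mean, basis, codes

def localize(query, mean, basis, codes):
    """Index of the stored map image closest to the query in PCA space."""
    q = (query - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(codes - q, axis=1)))
```

Only the (N, k) code matrix, the mean image, and the k basis vectors need to be stored onboard, which is why the database is much smaller than the raw acquired images.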

09:45–10:00 WedBVT6.2

An Object-Based Semantic World Model for Long-Term Change Detection and Semantic Querying

Julian Mason and Bhaskara Marthi
Duke University and Willow Garage

• RGB-D based mapping aboard a localized robot.
• Uniquely weak perceptual assumptions.
• Scales to large (1600 m²), long-term (six weeks) operation.
• Supports change detection and semantic querying (a query sketch follows after the figure caption below).
• A 326 GB dataset is available!


Example of a semantic query for “medium-sized things in the cafeteria.”
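As a rough illustration of what such a query over an object-based world model could look like (the data model, field names, and size thresholds below are hypothetical, not the paper's representation):

```python
# Hypothetical object-based world model and a size/location query, in the
# spirit of "medium-sized things in the cafeteria". The data model and
# thresholds are assumptions, not the paper's representation.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class WorldObject:
    label: str          # e.g. "mug"
    room: str           # e.g. "cafeteria"
    volume_m3: float    # rough bounding-box volume
    last_seen: str      # timestamp, useful for change detection

def query(objects: List[WorldObject],
          room: Optional[str] = None,
          min_vol: Optional[float] = None,
          max_vol: Optional[float] = None) -> List[WorldObject]:
    """Filter the world model by room and object size."""
    hits = []
    for obj in objects:
        if room is not None and obj.room != room:
            continue
        if min_vol is not None and obj.volume_m3 < min_vol:
            continue
        if max_vol is not None and obj.volume_m3 > max_vol:
            continue
        hits.append(obj)
    return hits

# "Medium-sized things in the cafeteria", e.g. 0.01-0.5 m^3:
# query(world, room="cafeteria", min_vol=0.01, max_vol=0.5)
```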

10:15–10:20 WedBVT6.4

Reconfigurable Intelligent Space, R+iSpace, and Mobile Module, MoMo

JongSeung Park
Graduate School of Science and Engineering, Ritsumeikan University, Japan
Joo-Ho Lee
College of Information Science and Engineering, Ritsumeikan University, Japan

• This video and paper introduce ‘R+iSpace’, a new concept of intelligent space.
• All devices in the R+iSpace can rearrange themselves automatically according to the situation.
• To realize the R+iSpace in a real environment, we propose a mobile module, ‘MoMo’.
• Mounted on a MoMo, the devices can move along the wall and the ceiling.

The R+iSpace system and the MoMo.

10:25–10:30 WedBVT6.6

Deformable Soft Wheel Robot using Hybrid Actuation

Je-sung Koh, Dae-young Lee, Seung-won Kim and Kyu-jin Cho
Dept. of Mechanical and Aerospace Engineering, Seoul National University, South Korea

• Multimodal motion: three driving modes for negotiating obstacles.
  • Caterpillar motion: passing through gaps.
  • Wheel driving: fast movement on the ground.
  • Legged-wheel motion: climbing stairs.
• Hybrid actuation system:
  • Deformable wheel: shape-memory-alloy coil spring actuator.
  • Wheel driving: DC motor.

Three driving modes of the deformable wheel robot.
