
Session WedCT1, Pegaso A — Wednesday, October 10, 2012, 11:00–12:30

Object Detection and Tracking

Chair

Co-Chair

11:00–11:15 WedCT1.1

Reliable Object Detection and Segmentation Using Inpainting

Ji Hoon Joung¹, M. S. Ryoo², Sunglok Choi³ and Sung-Rak Kim¹

¹ Robotics Research Department, Hyundai Heavy Industries, South Korea
² Mobility and Robotic Systems Section, Jet Propulsion Laboratory, USA
³ Robot Research Department, ETRI, South Korea

• This paper presents a novel object detection and segmentation method utilizing an inpainting algorithm. We use inpainting in a new way: to judge whether an object candidate region contains a foreground object.
• The key idea is that if we erase a certain region from an image, the inpainting algorithm is expected to recover the erased content only when the region belongs to a background area.
• We illustrate how our inpainting-based detection/segmentation approach benefits object detection using two different pedestrian datasets.

Concept of the proposed method
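The erase-and-recover test above can be sketched with a toy diffusion inpainter (NumPy only; the synthetic image, mask sizes, and iteration count are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def diffusion_inpaint(img, mask, iters=500):
    # Fill masked pixels by repeatedly averaging their 4-neighbours,
    # keeping known pixels fixed (a crude stand-in for a real inpainter).
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()          # initial guess
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]
    return out

def foreground_score(img, region, iters=500):
    # Erase `region` (a boolean mask), inpaint, and measure how badly the
    # inpainter reconstructs it; high error suggests a foreground object.
    recon = diffusion_inpaint(img, region, iters)
    return np.abs(recon[region] - img.astype(float)[region]).mean()

# Synthetic scene: a smooth background gradient with a bright "object".
h, w = 40, 40
img = np.tile(np.linspace(0, 100, w), (h, 1))
img[10:20, 10:20] += 150                   # the foreground object

obj_mask = np.zeros((h, w), bool); obj_mask[10:20, 10:20] = True
bg_mask  = np.zeros((h, w), bool); bg_mask[25:35, 25:35] = True

print(foreground_score(img, obj_mask) > foreground_score(img, bg_mask))  # True
```

A real system would use a stronger inpainter, but the decision rule is the same: background regions are recovered almost exactly, while an erased foreground object cannot be reconstructed from its surroundings.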

11:30–11:45 WedCT1.3

Exploiting and Modeling Local 3D Structure for Predicting Object Locations

Alper Aydemir and Patric Jensfelt

Center for Autonomous Systems, KTH, Sweden

• We propose the use of the local 3D shape around objects in everyday scenes as a strong indicator of the placement of these objects. We call this the 3D context of an object.
• We propose a conceptually simple and effective method to capture this information.
• Our results show that 3D contextual information is a strong indicator of object placement in everyday scenes.
• An RGB-D data set from five different countries in Europe was collected and used for evaluation.

The top figure shows a kitchen scene; the bottom figure shows the method's response for the object "cup".
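As a loose illustration of scoring placement from local 3D structure, the toy below rates how "supporting-surface-like" a depth neighbourhood is — a simple planarity check standing in for the paper's learned 3D-context model; all names, sizes, and thresholds are invented:

```python
import numpy as np

def support_score(depth, cell, radius=3):
    # Score a pixel by how flat its 3D neighbourhood is: near-planar
    # patches (small depth gradients) are plausible supporting surfaces
    # for objects such as cups. Illustrative only, not the paper's model.
    gy, gx = np.gradient(depth.astype(float))
    flat = np.hypot(gx, gy) < 0.05         # near-planar pixels
    r, c = cell
    win = flat[max(r - radius, 0):r + radius + 1,
               max(c - radius, 0):c + radius + 1]
    return win.mean()

# Toy depth map: a flat table plane plus a rough cluttered corner.
rng = np.random.default_rng(0)
depth = np.full((30, 30), 1.0)
depth[:15, :15] += rng.normal(0, 0.5, (15, 15))   # clutter

print(support_score(depth, (22, 22)) > support_score(depth, (7, 7)))  # True
```

The actual method captures far richer local shape statistics, but the idea carries over: the 3D structure around a location predicts how likely an object is to appear there.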

12:00–12:15 WedCT1.5

Fast High Resolution 3D Laser Scanning by Real-Time Object Tracking and Segmentation

Jens T. Thielemann, Asbjørn Berge, Øystein Skotheim and Trine Kirkhus

SINTEF ICT, Norway
E-mail: {jtt,trk,asbe,osk}@sintef.no

• This paper presents a real-time contour tracking and object segmentation algorithm for 3D range images. The algorithm is used to control a novel micro-mirror based imaging laser scanner, which provides a dynamic trade-off between resolution and frame rate. The micro-mirrors are controllable, enabling us to speed up acquisition significantly by sampling only on the tracked object of interest. As the hardware is under development, we benchmark our algorithms on data from a SICK LMS100-10000 laser scanner mounted on a tilting platform. We find that objects are tracked and segmented well at pixel level; that frame rate/resolution can be increased 3–4 times through our approach compared to scanners with static scan trajectories; and that the algorithm runs in 30 ms/image on an Intel Core i7 CPU using a single core.
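The resolution/frame-rate trade-off that controllable micro-mirrors enable can be illustrated with a back-of-the-envelope sample budget; the grid size, ROI, and strides below are hypothetical, not the paper's numbers:

```python
def scan_budget(rows, cols, roi, dense=1, sparse=4):
    # Number of laser samples when the tracked region `roi`
    # (r0, r1, c0, c1) is scanned at full resolution and the rest of
    # the scene at a coarser stride.
    r0, r1, c0, c1 = roi
    roi_area = (r1 - r0) * (c1 - c0)
    roi_samples = roi_area // (dense * dense)
    rest_samples = (rows * cols - roi_area) // (sparse * sparse)
    return roi_samples + rest_samples

full = 200 * 200                            # uniform full-resolution scan
adaptive = scan_budget(200, 200, (80, 120, 80, 120))
print(full // adaptive)  # 10
```

Concentrating samples on the tracked object yields a roughly 10x smaller budget per frame in this toy setting, which is the mechanism behind the 3–4x frame-rate/resolution gain the paper reports on real constraints.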

11:15–11:30 WedCT1.2

3D Textureless Object Detection and Tracking: An Edge-Based Approach

Changhyun Choi and Henrik I. Christensen

College of Computing, Georgia Institute of Technology, USA

Example frames from our detection and tracking results

• An approach to textureless object detection and tracking of the 3D pose
• Detection and tracking schemes are coherently integrated in a particle filtering framework on the SE(3) group
• For object detection, an efficient chamfer matching is employed
• A set of coarse poses is estimated from the chamfer matching results
• Particles are initialized from the coarse pose hypotheses by random draws based on the matching costs
• To ensure the initialized particles are at or close to the global optimum, an annealing process is performed after initialization
• Comparative results for several image sequences with clutter are shown to validate the effectiveness of our approach
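The chamfer-matching step can be sketched as a brute-force nearest-edge-point cost over candidate translations. This is a simplification: the paper estimates full SE(3) pose hypotheses, and practical implementations use a distance transform rather than pairwise distances; all data below are synthetic:

```python
import numpy as np

def chamfer_cost(template_pts, scene_pts, offset):
    # Mean distance from each (shifted) template edge point to its
    # nearest scene edge point -- the core of chamfer matching.
    t = template_pts + np.asarray(offset)          # (N, 2) translated
    d = np.linalg.norm(t[:, None, :] - scene_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Template: edge points of a 5x5 square outline.
template = np.array([(r, c) for r in range(5) for c in range(5)
                     if r in (0, 4) or c in (0, 4)])

# Scene: the same outline placed at (10, 12), plus a few clutter points.
scene = np.vstack([template + (10, 12),
                   [(2, 20), (15, 3), (18, 18)]])

# Coarse search over translations; the true offset scores best.
offsets = [(10, 12), (0, 0), (5, 5)]
costs = {o: chamfer_cost(template, scene, o) for o in offsets}
best = min(costs, key=costs.get)
print(best)  # (10, 12)
```

The lowest-cost hypotheses play the role of the coarse poses from which the particles are then initialized and annealed.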

11:45–12:00 WedCT1.4

Birth Intensity Online Estimation in GM-PHD Filter for Multi-Target Visual Tracking

Xiaolong Zhou, Y.F. Li, Tianxiang Bai and Yazhe Tang

Dept. of MBE, City University of Hong Kong, Hong Kong, China

Bingwei He

Dept. of Mechanical Engineering and Automation, Fuzhou University, China

• A multi-target visual tracking system that combines object detection and a GM-PHD filter is developed
• An improved measurement-dependent method for online estimation of the birth intensity, based on the entropy distribution and the coverage rate, is proposed
• An entropy-distribution-based birth intensity update is proposed to remove noise-like Gaussian components in the birth intensity that are irrelevant to the birth measurements
• A coverage-rate-based birth intensity update is proposed to further eliminate the noise
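A much-simplified sketch of measurement-driven birth-component pruning: a plain distance gate stands in for the paper's entropy-distribution and coverage-rate criteria, and every name, value, and threshold below is illustrative:

```python
import numpy as np

def prune_birth_components(means, weights, measurements, gate=2.0):
    # Keep only birth components whose mean lies within `gate` of some
    # birth measurement, then renormalise the surviving weights.
    # A crude stand-in for the paper's entropy/coverage-based updates.
    means = np.asarray(means, float)
    meas = np.asarray(measurements, float)
    d = np.linalg.norm(means[:, None, :] - meas[None, :, :], axis=2)
    keep = d.min(axis=1) < gate
    w = np.asarray(weights, float)[keep]
    return means[keep], w / w.sum()

means = [(0, 0), (10, 10), (50, 50)]       # candidate birth components
weights = [0.4, 0.4, 0.2]
measurements = [(0.5, 0.2), (9.8, 10.3)]   # new detections this frame

m, w = prune_birth_components(means, weights, measurements)
print(len(m))  # 2 -- the noise-like component at (50, 50) is removed
```

The effect is the one the abstract describes: Gaussian components unsupported by the birth measurements are removed from the birth intensity before the GM-PHD update.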

2012 IEEE/RSJ International Conference on Intelligent Robots and Systems


Tracking results comparison for the PETS 2000 and CAVIAR data sets. (a) Detection results. (b) Tracking results with the birth process proposed in [13]. (c) Tracking results with the birth process proposed in this paper.

12:15–12:30 WedCT1.6

A Heteroscedastic Approach to Independent Motion Detection for Actuated Visual Sensors

Carlo Ciliberto, Sean Ryan Fanello, Lorenzo Natale and Giorgio Metta

Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Italy

• A compositional framework is presented to perform real-time independent motion detection for applications in robotics.
• The algorithm can easily be adapted to a wide range of robotic platforms thanks to the flexibility granted by its modular structure.
• The proposed method overcomes the shortcomings of current state-of-the-art approaches by exploiting the known robot kinematics to predict egomotion rather than relying on vision alone.
• A heteroscedastic learning layer is employed to tune the egomotion-prediction capabilities of the system.

Experiments were conducted on the iCub humanoid robot.
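The kinematics-versus-vision comparison can be sketched as a per-pixel residual test against a heteroscedastic (spatially varying) noise level. The uniform predicted flow and constant sigma below are illustrative stand-ins for the kinematic prediction and the learned noise model:

```python
import numpy as np

def independent_motion_mask(flow, predicted_flow, sigma, k=3.0):
    # Flag pixels whose measured optical flow deviates from the
    # kinematics-predicted egomotion flow by more than k times the
    # local (heteroscedastic) standard deviation.
    resid = np.linalg.norm(flow - predicted_flow, axis=-1)
    return resid > k * sigma

# Toy frame: a camera pan predicts uniform flow (1, 0) everywhere.
h, w = 20, 20
pred = np.zeros((h, w, 2)); pred[..., 0] = 1.0
flow = pred.copy()
flow[5:10, 5:10] += (0, 3)                 # an independently moving object

sigma = np.full((h, w), 0.2)               # per-pixel noise level
mask = independent_motion_mask(flow, pred, sigma)
print(mask.sum())  # 25 -- the 5x5 moving patch
```

Because the prediction comes from the robot's kinematics rather than from vision alone, the residual isolates genuinely independent motion even while the sensor itself is moving.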
