
Session WedCT10, Lince, Wednesday, October 10, 2012, 11:00–12:30

Visual Learning II

Chair: Edwin Olson, Univ. of Michigan
Co-Chair:

11:00–11:15 WedCT10.1

Clustering-based Discriminative Locality Alignment for Face Gender Recognition

Duo Chen, College of Communication Engineering, Chongqing University, China
Jun Cheng, Shenzhen Institutes of Advanced Technology, CAS, China; The Chinese University of Hong Kong
Dacheng Tao, Faculty of Engineering and Information Technology, University of Technology Sydney, Australia

• To facilitate human-robot interaction, human gender information is very important.
• It is essential to develop a simple and fast way, based on dimensionality reduction, to recognize gender.
• Both the global geometry and the local geometry of the data are essential for estimating the lower-dimensional projection.
• CDLA exploits global geometry, local geometry, and discriminative information.


CDLA pulls the points connected in the k1-nearest-neighbor graph closer together. By using k-means clustering (taking the global geometry into account), it avoids connecting points that are far apart.
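The digest does not give the authors' implementation; as a minimal sketch of the neighbor-selection step it describes, one can build a k1-nearest-neighbor graph whose edges are restricted to points that k-means assigns to the same cluster. Function names and the choice of scikit-learn are illustrative assumptions, not part of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def cluster_constrained_knn(X, k1=5, n_clusters=10, seed=0):
    """Hypothetical sketch: k1-NN graph restricted to k-means clusters,
    so distant points are never connected."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X)
    edges = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)       # points in cluster c
        k = min(k1 + 1, len(idx))               # +1 because each point is its own nearest neighbor
        nn = NearestNeighbors(n_neighbors=k).fit(X[idx])
        _, nbrs = nn.kneighbors(X[idx])
        for row, neighbors in zip(idx, nbrs):
            edges.extend((row, idx[j]) for j in neighbors[1:])  # skip self
    return labels, edges
```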

11:30–11:45 WedCT10.3

A System of Automated Training Sample Generation for Visual-based Car Detection

Chao Wang, Huijing Zhao and Hongbin Zha, Key Lab of Machine Perception (MOE), Peking Univ., China
Franck Davoine, CNRS and LIAMA Sino-French Laboratory, Beijing, China

• This paper presents a system that automatically generates a car image sample dataset.
• The dataset contains multi-view car image samples with each car's pose information.
• A system for detecting and tracking on-road vehicles using multiple single-layer lasers is developed.
• Multi-view car samples are generated from the tracking results and multi-view camera data.

Car samples are divided into 8 subcategories.
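The digest does not say how the 8 subcategories are defined; assuming they are view subcategories derived from the tracked car's relative heading, a minimal binning step might look like the following (bin names and layout are hypothetical):

```python
import math

# Hypothetical 8 view subcategories, 45 degrees apart.
VIEW_BINS = ["front", "front-left", "left", "rear-left",
             "rear", "rear-right", "right", "front-right"]

def view_subcategory(relative_heading_rad):
    """Map a relative heading in radians to one of 8 view subcategories."""
    angle = math.degrees(relative_heading_rad) % 360.0
    index = int(((angle + 22.5) % 360.0) // 45.0)  # 45-degree bins centered on each view
    return VIEW_BINS[index]
```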

12:00–12:15 WedCT10.5

On-line semantic perception using uncertainty

Roderick de Nijs, Juan Sebastian Ramos Pachón, Kolja Kühnlenz, Institute of Automatic Control Engineering, Technische Universität München, Germany
Gemma Roig, Xavier Boix, Luc van Gool, Computer Vision Laboratory, ETH Zurich, Switzerland

Can a semantic labeling algorithm benefit from uncertainty?

• Buffer of images for on-line semantic segmentation
• Perturb-and-MAP random fields to compute uncertainty (see the sketch after this entry)
• Spend more computation time on uncertain regions

Above: urban scene. Below: class uncertainty.
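As a rough illustration of the Perturb-and-MAP idea mentioned above (not the authors' code, and simplified to unary potentials only, without the pairwise random-field terms): perturb the per-pixel label scores with Gumbel noise, take the MAP repeatedly, and read uncertainty off the variability of the resulting labelings.

```python
import numpy as np

def perturb_and_map_uncertainty(unary, n_samples=20, rng=None):
    """unary: (H, W, L) per-pixel, per-label scores (higher = better).
    Returns a per-pixel entropy map (high entropy = uncertain region)."""
    rng = np.random.default_rng(rng)
    H, W, L = unary.shape
    counts = np.zeros((H, W, L))
    for _ in range(n_samples):
        gumbel = rng.gumbel(size=unary.shape)        # i.i.d. Gumbel perturbation
        labels = np.argmax(unary + gumbel, axis=-1)  # MAP of the perturbed (unary-only) model
        counts[np.arange(H)[:, None], np.arange(W), labels] += 1
    p = counts / n_samples
    logp = np.log(np.clip(p, 1e-12, None))
    return -np.sum(p * logp, axis=-1)                # entropy in nats per pixel
```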

11:15–11:30 WedCT10.2


Incorporating Geometric Information into Gaussian Process Terrain Models from Monocular Images

Tariq Abuhashim and Salah Sukkarieh, Australian Centre for Field Robotics, The University of Sydney, NSW 2006, Australia

• This paper presents a novel approach to incorporating differential geometry into depth estimation from monocular images, based on the Gaussian Process Derivative Observations (GDP) formulation (a simplified sketch follows this entry).
• Experimental results are presented using synthesized examples and real monocular images captured from an Unmanned Aerial Vehicle (UAV).
• Results show an improvement in depth estimation over standard Gaussian Process Regression (GPR).


Ground and aerial robotics used to reconstruct 3D maps.
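The digest does not spell out the GP-with-derivative-observations formulation it builds on; its core ingredient can be illustrated in 1-D, where derivative observations enter the GP through derivatives of the kernel. The following is a minimal sketch under that assumption, not the paper's terrain model.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def dk_db(a, b, ell=1.0, sf=1.0):
    # cov(f(a), f'(b)) = d k(a, b) / d b  for the RBF kernel
    d = a[:, None] - b[None, :]
    return rbf(a, b, ell, sf) * d / ell**2

def d2k(a, b, ell=1.0, sf=1.0):
    # cov(f'(a), f'(b)) = d^2 k(a, b) / (da db)
    d = a[:, None] - b[None, :]
    return rbf(a, b, ell, sf) * (1.0 / ell**2 - d**2 / ell**4)

def gp_with_derivatives(xf, yf, xd, yd, xs, ell=1.0, sf=1.0, noise=1e-4):
    """Posterior mean of f at xs given value observations (xf, yf)
    and derivative (slope) observations (xd, yd), 1-D toy case."""
    K = np.block([[rbf(xf, xf, ell, sf),        dk_db(xf, xd, ell, sf)],
                  [dk_db(xf, xd, ell, sf).T,    d2k(xd, xd, ell, sf)]])
    K += noise * np.eye(K.shape[0])
    Ks = np.hstack([rbf(xs, xf, ell, sf), dk_db(xs, xd, ell, sf)])
    return Ks @ np.linalg.solve(K, np.concatenate([yf, yd]))
```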

11:45–12:00 WedCT10.4

Learning and Recognition of Objects Inspired by Early Cognition

Maja Rudinac and Pieter Jonker, Biorobotics Lab, Delft University of Technology, The Netherlands
Gert Kootstra and Danica Kragic, Computer Vision and Active Perception Lab, KTH Royal Institute of Technology, Sweden

• We present a unifying approach for learning and recognition of objects in unstructured environments through exploration. We establish four principles for object learning.
• First, early object detection is based on an attention mechanism that detects salient parts of the scene (a toy saliency sketch follows this entry).
• Second, motion of the object allows more accurate object localization.
• Next, acquiring multiple observations of the object through manipulation allows a more robust representation of the object.
• Last, object recognition benefits from a multi-modal representation.
• The approach shows a significant improvement of the system when multiple observations are acquired through active object manipulation.

Cognitive model for object learning and recognition.
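The digest only says that detection starts from an attention mechanism over salient regions. As one common way such a mechanism can be realized, here is a short spectral-residual saliency sketch; it is an illustrative stand-in, not the authors' method.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray):
    """gray: 2-D float array in [0, 1]. Returns a saliency map of the same shape."""
    spectrum = np.fft.fft2(gray)
    log_amp = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    residual = log_amp - uniform_filter(log_amp, size=3)         # spectral residual
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(saliency, sigma=2.5)                  # smooth for stable peaks
```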

12:15–12:30 WedCT10.6

A High-Accuracy Visual Marker Based on a Microlens Array

Hideyuki Tanaka, Yasushi Sumi, and Yoshio Matsumoto, Intelligent Systems Research Institute, AIST, Japan

• ArrayMark: a novel AR marker utilizing a 2-D moiré pattern based on a microlens array
• Accurate pose estimation (
