
Session WedGT1, Pegaso A, Wednesday, October 10, 2012, 17:30–18:30
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012)

Stereo Vision

Chair: Il Hong Suh, Hanyang Univ.
Co-Chair: Andreas Zell, Univ. of Tübingen

17:30–17:45  WedGT1.1
A New Feature Detector and Stereo Matching Method for Accurate High-Performance Sparse Stereo Matching
Konstantin Schauwecker and Andreas Zell, Department of Cognitive Systems, University of Tübingen, Germany
Reinhard Klette, Computer Science Department, The University of Auckland, New Zealand

• A computationally efficient sparse stereo matching system achieves processing rates above 200 frames per second on a commodity dual-core CPU.
• Although features are matched sparsely, a dense consistency check is applied, which drastically reduces the number of false matches (a simplified matching-plus-consistency sketch follows this list).
• A new FAST-based feature detector is used, which yields a less clustered feature distribution and leads to improved matching performance.
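
The pipeline summarized above can be loosely illustrated in Python with OpenCV. This is only a minimal sketch under the usual rectified-stereo assumptions, not the authors' implementation: it detects standard FAST keypoints in the left image, matches each one along the epipolar line with a SAD window, and applies a simple sparse left-right check (the paper instead uses a modified FAST detector and a dense consistency check). Names such as sparse_stereo, max_disp and win are illustrative only.

    # Minimal sketch only (not the authors' implementation): FAST keypoints in the
    # left image of a rectified pair are matched along the epipolar line with a
    # SAD window, followed by a simple sparse left-right consistency check.
    import cv2
    import numpy as np

    def sparse_stereo(left_gray, right_gray, max_disp=64, win=5, fast_thresh=20):
        """8-bit grayscale inputs; returns (x, y, disparity) for consistent matches."""
        half = win // 2
        fast = cv2.FastFeatureDetector_create(threshold=fast_thresh)
        keypoints = fast.detect(left_gray, None)
        L = left_gray.astype(np.int32)
        R = right_gray.astype(np.int32)
        h, w = L.shape
        matches = []
        for kp in keypoints:
            x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
            if y < half or y >= h - half or x < half or x >= w - half:
                continue                                     # window would leave the image
            patch_l = L[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_cost = -1, np.inf
            for d in range(0, min(max_disp, x - half) + 1):  # search along the same row
                patch_r = R[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch_l - patch_r).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            if best_d < 0:
                continue
            # Consistency check: match the winning right-image patch back to the
            # left image and keep the keypoint only if it lands on nearly the
            # same column it started from.
            xr = x - best_d
            patch_r = R[y - half:y + half + 1, xr - half:xr + half + 1]
            back_d, back_cost = -1, np.inf
            for d in range(0, min(max_disp, w - 1 - half - xr) + 1):
                patch_lb = L[y - half:y + half + 1, xr + d - half:xr + d + half + 1]
                cost = np.abs(patch_r - patch_lb).sum()
                if cost < back_cost:
                    back_cost, back_d = cost, d
            if abs(back_d - best_d) <= 1:
                matches.append((x, y, best_d))
        return matches

A real-time system would vectorise the cost computation and prune the search, but the detect–match–check structure is the same.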

18:00–18:15  WedGT1.3
Can Stereo Vision replace a Laser Rangefinder?
M. Antunes, J. P. Barreto, C. Premebida and U. Nunes, Department of Electrical and Computer Engineering, Institute of Systems and Robotics, University of Coimbra, Portugal

• We propose Stereo Rangefinding (SRF) for estimating depth along virtual scan planes.
• The SymStereo framework is used for quantifying the likelihood of pixel correspondences using induced symmetry.
• The depth estimates of SRF are compared against the data provided by an LRF.
• We show that passive stereo can be an alternative to an LRF in certain robotic applications (a generic scan-line illustration follows the figure caption below).

Figure: our paper shows that it is possible to recover the profile cut (green) directly from two cameras.
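
To make the "stereo as a rangefinder" idea concrete, the short Python sketch below converts the disparities along a single image row of a rectified pair into LRF-style (bearing, range) readings. This is a generic illustration only, not the SymStereo/SRF method, which evaluates induced symmetry along arbitrary virtual scan planes rather than reading off a precomputed disparity row; focal_px, baseline_m and cx are assumed calibration parameters.

    # Generic illustration, not the SymStereo/SRF method: turn the disparities
    # on one image row of a rectified stereo pair into planar (bearing, range)
    # readings similar to those of a laser rangefinder.
    import numpy as np

    def scan_from_disparity_row(disp_row, focal_px, baseline_m, cx):
        """disp_row: 1-D array of disparities along one row (pixels);
        focal_px: focal length in pixels; baseline_m: baseline in metres;
        cx: principal-point column. Returns (bearings_rad, ranges_m)."""
        cols = np.arange(disp_row.shape[0])
        valid = disp_row > 0                         # disparity 0 means "no match"
        z = focal_px * baseline_m / disp_row[valid]  # depth from disparity
        x = (cols[valid] - cx) * z / focal_px        # lateral offset in metres
        return np.arctan2(x, z), np.hypot(x, z)      # bearing and range per pixel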

17:45–18:00  WedGT1.2
Real-time Velocity Estimation Based on Optical Flow and Disparity Matching
Dominik Honegger, Pierre Greisen, Lorenz Meier, Petri Tanskanen and Marc Pollefeys, ETH Zürich, Switzerland

• We present an image-based real-time metric velocity sensor for mobile robot navigation.
• An FPGA-based stereo camera platform combines optical flow and disparity values at 127 fps and 376×240 resolution (a simplified flow-plus-disparity velocity sketch follows the figure caption below).
• Radial undistortion, image rectification, disparity estimation and optical flow are performed on a single FPGA.
• Suited for MAVs due to low weight, low power and low latency.


Figure: system setup, rectified input image, disparity map and flow field.
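
The geometry behind combining flow and disparity into a metric velocity can be sketched in a few lines. This is a simplified per-frame model under an assumed pinhole calibration (focal_px, baseline_m) and a purely translating camera, not the paper's FPGA pipeline: depth recovered from disparity (Z = f·B/d) scales the pixel flow into metres, and dividing by the frame interval gives a velocity.

    # Simplified model, not the paper's FPGA pipeline: depth from disparity turns
    # per-pixel optical flow into a metric displacement; dividing by the frame
    # interval gives the mean apparent velocity of the scene in the camera frame
    # (negate to obtain camera ego-motion). Assumes a rectified pinhole camera
    # that translates without rotating between the two frames.
    import numpy as np

    def metric_velocity(flow_u, flow_v, disp_prev, disp_curr,
                        focal_px, baseline_m, fps):
        """flow_*: per-pixel flow in pixels/frame; disp_*: disparities (pixels)
        of the same pixels in the previous and current frame.
        Returns the mean (vx, vy, vz) in m/s."""
        dt = 1.0 / fps
        valid = (disp_prev > 0) & (disp_curr > 0)
        z_prev = focal_px * baseline_m / disp_prev[valid]
        z_curr = focal_px * baseline_m / disp_curr[valid]
        vx = np.mean(flow_u[valid] * z_curr / focal_px) / dt   # lateral (m/s)
        vy = np.mean(flow_v[valid] * z_curr / focal_px) / dt   # vertical (m/s)
        vz = np.mean(z_curr - z_prev) / dt                     # along the optical axis (m/s)
        return vx, vy, vz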

18:15–18:30  WedGT1.4
Dependable Dense Stereo Matching by Both Two-layer Recurrent Process and Chaining Search
Sehyung Lee, Youngbin Park and Il Hong Suh, Department of Electronics and Computer Engineering, Hanyang University, Korea

• We propose a recurrent two-layer process and a chaining search for dense stereo matching.
• The disparity map is calculated through the iterative integration of pixel and region layers.
• To estimate precise disparities in occluded regions, reliable disparities are propagated by the chaining search (a loose propagation sketch follows the figure caption below).
• To test our algorithm, we compared it with two leading algorithms on the Middlebury benchmark using images corrupted by Gaussian noise.

Figure: the first row shows the images with varying PSNR; the second, third, and fourth rows show the results of CVF, DBP, and the proposed method.
