
MEASUREMENT ECCENTRICITIES OF CIRCULAR TARGETS IN VISION METROLOGY

Johannes Otepka a, Clive S. Fraser b

a Institute of Photogrammetry & Remote Sensing, Technical University of Vienna,
Gusshausstrasse 27-29/E122, 1040 Vienna, Austria
Email: jo@ipf.tuwien.ac.at

b Department of Geomatics, University of Melbourne, Victoria 3010, Australia
Email: c.fraser@unimelb.edu.au

Abstract: In digital close-range photogrammetry, commonly referred to as vision metrology, circular targets are often used for high-precision applications. The most common target type consists of retro-reflective material, which provides a high-contrast image with flash photography. The measurement of the image coordinates of signalised targets continues to be a factor limiting the achievable accuracy of high-precision vision metrology systems. Mathematical algorithms are used to determine the centres of imaged targets in 2D space. These 2D centroids are then used in a triangulation process to calculate the target position in 3D space. This computational process assumes that the targets represent perfect 'points' in space. However, in practice target thickness and target diameter can adversely affect this assumption, leading to the introduction of systematic errors and to incorrect calculation of 3D position. The paper presents mathematical formulae which rigorously describe these errors and allow the computation of corrections for the triangulation process. Moreover, the distortion effect of the eccentricities is analysed for different network simulations, as well as for a real test field. It will be shown that these measurement eccentricities can lead to incorrect object coordinates in the case of high-precision applications.

1. Introduction

Nowadays, vision metrology (VM) is regularly used in large-scale industrial manufacturing and precision engineering. VM strategies employ triangulation to determine 3D object point coordinates. To achieve an accuracy of 1 part in 100,000 or better, special targets are used to mark the points of interest. Various investigations have shown that circular targets deliver the most satisfactory results regarding accuracy and automated centroid recognition and mensuration. For such targets, retro-reflective material is widely used because on-axis illumination of these targets returns many times more light than a normal textured surface. This results in high-contrast images, which are a key requirement for high precision in digital close-range photogrammetry.

The perspective properties of circular targets are such that a circle which is parallel to the image plane will appear as a circle. In all other cases the circular target will project as an ellipse or, in general, as a conic section, as indicated in Figure 1. Parabolic and hyperbolic curves appear only if the circle touches or intersects the "vanishing plane" (the plane parallel to the image plane which includes the projection centre). Because of the typically small size of the targets employed and the limited field of view of the measuring devices, it is unlikely that these circular targets will appear as parabolic or hyperbolic curves. Therefore, only elliptical images are considered in the following.

Figure 1: Perspective view of a circle

VM systems use centroiding algorithms to compute the centre of each imaged ellipse, and these centroids become the actual observations for the triangulation process. However, it is well known that the centre of a circle does not project onto the centre of the corresponding ellipse, which thus introduces a systematic error into any triangulation process. There are two reasons why this error is universally ignored in today's VM systems. First, the effect is small, especially in the case of small targets (e.g. [2] and [3]). Second, prior to the method developed by the first author [7], there had been no practical approach to automatically determining the target plane, which is a basic requirement for the correction of this error.

This paper presents the rigorous mathematical connection between a circular target and its perspective image. The derived formulae can be employed both to determine the plane of the circular targets and to compensate the image observations (the ellipse centres) for the above-mentioned eccentricity. Additionally, the distortion effect of the eccentricity will be analysed for different network configurations. Moreover, experimental results will be presented in which the object point coordinates are distorted to a magnitude well exceeding the measurement precision if the eccentricities are ignored within the bundle adjustment.

The derived formulae have been implemented and evaluated in Australis, a photogrammetric software package for off-line VM (compare [4] and [8]). Hence, Australis is an ideal tool to analyse the distortion effect of the eccentricity, since the software is capable of determining the target plane information and compensating for it within the triangulation process.

2. Mathematical Connection of a Circular Target and its Perspective Image

In [1] a solution is given to calculate and correct the eccentricity if the elements of the target circle are given. This mathematical model describes the relationship between the ellipse centre and the circle parameters for a certain target-image configuration. In an earlier solution by Kager [5], it was demonstrated how to calculate a 3D circle from a given imaged ellipse by analysing the resulting quadratic matrices. However, neither solution describes the parameters of the imaged conic section explicitly in terms of the circle elements, as is derived in this section. Such formulae allow the derivation of an adjustment model of indirect observations which uses multiple images to determine the corresponding circular target in space.


As illustrated in Figure 2, a cone is constructed which touches the circular target and whose apex is positioned at the projection centre C of the image. This cone is then intersected with the image plane. If the resulting section figure can be put into the same mathematical form as an implicit ellipse equation (a general polynomial of second degree), the problem is solved.

Figure 2: View cone which touches the circular target

In object space, the circular target can be described as

\[ \mathbf{P} = \mathbf{M} + r\,(\cos\alpha\,\mathbf{e}_1 + \sin\alpha\,\mathbf{e}_2) \tag{1} \]

where M is the centre of the circle, r defines the circle radius, e_1 and e_2 are arbitrary orthogonal unit vectors within the circle plane and α is the circle parameter. Based on equation (1), the cone of Figure 2 is given by

\[ \mathbf{X} = \mathbf{C} + \lambda r\,(\cos\alpha\,\mathbf{e}_1 + \sin\alpha\,\mathbf{e}_2) + \lambda\,(\mathbf{M} - \mathbf{C}) \tag{2} \]

where α and λ are the cone parameters. To transform the cone into image space, the well-known collinearity condition is used:

\[
\mathbf{x} = \begin{pmatrix} x \\ y \\ -c \end{pmatrix} = \mathbf{R}\,(\mathbf{X} - \mathbf{C})
= \begin{pmatrix} \mathbf{R}_1 \\ \mathbf{R}_2 \\ \mathbf{R}_3 \end{pmatrix} (\mathbf{X} - \mathbf{C})
= \begin{pmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{pmatrix}
\begin{pmatrix} X_x - C_x \\ X_y - C_y \\ X_z - C_z \end{pmatrix} \tag{3}
\]

where x and y are the image coordinates, c is the focal length, R is the rotation matrix (with rows R_1, R_2 and R_3) and X are the coordinates in object space. Hence, the cone in image space follows as

\[ \mathbf{x} = \lambda r\,\mathbf{R}\,(\cos\alpha\,\mathbf{e}_1 + \sin\alpha\,\mathbf{e}_2) + \lambda\,\mathbf{R}\,(\mathbf{M} - \mathbf{C}) \tag{4} \]

The intersection of equation (4) with the image plane is determined in a simple manner. The plane is defined by z = −c. Thus, λ can be described by

\[ \lambda = -\frac{c}{\mathbf{R}_3 \cdot (r\cos\alpha\,\mathbf{e}_1 + r\sin\alpha\,\mathbf{e}_2 + \mathbf{M} - \mathbf{C})} \tag{5} \]

and, abbreviating \( \mathbf{g} = r\cos\alpha\,\mathbf{e}_1 + r\sin\alpha\,\mathbf{e}_2 + \mathbf{M} - \mathbf{C} \), the coordinates of the intersection figure follow as

\[
x = -c\,\frac{\mathbf{R}_1 \cdot (r\cos\alpha\,\mathbf{e}_1 + r\sin\alpha\,\mathbf{e}_2 + \mathbf{M} - \mathbf{C})}{\mathbf{R}_3 \cdot (r\cos\alpha\,\mathbf{e}_1 + r\sin\alpha\,\mathbf{e}_2 + \mathbf{M} - \mathbf{C})} = -c\,\frac{\mathbf{R}_1 \cdot \mathbf{g}}{\mathbf{R}_3 \cdot \mathbf{g}},
\qquad
y = -c\,\frac{\mathbf{R}_2 \cdot \mathbf{g}}{\mathbf{R}_3 \cdot \mathbf{g}} \tag{6}
\]
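As an aside, the eccentricity that motivates this derivation can already be made tangible numerically from equations (1), (3) and (6). The following minimal Python sketch (not part of the original paper; the camera and target values are arbitrary examples and the rotation is taken as the identity) projects a tilted circular target, fits a general conic through the projected points and compares the centre of that conic with the projection of the circle centre M.

```python
# Minimal numeric illustration of the centre eccentricity (example values only).
import numpy as np

c = 20.0                                  # principal distance [mm]
C = np.array([0.0, 0.0, 1000.0])          # projection centre
M = np.array([300.0, 200.0, 0.0])         # circle centre in object space
r = 20.0                                  # circle radius
n = np.array([0.6, 0.0, 0.8])             # tilted unit normal of the target plane

# orthogonal unit vectors e1, e2 spanning the circle plane, cf. equation (1)
e1 = np.cross(n, [0.0, 0.0, 1.0]); e1 /= np.linalg.norm(e1)
e2 = np.cross(n, e1)

def project(P, R=np.eye(3)):
    """Collinearity projection of an object point P, equations (3)/(6)."""
    q = R @ (P - C)
    return -c * q[:2] / q[2]

# sample the circle and project every point
alphas = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
pts = np.array([project(M + r * (np.cos(a) * e1 + np.sin(a) * e2)) for a in alphas])

# least-squares conic fit  a1*x^2 + a2*xy + a3*y^2 + a4*x + a5*y = 1
# (coordinates are reduced to their mean for numerical stability)
mean = pts.mean(axis=0)
x, y = (pts - mean).T
A = np.column_stack([x * x, x * y, y * y, x, y])
a1, a2, a3, a4, a5 = np.linalg.lstsq(A, np.ones_like(x), rcond=None)[0]

# centre of the fitted ellipse: zero gradient of the conic polynomial
centre = np.linalg.solve([[2 * a1, a2], [a2, 2 * a3]], [-a4, -a5]) + mean

print("projected circle centre:", project(M))
print("ellipse centre         :", centre)
print("eccentricity           :", np.linalg.norm(centre - project(M)))
```

The printed difference is the eccentricity of the imaged ellipse centre with respect to the projected circle centre; it grows with target size and tilt, as analysed in Section 3.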


If equations (6) can be transformed such that the parameter α is eliminated, the problem is solved. First, the equations are rewritten as

\[ -(\mathbf{R}_3 \cdot \mathbf{g})\,x = c\,(\mathbf{R}_1 \cdot \mathbf{g}), \qquad -(\mathbf{R}_3 \cdot \mathbf{g})\,y = c\,(\mathbf{R}_2 \cdot \mathbf{g}) \tag{7} \]

which, after expanding g and collecting the terms in cos α and sin α, yields

\[
\begin{aligned}
r\cos\alpha\,(-x\,\mathbf{R}_3\!\cdot\!\mathbf{e}_1 - c\,\mathbf{R}_1\!\cdot\!\mathbf{e}_1) + r\sin\alpha\,(-x\,\mathbf{R}_3\!\cdot\!\mathbf{e}_2 - c\,\mathbf{R}_1\!\cdot\!\mathbf{e}_2) &= (x\,\mathbf{R}_3 + c\,\mathbf{R}_1)\cdot(\mathbf{M} - \mathbf{C}) \\
r\cos\alpha\,(-y\,\mathbf{R}_3\!\cdot\!\mathbf{e}_1 - c\,\mathbf{R}_2\!\cdot\!\mathbf{e}_1) + r\sin\alpha\,(-y\,\mathbf{R}_3\!\cdot\!\mathbf{e}_2 - c\,\mathbf{R}_2\!\cdot\!\mathbf{e}_2) &= (y\,\mathbf{R}_3 + c\,\mathbf{R}_2)\cdot(\mathbf{M} - \mathbf{C})
\end{aligned} \tag{8}
\]

Abbreviating the coefficients and the right-hand side of the first equation of (8) by x_1, y_1, c_1 and those of the second equation by x_2, y_2, c_2, equations (8) can also be written as

\[
\begin{aligned}
r\cos\alpha\,x_1 + r\sin\alpha\,y_1 &= c_1 \\
r\cos\alpha\,x_2 + r\sin\alpha\,y_2 &= c_2
\end{aligned} \tag{9}
\]

Multiplying the first equation of (9) by x_2 and the second by −x_1 and adding them isolates the sin α term; multiplying the first by −y_2 and the second by y_1 and adding them isolates the cos α term, generating the following two equations:

\[
\begin{aligned}
r\sin\alpha\,(y_1 x_2 - y_2 x_1) &= c_1 x_2 - c_2 x_1 \\
r\cos\alpha\,(y_1 x_2 - y_2 x_1) &= c_2 y_1 - c_1 y_2
\end{aligned} \tag{10}
\]

After squaring and adding equations (10), the final sought-after equation without an α term is found:

\[ r^2 (y_1 x_2 - y_2 x_1)^2 = (c_1 x_2 - c_2 x_1)^2 + (c_2 y_1 - c_1 y_2)^2 \tag{11} \]

This equation can be transformed into a general polynomial of the second degree in x and y,

\[ a_1 x^2 + a_2 xy + a_3 y^2 + a_4 x + a_5 y - 1 = 0 \tag{12} \]

where the corresponding coefficients are

\[
\begin{aligned}
a_1 &= \frac{r^2 i_1^2 - j_1^2 - k_1^2}{d} &
a_2 &= 2\,\frac{r^2 i_1 i_2 - j_1 j_2 - k_1 k_2}{d} &
a_3 &= \frac{r^2 i_2^2 - j_2^2 - k_2^2}{d} \\
a_4 &= 2c\,\frac{j_1 j_3 + k_1 k_3 - r^2 i_1 i_3}{d} &
a_5 &= 2c\,\frac{j_2 j_3 + k_2 k_3 - r^2 i_2 i_3}{d} &&
\end{aligned} \tag{13}
\]

using the following auxiliary variables:

\[ d = c^2 \left( j_3^2 + k_3^2 - r^2 i_3^2 \right) \tag{14} \]

\[
\begin{aligned}
\mathbf{i} &= (i_1, i_2, i_3)^T = (\mathbf{R}\mathbf{e}_1) \times (\mathbf{R}\mathbf{e}_2) = \mathbf{R}\,(\mathbf{e}_1 \times \mathbf{e}_2) = \mathbf{R}\,\mathbf{n} \\
\mathbf{j} &= (j_1, j_2, j_3)^T = (\mathbf{R}\mathbf{e}_1) \times \big(\mathbf{R}\,(\mathbf{C} - \mathbf{M})\big) = \mathbf{R}\,\big(\mathbf{e}_1 \times (\mathbf{C} - \mathbf{M})\big) = \mathbf{R}\,(\mathbf{e}_1 \times \mathbf{v}) \\
\mathbf{k} &= (k_1, k_2, k_3)^T = (\mathbf{R}\mathbf{e}_2) \times \big(\mathbf{R}\,(\mathbf{C} - \mathbf{M})\big) = \mathbf{R}\,\big(\mathbf{e}_2 \times (\mathbf{C} - \mathbf{M})\big) = \mathbf{R}\,(\mathbf{e}_2 \times \mathbf{v})
\end{aligned} \tag{15}
\]


It is permitted to apply the distributive law in equations (15) since R is a rotation matrix. Equations (12) to (15) explicitly describe the parameters of the imaged conic section via the circle elements. Although the general problem is thereby solved, additional simplifications can be applied. In the following derivations, an interesting invariance of the orthogonal unit vectors appears.

While infinitely many pairs of vectors e_1 and e_2 exist that describe the same circle in space, the polynomial coefficients (13) have to be invariant with respect to the selected vector pair. Since the normal vector of the circle plane (n in object space; i in image space) is invariant to e_1 and e_2 as well, it must be possible to find a description of equations (13) that uses only the vectors i and v (= C − M). It is noticeable that the elements of the vectors j and k always appear combined and in quadratic form within the coefficients. If these six quadratic sums can be expressed by i and v, the problem is solved.

For the following derivation, the unit vectors e_1 and e_2 are defined in image space using rotation matrices as

\[
\begin{aligned}
\mathbf{e}_1 &= \mathbf{R}_{X\alpha}\,\mathbf{R}_{Y\beta}\,\mathbf{R}_{Z\gamma} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
= \begin{pmatrix} \cos\beta\cos\gamma \\ \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma \\ \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \end{pmatrix} \\
\mathbf{e}_2 &= \mathbf{R}_{X\alpha}\,\mathbf{R}_{Y\beta}\,\mathbf{R}_{Z(\gamma+\pi/2)} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
= \begin{pmatrix} -\cos\beta\sin\gamma \\ -\sin\alpha\sin\beta\sin\gamma - \cos\alpha\cos\gamma \\ -\cos\alpha\sin\beta\sin\gamma + \sin\alpha\cos\gamma \end{pmatrix}
\end{aligned} \tag{16}
\]

where γ describes the degrees of freedom within the circle plane and R_X, R_Y and R_Z are rotation matrices defined by

\[
\mathbf{R}_{X\alpha} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & \sin\alpha \\ 0 & -\sin\alpha & \cos\alpha \end{pmatrix}, \quad
\mathbf{R}_{Y\beta} = \begin{pmatrix} \cos\beta & 0 & -\sin\beta \\ 0 & 1 & 0 \\ \sin\beta & 0 & \cos\beta \end{pmatrix}, \quad
\mathbf{R}_{Z\gamma} = \begin{pmatrix} \cos\gamma & \sin\gamma & 0 \\ -\sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{17}
\]

Substituting the vectors e_1 and e_2 into equations (15), it follows that

\[
\mathbf{i} = \mathbf{e}_1 \times \mathbf{e}_2 = \begin{pmatrix} \sin\beta \\ -\sin\alpha\cos\beta \\ -\cos\alpha\cos\beta \end{pmatrix} \tag{18}
\]

\[
\mathbf{j} = \mathbf{e}_1 \times \mathbf{v} = \begin{pmatrix}
v_3(\sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma) + v_2(-\cos\alpha\sin\beta\cos\gamma - \sin\alpha\sin\gamma) \\
v_1(\cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma) - v_3\cos\beta\cos\gamma \\
v_1(-\sin\alpha\sin\beta\cos\gamma + \cos\alpha\sin\gamma) + v_2\cos\beta\cos\gamma
\end{pmatrix} \tag{19}
\]

\[
\mathbf{k} = \mathbf{e}_2 \times \mathbf{v} = \begin{pmatrix}
v_3(-\sin\alpha\sin\beta\sin\gamma - \cos\alpha\cos\gamma) + v_2(\cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma) \\
v_1(-\cos\alpha\sin\beta\sin\gamma + \sin\alpha\cos\gamma) + v_3\cos\beta\sin\gamma \\
v_1(\sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma) - v_2\cos\beta\sin\gamma
\end{pmatrix} \tag{20}
\]


where it can be seen that the normal vector i does not depend on γ. However, the vectors j and k still contain γ terms. The next step is to compute the aforementioned quadratic sums using equations (19) and (20). As an example, the following sum is derived in full:

\[ j_1^2 + k_1^2 = v_2^2\,(1 - \cos^2\alpha\cos^2\beta) + v_3^2\,(\sin^2\beta + \cos^2\alpha\cos^2\beta) + 2\,v_2 v_3\,\cos\alpha\sin\alpha\cos^2\beta \tag{21} \]

As expected, all γ terms are eliminated. Now the angle terms can be substituted by the elements of the vector i (18), which results in

\[ j_1^2 + k_1^2 = v_2^2\,(1 - i_3^2) + v_3^2\,(i_1^2 + i_3^2) + 2\,v_2 v_3\,i_2 i_3 = -(v_3 i_2 - v_2 i_3)^2 + v_2^2 + v_3^2 \tag{22} \]

considering that i is a unit vector. This substitution can also be carried out for the other five remaining quadratic sums. The final results are listed below.

\[
\begin{aligned}
j_2^2 + k_2^2 &= -(v_1 i_3 - v_3 i_1)^2 + v_1^2 + v_3^2 \\
j_3^2 + k_3^2 &= -(v_2 i_1 - v_1 i_2)^2 + v_1^2 + v_2^2 \\
j_1 j_2 + k_1 k_2 &= (v_3 i_1 - v_1 i_3)(v_3 i_2 - v_2 i_3) - v_1 v_2 \\
j_1 j_3 + k_1 k_3 &= v_1 v_3\,(i_2^2 - 1) + v_2\,(-v_1 i_2 i_3 + v_2 i_1 i_3 - v_3 i_1 i_2) \\
j_2 j_3 + k_2 k_3 &= (v_1 i_3 - v_3 i_1)(v_1 i_2 - v_2 i_1) - v_2 v_3
\end{aligned} \tag{23}
\]
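A compact way to verify these reductions (an observation added here for clarity; the original derivation proceeds sum by sum) is the following: since e_1, e_2 and i form an orthonormal basis of the image-space frame used above, the cross products with any vector v satisfy

\[ j_m j_n + k_m k_n = \delta_{mn}\,\lVert\mathbf{v}\rVert^2 - v_m v_n - (\mathbf{i} \times \mathbf{v})_m\,(\mathbf{i} \times \mathbf{v})_n , \]

so every quadratic sum in (22) and (23) follows directly by inserting the component indices m and n.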

Us<strong>in</strong>g equation (23) the polynomial coefficients (13) can be f<strong>in</strong>ally described us<strong>in</strong>g r, c, i and<br />

v as parameters only.<br />

2<br />

3<br />

4<br />

5<br />

= 2<br />

2 2 2<br />

( ) ( )<br />

r ⋅i − v i + v i − i v + v<br />

a1<br />

=<br />

d<br />

a<br />

a<br />

a<br />

a<br />

where d is<br />

2 2 2<br />

1 2 2 3 3 1 2 3<br />

( )( )<br />

2<br />

r ⋅i1⋅i2 − v3i1−vi 1 3<br />

v3i2 − v2i3 + v1v2<br />

( )<br />

2<br />

2 2<br />

2 2<br />

⋅<br />

2<br />

+<br />

1 3− 3 1<br />

−<br />

1<br />

−<br />

3<br />

r i vi v i v v<br />

=<br />

d<br />

( 1) ( )<br />

vv i − + v − vii + vii −vii −r ⋅i⋅i<br />

= 2 ⋅c<br />

d<br />

= 2 ⋅c<br />

d<br />

2 2<br />

1 3 2 2 123 213 312 1 3<br />

( )( )<br />

vi −vi vi −vi −v v −r ⋅i ⋅i<br />

2<br />

13 31 12 21 2 3 2 3<br />

2 2 2<br />

2 2<br />

( ( 21 12)<br />

1 2 3 )<br />

2<br />

d = c − v i − vi + v + v −r ⋅i<br />

d<br />
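As a plausibility check of equations (24) and (25), the short Python sketch below (not from the original paper; all numerical values are arbitrary examples and the camera rotation is taken as the identity, so image-space and object-space components coincide) builds the coefficients a_1 ... a_5 directly from r, c, i and v and verifies that projected points of the target circle satisfy the conic (12).

```python
# Plausibility check of equations (24)/(25) with an identity camera rotation.
import numpy as np

c, r = 20.0, 20.0
C = np.array([0.0, 0.0, 1000.0])          # projection centre
M = np.array([300.0, 200.0, 0.0])         # circle centre
n = np.array([0.6, 0.0, 0.8])             # unit target normal
i1, i2, i3 = n                             # i = R·n, here with R = I
v1, v2, v3 = C - M                         # v = C − M, likewise unrotated

d  = c**2 * (-(v2*i1 - v1*i2)**2 + v1**2 + v2**2 - r**2*i3**2)                   # (25)
a1 = (r**2*i1**2 + (v3*i2 - v2*i3)**2 - v2**2 - v3**2) / d                        # (24)
a2 = 2 * (r**2*i1*i2 - (v3*i1 - v1*i3)*(v3*i2 - v2*i3) + v1*v2) / d
a3 = (r**2*i2**2 + (v1*i3 - v3*i1)**2 - v1**2 - v3**2) / d
a4 = 2*c * (v1*v3*(i2**2 - 1) + v2*(-v1*i2*i3 + v2*i1*i3 - v3*i1*i2) - r**2*i1*i3) / d
a5 = 2*c * ((v1*i3 - v3*i1)*(v1*i2 - v2*i1) - v2*v3 - r**2*i2*i3) / d

# sample the target circle (1), project with (6), and evaluate the conic (12)
e1 = np.cross(n, [0.0, 0.0, 1.0]); e1 /= np.linalg.norm(e1)
e2 = np.cross(n, e1)
for a in np.linspace(0.0, 2.0 * np.pi, 7):
    P = M + r * (np.cos(a) * e1 + np.sin(a) * e2)
    x, y = -c * (P - C)[:2] / (P - C)[2]
    print(round(a1*x*x + a2*x*y + a3*y*y + a4*x + a5*y - 1.0, 9))   # ≈ 0 for every point
```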

Equations (24) and (25) not only describe the connection between a circular target and its perspective image; treated with mathematical rigour, they also represent new observation equations for the bundle adjustment. In this model each image point contributes 5 observations and each object point is characterised by 7 unknowns (centre point M, target normal n and radius r). As a consequence, the matrices of the bundle adjustment would grow dramatically. This can be avoided if the equation system is solved iteratively:


1. Perform a standard bundle adjustment;
2. Employ equations (24) and (25) to adjust the target elements (M, n and r) of all object points, considering the exterior orientation (EO) of all images and the camera calibration as fixed;
3. Compute the bundle adjustment using image coordinate observations which are compensated for the eccentricity; and
4. Go back to step 2 until convergence.

One iteration is usually enough to achieve convergence for standard VM applications.
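The loop above might be organised as in the following structural sketch (added here for illustration; run_bundle_adjustment, fit_target_circles and eccentricity are hypothetical placeholders, not functions of Australis or of any existing library):

```python
# Structural sketch of the iterative eccentricity compensation; every function
# called here is a hypothetical placeholder rather than an existing API.
def bundle_with_eccentricity_correction(images, centroids, max_iter=5, tol=1e-6):
    """centroids: {(image_id, point_id): (x, y)} measured ellipse centres."""
    corrections = {key: (0.0, 0.0) for key in centroids}
    for _ in range(max_iter):
        # steps 1 and 3: bundle adjustment on eccentricity-corrected observations
        observations = {key: (x - corrections[key][0], y - corrections[key][1])
                        for key, (x, y) in centroids.items()}
        eo, camera, object_points = run_bundle_adjustment(images, observations)

        # step 2: adjust centre M, normal n and radius r of every target via
        # equations (24)/(25), with the EO and camera calibration held fixed
        circles = fit_target_circles(observations, eo, camera)

        # eccentricity = imaged ellipse centre minus projected circle centre M
        new_corr = {(img, pt): eccentricity(circles[pt], eo[img], camera)
                    for (img, pt) in centroids}

        # step 4: repeat until the corrections no longer change significantly
        delta = max(abs(a - b) for key in centroids
                    for a, b in zip(new_corr[key], corrections[key]))
        corrections = new_corr
        if delta < tol:
            break
    return eo, camera, object_points
```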

In comparison to the standard processing chain, the described strategy requires full ellipse information for each imaged target. Though the extraction of the target centre is a well-studied problem in computer vision, the determination of the full ellipse information is a delicate problem, since targets are typically only a few pixels in diameter. Appropriate computation algorithms, considering the special circumstances of VM, are described in [6] and [7]. Finally, it should be noted that the derived formulae are equivalent to the solutions presented in [1] and [5].

3. The Distortion Effect of Eccentricities within the Bundle Adjustment

Earlier investigations ([2] and [3]) have studied the impact of the eccentricity error within a bundle adjustment. It was reported that in a free network adjustment, with or without simultaneous camera calibration, the eccentricity error caused by moderately sized image targets is almost fully compensated by changes in the exterior orientation parameters (and the principal distance), without affecting the other estimated parameters.

Network simulations performed by the authors have shown good agreement with these earlier findings, especially for test fields with little variation in the target normals. However, test fields with a significant range of target orientations and with medium to large sized targets can show significant distortions within the triangulated object point coordinates, as shown in the following (distortions of the EO and the camera calibration parameters are not considered, since they are irrelevant for standard VM applications).

The first test field describes the surface of a parabolic antenna with a diameter of 1.4 m (Figure 3). In contrast to the relatively low variation in target orientation on the parabolic surface, a second, sine-shaped surface (horizontal extent of 3 by 5 m) was chosen, as indicated in Figure 4. For both surfaces, 16 artificially generated images were computed using a typical VM camera (c of 20 mm, resolution of 1500 by 1000 pixels). The target size was selected such that the resulting average target diameter within the images was nearly the same for both networks, namely ~19 pixels.

To investigate the eccentricity error, a free network adjustment using intensity-weighted target centroids was performed. Afterwards, the computed object coordinates were transformed into the original (error-free) coordinate reference frame. The resulting positional discrepancies are illustrated in Figure 5 and listed in Table 1.

                        Expected accuracy          Discrepancies           Object point sigma
                        (triangulation precision   (transformation)        (bundle adjustment)
                        1:100,000)                 Mean       Max          Mean       Max
Parabolic antenna       0.014                      0.012      0.020        0.001      0.001
Sine-shaped surface     0.050                      0.080      0.123        0.011      0.016

Table 1: Results of the simulated networks (all values in mm)


Figure 3: Parabolic antenna surface (d = 1.4 m)
Figure 4: Sine-shaped surface (3 x 5 m)

From the results, interesting conclusions can be drawn. First, the eccentricity error can systematically distort the object point coordinates, as is visible in Figure 5. The use of high-quality digital equipment allows typical triangulation accuracies of 1:100,000 to be achieved, which corresponds to about 0.014 and 0.05 mm in these cases. The object point sigmas obtained from the bundle adjustment (right column of Table 1) are well within the expected accuracy. (The sigmas reflect only the eccentricity effect, since the observations were simulated.) However, the real accuracy of the object point coordinates is visible in the column of transformation discrepancies (grey-background column of Table 1). Compared to the expected triangulation precision, the distortion effect of the eccentricity is higher for the sine-shaped surface than for the parabolic antenna. This is caused by the higher variation in target orientation. Summarising, it can be said that, for certain configurations, the amount of distortion can clearly exceed the measurement accuracy of VM.

Figure 5: Discrepancy vectors between adjusted object coordinates and original coordinates

F<strong>in</strong>ally, a ‘dangerous’ effect for practical applications can also be observed. S<strong>in</strong>ce the<br />

adjustment parameters compensate for the eccentricity error, the bundle adjustment estimates<br />

the accuracy <strong>of</strong> the object coord<strong>in</strong>ates too optimistically (up to factor 10 <strong>in</strong> these cases),<br />

which may lead to mis<strong>in</strong>terpretation, eg <strong>in</strong> deformation <strong>measurement</strong>s. Consequently, it was<br />

necessary to ‘design’ a special network configuration to reveal the eccentricity effect with<strong>in</strong> a<br />

real test field, as presented <strong>in</strong> the follow<strong>in</strong>g section.


4. Evaluation of the Simulation Results within a Real Test Field

In simulated networks it is easy to obtain the eccentricity distortion of the object coordinates, since the ‘error-free’ coordinate reference frame is known. For a real test field the situation is quite different. The previous strategy can be applied only if ultra-precise reference coordinates of the targets are available. Since the magnitude of the resulting errors is only about 0.01 mm (in the case of a 1 m test field), the required precision of the reference coordinates is very difficult to achieve. Hence, a different strategy was used to examine the eccentricity distortion.

A planar test field consisting of 27 retro-reflective targets was employed, together with a special scale bar. The field was surveyed three times, with the scale bar moved to a different position between the surveys, as indicated in Figure 6. Whereas scale bar positions one and two were roughly within the test field plane, scale bar position three was slanted at an angle of about 30°. A comparison of the three scale bar distances derived from the adjusted object coordinates should then reveal the eccentricity effect.

The scale bar had a length of about 0.5 m and consisted of two targets whose normals enclosed an angle of about 30°, as shown in Figure 7. The different target orientations cause different distortion vectors, which is essential for this distance comparison method. To achieve distinguishable distance differences, large targets (average target diameter of ~50 pixels) were employed on the scale bar.

Figure 6: Test field (1.5 x 1.5 m) with the scale bar in three different positions.
Figure 7: Special arrangement of the targets on the scale bar (target normals enclosing an angle of ~30°).

A total of 54 images (3 x 18 images) were recorded using a GSI Inca camera (resolution of 2029 by 2048 pixels, 18 mm lens), and the network was processed twice: once with the standard bundle adjustment and once with the modified adjustment applying the eccentricity corrections. The resulting distances computed from the object coordinates of the two adjustments are given in Table 2. In the case of uncorrected observations, the maximum distance difference (grey-background column) is four times larger than would be expected from the object point sigmas. If the eccentricity corrections are applied, on the other hand, the maximum distance error is well within the object point accuracy. The identical object point sigmas support the previous statement that the distortional effect of the eccentricity is not visible in the sigmas obtained from the bundle adjustment.

                          Dist. 1    Dist. 2    Dist. 3    Max. dist.    Object point sigma
                                                           difference    (bundle adjustment)
Results without corr.     437.172    437.143    437.089    0.083         0.018
Results with corr.        437.256    437.242    437.245    0.014         0.018

Table 2: Bundle adjustment results with and without eccentricity corrections (values in mm).


One could argue that these results have no relevance for practical applications since such large targets were employed. However, the results above only identify relative distortions. Considering the object coordinates obtained from the ‘corrected’ adjustment as error-free, a similar analysis to that made for the simulated networks can be performed against the uncorrected case. It turns out that the average discrepancy of the scale bar points exceeds 0.3 mm. This is 17 times larger than the object point sigma (note that sigma naught was identical for both bundle adjustments). It can be concluded that even clearly smaller targets will be distorted beyond the measurement precision, as derived in the network simulations.

5. Conclusion

In the first part of the paper, rigorous formulae were presented which describe the connection between a circular target and its perspective image. Utilising these equations, the target plane can be determined and the eccentricity of the image observations can be compensated. Additionally, it has been shown that the eccentricity can cause object coordinate distortions which are well above the measurement variance in the case of high-precision applications. In standard VM software, the only way to avoid eccentricity distortion is to keep the targets as small as possible (the target diameter should not exceed 5 pixels). However, this cannot be achieved in general, since there will always be different scales in different parts of the images. Therefore, the authors propose adoption of the automatic eccentricity correction process, as implemented in Australis. This will remove the target size limitations caused by the eccentricity, which will in turn lead to even higher achievable measurement precision in VM.

Acknowledgement

Many thanks to Danny Brizzi of Vision Metrology Services, Melbourne, for imaging the scale bar test field described in Section 4.

References:

[1] Ahn, S.J., Warnecke, H.-J. and Kotowski, R.: Systematic Geometric Image Measurement Errors of Circular Object Targets: Mathematical Formulation and Correction. Photogrammetric Record, 16(93): 485-502, 1999.
[2] Ahn, S.J. and Kotowski, R.: Geometric image measurement errors of circular object targets. Optical 3-D Measurement Techniques IV (Eds. A. Gruen and H. Kahmen), Herbert Wichmann Verlag, Heidelberg: 463-471, 1997.
[3] Dold, J.: Influence of large targets on the results of photogrammetric bundle adjustment. International Archives of Photogrammetry & Remote Sensing, 31(B5): 119-123, 1996.
[4] Fraser, C.S. and Edmundson, K.L.: Design and implementation of a computational processing system for off-line digital close range photogrammetry. ISPRS Journal of Photogrammetry and Remote Sensing, 55(1): 94-104, 2000.
[5] Kager, H.: Bündeltriangulation mit indirekt beobachteten Kreiszentren. Geowissenschaftliche Mitteilungen, Heft 19, Institut für Photogrammetrie der Technischen Universität Wien, 1981.
[6] Otepka, J. and Fraser, C.S.: Accuracy Enhancement of Vision Metrology through Automatic Target Plane Determination. Proceedings of the ISPRS Congress, 2004.
[7] Otepka, J.: Precision Target Mensuration in Vision Metrology. PhD thesis, Vienna University of Technology, 2004.
[8] Photometrix, web site: http://www.photometrix.com.au (accessed 6 June 2005).
