BUNDLE BLOCK ADJUSTMENT WITH 3D NATURAL CUBIC SPLINES

© Copyright by Won Hee Lee, 2008


Dedicated to my wife, parents, and grandparents for their support in all my life's endeavors.


ACKNOWLEDGMENTS

Thanks be to God, my Father, my Redeemer, my Comforter, my Healer, and my Counselor, that through Him everything is possible.

I wish to express my heartfelt gratitude to my advisor, Professor Anton F. Schenk, for his guidance, encouragement, and insightful discussions throughout my graduate studies. My sincere appreciation also goes to my dissertation committee, Professor Alper Yilmaz and Professor Ralph von Frese, for their invaluable advice and comments.

I would like to acknowledge my brothers and sisters at the Korean Church of Columbus for their prayers, encouragement, and unconditional confidence in me.

I am thankful to my colleagues, Tae-Hoon Yoon, Yushin Ahn, and Kyuing-Jin Park, for their valuable suggestions on various aspects of my academic life.

I would like to thank Dr. Tae-Suk Bae for the helpful discussions and inspiration in the field of adjustment.

I am greatly indebted to my parents, Tae-Hyun Kim and Sang-Sul Lee, my wife, Hyun-Jung Kim, my sister Yoon-Hee Lee, and my family members; no words can adequately describe my thankfulness for their understanding, love, sacrifice, and support.

Finally, a special thanks to my mother for her unlimited support, encouragement, and belief in me during my many years of graduate studies.


VITA

2000: B.S. Civil Engineering, Yonsei University, Seoul, S. Korea
2001-2003: Graduate Teaching Associate, Civil, Urban & Geo-systems Engineering, Seoul National University, Seoul, S. Korea
2003: M.S. Civil, Urban & Geo-systems Engineering, Seoul National University, Seoul, S. Korea
2003: Researcher, Spatial Informatics & System Laboratory, Seoul National University, Seoul, S. Korea
2004-2007: Graduate Research Associate, Department of Civil & Environmental Eng. and Geodetic Science, The Ohio State University
2007-present: Senior Research Scientist, Qbase, Springfield, OH

FIELDS OF STUDY

Major Field: Geodetic Science and Surveying


3. BUNDLE BLOCK ADJUSTMENT WITH 3D NATURAL CUBIC SPLINES
   3.1 3D natural cubic splines
   3.2 Extended collinearity equation model for splines
   3.3 Arc-length parameterization of 3D natural cubic splines
   3.4 Tangents of spline between image and object space

4. MODEL INTEGRATION
   4.1 Bundle block adjustment
   4.2 Evaluation of bundle block adjustment
   4.3 Pose estimation with ICP algorithm

5. EXPERIMENTS AND RESULTS
   5.1 Synthetic data description
   5.2 Experiments with error free EOPs
   5.3 Recovery of EOPs and spline parameters
   5.4 Tests with real data

6. CONCLUSIONS AND FUTURE WORK

APPENDICES:

A. DERIVATION OF THE PROPOSED MODEL
   A.1 Derivation of extended collinearity equation
   A.2 Derivation of arc-length parameterization
   A.3 Derivation of the tangent of a spline

BIBLIOGRAPHY


LIST OF TABLES

4.1 Relationship between the eccentricity and the conic form
5.1 Number of unknowns
5.2 Number of independent points for bundle block adjustment
5.3 EOPs of six bundle block images for simulation
5.4 Spline parameter and spline location parameter recovery
5.5 Partial spline parameter and spline location parameter recovery
5.6 Spline location parameter recovery
5.7 EOP and spline location parameter recovery
5.8 EOP, control and tie spline parameter recovery
5.9 Information about aerial images used in this study
5.10 Spline parameter and spline location parameter recovery
5.11 Spline location parameter recovery
5.12 EOP and spline location parameter recovery


CHAPTER 1
INTRODUCTION

1.1 Overview

One of the major tasks in digital photogrammetry is to determine the orientation parameters of aerial images quickly and accurately, which involves two primary steps: interior orientation and exterior orientation. Interior orientation defines a transformation to a 3D image coordinate system with respect to the camera's perspective center, while a pixel coordinate system is the reference system for a digital image, using the geometric relationship between the photo coordinate system and the instrument coordinate system.

Since aerial photography provides the interior orientation parameters, the problem reduces to determining the exterior orientation with respect to the object coordinate system. Exterior orientation establishes the position of the camera projection center in the ground coordinate system and the three rotation angles of the camera axis, to represent the transformation between the image and the object coordinate system. The exterior orientation parameters (EOPs) of a stereo model consisting of two aerial images can be obtained using relative and absolute orientation.


The EOPs of multiple overlapping aerial images can be computed using bundle block adjustment. The position and orientation of each exposure station are obtained by bundle block adjustment using collinearity equations, which are linearized with respect to the unknown position and orientation in the object space coordinate system. In addition, interior orientation parameters can be recovered by self-calibration, which adds parameters to the bundle block adjustment to correct systematic errors such as lens distortion, principal distance error, principal point offset, and atmospheric refraction. Self-calibration is applied to individual images or flight lines, not to entire image blocks, and additional terms can be added depending on environmental and operational conditions.

Bundle block adjustment is a fundamental task in many applications such as surface reconstruction, orthophoto generation, image registration, and object recognition. The bundle block adjustment programs in most softcopy workstations employ point features as the control information. Bundle block adjustment is one of several photogrammetric triangulation methods. Photogrammetric triangulation using digital photogrammetric workstations is more automated than aerial triangulation using analog instruments, since the stereo model can be set directly from analytical triangulation outputs. Bundle block adjustment reduces the cost of field surveying in difficult areas and verifies the accuracy of field surveying during processing. Even though each stereo model requires at least two horizontal and three vertical control points, bundle block adjustment can reduce the number of control points needed while maintaining accurate orientation parameters. The EOPs of all photographs in the target area are determined by bundle block adjustment, which improves the accuracy and the reliability of photogrammetric tasks. Since object reconstruction is processed


by an intersection employing more than two images, bundle block adjustment provides redundancy for the intersection geometry and contributes to the elimination of gross errors in the recovery of EOPs. Since in recent years the integration of the global positioning system (GPS) and inertial navigation system (INS) has provided more accurate information about the exposure station, the control information required for the block adjustment is reduced.

GPS was initially designed by the U.S. Department of Defense to determine geodetic positions for military purposes accurately and quickly. GPS consists of 24 medium Earth orbit satellites at an altitude of about 20,000 kilometers, and at least five satellites are visible from any place on Earth. To determine a position, GPS establishes the intersection from at least four satellites. INS records accelerations and angular rates to estimate velocity, position, and orientation. The integration of GPS and INS provides more accurate trajectory information, since INS generates continuous positioning information to support the discrete-interval GPS information. In addition, INS can provide location and orientation information by itself in case of a malfunction in receiving GPS signals. Incorporation of the GPS positions into the block adjustment can be implemented with GPS receivers and data collection equipment. Three main considerations for integrating GPS positions into the block adjustment are the offset between the GPS antenna position in the aircraft and the camera position; the airplane's turns at the end of flight lines, during which the GPS position information can be lost; and the systematic errors of the coordinate transformation from the GPS coordinate system, referenced to the WGS84 ellipsoid, to the local datum, referenced to the local geoid.


The stereo model consisting of two images with twelve EOPs is a common orientation unit. The mechanism of object reconstruction from a stereo model is analogous to the animal and human visual system. The principal aspects of the human vision system, including neurophysiology, anatomy, and visual perception, are well described in Digital Photogrammetry (Schenk, 1999 [57]). The classical orientation model is implemented in two steps, relative orientation and absolute orientation, to solve the twelve orientation parameters for a model of two images. Five unknowns are solved from relative orientation for a stereoscopic view of the model, and seven unknowns (three shifts, three rotations, and one scale factor) are determined from absolute orientation. At least three vertical and two horizontal control points are required to obtain the seven parameters of absolute orientation. In traditional photogrammetry, all orientation procedures are performed manually by a photogrammetric operator. Fiducial marks, which define the photo coordinate system (in digital photogrammetry, the pixel coordinate system plays this role), are used in interior orientation, which is image reconstruction with respect to the perspective center.

The matching of conjugate entities plays an important role in relative orientation, and ground control points (GCPs) are adopted in absolute orientation to establish the object space coordinate system. Matching techniques can be divided into two categories, area-based matching and feature-based matching. Area-based matching methods employ a similarity measure between a small image patch in a template window and an image patch in a matching window. Two well-known area-based matching methods are cross-correlation and least squares matching; gray levels play an important role in area-based matching. Feature-based matching uses features such as points, edges, linear features, and volume features as conjugate entities, and the


similarity of geometric properties is compared to find conjugate entities. Feature-based matching is more invariant to radiometric changes, and its implementation is faster than that of area-based matching. Extracting and matching conjugate points are the first step toward autonomous space resection, but a general procedure for autonomous space resection has not been developed yet, as no matching algorithm can guarantee consistent accuracy.

Conjugate entities correspond to the same point in the object space through the collinearity condition, which states that a point on the image, the perspective center, and the corresponding point in the object space lie on the same straight line. For the stereo model, the coplanarity condition, which is established by the equation for the volume of a parallelepiped, can be adopted as a mathematical formulation. The volume of the parallelepiped is determined by three vectors: a vector between the left perspective center and an image point on the left image, a vector between the right perspective center and an image point on the right image, and a vector between the two perspective centers. This means that the two perspective centers and the two conjugate image rays lie in one plane, with all vectors expressed in the object space.

The coplanarity equation is used for relative orientation in the stereo model. Seven of the twelve exterior parameters are fixed, and the remaining five parameters are obtained from five or more coplanarity equations, since each coplanarity equation eliminates one degree of freedom. Adding more coplanarity equations increases the possibility of detecting incorrect observations and of obtaining a more precise result. However, the relative orientation of more than two images using coplanarity equations is problematic since, in general, all rays from the images do not intersect in a point. A possible constraint for coplanarity equations is that conjugate points lie somewhere on the epipolar lines for point matching.
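As a minimal illustration of the coplanarity condition just described (not the author's implementation), the sketch below evaluates the scalar triple product of the three vectors; the function name and calling convention are hypothetical, and the two image rays are assumed to be already rotated into the object space coordinate system.

    import numpy as np

    def coplanarity_residual(c_left, c_right, ray_left, ray_right):
        """Scalar triple product of the base vector between the two
        perspective centers and the two conjugate image rays, all in
        object space. The value equals the volume of the parallelepiped;
        zero means the rays and the base lie in one epipolar plane."""
        base = np.asarray(c_right, dtype=float) - np.asarray(c_left, dtype=float)
        return float(np.dot(base, np.cross(ray_left, ray_right)))

In a relative orientation, residuals of this form for five or more conjugate point pairs would be driven toward zero by adjusting the five free orientation parameters.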


The point-based procedure, in which one point measured on an image is identified on another image, is the relationship between point primitives most widely developed in traditional photogrammetry. Even for linear features, data for the stereo model in a softcopy workstation are collected as points, so that further application and analysis rely on points as the primary input data. The coefficients of interior, relative, and absolute orientation are computed from point relationships. Interior orientation compensates for lens distortion, film shrinkage, scanner error, and atmospheric refraction. Relative orientation makes the stereoscopic view possible, and the relationship between the model coordinate system and the object space coordinate system is reconstructed by absolute orientation. Using GCPs is a widely employed way to compute the orientation parameters. Although employing many GCPs is a time-consuming procedure and blocks the robust and accurate automation at which research on digital photogrammetry aims, developments in computers, storage capacity, photogrammetric software, and digital cameras can reduce the computational and time complexity.

In traditional photogrammetric activities, point-based methods with experienced human operators perform well, but not in the autonomous environment of digital photogrammetry. Although some reasonable approaches have been proven in digital photogrammetry, object recognition, feature extraction, and matching procedures remain challenging tasks in the computer environment of autonomous photogrammetry. To develop more robust and accurate techniques, linear objects


such as straight linear features or formulated entities (for example, conic sections) are adopted instead of points in aerial triangulation.

Even though recent advanced algorithms provide accurate and reliable linear feature extraction, extracting linear features is more difficult than extracting a discrete set of points, which can make up any form of curve. Control points, which are the initial input data, and break points, which are the end points of piecewise curves, are easily obtained with manual digitizing, edge operators, or interest operators. A curve can be represented analytically by seed points which are extracted automatically by point extraction in digital imagery.

Employing high-level features increases the feasibility of geometric information and provides an analytical and suitable solution for advanced computer technology. With the development of extraction, segmentation, classification, and recognition of features, the input data for feature-based photogrammetry have been expanded to increase the redundancy of aerial triangulation. Since the identification, formulation, and application of reasonable linear features is a crucial procedure for autonomous photogrammetry, higher-order geometric feature-based modeling plays an important role in modern digital photogrammetry. The digital image format is suited to the purposes of autonomous photogrammetry, especially feature extraction and measurement, and it is useful for precise and rigorous modeling of features from images.

1.2 Scope of dissertation

Over the years, many researchers have investigated the feasibility of linear features for autonomous photogrammetry, but the modeling of linear features has been limited to


straight features and conic sections. In this work an integrated model of the extended collinearity equation utilizing 3D natural cubic splines and arc-length parameterization is derived to recover the exterior orientation parameters, the 3D natural cubic spline parameters, and the spline location parameters. The research topics in this dissertation are sketched in the bullet items below.

• A 3D natural cubic spline is adopted for the 3D line expression in the object space to represent 3D features in parametric form. The results of this algorithm are the tie and control features for bundle block adjustment. This is a key concept of the mathematical model of linear features in the object space and their counterparts in the projected image space for line photogrammetry.

• Arc-length parameterization of 3D natural cubic splines using Simpson's rule is developed to solve the over-parameterization of 3D natural cubic splines. This additional equation to the extended collinearity equation expands bundle block adjustment from limited conditions such as straight lines or conic sections (circles, ellipses, parabolas, and hyperbolas) to general cases.

• Tangents of splines, which provide additional equations to solve the over-parameterization of 3D natural cubic splines, are established for the case in which linear features in the object space are straight lines or conic sections.

• To establish the correspondence between 3D natural cubic splines in the object space and their associated features in the 2D projected image space, the extended collinearity equation employing the projection ray which intersects the 3D natural cubic splines is developed and linearized for the least squares method.


• Bundle block adjustment by the proposed method, including the extended collinearity equation and the arc-length parameterization equation, is developed to show the feasibility of tie splines and control splines for the estimation of the exterior orientation of multiple images, spline parameters, and the spline location parameters with simulated and real data.

1.3 Organization of dissertation

This dissertation is divided into six chapters. The next chapter presents a review of line photogrammetry, including mathematical representations of 3D curves. Chapter 3 provides the extended collinearity model and its formulation using 3D natural cubic splines and presents the arc-length parameterization of 3D natural cubic splines. The non-linear integrated model follows in chapter 4, to recover EOPs, spline parameters, and spline location parameters from tie and control features of 3D natural cubic splines. Detailed derivations of the extended collinearity equations, arc-length parameterization, and tangents of splines are provided in the appendix. Chapter 5 introduces experimental results to demonstrate the feasibility of bundle block adjustment with 3D natural cubic splines using synthetic and real data. Finally, a summary of experience and recommended future research are included in chapter 6.


CHAPTER 2
LINE PHOTOGRAMMETRY

Line photogrammetry refers to photogrammetric applications such as single photo resection, relative orientation, triangulation, image matching, image registration, and surface reconstruction implemented using linear features and the correspondence between linear features rather than point features. This chapter describes the issues of the photogrammetric development from point-based to linear-feature-based methods, and different mathematical models of free-form lines are presented to expand the control feature from points and straight linear features to free-form linear features.

2.1 Overview of line photogrammetry

Conjugate interest points such as edge points, corner points, and points on parking lanes work well for determining EOPs with respect to the object space coordinate frame in traditional photogrammetry. The most well-known edge and interest point detectors are the Canny[9], Förstner[19], Harris (well known as the Plessey feature point detector)[31], Moravec[45], Prewitt[51], Sobel[67], and SUSAN[66] operators. Canny, Prewitt, and Sobel are edge detectors, and Förstner, Harris, and SUSAN are corner detectors. Other well-known corner detection algorithms are the Laplacian of Gaussian, the difference of Gaussians, and the determinant of the


Hessian. Interest point operators, which detect well-defined points, edges, and corners, play an important role in automated triangulation and stereo matching. For example, the Harris operator is defined as a measurement of corner strength:

\[
H(x, y) = \det(M) - \alpha\,(\operatorname{trace}(M))^2 \tag{2.1}
\]

where M is the local structure matrix and α is a parameter such that H ≥ 0 if 0 ≤ α ≤ 0.25; a default value is 0.04. The gradients in the x and y directions are

\[
g_x = \frac{\partial I}{\partial x}, \qquad g_y = \frac{\partial I}{\partial y} \tag{2.2}
\]

where I is the image. The local structure matrix M is

\[
M = \begin{bmatrix} A & C \\ C & B \end{bmatrix} \tag{2.3}
\]

with A = g_x^2, B = g_y^2, and C = g_x g_y. A corner is detected when

\[
H(x, y) > H_{thr} \tag{2.4}
\]

where H_thr is the threshold parameter on corner strength. The Harris operator searches for points where the variations in two orthogonal directions are large, using the local auto-correlation function, and it provides good repeatability under varying rotation, scale, and illumination.
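A minimal NumPy/SciPy sketch of equations (2.1)-(2.4) follows; the Sobel derivative kernels, the Gaussian averaging window, and the function name are implementation choices of this sketch, not details fixed by the text.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def harris_response(image, alpha=0.04, sigma=1.0):
        """Harris corner strength H = det(M) - alpha * trace(M)^2 (2.1),
        with the entries of the local structure matrix M (2.3) averaged
        over a Gaussian neighborhood."""
        gx = sobel(image, axis=1)              # g_x = dI/dx (2.2)
        gy = sobel(image, axis=0)              # g_y = dI/dy
        A = gaussian_filter(gx * gx, sigma)    # A = g_x^2
        B = gaussian_filter(gy * gy, sigma)    # B = g_y^2
        C = gaussian_filter(gx * gy, sigma)    # C = g_x * g_y
        det_M = A * B - C * C
        trace_M = A + B
        return det_M - alpha * trace_M**2

    # Corners are the pixels where the response exceeds a threshold (2.4):
    # corners = harris_response(img.astype(float)) > H_thr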


The Förstner corner detector is also based on the covariance matrix of the gradient at a target point. A preliminary weight is computed as

\[
w_{pre} = \begin{cases} \dfrac{\det(M)}{\operatorname{trace}(M)}, & p > p_{thr} \\ 0, & \text{otherwise} \end{cases} \tag{2.5}
\]

where p_thr is the threshold parameter on corner strength and p is a measure of isotropy:

\[
p = \frac{4\det(M)}{\operatorname{trace}(M)^2} \tag{2.6}
\]

The final weight is obtained from local-maxima filtering:

\[
w_{final} = \begin{cases} w_{lc}, & \text{local maximum} \\ 0, & \text{otherwise} \end{cases} \tag{2.7}
\]

where w_lc is the local maximum within a local neighborhood window.

Marr[41] proposed the zero-crossing edge point detector, utilizing second order directional derivatives rather than first order directional derivatives: both the maximum of the first order derivative and the zero crossing of the second order derivative indicate the location of an edge. Physical boundaries of objects are detected easily since the gray levels change abruptly at boundaries. Since no single universal edge detection operator exists, several criteria are required for each specific application.

Matching point features incurs a large percentage of matching errors, since point features are ambiguous and no analytical solution for point matching has been developed. Because of the geometric information and symbolic meaning of linear features, matching linear features is more reliable than matching point features in the autonomous environment of digital photogrammetry. Since using linear features does not require a point-to-point correspondence, matching linear features is also more flexible than matching point features.


A number of researchers have published studies of automatic feature extraction and its application to various photogrammetric tasks: Förstner(1986) [18], Hannah(1989) [30], Schenk et al.(1991) [61], Schickler(1992) [62], Schenk and Toth(1993) [60], Ebner and Ohlhof(1994) [16], Ackerman and Tsingas(1994) [1], Haala and Vosselman(1992) [24], Drewniok and Rohr(1996, 1997) [14][15], Zahran(1997) [77] and Schenk(1998) [56]. However, point-based photogrammetry, based on the manual measurement and identification of interest points, has not been established successfully in the autonomous environment of digital photogrammetry; it remains a labor-intensive interpretation, limited by occlusion, ambiguity, and the lack of semantic information needed for robust and suitable automation.

Since point features do not provide explicit geometric information, geometrical knowledge is achieved by perceptual organization [55] [23] [10] [8]. Perceptual organization derives features and structures from imagery without prior knowledge of geometric, spectral, or radiometric properties of features and is a required step for object recognition. Perceptual organization is an intermediate-level process for various vision tasks such as target-to-background discrimination, object recognition, target cueing, motion-based grouping, surface reconstruction, image interpretation, and change detection. Since objects cannot be distinguished from a single gray level pixel, an image must be investigated entirely to obtain perceptual information. Most recent research related to perceptual organization concerns 2D image implementations at the signal, primitive, and structural levels.

Generally, grouping or segmentation has the same meaning as perceptual organization in computer vision. Segmentation in computer vision is typically divided into two approaches, the model-based method (top-down approach) and the data-based method (bottom-up approach), and many researchers have employed edges and regions in segmentation. In edge-based approaches, edges are linked into general forms of linear features without discontinuities, and in region-based approaches iterative region


growing techniques using seed points are preferred for surface fitting. Model-based methods require domain knowledge for each specific application, similar to the human visual system, while data-based methods employ data properties for recognition in a global fashion. The same invariant properties in different positions and orientations are combined into the same regions or the same features in data-based methods. One approach alone, however, cannot guarantee consistent quality, so combined approaches are implemented to minimize segmentation errors.

Berengolts and Lindenbaum [6] classified perceptual organization into three main components: grouping cues, testable feature subsets, and the cue integration method. Potential grouping cues are continuity, similarity, proximity, symmetry, tangents of features, lengths of features, areas of regions, parallelism, and other properties between objects. Testable features are a suitable subset of features under consideration, chosen with regard to computational and time complexity. The computation space can be reduced by specific properties such as continuity and proximity. Cue integration should be constructed to reduce a cost function, considered rigorously with its statistical and mathematical properties.

The symbolic representation using distinct points is difficult since interest points contain no explicit information about the physical reality. While traditional photogrammetric techniques obtain the camera parameters from the correspondence between 2D and 3D points, a more general and reliable process, such as adopting linear features, is required for advanced computer technology. Line photogrammetry is superior in higher level tasks such as object recognition and automation as opposed to point-based photogrammetry; however, selecting correct candidate linear features is


a complicated problem. The development of algorithms from point-based to line-based photogrammetry combines the advantages of both. Selecting suitable features is easier than extracting straight linear features, and the candidate features can be used in higher applications. Another reason for developing curve features is that curve features will be the prior and fundamental features for the next higher features such as surfaces, areas, and 3D volumes, which consist of free-form linear features. Line-based photogrammetry is more suitable for developing robust and accurate techniques for automation. If linear features are employed as control features, they provide advantages over point features for the automation of aerial triangulation. Point-based photogrammetry, based on the manual measurement and identification of conjugate points, is less reliable than line-based photogrammetry, since it is limited by occlusion (visibility), ambiguity (repetitive patterns), and the lack of semantic information in view of robust and suitable automation. The manual identification of corresponding entities in two images is the crucial bottleneck in the automation of photogrammetric tasks. No knowledge of the point-to-point correspondence is required in line-based photogrammetry. In addition, point features do not carry information about the scene, but linear features contain semantic information related to real object features. Thus additional information about linear features can increase the redundancy of the system.

2.2 Literature review

Related works begin with methods for the pose estimation of imagery using linear features, which appear in most man-made objects such as buildings, roads, and parking lots. Over the years, a number of researchers in photogrammetry and computer vision have used line features instead of point features, for example Masry(1981)


[42], Tommaselli and Lugnani(1988) [70], Heikkilä(1991) [32], Kubik(1992) [37], Petsa and Patias(1994) [50], Gülch(1995) [20], Wiman and Axelsson(1996) [76], Chen and Shibasaki(1998) [12], Habib(1999) [26], Heuvel(1999) [34], Tommaselli(1999) [71], Vosselman and Veldhuis(1999) [73], Förstner(2000) [17], Smith and Park(2000) [65], Schenk(2002) [58], Tangelder et al.(2003) [68] and Parian and Gruen(2005) [49]. In 1988 Mulawa and Mikhail[47] proved the feasibility of linear features for close-range photogrammetric applications such as space intersection, space resection, relative orientation, and absolute orientation. This was the first step toward employing linear-feature-based methods in close-range photogrammetric applications. Mulawa[46] developed linear-feature-based methods for different sensors. While straight linear features and conic sections can be represented by unique mathematical expressions, free-form lines in nature cannot be described by algebraic equations. Hence Mikhail and Weerawong[44] employed splines and polylines to express free-form lines as analytical expressions. Tommaselli and Tozzi[72] proposed that the accuracy of the straight line parameters should be sub-pixel, with a representation of four degrees of freedom for an infinite line. Many researchers in photogrammetry have described straight lines as infinite lines using minimal representations to reduce the unknown parameters. The main consideration of straight line expressions is singularities. Habib et al.[29] performed the bundle block adjustment using 3D point sets lying on control linear features instead of traditional control points. EOPs were reconstructed hierarchically employing automatic single photo resection (SPR). SPR was implemented through the relationship between image and object space points, expressed with EOPs as in equation (2.8).


\[
\begin{bmatrix} x_i - x_p \\ y_i - y_p \\ -f \end{bmatrix}
= \lambda\, R^{T}(\omega, \phi, \kappa)
\begin{bmatrix} X_i - X_o \\ Y_i - Y_o \\ Z_i - Z_o \end{bmatrix} \tag{2.8}
\]

where λ is the scale; x_i, y_i the image coordinates; X_i, Y_i, Z_i the object coordinates; x_p, y_p, f the interior orientation parameters; and X_o, Y_o, Z_o, ω, φ, κ the exterior orientation parameters. R^T is the 3D orthogonal rotation matrix containing the three rotation angles ω, φ, and κ.

Linear features in image and object space were compared to calculate EOPs by the modified iterated Hough transform. As a result, EOPs were robustly estimated without knowledge of the relationship between image and object space primitives. In addition, Habib et al.[28] demonstrated that straight lines in object space contributed to performing the bundle block adjustment with self-calibration. They proposed that a large number of GCPs could be avoided in calibration because each straight linear feature yields four collinearity equations from the two end points defining it. Their experiment showed the usefulness of linear features in orientation, not only with frame aerial imagery but also in bundle block adjustment.
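A minimal NumPy sketch of the collinearity relation (2.8) follows; the rotation angle convention (sequential rotations about the X, Y, and Z axes) and the function names are assumptions of this sketch rather than details fixed by the text.

    import numpy as np

    def rotation_matrix(omega, phi, kappa):
        """3D orthogonal rotation matrix built as R = R_z @ R_y @ R_x,
        one common photogrammetric convention."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def collinearity(X, eop, f, x_p=0.0, y_p=0.0):
        """Project object point X = (X_i, Y_i, Z_i) to image coordinates
        per (2.8): rotate the reduced object vector into the camera frame
        and eliminate the scale lambda using the third component."""
        X0, Y0, Z0, omega, phi, kappa = eop
        d = np.asarray(X, dtype=float) - np.array([X0, Y0, Z0])
        u, v, w = rotation_matrix(omega, phi, kappa).T @ d
        return x_p - f * u / w, y_p - f * v / w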


Habib et al.[25] summarized linear features extracted from mobile mapping systems, GIS databases, and maps for various photogrammetric applications such as single photo resection, triangulation, digital camera calibration, image matching, 3D reconstruction, image-to-image registration, and surface-to-surface registration. Matched linear feature primitives are utilized in space intersection for the reconstruction of object space features, and linear features in the object space are used as control features in triangulation and digital camera calibration. Since linear feature extraction can reach sub-pixel accuracy across the direction of an edge, and since linear features can be extracted abundantly from man-made structures and mobile mapping systems in practice, they focused on implementations with straight linear features under geometric constraints. Since many man-made environments, including buildings, often have straight edges and planar faces, it is advantageous to employ line photogrammetry instead of point photogrammetry when mapping polyhedral model objects.

Mikhail[43] and Habib et al.[27] accomplished the geometrical modeling and the perspective transformation of linear features within a triangulation process. Linear features were used to recover relative orientation parameters. Habib et al. represented a free-form line in object space by a sequence of 3D points along the object space line.

Lee and Bethel[38] showed that employing both points and linear features was more accurate than using only points in the ortho-rectification of airborne hyperspectral imagery. EOPs were recovered accurately, and serious distortions were removed by the contribution of linear features.

Schenk[59] extended the concept of aerial triangulation from point features to linear features. The line equation with six dependent parameters replaced the point-based collinearity equation:

\[
X = X_A + t \cdot a, \qquad Y = Y_A + t \cdot b, \qquad Z = Z_A + t \cdot c \tag{2.9}
\]

where t is a real variable, (X_A, Y_A, Z_A) the start point, and (a, b, c) the direction vector. The traditional point-based collinearity equation was extended to line features:


\[
x_p = -f\,\frac{(X_A + t \cdot a - X_C)\,r_{11} + (Y_A + t \cdot b - Y_C)\,r_{12} + (Z_A + t \cdot c - Z_C)\,r_{13}}{(X_A + t \cdot a - X_C)\,r_{31} + (Y_A + t \cdot b - Y_C)\,r_{32} + (Z_A + t \cdot c - Z_C)\,r_{33}}
\]
\[
y_p = -f\,\frac{(X_A + t \cdot a - X_C)\,r_{21} + (Y_A + t \cdot b - Y_C)\,r_{22} + (Z_A + t \cdot c - Z_C)\,r_{23}}{(X_A + t \cdot a - X_C)\,r_{31} + (Y_A + t \cdot b - Y_C)\,r_{32} + (Z_A + t \cdot c - Z_C)\,r_{33}} \tag{2.10}
\]

with x_p, y_p the photo coordinates, f the focal length, X_C, Y_C, Z_C the camera perspective center, and r_ij the elements of the 3D orthogonal rotation matrix. The extended collinearity equation with six parameters was then rewritten with the line expression of four parameters (φ, θ, x_o, y_o), since a 3D straight line has only four independent parameters. Two constraints are required to reduce the common form of the 3D straight line equations using six parameters, determined by two vectors:

\[
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} =
\begin{bmatrix}
\cos\theta\cos\phi \cdot x_o - \sin\phi \cdot y_o + \sin\theta\cos\phi \cdot z \\
\cos\theta\sin\phi \cdot x_o + \cos\phi \cdot y_o + \sin\theta\sin\phi \cdot z \\
-\sin\theta \cdot x_o + \cos\theta \cdot z
\end{bmatrix} \tag{2.11}
\]

where z is a real variable. The advantage of the 3D straight line using four independent parameters is that it reduces the computational and time complexity of adjustment processes such as a bundle block adjustment. The collinearity equation as a straight line function of four parameters was developed as

\[
x_p = -f\,\frac{(X - X_C)\,r_{11} + (Y - Y_C)\,r_{12} + (Z - Z_C)\,r_{13}}{(X - X_C)\,r_{31} + (Y - Y_C)\,r_{32} + (Z - Z_C)\,r_{33}}, \qquad
y_p = -f\,\frac{(X - X_C)\,r_{21} + (Y - Y_C)\,r_{22} + (Z - Z_C)\,r_{23}}{(X - X_C)\,r_{31} + (Y - Y_C)\,r_{32} + (Z - Z_C)\,r_{33}} \tag{2.12}
\]

where X, Y, and Z are defined in (2.11). The solution of the bundle block adjustment with linear features was implemented so that line-based aerial triangulation can provide a more robust and autonomous environment than the traditional point-based bundle block adjustment.
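To make the extended collinearity equation (2.10) concrete, the sketch below projects a point of the parametric object line into the image. The rotation matrix and perspective center are taken as given, and the function name is hypothetical; this is an illustration under those assumptions, not the dissertation's implementation.

    import numpy as np

    def extended_collinearity(t, line, R, C, f):
        """Image coordinates of the line point (X_A, Y_A, Z_A) + t*(a, b, c)
        under equation (2.10). R holds the elements r_ij (rows applied as
        in the numerators of 2.10) and C is the perspective center."""
        XA = np.asarray(line[:3], dtype=float)   # start point (X_A, Y_A, Z_A)
        d = np.asarray(line[3:], dtype=float)    # direction vector (a, b, c)
        u, v, w = np.asarray(R, dtype=float) @ (XA + t * d - np.asarray(C, dtype=float))
        return -f * u / w, -f * v / w

    # Sampling t on a grid traces the image curve of the object line; in the
    # adjustment, t enters as an additional unknown per observed image point.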


Another mathematical model of the perspective relationship between image and object space features is the coplanarity approach: the projection plane defined by the perspective center in the image space and the plane including the straight line in the object space are identical. The extended collinearity model using linear features was proposed in comparison with this coplanarity method. Zielinski[80] described another straight line expression with independent parameters.

Zalmanson and Schenk[79] extended their algorithm to relative orientation using line parameters. Epipolar lines were employed to adopt linear features in relative orientation with the extended collinearity equations (2.10). The Gauss-Markov model was partitioned into orientation parameters and line parameters as in (2.13):

\[
y = \begin{bmatrix} A_R & A_L \end{bmatrix} \begin{bmatrix} \xi_R \\ \xi_L \end{bmatrix} + e \tag{2.13}
\]

where ξ_R are the relative orientation parameters, ξ_L the line parameters, A_R the partial derivatives of the extended collinearity equations with respect to the relative orientation parameters, A_L the partial derivatives of the extended collinearity equations with respect to the line parameters, and e the error vector. They demonstrated that lines parallel to the epipolar lines contribute fully to relative orientation, while lines perpendicular to the epipolar lines support only surface reconstruction. In addition, they introduced a regularization scheme with soft constraints for the general cases.

Zalmanson[78] updated EOPs using the correspondence between a parametric control free-form line in object space and the projected 2D free-form line in image space. A hierarchical approach, a modified iterative closest point (ICP) method, was developed to estimate the curve parameters. Besl and McKay[7] employed the ICP algorithm to solve the matching problem for point sets, free-form curves,


surfaces, and terrain models in 2D and 3D space. The ICP algorithm is executed without prior knowledge of the correspondence between points. The ICP method influenced Zalmanson's dissertation in the development of the recovery of EOPs using 3D free-form lines in photogrammetry. A Euclidean 3D transformation is employed in the search for the closest entity on the geometric data set. Rabbani et al.[52] utilized the ICP method in the registration of Lidar point clouds, divided into the four categories of spheres, planes, cylinders, and tori, with direct and indirect methods.

In Zalmanson's approach, the projection ray, whose parametric equation in the parameter l is given below, is related to the free-form line:

\[
\Xi(l) = \begin{bmatrix} X(l) \\ Y(l) \\ Z(l) \end{bmatrix}
= \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix}
+ \begin{bmatrix} \rho_1 \\ \rho_2 \\ \rho_3 \end{bmatrix} l \tag{2.14}
\]

where X_0, Y_0, Z_0, ω, ϕ, κ are the EOPs and ρ_1, ρ_2, ρ_3 the direction vector. The parametric curve Γ(t) = [X(t) Y(t) Z(t)]^T was obtained by minimizing the Euclidean distance between the two parametric curves:

\[
\Phi(t, l) \equiv \lVert \Gamma(t) - \Xi(l) \rVert^2 = (X(t) - X_0 - \rho_1 l)^2 + (Y(t) - Y_0 - \rho_2 l)^2 + (Z(t) - Z_0 - \rho_3 l)^2 \tag{2.15}
\]

Φ(t, l) attains its minimum at ∂Φ/∂l = ∂Φ/∂t = 0, with the two independent variables l and t, as in (2.16):

\[
\partial\Phi/\partial l = -2\rho_1\,(X(t) - X_0 - \rho_1 l) - 2\rho_2\,(Y(t) - Y_0 - \rho_2 l) - 2\rho_3\,(Z(t) - Z_0 - \rho_3 l) = 0
\]
\[
\partial\Phi/\partial t = 2X'(t)\,(X(t) - X_0 - \rho_1 l) + 2Y'(t)\,(Y(t) - Y_0 - \rho_2 l) + 2Z'(t)\,(Z(t) - Z_0 - \rho_3 l) = 0 \tag{2.16}
\]
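A small numerical sketch of the minimization (2.15)-(2.16) follows. For a fixed t, the condition ∂Φ/∂l = 0 has the closed-form solution l = ρ·(Γ(t) − X_0)/‖ρ‖²; scanning a discrete sampling of the curve then approximates the joint minimum over t. The sampling strategy and function name are choices of this sketch, not the cited method's exact implementation.

    import numpy as np

    def closest_curve_point_to_ray(gamma, X0, rho):
        """gamma: (N, 3) samples of the curve Gamma(t); X0, rho: origin and
        direction of the ray Xi(l) from (2.14). Returns the index of the
        best curve sample, the foot-point parameter l, and the distance."""
        rho = np.asarray(rho, dtype=float)
        diff = np.asarray(gamma, dtype=float) - np.asarray(X0, dtype=float)
        l = diff @ rho / (rho @ rho)        # dPhi/dl = 0, solved for l per sample
        res = diff - np.outer(l, rho)       # Gamma(t) - Xi(l) at the foot point
        d2 = (res * res).sum(axis=1)        # Phi(t, l) per sample
        i = int(np.argmin(d2))
        return i, float(l[i]), float(np.sqrt(d2[i]))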


Akav et al.[2] employed planar free-form curves for aerial triangulation with the ICP method. Since the effect of the Z parameter compared with X and Y is large in the normal plane equation aX + bY + cZ = 1, a different plane representation was developed to avoid numerical problems:

\[
\begin{bmatrix} n_1 \\ n_2 \\ n_3 \end{bmatrix} =
\begin{bmatrix} \sin\theta\cos\varphi \\ \sin\theta\sin\varphi \\ \cos\theta \end{bmatrix}, \qquad
n_1 (X - X_o) + n_2 (Y - Y_o) + n_3 (Z - Z_o) = 0, \qquad
n_1 X + n_2 Y + n_3 Z = D \tag{2.17}
\]

with θ and ϕ the two angles defining the unit vector n of the plane normal and D the distance between the plane and the origin. Five relative orientation parameters and three planar parameters were obtained by using the homography mapping, which searches for the conjugate point in one image corresponding to a point in the other image.

Lin[40] proposed a method for the autonomous recovery of exterior orientation parameters by extending the traditional point-based Modified Iterated Hough Transform (MIHT) to a 3D free-form linear feature based MIHT. Straight polylines were generalized as matching primitives in the pose estimation, since the mathematical representation of straight lines is much clearer than the algebraic expression of conic sections and splines.

Gruen and Akca[21] matched 3D curves whose forms were defined by a cubic spline using least squares matching. Subpixel positions were localized by least squares matching, and the quality of the localization was decided by the geometry of the image patches. Two free-form lines were defined as in (2.18):

\[
f(u) = [x(u)\;\; y(u)\;\; z(u)]^T = a_0 + a_1 u + a_2 u^2 + a_3 u^3
\]
\[
g(u) = [x'(u)\;\; y'(u)\;\; z'(u)]^T = b_0 + b_1 u + b_2 u^2 + b_3 u^3 \tag{2.18}
\]

where u ∈ [0, 1]; a_0, a_1, a_2, a_3, b_0, b_1, b_2, b_3 are coefficient vectors; and f(u), g(u) ∈ R³. A Taylor expansion was employed to adopt the Gauss-Markov model, as in (2.19):


\[
f(u) - e(u) = g(u)
\]
\[
f(u) - e(u) = g^0(u) + \frac{\partial g^0(u)}{\partial u}\,du \tag{2.19}
\]
\[
f(u) - e(u) = g^0(u) + \frac{\partial g^0(u)}{\partial u}\frac{\partial u}{\partial x}\,dx + \frac{\partial g^0(u)}{\partial u}\frac{\partial u}{\partial y}\,dy + \frac{\partial g^0(u)}{\partial u}\frac{\partial u}{\partial z}\,dz
\]

2.3 Parametric representations of curves

While progress in the automatic detection, segmentation, and recognition of 3D lines and objects consisting of free-form lines has become sophisticated through significant advances in computer technology, considerable techniques for digital photogrammetry, such as developments in segmentation and classification, have emerged during the last few decades in geospatial image processing software, digital orthophoto generation software, and softcopy workstations. Digital photogrammetry is closely related to the fields of computer graphics and computer vision. Free-form lines and objects are an important element of many applications in computer graphics and computer vision. A number of researchers in computer vision and artificial intelligence have used suitable constraints or assumptions to reduce the solution space in segmentation and have extracted constrained features such as contours [64], convex outlines [35], rectangles [54], and ellipses [53].

Free-form lines are one of three kinds of linear features; the other two are straight linear features and linear features described by unique mathematical equations. A number of researchers have preferred straight lines for photogrammetric applications, since straight lines have no singularity problem and are easily detected in man-made environments. A list of curves follows.


• Piecewise curves: Bezier curve, Splines, Ogee, Loess curve, Lowess, Reuleaux triangle

• Fractal curves: Blancmange curve, Dragon curve, De Rham curve, Koch curve, Lévy C curve, Space-filling curve, Sierpinski curve

• Transcendental curves: Bowditch curve, Brachistochrone, Butterfly curve, Catenary, Clélies, Cochleoid, Curve of pursuit, Cycloid, Horopter, Isochrone, Lamé curve, Lissajous curve, Pedal curve, Roulette curve, Rhumb line, Sail curve, Spirals, Superellipse, Syntractrix, Tractrix, Trochoid, Viviani's curve

• Algebraic curves: Cubic plane curve, Bicorn, Bean curve, Astroid, Ampersand curve, Atriphtaloid, Sextic plane curve, Hippopede, Deltoid curve, Conchoid of de Sluze, Epicycloid, Bullet-nose curve, Cardioid, Bow curve, Cissoid of Diocles, Conic sections, Crooked egg curve, Cruciform curve, Devil's curve, Quintic plane curve, Epitrochoid, Hessian curve, Quartic plane curve, Kappa curve, Lemniscate, Kampyle of Eudoxus, Limacon, Line, Folium of Descartes, Nephroid, Quadrifolium, Quintic of l'Hospital, Rose curve, Strophoid, Rational normal curve, Tricuspid curve, Trident curve, Serpentine curve, Trifolium, Twisted cubic, Trisectrix of Maclaurin, Bicuspid curve, Cassini oval, Witch of Agnesi, Cassinoide, Cubic curve, Elliptic curve, Watt's curve, Butterfly curve, Fermat curve, Klein quartic, Elkies trinomial curves, Mandelbrot curve, Trott curve, Erdös lemniscate, Classical modular curve, Hyperelliptic curve

(http://en.wikipedia.org/wiki/List_of_curves)

Free-form linear features are obtained easily from mobile mapping systems, object recognition systems in computer vision, and GIS databases such as the digital map.


Tankovich[69] used linear features represented by straight lines for resection and intersection. 3D free-form lines in the object space can be represented by mathematical functions, and parametric representations of curves can be categorized into four groups: piecewise constructions, fractal curves, transcendental curves, and algebraic curves. Another approach to free-form line expression is using a point sequence along the linear features to preserve their original information. Parametric representations reduce the amount of information needed for the expression of free-form lines, at the cost of generalization. A more accurate expression needs more parameters in the mathematical equation. Since any parametric representation of free-form lines has singularities, two more constraints or a generalization are required for free-form line applications in photogrammetry. While most free-form lines are finite segments in reality, most representations of curves are infinite.

Piecewise curves are defined segment by segment and are scale independent. The cubic polynomial with continuity C² is the standard piecewise curve. For smoothed display, polynomial segments of small degree can be used for curve representation. The end point of one segment is the start point of the next segment, with the value of t on the interval [0, 1].

Fractal curves are generated by random iterated functions and, although continuous, are nowhere differentiable. Fractal curves are transformed hierarchically on smaller and smaller scales.

Transcendental curves are not algebraic: an algebraic curve has the form f(x, y) = 0 with f a polynomial in x and y, while a transcendental curve has the form f(x, y) = 0 where f is not a polynomial in x and y.


Algebraic curves have an implicit form given by a polynomial equation in two variables. The graphs of polynomial equations are algebraic curves; conic sections are algebraic curves of degree 2, and algebraic curves of higher degree are called higher plane curves.

2.3.1 Spline

The choice of the right feature model is important in developing a feature-based approach, since an ambiguous representation of features leads to an unstable adjustment. A spline is a piecewise polynomial function of degree n used in vector graphics. Splines are widely used for data fitting in computer science because of the simplicity of the curve reconstruction. Complex figures are approximated well through curve fitting, and a spline has strengths in accuracy evaluation, data interpolation, and curve smoothing. One important property of a spline is that it can easily be morphed. A spline represents a 2D or 3D continuous line with a sequence of pixels and segmentation. The relationship between pixels and lines is applied to a bundle block adjustment or a functional representation. A spline of degree 0 is the simplest spline; a linear spline has degree 1, a quadratic spline degree 2, and the common natural cubic spline degree 3 with continuity C². The geometrical meaning of continuity C² is that the first and second derivatives are proportional at joint points, and the parametric meaning of continuity C² is that the first and second derivatives are equal at connected points.

A spline is defined in piecewise parametric form:

\[
F : [a, b] \rightarrow R, \qquad a = x_0 < x_1 < \cdots < x_{n-1} = b
\]
\[
F(x) = G_0(x), \quad x \in [x_0, x_1]
\]
\[
F(x) = G_1(x), \quad x \in [x_1, x_2] \tag{2.20}
\]
\[
\cdots
\]
\[
F(x) = G_{n-2}(x), \quad x \in [x_{n-2}, x_{n-1}]
\]

where the x_i form the knot vector of the spline.


Figure 2.1: Spline continuity: (a) 0th order continuity; (b) 1st order continuity; (c) 2nd order continuity.

Natural cubic spline

The number of break points, which determine the set of piecewise cubic functions, varies depending on the spline parameters. The natural cubic degree guarantees second-order continuity, which means the first and second order derivatives of two consecutive natural cubic splines are continuous at the break point. The intervals for a natural cubic spline do not need to be the same as the distances between every two consecutive data points. The best intervals are chosen by a least squares method. In general, the total number of break points is less than the number of original input points. The algorithm for a natural cubic spline is as follows.

    Generate the break point (control point) set for the spline from the original input data


    Calculate the maximum distance between the approximated spline and the original input data
    While (the maximum distance > the threshold of the maximum distance):
        Add a break point to the break point set at the location of the maximum distance
        Recompute the maximum distance and compare it with the threshold

The smaller the threshold, the more break points are inserted and the more accurately the spline follows the original input data.

N piecewise cubic polynomial functions between two adjacent break points are defined from the N + 1 break points. There is a separate cubic polynomial for each segment, with its own coefficients:

\[
X_0(t) = a_{00} + a_{01} t + a_{02} t^2 + a_{03} t^3, \quad t \in [0, 1]
\]
\[
X_1(t) = a_{10} + a_{11} t + a_{12} t^2 + a_{13} t^3, \quad t \in [0, 1]
\]
\[
\cdots \tag{2.21}
\]
\[
X_i(t) = a_{i0} + a_{i1} t + a_{i2} t^2 + a_{i3} t^3, \quad t \in [0, 1]
\]
\[
Y_i(t) = b_{i0} + b_{i1} t + b_{i2} t^2 + b_{i3} t^3, \quad t \in [0, 1]
\]
\[
Z_i(t) = c_{i0} + c_{i1} t + c_{i2} t^2 + c_{i3} t^3, \quad t \in [0, 1]
\]


Cardinal splineA Cardinal spline is defined by a <strong>cubic</strong> Hermite spline, a third-degree spline.Hermite form defined by two control points and two control tangents is implied byeach polynomial of a Hermite form. The basic scheme of solving the system of a <strong>cubic</strong>Hermit spline is continuity C 1 that the first derivatives are equal at break points. Adisadvantage of a <strong>cubic</strong> Hermite spline is that tangents are always required while thisinformation is not available for all curves.A Cardinal spline consists of three points before and after a control point and atension parameter. A tangent m i of a cardinal spline is as following <strong>with</strong> given n + 1points, p 0 , · · · , p n .m i = 1 2 (1 − c)(p i+1 − p i−1 ) (2.22)where c is a tension parameter which contributes the length of the tangent. A tensionparameter is between 0 and 1.If tension is 0, a Cardinal spline is referred to asa Catmull-Rom spline[11]. A Catmull-Rom spline is frequently used to interpolatesmoothly between point data in mathematics and between key-frames in computergraphics. A cardinal spline represents the curve from the second point to the lastsecond point of the input point set. The input points are the control points thatdefine a Cardinal spline.A Cardinal spline produces a C 1 continuous curve, nota C 2 continuous curve, and the second derivatives are linearly interpolated <strong>with</strong>ineach segment. The advantage is no need for tangents but the disadvantage is theimprecision of the tangent approximation.29


B-splineThe equation of B-spline <strong>with</strong> m + 1 knot vector T , the degree p ≡ m − n − 1which must be satisfied from equality property of B-spline since a basis function isrequired for each control points and n + 1 control points P 0 , P 1 , · · · , P n is as [75]C(t) =n∑P i N i,p (t) (2.23)i=o<strong>with</strong>N i,0 (t) =N i,p (t) ={1 if ti ≤ t ≤ t i+1 and t i < t i+10 otherwiset − t iN i,p−1 (t) +t i+p+1 − tN i+1,p−1 (t)t i+p − t i t i+p+1 − t i+1Each point is defined by control points affected by segments and B-<strong>splines</strong> are apiecewise curve of degree p for each segment. Advantages of B-<strong>splines</strong> are to representcomplex shapes <strong>with</strong> lower degree polynomials while B-<strong>splines</strong> are drawn closer totheir original control polyline as the degree decreases. B-<strong>splines</strong> have an inside convexhull property that B-<strong>splines</strong> are contained in the convex hull presented by its polyline.With an inside convex hull property, the shape of B-<strong>splines</strong> can be controlled indetail. Although B-<strong>splines</strong> need more information and more complex algorithm thanother <strong>splines</strong>, a B-spline provides many important properties than other <strong>splines</strong>, suchas degree independence for control points.B-<strong>splines</strong> change the degree of curves<strong>with</strong>out changing control points, and control points can be changed <strong>with</strong>out affectingthe shape of whole B-<strong>splines</strong>. B-<strong>splines</strong>, however, cannot represent simple constrainedcurves such as straight lines and conic sections.30


2.3.2 Fourier transform

Fourier series and transforms have been widely employed to represent 2D and 3D curves as well as 3D surfaces. A Fourier series is useful for periodic curves, and the Fourier transform is appropriate for non-periodic free-form lines. Parametric representations of curves have been established in many applications. The Fourier transform is the generalization of the complex Fourier series with infinite limits. Let w be the frequency-domain variable; then the time-domain signal x(t) is transformed as follows.

X(w) = (1/√(2π)) ∫_{−∞}^{∞} x(t) e^{−iwt} dt   (2.24)

The discrete Fourier transform (DFT) is commonly employed to process sampled signals and to solve partial differential equations. The fast Fourier transform (FFT), which reduces the computation complexity for N points from 2N^2 to 2N log_2 N, improves the efficiency of the DFT. The DFT transforms N complex numbers x_0, · · ·, x_{N−1} into a sequence of N complex numbers X_0, · · ·, X_{N−1}.

X_k = Σ_{n=0}^{N−1} x_n e^{−(2πi/N)kn},  k = 0, · · ·, N − 1   (2.25)

Shape information can be obtained from the first few coefficients of the Fourier transform since the low frequencies carry most of the shape information. In addition, the Fourier transform can be used for the smoothing of a 3D free-form line in the object space using the convolution theorem together with the FFT algorithm. Boundary conditions affect the application of Fourier-related transforms, especially when solving partial differential equations. In some cases, discontinuities occur due to the number of Fourier terms used. The fewer the terms, the smoother the function.
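Keeping only the first few DFT coefficients, as described above, smooths a sampled curve; a minimal numpy sketch (the matching negative frequencies must also be kept, which the text does not spell out).

```python
import numpy as np

def fourier_smooth(curve, keep=8):
    """Reconstruct a closed sampled curve from its `keep` lowest frequencies."""
    coeffs = np.fft.fft(curve, axis=0)
    mask = np.zeros(len(curve), dtype=bool)
    mask[:keep] = True
    mask[-(keep - 1):] = True     # symmetric negative frequencies
    coeffs[~mask] = 0.0
    return np.real(np.fft.ifft(coeffs, axis=0))
```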


The Fourier transform has strength in signal and data compression, filtering and smoothing. However, the Fourier transform has little strength in representing a curve accurately: the additional terms needed to represent a free-form line exactly increase the time and the computation complexity.

2.3.3 Implicit polynomials

Implicit polynomials, obtained by computing the least squares problem for the coefficients of the polynomial, have the mathematical form:

X(t) = a_0 + a_1t + a_2t^2 + · · · + a_nt^n
Y(t) = b_0 + b_1t + b_2t^2 + · · · + b_nt^n   (2.26)
Z(t) = c_0 + c_1t + c_2t^2 + · · · + c_nt^n

where a_0, a_1, · · ·, a_n, b_0, · · ·, b_n, c_0, · · ·, c_n are real coefficients and n is the degree of the polynomial. The coefficients a_0, b_0, c_0 characterize a point in the object space R^3 and the remaining parameters represent the direction. The number of parameters required to represent a polynomial of degree n is 3(n + 1). An implicit polynomial of degree 3 is analogous to a natural cubic spline.

A free-form line is formulated in a simple, geometrically independent form by a polynomial fit. However, the quality of the results is hard to control at high degrees. Predicting the outcome of fitting with implicit polynomials is difficult since the coefficients are not geometrically correlated. The best solution is optimized by an iterative trial-and-error method. In the worst case, the implementation seems to never terminate under a tight threshold. The number of trials needed to fit successful coefficients depends on the restriction of the search space. Implicit polynomials are determined through constraints and various minimization criteria.
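The least squares computation of the coefficients in (2.26) is an ordinary linear problem once the parameter values t are fixed; a short sketch:

```python
import numpy as np

def fit_parametric_polynomial(t, xyz, degree=3):
    """Least squares coefficients of (2.26): one polynomial per coordinate."""
    A = np.vander(t, degree + 1, increasing=True)   # [1, t, t^2, ..., t^n]
    coeffs, *_ = np.linalg.lstsq(A, xyz, rcond=None)
    return coeffs   # shape (degree+1, 3): columns hold a, b, c coefficients
```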


For other polyline expressions, Ayache and Faugeras [4] described a line as the intersection of two planes, one parallel to the X-axis and the other parallel to the Y-axis, with two constraints to extend this concept to general lines. The two constraints reduce the line parameters from a six- to a four-dimensional vector.

Since the standard collinearity model can be extended to adopt the parametric representation of a spline, 3D natural cubic splines are employed in this research. The collinearity equations play an important role in photogrammetry since each control point in the object space produces two collinearity equations for every photograph in which the point appears. A 3D natural cubic spline allows the utilization of the collinearity model for expressing orientation parameters and curve parameters.


CHAPTER 3

BUNDLE BLOCK ADJUSTMENT WITH 3D NATURAL CUBIC SPLINES

3.1 3D natural cubic splines

In this chapter, the correspondence between a 3D curve in the object space coordinate system and its projected 2D curve in the image coordinate system is implemented using a natural cubic spline, which accommodates curve features because of its boundary conditions, the zero second derivatives at the end points. A natural cubic spline is composed of a sequence of cubic polynomial segments as in figure 3.1, with x_0, x_1, · · ·, x_n the n + 1 control points and X_0, X_1, · · ·, X_{n−1} the ground coordinates of the n segments.

Figure 3.1: Natural cubic spline


Each segment of a natural cubic spline is expressed with parametric representations as (3.1).

X_0(t) = a_{00} + a_{01}t + a_{02}t^2 + a_{03}t^3,  t ∈ [0, 1]
X_1(t) = a_{10} + a_{11}t + a_{12}t^2 + a_{13}t^3,  t ∈ [0, 1]
· · ·                                               (3.1)
X_i(t) = a_{i0} + a_{i1}t + a_{i2}t^2 + a_{i3}t^3,  t ∈ [0, 1]
Y_i(t) = b_{i0} + b_{i1}t + b_{i2}t^2 + b_{i3}t^3,  t ∈ [0, 1]
Z_i(t) = c_{i0} + c_{i1}t + c_{i2}t^2 + c_{i3}t^3,  t ∈ [0, 1]

Bartels et al. [5] presented the solution of a natural cubic spline with given control points. Since four coefficients are required for each interval, a total of 4n parameters defines the spline X_i.

X_i(t) = a_{i0} + a_{i1}t + a_{i2}t^2 + a_{i3}t^3
X^{(1)}_i(t) = a_{i1} + 2a_{i2}t + 3a_{i3}t^2   (3.2)
X^{(2)}_i(t) = 2a_{i2} + 6a_{i3}t

The first and second order derivatives are equal at the joints of the spline, and the relationship between the control points and the segments leads to (3.3).

X_{i−1}(1) = x_i
X_i(0) = x_i
X^{(1)}_{i−1}(1) = X^{(1)}_i(0)   (3.3)
X^{(2)}_{i−1}(1) = X^{(2)}_i(0)
X_0(0) = x_0
X_{n−1}(1) = x_n


Substituting (3.2) into the second-derivative continuity condition of (3.3) leads to

2a_{(i−1)2} + 6a_{(i−1)3} = 2a_{i2}   (3.4)

and substituting (3.2) into (3.3) leads to

a_{i0} = x_i
a_{i1} = D_i
a_{i2} = 3(x_{i+1} − x_i) − 2D_i − D_{i+1}   (3.5)
a_{i3} = 2(x_i − x_{i+1}) + D_i + D_{i+1}

where D_i is the derivative at x_i. Substituting (3.5) into (3.4) leads to

D_{i−1} + 4D_i + D_{i+1} = 3(x_{i+1} − x_{i−1})   (3.6)

with 1 ≤ i ≤ n − 1.

Since the total number of unknowns D_i is n + 1 and the total number of equations is n − 1, two more equations are required to solve the underdetermined system. In a natural cubic spline, two boundary conditions are given to complete the system: the second derivatives at the end points are set to zero as in (3.7).

X^{(2)}_0(0) = 0
X^{(2)}_{n−1}(1) = 0   (3.7)

Otherwise, the first and the last segments can be set to 2nd order polynomials to remove two unknowns, in which case the two boundary conditions are not required.

Substituting (3.5) into (3.7) yields the following.

2D_0 + D_1 = 3(x_1 − x_0)
D_{n−1} + 2D_n = 3(x_n − x_{n−1})   (3.8)


\begin{bmatrix}
2 & 1 &   &        &   &   \\
1 & 4 & 1 &        &   &   \\
  & 1 & 4 & 1      &   &   \\
  &   &   & \ddots &   &   \\
  &   &   & 1      & 4 & 1 \\
  &   &   &        & 1 & 2
\end{bmatrix}
\begin{bmatrix} D_0 \\ D_1 \\ D_2 \\ \vdots \\ D_{n-1} \\ D_n \end{bmatrix}
=
\begin{bmatrix} 3(x_1 - x_0) \\ 3(x_2 - x_0) \\ 3(x_3 - x_1) \\ \vdots \\ 3(x_n - x_{n-2}) \\ 3(x_n - x_{n-1}) \end{bmatrix}   (3.9)

D_i is obtained in the case of closed curves as

\begin{bmatrix}
4 & 1 &   &        &   & 1 \\
1 & 4 & 1 &        &   &   \\
  & 1 & 4 & 1      &   &   \\
  &   &   & \ddots &   &   \\
  &   &   & 1      & 4 & 1 \\
1 &   &   &        & 1 & 4
\end{bmatrix}
\begin{bmatrix} D_0 \\ D_1 \\ D_2 \\ \vdots \\ D_{n-1} \\ D_n \end{bmatrix}
=
\begin{bmatrix} 3(x_1 - x_0) \\ 3(x_2 - x_0) \\ 3(x_3 - x_1) \\ \vdots \\ 3(x_n - x_{n-2}) \\ 3(x_n - x_{n-1}) \end{bmatrix}   (3.10)

The normalized spline system is

\begin{bmatrix}
b & 1 &   &        &   &   \\
1 & 4 & 1 &        &   &   \\
  & 1 & 4 & 1      &   &   \\
  &   &   & \ddots &   &   \\
  &   &   & 1      & 4 & 1 \\
  &   &   &        & 1 & b
\end{bmatrix}
\begin{bmatrix} D_0 \\ D_1 \\ D_2 \\ \vdots \\ D_{n-1} \\ D_n \end{bmatrix}
= k
\begin{bmatrix} x_1 - x_0 \\ x_2 - x_0 \\ x_3 - x_1 \\ \vdots \\ x_n - x_{n-2} \\ x_n - x_{n-1} \end{bmatrix}   (3.11)

where the value of b depends on the boundary conditions of the spline and k depends on the type of spline. Normally the values of b and k for a natural cubic spline are 2 and 3 in the unclosed curve case and 4 and 3 in the closed curve case, respectively. The values of b and k for B-splines are 5 and 6, respectively [13].

In the case of two parameters, the corresponding relationship between the two parameters can be calculated without an intermediate parameter t. The n + 1 point pairs (x_0, y_0), (x_1, y_1), · · ·, (x_n, y_n) have 4n unknown spline parameters in n segments: 2n equations come from the 0th continuity condition, n − 1 equations from the 1st continuity condition, n − 1 equations from the 2nd continuity condition, and 2 equations from the boundary conditions that the second derivatives at the end points are set to zero.


From the 0th continuity condition,

y_0 = a_{00} + a_{01}x_0 + a_{02}x_0^2 + a_{03}x_0^3
y_1 = a_{00} + a_{01}x_1 + a_{02}x_1^2 + a_{03}x_1^3
y_1 = a_{10} + a_{11}x_1 + a_{12}x_1^2 + a_{13}x_1^3
y_2 = a_{10} + a_{11}x_2 + a_{12}x_2^2 + a_{13}x_2^3
· · ·
y_{n−1} = a_{(n−1)0} + a_{(n−1)1}x_{n−1} + a_{(n−1)2}x_{n−1}^2 + a_{(n−1)3}x_{n−1}^3
y_n = a_{(n−1)0} + a_{(n−1)1}x_n + a_{(n−1)2}x_n^2 + a_{(n−1)3}x_n^3

From the 1st continuity condition,

a_{01} + 2a_{02}x_1 + 3a_{03}x_1^2 = a_{11} + 2a_{12}x_1 + 3a_{13}x_1^2
a_{11} + 2a_{12}x_2 + 3a_{13}x_2^2 = a_{21} + 2a_{22}x_2 + 3a_{23}x_2^2
· · ·
a_{(n−2)1} + 2a_{(n−2)2}x_{n−1} + 3a_{(n−2)3}x_{n−1}^2 = a_{(n−1)1} + 2a_{(n−1)2}x_{n−1} + 3a_{(n−1)3}x_{n−1}^2

From the 2nd continuity condition,

2a_{02} + 6a_{03}x_1 = 2a_{12} + 6a_{13}x_1
2a_{12} + 6a_{13}x_2 = 2a_{22} + 6a_{23}x_2
· · ·
2a_{(n−2)2} + 6a_{(n−2)3}x_{n−1} = 2a_{(n−1)2} + 6a_{(n−1)3}x_{n−1}

From the boundary conditions,

2a_{02} + 6a_{03}x_0 = 0
2a_{(n−1)2} + 6a_{(n−1)3}x_n = 0
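In practice, the banded system (3.9) is solved for the joint derivatives D_i and the coefficients then follow from (3.5). A dense solve is shown for clarity; a tridiagonal (Thomas) solver would be used in production, and the function name is illustrative.

```python
import numpy as np

def natural_spline_derivatives(x):
    """Solve the open-curve system (3.9) for the derivatives D_0..D_n."""
    x = np.asarray(x, dtype=float)
    n = len(x) - 1
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0:2] = [2, 1]; rhs[0] = 3 * (x[1] - x[0])
    for i in range(1, n):
        A[i, i - 1:i + 2] = [1, 4, 1]
        rhs[i] = 3 * (x[i + 1] - x[i - 1])
    A[n, n - 1:n + 1] = [1, 2]; rhs[n] = 3 * (x[n] - x[n - 1])
    return np.linalg.solve(A, rhs)
```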


3.2 Extended collinearity equation model for splines

The collinearity equations are the commonly used condition equations for relative orientation, the space intersection, which calculates a point location in the object space using the intersection of projection rays from two or more images, and the space resection, which determines the coordinates of a point on an image and the EOPs with respect to the object space coordinate system. The space intersection and the space resection are the fundamental operations in photogrammetry for further processes such as triangulation. The basic concept of the collinearity equation is that a point on the image, the perspective center and the corresponding point in the object space lie on a straight line. The relationship between the image coordinate system and the object coordinate system is expressed by three position parameters and three orientation parameters. The collinearity equations play an important role in photogrammetry since each control point in the object space produces two collinearity equations for every photograph in which the point appears. If m points appear in n images, then 2mn collinearity equations can be employed in the bundle block adjustment. The extended collinearity equations relating a natural cubic spline in object space with ground coordinates (X_i(t), Y_i(t), Z_i(t)) to image space with photo coordinates (x_{pi}, y_{pi}) are (3.12). A natural cubic spline allows the utilization of the collinearity model for expressing orientation parameters and curve parameters.

x_{pi} = −f [(X_i(t) − X_C)r_{11} + (Y_i(t) − Y_C)r_{12} + (Z_i(t) − Z_C)r_{13}] / [(X_i(t) − X_C)r_{31} + (Y_i(t) − Y_C)r_{32} + (Z_i(t) − Z_C)r_{33}]
                                                                               (3.12)
y_{pi} = −f [(X_i(t) − X_C)r_{21} + (Y_i(t) − Y_C)r_{22} + (Z_i(t) − Z_C)r_{23}] / [(X_i(t) − X_C)r_{31} + (Y_i(t) − Y_C)r_{32} + (Z_i(t) − Z_C)r_{33}]


with x_{pi}, y_{pi} the photo coordinates of the ith segment, f the focal length, X_C, Y_C, Z_C the camera perspective center, and r_{ij} the elements of the 3D orthogonal rotation matrix R^T formed by the angular elements (ω, ϕ, κ) of the EOPs.

Figure 3.2: The projection of a point on a spline

The 3D orthogonal rotation matrix R^T rotates the object coordinate system parallel to the photo coordinate system employing three sequential rotations: the primary rotation angle ω around the X-axis (X_ω Y_ω Z_ω), the secondary rotation angle ϕ around the once-rotated Y-axis (X_{ωϕ} Y_{ωϕ} Z_{ωϕ}) and the tertiary rotation angle κ around the twice-rotated Z-axis (X_{ωϕκ} Y_{ωϕκ} Z_{ωϕκ}). Most often the consecutive rotations with the sequence R_ω, R_ϕ, R_κ or R_ϕ, R_ω, R_κ are used in photogrammetry. Since each of the three sequential rotations R_ω, R_ϕ, R_κ is an orthogonal matrix, the matrix R is orthogonal with


R^{−1} = R^T. The matrix R^T (= R^{−1}) rotates the object coordinate system parallel to the photo coordinate system, and the matrix R rotates the photo coordinate system parallel to the object coordinate system.

R = R_ω R_ϕ R_κ =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\omega & -\sin\omega \\ 0 & \sin\omega & \cos\omega \end{bmatrix}
\begin{bmatrix} \cos\varphi & 0 & \sin\varphi \\ 0 & 1 & 0 \\ -\sin\varphi & 0 & \cos\varphi \end{bmatrix}
\begin{bmatrix} \cos\kappa & -\sin\kappa & 0 \\ \sin\kappa & \cos\kappa & 0 \\ 0 & 0 & 1 \end{bmatrix}

= \begin{bmatrix}
\cos\varphi\cos\kappa & -\cos\varphi\sin\kappa & \sin\varphi \\
\cos\omega\sin\kappa + \sin\omega\sin\varphi\cos\kappa & \cos\omega\cos\kappa - \sin\omega\sin\varphi\sin\kappa & -\sin\omega\cos\varphi \\
\sin\omega\sin\kappa - \cos\omega\sin\varphi\cos\kappa & \sin\omega\cos\kappa + \cos\omega\sin\varphi\sin\kappa & \cos\omega\cos\varphi
\end{bmatrix}   (3.13)

The extended collinearity equations can be written as follows:

x_p = −f u/w,  y_p = −f v/w   (3.14)

[u, v, w]^T = R^T(ω, ϕ, κ) [X_i(t) − X_C, Y_i(t) − Y_C, Z_i(t) − Z_C]^T

To recover the 3D natural cubic spline parameters and the exterior orientation parameters in a bundle block adjustment, the non-linear mathematical model of the extended collinearity equation is differentiated. The models of exterior orientation recovery are classified into linear and non-linear methods. While linear methods decrease the computation load, the accuracy and robustness of linear algorithms are not excellent. In contrast, non-linear methods are more accurate and robust; however, non-linear methods require initial estimates and increase the computational complexity. The relationship between a point in the image space and a corresponding point in the object space is established by the extended collinearity equation.
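Equations (3.13) and (3.14) translate directly into code; a minimal sketch that projects a spline point into photo coordinates (the function names are illustrative):

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R = R_omega R_phi R_kappa of (3.13)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R_o = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    R_p = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    R_k = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return R_o @ R_p @ R_k

def project_spline_point(coeffs, t, eop, f):
    """Extended collinearity (3.14): photo coordinates of the spline point
    at parameter t. `coeffs` is a 3x4 array of the a, b, c coefficients;
    `eop` = (X_C, Y_C, Z_C, omega, phi, kappa)."""
    XC, YC, ZC, om, ph, ka = eop
    ground = np.asarray(coeffs) @ np.array([1.0, t, t**2, t**3])
    u, v, w = rotation_matrix(om, ph, ka).T @ (ground - np.array([XC, YC, ZC]))
    return -f * u / w, -f * v / w
```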


Prior knowledge about correspondences between individual points in the 3D object space and their projected features in the 2D image space is not required in the extended collinearity equations with 3D natural cubic splines. One point on a cubic spline has 19 parameters (X_C, Y_C, Z_C, ω, ϕ, κ, a_0, a_1, a_2, a_3, b_0, b_1, b_2, b_3, c_0, c_1, c_2, c_3, t). The differentials of (3.14) are (3.15)

dx_p = −(f/w) du + (fu/w^2) dw,  dy_p = −(f/w) dv + (fv/w^2) dw   (3.15)

with the differentials du, dv, dw given by (3.16)

d[u, v, w]^T = (∂R^T/∂ω)[X_i(t) − X_C, Y_i(t) − Y_C, Z_i(t) − Z_C]^T dω
             + (∂R^T/∂ϕ)[X_i(t) − X_C, Y_i(t) − Y_C, Z_i(t) − Z_C]^T dϕ
             + (∂R^T/∂κ)[X_i(t) − X_C, Y_i(t) − Y_C, Z_i(t) − Z_C]^T dκ
             − R^T[1, 0, 0]^T dX_C − R^T[0, 1, 0]^T dY_C − R^T[0, 0, 1]^T dZ_C
             + R^T[1, 0, 0]^T da_0 + R^T[t, 0, 0]^T da_1 + R^T[t^2, 0, 0]^T da_2 + R^T[t^3, 0, 0]^T da_3
             + R^T[0, 1, 0]^T db_0 + R^T[0, t, 0]^T db_1 + R^T[0, t^2, 0]^T db_2 + R^T[0, t^3, 0]^T db_3
             + R^T[0, 0, 1]^T dc_0 + R^T[0, 0, t]^T dc_1 + R^T[0, 0, t^2]^T dc_2 + R^T[0, 0, t^3]^T dc_3
             + R^T[a_1 + 2a_2t + 3a_3t^2, b_1 + 2b_2t + 3b_3t^2, c_1 + 2c_2t + 3c_3t^2]^T dt   (3.16)

Substituting du, dv, dw in (3.15) by the expressions found in (3.16) leads to


dx_p = M_1 dX_C + M_2 dY_C + M_3 dZ_C + M_4 dω + M_5 dϕ + M_6 dκ + M_7 da_0 + M_8 da_1 + M_9 da_2 + M_{10} da_3 + M_{11} db_0 + M_{12} db_1 + M_{13} db_2 + M_{14} db_3 + M_{15} dc_0 + M_{16} dc_1 + M_{17} dc_2 + M_{18} dc_3 + M_{19} dt
                                                                               (3.17)
dy_p = N_1 dX_C + N_2 dY_C + N_3 dZ_C + N_4 dω + N_5 dϕ + N_6 dκ + N_7 da_0 + N_8 da_1 + N_9 da_2 + N_{10} da_3 + N_{11} db_0 + N_{12} db_1 + N_{13} db_2 + N_{14} db_3 + N_{15} dc_0 + N_{16} dc_1 + N_{17} dc_2 + N_{18} dc_3 + N_{19} dt

M_1, · · ·, M_{19}, N_1, · · ·, N_{19} denote the partial derivatives of the extended collinearity equation for curves. Detailed derivations are given in Appendix A.1. The linearized extended collinearity equations by Taylor expansion, ignoring 2nd and higher order terms, can be written as follows.

x_p + f u^0/w^0 = M_1 dX_C + M_2 dY_C + M_3 dZ_C + M_4 dω + M_5 dϕ + M_6 dκ + M_7 da_0 + M_8 da_1 + M_9 da_2 + M_{10} da_3 + M_{11} db_0 + M_{12} db_1 + M_{13} db_2 + M_{14} db_3 + M_{15} dc_0 + M_{16} dc_1 + M_{17} dc_2 + M_{18} dc_3 + M_{19} dt + e_x
                                                                               (3.18)
y_p + f v^0/w^0 = N_1 dX_C + N_2 dY_C + N_3 dZ_C + N_4 dω + N_5 dϕ + N_6 dκ + N_7 da_0 + N_8 da_1 + N_9 da_2 + N_{10} da_3 + N_{11} db_0 + N_{12} db_1 + N_{13} db_2 + N_{14} db_3 + N_{15} dc_0 + N_{16} dc_1 + N_{17} dc_2 + N_{18} dc_3 + N_{19} dt + e_y

with u^0, v^0, w^0 the approximate values computed from (X_C^0, Y_C^0, Z_C^0, ω^0, ϕ^0, κ^0, a_0^0, a_1^0, a_2^0, a_3^0, b_0^0, b_1^0, b_2^0, b_3^0, c_0^0, c_1^0, c_2^0, c_3^0, t^0), and e_x, e_y the stochastic errors, with zero expectation, of the observed photo coordinates x_p, y_p. The orientation parameters, including the


3D natural cubic spline parameters, are expected to be recovered correctly since the extended collinearity equations with 3D natural cubic splines increase the redundancy.

3.3 Arc-length parameterization of 3D natural cubic splines

The assumption in bundle block adjustment by the Gauss-Markov model is that all estimated parameters are uncorrelated. Hence the design matrix of bundle block adjustment must be of full rank, yielding a non-singular normal matrix. However, since the spline parameters are not independent of the spline location parameters, additional observations are required to obtain the estimates of the parameters. In point-based approaches, the point location relationship between image and object space is established to determine pose estimation, including the positions and orientations of a camera, which is a fundamental application in photogrammetry, remote sensing and computer vision. The coordinates of a point are the only piece of information available to obtain the solutions of the space intersection and the space resection. To remove the rank deficiency caused by datum defects, constraints are adopted to estimate unknown parameters in point-based photogrammetry. The most common constraints are coplanarity, symmetry, perpendicularity, and parallelism. The minimum number of constraints equals the rank deficiency of the system. Inner constraints are often used in the photogrammetric network, which can be applied to both the object features and the camera orientation parameters. Angle or distance condition equations provide relative information between observations in the object space and points in the image space. Absolute information can be obtained from fixed control points.


In this research, the arc-length parameterization is applied as an additional condition equation to solve the rank-deficiency problem in the extended collinearity equations using 3D natural cubic splines. The concept of differentiable parameterization is that the arc-length of a curve can be divided into tiny pieces and the pieces then added up, so that the length of each piece is approximately linear, as in figure 3.3. The sum of the squares of the derivatives is the same as a velocity, since a parametric curve can be considered as a point trajectory. A velocity vector describes the path of a curve and its moving characteristics. If a particle on a curve moves at a constant rate, the curve is parameterized by arc-length. While the extended collinearity equation provides only one piece of information, curves contain additional geometric constraints such as the arc-length, the tangent at a location, and the curvature, which support space resection under the assumption of measuring proper additional independent observations in both the image space and the object space. A natural cubic spline in the object space is parameterized by a single variable t.

R(t) = [x_i(t)  y_i(t)  z_i(t)]^T   (3.19)

The arc-length in the object space is determined by a geometric integration using the construction from the differentiable parameterization of the spline. A spline is differentiable if each of the functions x_i(t), y_i(t) and z_i(t) is differentiable.

Arc(t)_i^O = ∫_{t_i}^{t_{i+1}} ‖dR/dt‖ dt = ∫_{t_i}^{t_{i+1}} √((x'_i(t))^2 + (y'_i(t))^2 + (z'_i(t))^2) dt   (3.20)


where

x_i(t) = a_{i0} + a_{i1}(t − t_i) + a_{i2}(t − t_i)^2 + a_{i3}(t − t_i)^3
y_i(t) = b_{i0} + b_{i1}(t − t_i) + b_{i2}(t − t_i)^2 + b_{i3}(t − t_i)^3
z_i(t) = c_{i0} + c_{i1}(t − t_i) + c_{i2}(t − t_i)^2 + c_{i3}(t − t_i)^3

with t ∈ [t_i, t_{i+1}], i = 0, 1, · · ·, n − 1 and a_{ij}, b_{ij}, c_{ij} the spline parameters. The derivative R'(t) is a velocity and the integrand of the arc-length integral, so the integrand is always positive. The arc-length in the image space is calculated by a geometric integration of the construction from the differentiable parameterization of the photo coordinates of a spline in the object space.

Figure 3.3: Arc length

Arc(t)_i^I = ∫_{t_i}^{t_{i+1}} √((x'_p(t))^2 + (y'_p(t))^2) dt
           = ∫_{t_i}^{t_{i+1}} √({(−f u(t)/w(t))'}^2 + {(−f v(t)/w(t))'}^2) dt   (3.21)


           = ∫_{t_i}^{t_{i+1}} √((−f (u'(t)w(t) − u(t)w'(t))/w^2(t))^2 + (−f (v'(t)w(t) − v(t)w'(t))/w^2(t))^2) dt

where f is the focal length and

[u(t), v(t), w(t)]^T = R^T(ω, ϕ, κ) [a_0 + a_1t + a_2t^2 + a_3t^3 − X_C, b_0 + b_1t + b_2t^2 + b_3t^3 − Y_C, c_0 + c_1t + c_2t^2 + c_3t^3 − Z_C]^T

[u'(t), v'(t), w'(t)]^T = R^T(ω, ϕ, κ) [a_1 + 2a_2t + 3a_3t^2, b_1 + 2b_2t + 3b_3t^2, c_1 + 2c_2t + 3c_3t^2]^T

Since the arc-length parameterization of splines has no analytical solution, several numerical approximations and reparameterization techniques for splines and other curve representations have been developed. While most curves are not parameterized by arc-length, the arc-length of a B-spline can be reparameterized by adjusting the knots of the B-spline. Wang et al. [74] approximated the arc-length parameterization of spline curves by generating a new curve which accurately approximated the original spline curve, to reduce the computational complexity of the arc-length parameterization. They showed that this approximation of the arc-length parameterization works well in a variety of real-time applications, including a driving simulation.

Guenter and Parent [22] employed a hierarchical algorithm, a linear search of an arc-length subdivision table for parameterized curves, to reduce the arc-length computation time. A table of the correspondence between the parameter t and the arc-length can be established to accelerate the arc-length computation. After dividing the parameter range into intervals, the arc-length of each interval is computed for mapping parameters to arc-length. The table serves as the arc-length reference for the various intervals.
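The segment arc-length (3.20) has no closed form, but it is a one-dimensional integral that standard quadrature handles well; a sketch using scipy:

```python
import numpy as np
from scipy.integrate import quad

def object_space_arc_length(coeffs, t0, t1):
    """Arc length (3.20) of one spline segment; `coeffs` is a 3x4 array
    of the a, b, c coefficients of the segment."""
    a, b, c = np.asarray(coeffs, dtype=float)
    def speed(t):
        dx = a[1] + 2 * a[2] * t + 3 * a[3] * t**2
        dy = b[1] + 2 * b[2] * t + 3 * b[3] * t**2
        dz = c[1] + 2 * c[2] * t + 3 * c[3] * t**2
        return np.sqrt(dx * dx + dy * dy + dz * dz)
    length, _ = quad(speed, t0, t1)
    return length
```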


Another method of arc-length approximation uses an explicit function such as the Bézier curve, which is advantageous because of its fast function evaluations. Adaptive Gaussian integration employs a recursive method which starts from a few samples and adds more samples as necessary. Adaptive Gaussian integration also uses a table which maps curve or spline parameter values to arc-length values.

Nasri et al. [48] proposed an arc-length approximation method for circles and piecewise circular splines generated from control polygons or points using a recursive subdivision algorithm. While B-splines have varying tangents over the curve depending on the arc-length parameterization, circular splines have constant-length tangents, whose tangent vectors are useful in the arc-length computation.

The simple approach to the integral approximation is to use equally spaced values of the independent variable x from a to b. If the range from a to b is divided into n intervals, the interval size is

s = (b − a)/n   (3.22)

Integration by the trapezium rule calculates the approximation of the integral as

(s/2)(y_0 + 2y_1 + 2y_2 + · · · + 2y_{n−1} + y_n)   (3.23)

where y_n = f(a + ns). While the trapezium rule uses an approximation of degree 1 on each interval, Simpson's rule uses an approximation of degree 2 to provide a more accurate integral approximation. Simpson's one-third rule is

(s/3)(y_0 + 4y_1 + 2y_2 + 4y_3 + · · · + 4y_{n−3} + 2y_{n−2} + 4y_{n−1} + y_n)   (3.24)

and the integral approximation with two intervals is

(s/3)(y_0 + 4y_1 + y_2)   (3.25)


This equation can be rewritten with half intervals, at (a + s/2) and (b − s/2), as follows.

(s/6)(y_0 + 4y_{1/2} + y_1)   (3.26)

Simpson's three-eighths rule, with the integral approximated by a polynomial of degree 3, is

(3s/8)(y_0 + 3y_1 + 3y_2 + 2y_3 + · · · + 2y_{n−3} + 3y_{n−2} + 3y_{n−1} + y_n)   (3.27)

and the integral approximation with three intervals is

(3s/8)(y_0 + 3y_1 + 3y_2 + y_3)   (3.28)

Simpson's one-third rule is commonly used and referred to simply as Simpson's rule, since higher-order polynomials do not improve the accuracy of the integral approximation significantly.

Andrew [3] improved the accuracy of Simpson's one-third rule (3.24) and three-eighths rule (3.27) with the homogenized one-third rule (3.29) and the homogenized three-eighths rule (3.30), respectively.

(s/24)(9y_0 + 28y_1 + 23y_2 + 24y_3 + · · · + 24y_{n−3} + 23y_{n−2} + 28y_{n−1} + 9y_n)   (3.29)

(s/72)(26y_0 + 87y_1 + 66y_2 + 73y_3 + 72y_4 + · · · + 72y_{n−4} + 73y_{n−3} + 66y_{n−2} + 87y_{n−1} + 26y_n)   (3.30)

A proof of Simpson's rule is as follows. Assume the integral of an algebraic function

∫_a^b f(x) dx ≅ ∫_a^b f_2(x) dx   (3.31)

with a second order polynomial f_2(x) = a_0 + a_1x + a_2x^2.
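Composite Simpson's one-third rule (3.24) fits in a few lines; a generic sketch that applies equally to the arc-length integrand used later in (3.38):

```python
def simpson(f, a, b, n):
    """Composite Simpson's one-third rule (3.24); n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    s = (b - a) / n
    y = [f(a + i * s) for i in range(n + 1)]
    return s / 3 * (y[0] + y[-1] + 4 * sum(y[1:-1:2]) + 2 * sum(y[2:-1:2]))
```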


Three points in the range from a to b can be selected as (a, f(a)), ((a+b)/2, f((a+b)/2)) and (b, f(b)). The three unknowns a_0, a_1 and a_2 are obtained from the following three equations:

f(a) = f_2(a) = a_0 + a_1 a + a_2 a^2
f((a+b)/2) = f_2((a+b)/2) = a_0 + a_1 (a+b)/2 + a_2 ((a+b)/2)^2   (3.32)
f(b) = f_2(b) = a_0 + a_1 b + a_2 b^2

The three unknowns are

a_0 = [a^2 f(b) + ab f(b) − 4ab f((a+b)/2) + ab f(a) + b^2 f(a)] / (a^2 − 2ab + b^2)
a_1 = −[a f(a) − 4a f((a+b)/2) + 3a f(b) + 3b f(a) − 4b f((a+b)/2) + b f(b)] / (a^2 − 2ab + b^2)   (3.33)
a_2 = 2[f(a) − 2f((a+b)/2) + f(b)] / (a^2 − 2ab + b^2)

Thus

∫_a^b f(x) dx ≅ ∫_a^b f_2(x) dx = ∫_a^b (a_0 + a_1 x + a_2 x^2) dx = [a_0 x + a_1 x^2/2 + a_2 x^3/3]_a^b
             = a_0 (b − a) + a_1 (b^2 − a^2)/2 + a_2 (b^3 − a^3)/3   (3.34)

Substituting (3.33) into (3.34) leads to

∫_a^b f_2(x) dx = ((b − a)/6) [f(a) + 4f((a+b)/2) + f(b)]   (3.35)

In addition, Simpson's rule can be derived by different methods, such as the Lagrange polynomial, the method of coefficients, and the approximation by a second-order polynomial using Newton's divided difference polynomial.


The error of Simpson's rule is known to be

e = −((b − a)^5 / 2880) f^{(4)}(ξ),  a < ξ < b   (3.36)

The error of multiple segments using Simpson's rule is the sum of the errors of each individual application of Simpson's rule:

e = −((b − a)^5 / (90 n^4)) (Σ_{i=1}^{n/2} f^{(4)}(ξ_i)) / n   (3.37)

The error is proportional to (b − a)^5.

Simpson's rule is a numerical approximation of definite integrals. The geometric integration of the arc-length in the image space can be calculated by Simpson's rule as follows.

Arc(t) = ∫_{t_1}^{t_2} f(t) dt ≅ ((t_2 − t_1)/6) [f(t_1) + 4f((t_2 + t_1)/2) + f(t_2)]   (3.38)

f(t) = √((x'_p(t))^2 + (y'_p(t))^2)
     = √({(−f u(t)/w(t))'}^2 + {(−f v(t)/w(t))'}^2)
     = √((−f (u'(t)w(t) − u(t)w'(t))/w^2(t))^2 + (−f (v'(t)w(t) − v(t)w'(t))/w^2(t))^2)

df(t) = (1/(2f(t))) [ 2x'_p(t) f (w'/w^2) du − 2x'_p(t) (f/w) du' + 2y'_p(t) f (w'/w^2) dv − 2y'_p(t) (f/w) dv'
        − {2x'_p(t) f (u'w^2 − (u'w − uw')2w)/w^4 + 2y'_p(t) f (v'w^2 − (v'w − vw')2w)/w^4} dw
        + {2x'_p(t) f (u/w^2) + 2y'_p(t) f (v/w^2)} dw' ]

Arc(t) − ((t_2^0 − t_1^0)/6) [f(t_1^0) + 4f((t_2^0 + t_1^0)/2) + f(t_2^0)]


= A_1 dX_C + A_2 dY_C + A_3 dZ_C + A_4 dω + A_5 dϕ + A_6 dκ + A_7 da_0 + A_8 da_1 + A_9 da_2 + A_{10} da_3 + A_{11} db_0 + A_{12} db_1 + A_{13} db_2 + A_{14} db_3 + A_{15} dc_0 + A_{16} dc_1 + A_{17} dc_2 + A_{18} dc_3 + A_{19} dt_1 + A_{20} dt_2 + e_a   (3.39)

with t_1^0, t_2^0, f(t^0) the approximate values computed from (X_C^0, Y_C^0, Z_C^0, ω^0, ϕ^0, κ^0, a_0^0, a_1^0, a_2^0, a_3^0, b_0^0, b_1^0, b_2^0, b_3^0, c_0^0, c_1^0, c_2^0, c_3^0, t_i^0) and e_a the stochastic error, with zero expectation, of the arc-length between the two locations. A_1, · · ·, A_{20} denote the partial derivatives of the arc-length parameterization of a 3D natural cubic spline. Detailed derivations are given in Appendix A.2.

3.4 Tangents of spline between image and object space

Since employing splines leads to over-parameterization, geometric constraints are required to solve the system, such as slope, distance, perpendicularity and coplanarity features. While some of the geometric constraints, the slope and distance observations, are dependent on the splines, other constraints add non-redundant information to the adjustment to reduce the overall rank deficiency of the system. Tangents of splines provide additional constraints to solve the over-parameterization of 3D natural cubic splines in case the linear features in the object space are straight lines or conic sections. Tangents are straight-line constraints incorporated into bundle block adjustment under the assumption that straight lines in the object space transform to straight lines in the image space. For linear features using collinearity equations, the relationship between two corresponding properties in the image space and the object space can be established. Since tangents are independent measurements in the image space and the object space, and the relationship between them is established


by the collinearity equations, tangents are additional parameters to solve the over-parameterization. In general, tangents are not represented mathematically except for straight lines and conic sections, since tangents cannot be measured exactly on curves in the image space. If the EOPs are known parameters, the image space coordinates of curves projected from the 3D spline in the object space are simply a function of the parameter t. The relationship of tangents in the object space and the image space is described in figure 3.4, where the tangent direction is determined by the derivative [X'(t) Y'(t) Z'(t)]^T represented by its individual components.

Figure 3.4: Tangent in the object space and its counterpart in the projected image space


x_p(t) = −f u(t)/w(t),  y_p(t) = −f v(t)/w(t)   (3.40)

where

[u(t), v(t), w(t)]^T = R^T(ω, ϕ, κ) [X_i(t) − X_C, Y_i(t) − Y_C, Z_i(t) − Z_C]^T

Differentiating the collinearity equations with respect to the parameter t leads to the 2D tangent direction in the image space.

x'_p(t) = −f (u'(t)w(t) − u(t)w'(t)) / w^2(t),  y'_p(t) = −f (v'(t)w(t) − v(t)w'(t)) / w^2(t)   (3.41)

[u'(t), v'(t), w'(t)]^T = R^T(ω, ϕ, κ) [X'_i(t), Y'_i(t), Z'_i(t)]^T

d[u', v', w']^T = (∂R^T/∂ω)[X'_i, Y'_i, Z'_i]^T dω + (∂R^T/∂ϕ)[X'_i, Y'_i, Z'_i]^T dϕ + (∂R^T/∂κ)[X'_i, Y'_i, Z'_i]^T dκ
                + R^T[1, 0, 0]^T da_1 + R^T[2t, 0, 0]^T da_2 + R^T[3t^2, 0, 0]^T da_3
                + R^T[0, 1, 0]^T db_1 + R^T[0, 2t, 0]^T db_2 + R^T[0, 3t^2, 0]^T db_3
                + R^T[0, 0, 1]^T dc_1 + R^T[0, 0, 2t]^T dc_2 + R^T[0, 0, 3t^2]^T dc_3
                + R^T[2a_2 + 6a_3t, 2b_2 + 6b_3t, 2c_2 + 6c_3t]^T dt_j   (3.42)

tan(θ_t) = y'_p / x'_p = (v'(t)w(t) − v(t)w'(t)) / (u'(t)w(t) − u(t)w'(t))   (3.43)

where tan(θ_t) is the tangent in terms of the angle θ_t (0 ≤ θ_t ≤ 2π).
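The tangent direction (3.43) is straightforward to evaluate numerically; a sketch reusing the rotation_matrix() helper from the earlier listing:

```python
import numpy as np

def image_tangent_angle(coeffs, t, eop, f):
    """Angle theta_t of the projected tangent (3.43) at parameter t."""
    XC, YC, ZC, om, ph, ka = eop
    a, b, c = np.asarray(coeffs, dtype=float)
    R_T = rotation_matrix(om, ph, ka).T
    basis = np.array([1.0, t, t**2, t**3])
    dbasis = np.array([0.0, 1.0, 2 * t, 3 * t**2])
    u, v, w = R_T @ np.array([a @ basis - XC, b @ basis - YC, c @ basis - ZC])
    du, dv, dw = R_T @ np.array([a @ dbasis, b @ dbasis, c @ dbasis])
    return np.arctan2(dv * w - v * dw, du * w - u * dw)
```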


d tan(θ_t) = (w'(v'w − w'v)/(u'w − w'u)^2) du − (w(v'w − w'v)/(u'w − w'u)^2) du'
           − (w'/(u'w − w'u)) dv + (w/(u'w − w'u)) dv'
           + ((v'(u'w − w'u) − u'(v'w − vw'))/(u'w − w'u)^2) dw
           − ((v(u'w − w'u) − u(v'w − vw'))/(u'w − w'u)^2) dw'   (3.44)

A linear observation equation can be obtained with the incremental values dX_C, dY_C, dZ_C, dω, dϕ, dκ, da_0, da_1, da_2, da_3, db_0, db_1, db_2, db_3, dc_0, dc_1, dc_2, dc_3, dt_j as (3.45).

tan(θ_t) − (v'^0 w^0 − w'^0 v^0)/(u'^0 w^0 − w'^0 u^0) = L_1 dX_C + L_2 dY_C + L_3 dZ_C + L_4 dω + L_5 dϕ + L_6 dκ + L_7 da_0 + L_8 da_1 + L_9 da_2 + L_{10} da_3 + L_{11} db_0 + L_{12} db_1 + L_{13} db_2 + L_{14} db_3 + L_{15} dc_0 + L_{16} dc_1 + L_{17} dc_2 + L_{18} dc_3 + L_{19} dt_j + e_t   (3.45)

with tan(θ_t) the tangent direction of a point in the image space; u^0, v^0, w^0, u'^0, v'^0, w'^0 the approximate values computed from (X_C^0, Y_C^0, Z_C^0, ω^0, ϕ^0, κ^0, a_0^0, a_1^0, a_2^0, a_3^0, b_0^0, b_1^0, b_2^0, b_3^0, c_0^0, c_1^0, c_2^0, c_3^0, t_i^0); and e_t the stochastic error of the tangent, with zero expectation. L_1, · · ·, L_{19} denote the partial derivatives of the tangent of a 3D natural cubic spline. Detailed derivations are given in Appendix A.3.


CHAPTER 4

MODEL INTEGRATION

4.1 Bundle block adjustment

The objective of bundle block adjustment is twofold, namely to calculate the exterior orientation parameters of a block of images and the coordinates of the ground features in the object space. In the determination of orientation parameters, additional interior orientation parameters such as lens distortion, atmospheric refraction and principal point offset can be obtained by self-calibration. In general, orientation parameters are determined by bundle block adjustment using a large number of control points. The establishment of a large number of control points, however, is expensive fieldwork, so an economic and accurate adjustment method is required. Linear features have several advantages that complement points: they are useful features for higher-level tasks and they are easily extracted in man-made environments. The line-photogrammetric bundle adjustment in this research aims at the estimation of exterior orientation parameters and 3D natural cubic spline parameters using the correspondence between splines in the object space and spline observations in multiple images in the image space. Nonlinear functions of the orientation parameters, spline parameters and spline location parameters are represented by the extended collinearity equations and the arc-length parameterization equations. Five observation equations are produced


by each pair of points: four extended collinearity equations (3.18) and one arc-length parameterization equation (3.39). While tangents are modeled analytically for straight lines and conic sections, the parametric representation of the tangent of a 3D natural cubic spline cannot be generated in a global fashion. The integrated model provides not only the recovery of the image orientation parameters but also surface reconstruction using 3D curves. Of course, since the equation system of the integrated model has datum defects of seven, control information about the coordinate system is required to obtain the parameters. This is a step toward higher-level vision tasks such as object recognition and surface reconstruction.

Figure 4.1: Concept of bundle block adjustment with splines


In the case of straight lines and conic sections, tangents are additional observations in the integrated model. Conic sections, like points, are good mathematical constraints since conic sections are non-singular curves of degree 2. Second-degree equations provide the information for reconstruction and transformation, and conic sections are classified by the eccentricity e as shown in table 4.1. Since conic sections can adopt more constraints than points and straight-line features, conic sections are useful for close-range photogrammetric applications. In addition, conic sections have strength in the correspondence establishment between 3D conic sections in the object space and their counterpart features in the 2D projected image space.

Eccentricity | Conic form
e = 0        | Circle
0 < e < 1    | Ellipse
e = 1        | Parabola
e > 1        | Hyperbola

Table 4.1: Relationship between the eccentricity and the conic form

Ji et al. (2000) [36] employed conic sections for the recovery of EOPs, and Heikkila (2000) [33] used them for camera calibration. The Hough transform reduces the time complexity of conic section extraction using five parameter spaces for single-photo resection, camera calibration and triangulation [63].


x_p + f u^0/w^0 = M_1 dX_C + M_2 dY_C + M_3 dZ_C + M_4 dω + M_5 dϕ + M_6 dκ + M_7 da_{i0} + M_8 da_{i1} + M_9 da_{i2} + M_{10} da_{i3} + M_{11} db_{i0} + M_{12} db_{i1} + M_{13} db_{i2} + M_{14} db_{i3} + M_{15} dc_{i0} + M_{16} dc_{i1} + M_{17} dc_{i2} + M_{18} dc_{i3} + M_{19} dt_j + e_x

y_p + f v^0/w^0 = N_1 dX_C + N_2 dY_C + N_3 dZ_C + N_4 dω + N_5 dϕ + N_6 dκ + N_7 da_{i0} + N_8 da_{i1} + N_9 da_{i2} + N_{10} da_{i3} + N_{11} db_{i0} + N_{12} db_{i1} + N_{13} db_{i2} + N_{14} db_{i3} + N_{15} dc_{i0} + N_{16} dc_{i1} + N_{17} dc_{i2} + N_{18} dc_{i3} + N_{19} dt_j + e_y
                                                                               (4.1)
Arc(t) − ((t_2^0 − t_1^0)/6) [f^0(t_1^0) + 4f^0((t_2^0 + t_1^0)/2) + f^0(t_2^0)]
       = A_1 dX_C + A_2 dY_C + A_3 dZ_C + A_4 dω + A_5 dϕ + A_6 dκ + A_7 da_{i0} + A_8 da_{i1} + A_9 da_{i2} + A_{10} da_{i3} + A_{11} db_{i0} + A_{12} db_{i1} + A_{13} db_{i2} + A_{14} db_{i3} + A_{15} dc_{i0} + A_{16} dc_{i1} + A_{17} dc_{i2} + A_{18} dc_{i3} + A_{19} dt_1 + A_{20} dt_2 + e_t

where x_p, y_p are the photo coordinates and Arc(t) is the arc-length between the two locations.

The parameters were linearized in the previous three sections, and the Gauss-Markov model is employed for the estimation of the unknown parameters. Tangent observations can be added in the case of straight linear features and conic sections, but in this model bundle block adjustment with the extended collinearity equations for 3D natural cubic splines and the arc-length parameterization equation is described for the general case. The equation system of the integrated model is described as

\begin{bmatrix} A^k_{EOP} & A^i_{SP} & A^i_t \\ & A^{ki}_{AL} & \end{bmatrix}
\begin{bmatrix} \xi^k_{EOP} \\ \xi^i_{SP} \\ \xi^i_t \end{bmatrix} = [y]^{ki}   (4.2)


A^k_{EOP} =
\begin{bmatrix}
M^{k1}_{i1} & M^{k1}_{i2} & \cdots & M^{k1}_{i6} \\
\vdots      &             &        & \vdots      \\
M^{km}_{i1} & M^{km}_{i2} & \cdots & M^{km}_{i6} \\
N^{k1}_{i1} & N^{k1}_{i2} & \cdots & N^{k1}_{i6} \\
\vdots      &             &        & \vdots      \\
N^{km}_{i1} & N^{km}_{i2} & \cdots & N^{km}_{i6}
\end{bmatrix}

A^i_{SP} =
\begin{bmatrix}
M^1_{i7} & M^1_{i8} & \cdots & M^1_{i18} \\
\vdots   &          &        & \vdots    \\
M^m_{i7} & M^m_{i8} & \cdots & M^m_{i18} \\
N^1_{i7} & N^1_{i8} & \cdots & N^1_{i18} \\
\vdots   &          &        & \vdots    \\
N^m_{i7} & N^m_{i8} & \cdots & N^m_{i18}
\end{bmatrix}

A^i_t =
\begin{bmatrix}
M^1_{i19} & M^1_{i20} & \cdots & M^1_{i(18+n)} \\
\vdots    &           &        & \vdots        \\
M^m_{i19} & M^m_{i20} & \cdots & M^m_{i(18+n)} \\
N^1_{i19} & N^1_{i20} & \cdots & N^1_{i(18+n)} \\
\vdots    &           &        & \vdots        \\
N^m_{i19} & N^m_{i20} & \cdots & N^m_{i(18+n)}
\end{bmatrix}

A^{ki}_{AL} =
\begin{bmatrix}
A^{k1}_{i1} & A^{k1}_{i2} & \cdots & A^{k1}_{i20} \\
\vdots      &             &        & \vdots       \\
A^{km}_{i1} & A^{km}_{i2} & \cdots & A^{km}_{i20}
\end{bmatrix}

ξ^k_{EOP} = [dX^k_C, dY^k_C, dZ^k_C, dω^k, dϕ^k, dκ^k]^T

ξ^i_{SP} = [da_{i0}, da_{i1}, da_{i2}, da_{i3}, db_{i0}, db_{i1}, db_{i2}, db_{i3}, dc_{i0}, dc_{i1}, dc_{i2}, dc_{i3}]^T

ξ^i_t = [dt_{i1}, dt_{i2}, · · ·, dt_{in}]^T

y^{ki} = [x^{ki}_p + f u^0/w^0,  y^{ki}_p + f v^0/w^0,  Arc(t)^{ki}_p − Arc(t)^0]^T

with Arc(t)^0 = ((t_2^0 − t_1^0)/6)[f^0(t_1^0) + 4f^0((t_2^0 + t_1^0)/2) + f^0(t_2^0)], m the number of images, n the number of points on a spline segment, k the kth image and i the ith spline segment. The partial derivatives in the symbolic representations (M, N, A) of the extended


collinearity model are described in Appendix A. Since the equation system of the integrated model has datum defects of seven, control information about the coordinate system is required to obtain the seven transformation parameters. In a general photogrammetric network, the rank deficiency, referred to as datum defects, is seven.

Estimates of the unknown parameters are obtained by the least squares solution, which minimizes the sum of squared deviations. A non-linear least squares system is required in the conventional non-linear photogrammetric solution to obtain the orientation parameters. Many observations in photogrammetry are random variables which take different values in the case of repeated observations, such as the image coordinates of points on images. Each measured observation represents an estimate of the random variables of the image coordinates of points on images. If the image coordinates of points were measured repeatedly using a digital photogrammetric workstation, the values would differ slightly for each measurement. The integrated and linearized Gauss-Markov model and the least squares estimated parameter vector with its dispersion matrix are

y^{ki} = A_{IM} ξ_{IM} + e

A_{IM} = \begin{bmatrix} A^k_{EOP} & A^i_{SP} & A^i_t \\ & A^{ki}_{AL} & \end{bmatrix}

ξ_{IM} = [ξ^k_{EOP}, ξ^i_{SP}, ξ^i_t]^T   (4.3)

ξ̂_{IM} = (A^T_{IM} P A_{IM})^{−1} A^T_{IM} P y^{ki}

D(ξ̂_{IM}) = σ_o^2 (A^T_{IM} P A_{IM})^{−1}

with e ∼ N(0, σ_o^2 P^{−1}) the error vector with zero mean and cofactor matrix P^{−1}, σ_o^2 the variance component, which may be known or unknown, ξ̂_{IM} the least squares estimated parameter vector and D(ξ̂_{IM}) the dispersion matrix.


The optimal unbiased estimate of the variance component can be obtained as

σ̂_o^2 = (ẽ^T P ẽ) / (n − m),  ẽ = y − A ξ̂   (4.4)

where n is the number of equations and m is the number of parameters.

If one or more of the three estimated parameter sets ξ^k_{EOP}, ξ^i_{SP}, ξ^i_t are considered as stochastic constraints, the reduction of the normal equation matrix can be applied. Control information is implemented as stochastic constraints in bundle block adjustment. The distribution and quality of control features depend on the number and density of the control features, the number of tie features and the degree of overlap of the tie features. If adding stochastic constraints removes the rank deficiency of the Gauss-Markov model, bundle adjustment can be implemented employing only the extended collinearity equations for 3D natural cubic splines. Fixed exterior orientation parameters, control splines or control spline location parameters can be stochastic constraints. In addition, splines in the object space can be divided into control features and tie features so that the tie spline parameters can be recovered by bundle block adjustment.
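Equations (4.3) and (4.4) amount to a standard weighted least squares solve; a compact sketch:

```python
import numpy as np

def gauss_markov(A, y, P=None):
    """Estimate xi = (A^T P A)^{-1} A^T P y with its dispersion matrix (4.3)
    and the unbiased variance component of (4.4)."""
    n_eq, n_par = A.shape
    P = np.eye(n_eq) if P is None else P
    N = A.T @ P @ A                          # normal matrix
    xi = np.linalg.solve(N, A.T @ P @ y)
    e = y - A @ xi                           # residual vector
    sigma2 = float(e.T @ P @ e) / (n_eq - n_par)
    return xi, sigma2 * np.linalg.inv(N), sigma2
```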


With stochastic constraints assigned to ξ̂_2, the integrated model can be written as

\begin{bmatrix} N_{11} & N_{12} \\ N_{12}^T & N_{22} \end{bmatrix}
\begin{bmatrix} \hat{\xi}_1 \\ \hat{\xi}_2 \end{bmatrix}
=
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}

N_{11} ξ̂_1 + N_{12} ξ̂_2 = c_1
N_{12}^T ξ̂_1 + N_{22} ξ̂_2 = c_2

ξ̂_2 = N_{22}^{−1} c_2 − N_{22}^{−1} N_{12}^T ξ̂_1

(N_{11} − N_{12} N_{22}^{−1} N_{12}^T) ξ̂_1 = c_1 − N_{12} N_{22}^{−1} c_2

ξ̂_1 = (N_{11} − N_{12} N_{22}^{−1} N_{12}^T)^{−1} (c_1 − N_{12} N_{22}^{−1} c_2)

where the matrices N, c and ξ depend on the stochastic constraints.

D(ξ̂_1) = σ_o^2 Q_{11},  D(ξ̂_2) = σ_o^2 Q_{22}

D([ξ̂_1; ξ̂_2]^T) = σ_o^2 \begin{bmatrix} N_{11} & N_{12} \\ N_{12}^T & N_{22} \end{bmatrix}^{-1} = σ_o^2 \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{12}^T & Q_{22} \end{bmatrix}

\begin{bmatrix} N_{11} & N_{12} \\ N_{12}^T & N_{22} \end{bmatrix} \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{12}^T & Q_{22} \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}

N_{11} Q_{11} + N_{12} Q_{12}^T = I
N_{12}^T Q_{11} + N_{22} Q_{12}^T = 0
Q_{12}^T = −N_{22}^{−1} N_{12}^T Q_{11}
Q_{12} = −Q_{11} N_{12} N_{22}^{−1}
(N_{11} − N_{12} N_{22}^{−1} N_{12}^T) Q_{11} = I
Q_{11} = (N_{11} − N_{12} N_{22}^{−1} N_{12}^T)^{−1}

N_{11} Q_{12} + N_{12} Q_{22} = 0
Q_{12} = −N_{11}^{−1} N_{12} Q_{22}
N_{12}^T Q_{12} + N_{22} Q_{22} = I
(N_{22} − N_{12}^T N_{11}^{−1} N_{12}) Q_{22} = I
Q_{22} = (N_{22} − N_{12}^T N_{11}^{−1} N_{12})^{−1}
Q_{22} = N_{22}^{−1} − N_{22}^{−1} N_{12}^T Q_{12}

where Q is a cofactor matrix.
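The reduction above is a Schur complement step; a minimal sketch of the partitioned solve (the function name is illustrative):

```python
import numpy as np

def reduced_normal_solve(N11, N12, N22, c1, c2):
    """Eliminate xi_2 from the partitioned normal equations, then back-substitute."""
    N22_inv = np.linalg.inv(N22)
    S = N11 - N12 @ N22_inv @ N12.T          # reduced normal matrix
    xi1 = np.linalg.solve(S, c1 - N12 @ N22_inv @ c2)
    xi2 = N22_inv @ (c2 - N12.T @ xi1)
    return xi1, xi2, np.linalg.inv(S)        # Q11 = S^{-1}
```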


4.2 Evaluation of bundle block adjustment

Bundle block adjustment must be followed by an evaluation of the bundle block adjustment, a post-adjustment analysis, to check the suitability of the project specifications and requirements. Iteratively reweighted least squares and least median of squares are appropriate implementations for a statistical evaluation which removes poor observations. An important element affecting bundle block adjustment is the geometry of the aerial images. Generally, a previous flight plan is adopted to obtain suitable results. A simulation of bundle block adjustment is implemented before employing the flight plan of a new project design, since simulation can reduce the effect of measurement errors.

A qualitative evaluation, which allows the operator to recognize the adjustment characteristics, is often used after bundle block adjustment. The sizes of the residuals in the images are drawn for a qualitative evaluation of bundle block adjustment. The image residuals can be points or long lines, and if all image residuals have the same direction, then the image has a systematic error such as atmospheric refraction or an orientation parameter error. In addition, the lack of flatness of the focal plane causes


systematic errors in the image space, which affect the accuracy of bundle block adjustment. Distortions differ from one location to another over the entire image space. A topographic measurement of the focal plane can correct the focal plane unflatness. Image coordinate errors are correlated in the case of systematic image errors. A poor measurement can make the residual direction opposite or the residual large.

The three main elements in a statistical evaluation of bundle block adjustment are precision, accuracy and reliability. Precision is calculated employing the variances and covariances of the parameters: a small variance indicates that the estimated values have a small range, whereas a large variance means that the estimated values are not determined properly. The range of a parameter variance is from zero, in the case of error-free parameters, to infinity, in the case of completely unknown parameters. The dispersion matrix has the parameter variances as diagonal elements and the covariances between pairs of parameters as off-diagonal elements.

The dispersion matrix in point-based bundle block adjustment is

\begin{bmatrix} σ_X^2 & σ_{XY} & σ_{XZ} \\ σ_{XY} & σ_Y^2 & σ_{YZ} \\ σ_{XZ} & σ_{YZ} & σ_Z^2 \end{bmatrix}
or
\begin{bmatrix} σ_X & ρ_{XY} & ρ_{XZ} \\ ρ_{XY} & σ_Y & ρ_{YZ} \\ ρ_{XZ} & ρ_{YZ} & σ_Z \end{bmatrix}   (4.5)

where σ_X^2, σ_Y^2, σ_Z^2 are the variances of the X, Y and Z coordinates of ground control points in the object space; σ_{XY}, σ_{XZ}, σ_{YZ} are the covariances between two parameters; and ρ_{XY}, ρ_{XZ}, ρ_{YZ} are the correlation coefficients between two parameters, ρ_{ij} = σ_{ij}/(σ_i σ_j).

Accuracy can be verified using check points, which, unlike control points, are not included in the bundle block adjustment. Reliability can be confirmed from other redundant observations.


The extended collinearity equations are a mathematical model for bundle block adjustment. The mathematical model of bundle block adjustment consists of two parts, a functional model and a stochastic model. The functional model represents geometrical properties and the stochastic model describes statistical properties. The repeated measurements at the same location in the image space are represented with respect to the functional model, and the redundant observations of image locations in the image space are expressed with respect to the stochastic model. While the Gauss-Markov model uses indirect observations, condition equations, such as coordinate transformations and the coplanarity condition, can also be employed in the adjustment. The Gauss-Markov model and the condition equations can be combined into the Gauss-Helmert model. In addition, functional constraints such as points having the same height or straight railroad segments can be added into the block adjustment.

The difference between condition equations and constraint equations is that condition equations consist of observations and parameters, while constraint equations consist of only parameters. With the advances of technology, the amount of input data in photogrammetry has increased, so an adequate formulation of the adjustment is required. All variables are involved in the mathematical equations, and the weight of a variable ranges from zero to infinity depending on its variance. Variables with near-zero weight are considered as unknown parameters and variables with near-infinite weight are considered as constants. Most actual observations lie between these two boundary cases. Assessment of the adjustment, the post-adjustment analysis, is important in photogrammetry to analyze the results. One of the assessment methods is to compare the estimated variance with the two-tailed confidence interval based on the normal distribution. The two-tailed confidence interval is computed for the reference variance σ_o^2 with the χ^2 distribution as

r σ̂_o^2 / χ^2_{r,α/2} < σ_o^2 < r σ̂_o^2 / χ^2_{r,1−α/2}   (4.6)

where r is the degrees of freedom and α is the confidence coefficient (or confidence level). If σ_o^2 has a value outside of the interval, we can assume that the mathematical model of the adjustment is incorrect, for instance through a wrong formulation, a wrong linearization, blunders or systematic errors.
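The variance test in (4.6) can be coded directly with scipy's χ² quantile function; whether the upper tail is written χ²_{r,α/2} or χ²_{r,1−α/2} is a notation convention, assumed here to follow the text:

```python
from scipy.stats import chi2

def variance_confidence_interval(sigma2_hat, r, alpha=0.05):
    """Two-tailed confidence interval (4.6) for the reference variance."""
    lower = r * sigma2_hat / chi2.ppf(1.0 - alpha / 2.0, r)  # chi^2_{r, alpha/2}
    upper = r * sigma2_hat / chi2.ppf(alpha / 2.0, r)        # chi^2_{r, 1-alpha/2}
    return lower, upper
```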


4.3 Pose estimation with the ICP algorithm

Unlike the previous case with spline segments, where the correspondence between spline segments in the image and the object space was assumed, it is now unknown which image points belong to which spline segment. The ICP algorithm can be utilized for the recovery of EOPs since initial estimates of the relative pose can be obtained from orientation data in general photogrammetric tasks. The original ICP algorithm steps are as follows. The closest-point operator searches for the associated point by the nearest-neighbor algorithm, and the transformation parameters are then estimated using a mean square cost function. The point is transformed by the estimated parameters, and this step is iterated until convergence into a local minimum of the mean square distance. The transformation, including translation and rotation, between two clouds of points is estimated iteratively, converging into a global minimum. In other words, the iterative calculation of the mean square errors is terminated when a local minimum falls below a predefined threshold. A smaller global minimum or a fluctuating error curve requires more memory-intensive and time-consuming computation. In every iteration step, a local minimum is calculated with


any transformation parameters, but convergence into a global minimum with the correct transformation parameters is not always obtained.

By the definition of a natural cubic spline, each parametric equation of a spline segment S_i(t) can be expressed as (4.7).

S_i(t) = [X_i(t), Y_i(t), Z_i(t)]^T = [a_{i0} + a_{i1}t + a_{i2}t^2 + a_{i3}t^3, b_{i0} + b_{i1}t + b_{i2}t^2 + b_{i3}t^3, c_{i0} + c_{i1}t + c_{i2}t^2 + c_{i3}t^3]^T,  t ∈ [0, 1]   (4.7)

with X_i(t), Y_i(t), Z_i(t) the object space coordinates and a_i, b_i, c_i the coefficients of the ith spline segment.

The ray from the perspective center (X_C, Y_C, Z_C) to the image point (x_p, y_p, −f) is

Ξ(l) = [X(l), Y(l), Z(l)]^T = [X_C^k, Y_C^k, Z_C^k]^T + [d_1, d_2, d_3]^T l   (4.8)

where

[d_1, d_2, d_3]^T = R^T(ω^k, ϕ^k, κ^k) [x_p, y_p, −f]^T   (4.9)

with X_C^k, Y_C^k, Z_C^k, ω^k, ϕ^k, κ^k the EOPs at the kth iteration.

A point on the ray searches for the closest natural cubic spline by minimizing the following target function for every spline segment as (4.10). The transformation parameters relating an image point and its closest spline segment can be established using the least squares method.

Φ(l, t) ≡ ‖Ξ(l) − S_i(t)‖^2 = stationary over l, t   (4.10)

The global minimum of Φ(l, t) can be calculated from ∇Φ(l, t) = 0, i.e., ∂Φ/∂l = ∂Φ/∂t = 0.
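Since (4.10) has no closed-form solution, a numerical local minimization is a natural sketch of the closest-point search; scipy's general-purpose minimizer is used here and the initial guess is an assumption:

```python
import numpy as np
from scipy.optimize import minimize

def closest_ray_spline_point(center, direction, coeffs):
    """Locally minimize Phi(l, t) = ||Xi(l) - S_i(t)||^2 of (4.10).
    Returns (l, t) and the squared distance; only a local minimum,
    since, as the text notes, a global minimum is not guaranteed."""
    C = np.asarray(center, dtype=float)
    d = np.asarray(direction, dtype=float)
    abc = np.asarray(coeffs, dtype=float)     # 3x4 segment coefficients

    def phi(p):
        l, t = p
        s = abc @ np.array([1.0, t, t**2, t**3])
        return float(np.sum((C + l * d - s) ** 2))

    res = minimize(phi, x0=[0.0, 0.5])        # initial guess (assumed)
    return res.x, res.fun
```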


Substituting (4.7) and (4.8) into (4.10) and taking the derivatives with respect to l and t leads to

(1/2) ∂Φ/∂l = (X_C + d_1 l − a_{i0} − a_{i1}t − a_{i2}t^2 − a_{i3}t^3) d_1
            + (Y_C + d_2 l − b_{i0} − b_{i1}t − b_{i2}t^2 − b_{i3}t^3) d_2
            + (Z_C + d_3 l − c_{i0} − c_{i1}t − c_{i2}t^2 − c_{i3}t^3) d_3 = 0
                                                                            (4.11)
(1/2) ∂Φ/∂t = (X_C + d_1 l − a_{i0} − a_{i1}t − a_{i2}t^2 − a_{i3}t^3)(−a_{i1} − 2a_{i2}t − 3a_{i3}t^2)
            + (Y_C + d_2 l − b_{i0} − b_{i1}t − b_{i2}t^2 − b_{i3}t^3)(−b_{i1} − 2b_{i2}t − 3b_{i3}t^2)
            + (Z_C + d_3 l − c_{i0} − c_{i1}t − c_{i2}t^2 − c_{i3}t^3)(−c_{i1} − 2c_{i2}t − 3c_{i3}t^2) = 0

Guaranteed convergence into a global minimum does not exist, since equation (4.11) is not a linear system in l and t. The relationship between an image space point and its corresponding spline segment cannot be established with this minimization method alone.


CHAPTER 5

EXPERIMENTS AND RESULTS

In the last chapter, the mathematically integrated model was introduced using the extended collinearity equations with splines, the tangents of splines, and the arc-length parameterization equations of splines. This chapter demonstrates the feasibility and the performance of the proposed model for the acquisition of spline parameters, spline location parameters and image orientation parameters based on control and tie splines in the object space with simulated and real data sets. In general photogrammetric tasks, the correspondence between image edge features must be established either automatically or manually, but in this study correspondence between image edge features is not required. The series of experiments with the synthetic data set consists of six experiments. The first test recovers spline parameters and spline location parameters for the case of error-free EOPs. The second experiment recovers the partial spline parameters related to the shape of the splines. The third procedure estimates the spline location parameters in the case of error-free EOPs, and the fourth calculates EOPs and spline location parameters, followed by the fifth step, which estimates EOPs with full control splines, where the parametric curves used as control features are assumed to be error free. In the last experiment, EOPs and tie spline parameters are obtained with the control spline. Object space knowledge


about splines, their relationships, and the orientation information of the images can be considered as control information. Spline parameters in a partial control spline or orientation parameters can be considered as stochastic constraints in the integrated adjustment model. The starting point of a spline is considered as a known parameter in the partial control spline, for which a_0, b_0 and c_0 of the X, Y and Z coordinates of the spline are known. The number of unknowns is described in table 5.1 and figure 5.1, where n is the number of points in the object space, t is the number of spline location parameters and m is the number of overlapping images of the target area.

EOP         | Spline                 | Number of unknowns
Known EOP   | Tie spline             | 12(n − 1) + t
            | Partial control spline | 9(n − 1) + t
            | Full control spline    | t
Unknown EOP | Partial control spline | 6m + 9(n − 1) + t
            | Full control spline    | 6m + t

Table 5.1: Number of unknowns

Four points on a spline segment in one image are the only independent observations, so additional points on the same spline segment do not provide non-redundant information to reduce the overall deficiency of the EOP and spline parameter recovery. To prove the information content of an image spline, we demonstrate that any five points on a spline segment generate a dependent set of extended collinearity equations. Any combination of four points, yielding eight collinearity equations, consists of independent observations, but five points, bearing ten collinearity equations, produce a dependent set of observations related to the correspondence between a natural cubic spline in the image and the object space.


(a) Known EOPs with tie splines
(b) Known EOPs with partial control splines
(c) Known EOPs with full control splines
(d) Unknown EOPs with partial control splines
(e) Unknown EOPs with full control splines
(Red: unknown parameters, Green: partially fixed parameters, Blue: fixed parameters)

Figure 5.1: Different examples


More than four point observations on an image spline segment increase the redundancy related to the accuracy but do not decrease the overall rank deficiency of the proposed adjustment system. In the same fashion, the case using a polynomial of degree 2 can be treated: three points on a quadratic polynomial curve in one image are the only independent set, so additional points on the same curve segment are dependent observations. Point observations beyond the independent ones on a polynomial increase the redundancy related to the accuracy but do not provide non-redundant information. Depending on the order of the polynomial, the number of independent points for bundle block adjustment is limited as in table 5.2.

Polynomial           | Number of independent points
Constant polynomial  | 1
Linear polynomial    | 2
Quadratic polynomial | 3
Cubic polynomial     | 4

Table 5.2: Number of independent points for bundle block adjustment

The amount of information carried by a natural cubic spline can be calculated with the redundancy budget. Every spline segment has twelve parameters, and every point measured on a spline segment adds one additional parameter. Let n be the number of points measured on one spline segment in the image space and m the number of images which contain a tie spline. There are 2nm collinearity equations and m(n − 1) arc-length parameterization equations, while 12 (the number of parameters of one spline segment) + nm (the number of spline location parameters) are


unknowns. The redundancy is 2nm-m-12 for one spline segment so that if two images(m=2) are used for <strong>bundle</strong> <strong>block</strong> <strong>adjustment</strong>, the redundancy is 4n-14. Four points arerequired to determine spline and spline location parameters in case one spline segmentand one degree of freedom to the overall redundancy budget is solved by each pointmeasurement <strong>with</strong> the extended collinearity equation. Arc-length parameterizationalso contributes one degree of freedom to the overall redundancy budget. The fifthpoint does not provide additional information to reduce the overall deficiency butonly makes spline parameters robust, which means it increases the overall precisionof the estimated parameters.This fact is an advantage of adopting <strong>splines</strong> which the number of degrees offreedom is four since in tie straight lines; only two points per line are independent.Independent information, the number of degrees of freedom, of a straight line is twofrom two points or a point <strong>with</strong> its tangent direction. A redundancy is r=2m-4 <strong>with</strong>a line expression of four parameters since equations are 2nm collinearity equationsand unknowns are 4+nm [59]. Only two points (n=2) are available to determine fourline parameters <strong>with</strong> two images (m=2) so at least three images must contain a tieline. The information content of t tie lines on m images is t(2m-4). One straight lineincreases two degrees of freedom to the redundancy budget and at least three linesare required in the space resection. An additional point on a straight line does notprovide additional information to reduce the rank deficiency of the recovery of EOPsbut only contributes image line coefficients. If spline location parameters or splineparameters enter the integrated <strong>adjustment</strong> model through stochastic constraints,employing extended collinearity equations is enough for solving the system <strong>with</strong>outthe arc-length parameterization.74
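These redundancy budgets are simple to tabulate. The following minimal sketch (Python, written for this discussion rather than taken from the dissertation's adjustment software) reproduces the counts quoted above for tie splines and tie straight lines:

```python
def tie_spline_redundancy(n: int, m: int) -> int:
    """Redundancy of one tie-spline segment observed on m images with
    n measured points per image: 2nm collinearity equations plus
    m(n-1) arc-length equations, against 12 spline coefficients
    plus n*m location parameters."""
    equations = 2 * n * m + m * (n - 1)
    unknowns = 12 + n * m
    return equations - unknowns

def tie_line_redundancy(m: int) -> int:
    """Redundancy of a tie straight line (four-parameter representation);
    only n = 2 points per line are independent."""
    n = 2
    return 2 * n * m - (4 + n * m)

print(tie_spline_redundancy(4, 2))   # 4n - 14 with n = 4, m = 2  ->  2
print(tie_line_redundancy(3))        # 2m - 4  with m = 3         ->  2
```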


The redundancy budget of a tie point is r = 2m - 3, so a tie point provides one more independent equation than a tie line. However, using tie points requires a semi-automatic matching procedure to identify them on all images, and linear features are more robust and accurate than point features for object recognition, pose determination, and other higher-level photogrammetric tasks.

5.1 Synthetic data description

The standard block configuration is generated by strips of images with approximately 60% overlap in the flight direction and 20%–30% overlap between neighboring flight strips. In bundle block adjustment, a ground feature must appear in at least two images for its three-dimensional coordinates to be determined. A simulated aerial image block of six images, with a forward overlap of 60% and a side overlap of 20%, is used to verify the feasibility of the proposed model in bundle block adjustment with synthetic data. The EOPs of the six simulated images are given in table 5.3 and figure 5.2; the focal length is 0.15 m with zero offsets from the fiducial-based origin to the perspective center of the camera, which means that the interior orientation parameters are known and fixed. The EOPs of the six images are generated under the assumption of vertical viewing.

[Figure 5.2: Six image block]

    Parameter   X_c [m]   Y_c [m]   Z_c [m]   ω [deg]   ϕ [deg]   κ [deg]
    Image 1     3000.00   4002.00   503.00     0.1146    0.0573     5.7296
    Image 2     3305.00   4005.00   499.00     0.1432    0.0859    -5.7296
    Image 3     3610.00   3995.00   505.00     0.1719    0.4584     2.8648
    Image 4     3613.00   4613.00   507.00     0.2865   -0.0573   185.6383
    Image 5     3303.00   4617.00   493.00    -0.1432    0.4011   173.0333
    Image 6     2997.00   4610.00   509.00    -0.1833   -0.2865   181.6276

    Table 5.3: EOPs of six bundle block images for simulation

To evaluate the new bundle block adjustment model using natural cubic splines, an analysis of the sensitivity and robustness of the model is required. Model suitability can be assessed from the estimated parameters together with the dispersion matrix, including standard deviations and correlations. The accuracy of bundle block adjustment is determined by the geometry of the image block and by the quality of the position and attitude information of the camera. For a novel approach, a simulation of bundle block adjustment is required prior to the actual experiment with real data to evaluate the performance of the proposed algorithms. A simulation can control the measurement errors, so that random noise affects the overall geometry of the block only slightly. The individual observations are generated to reflect the general situation of bundle block adjustment, and the simulation allows the adjustment to be exercised on a variety of geometric problems and conditions. A spline is derived from three ground control points, (3232, 4261, 18), (3335, 4343, 52), and (3373, 4387, 34), shown in figure 5.3.

[Figure 5.3: Natural cubic spline]

In this chapter, several factors which affect the estimates of the exterior orientation parameters, spline parameters, and spline location parameters are examined using the proposed bundle block adjustment model with both the simulated image block and a real image block.
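For concreteness, the spline through these three control points can be reproduced with an off-the-shelf natural cubic spline routine. The sketch below is an illustration, not the dissertation's own implementation: the uniform knot spacing is an assumption, since the knot choice is left to the operator. It also evaluates a segment's arc length with the Simpson-rule approximation used throughout this work:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Three ground control points from the simulation (X, Y, Z).
pts = np.array([[3232.0, 4261.0, 18.0],
                [3335.0, 4343.0, 52.0],
                [3373.0, 4387.0, 34.0]])

# Uniform parameterization t in [0, 1] is assumed here.
t_knots = np.linspace(0.0, 1.0, len(pts))
spline = CubicSpline(t_knots, pts, bc_type='natural')  # natural end conditions
dspline = spline.derivative()

def simpson_arc_length(t1: float, t2: float) -> float:
    """Arc length over [t1, t2] by Simpson's rule:
    Arc = (t2 - t1)/6 * [f(t1) + 4 f((t1 + t2)/2) + f(t2)], f(t) = ||C'(t)||."""
    f = lambda t: np.linalg.norm(dspline(t))
    return (t2 - t1) / 6.0 * (f(t1) + 4.0 * f(0.5 * (t1 + t2)) + f(t2))

print(simpson_arc_length(0.0, 0.5))  # approximate length of the first segment
```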


5.2 Experiments with error free EOPs

Since the spline parameters and spline location parameters are mutually dependent, the unknowns are obtained from the combined model of the extended collinearity equations and the arc-length parameterization equations of the splines. Splines in the object space are treated as tie lines, in the same fashion as tie points in conventional bundle block adjustment. Information about the exterior orientation parameters serves as control information in this experiment. It is a well-known property of least squares systems that initial estimates close to the true values make the system converge to the correct solution quickly and reliably. The equation system of the integrated model is

$$
\begin{bmatrix} A^{ki}_{SP} & A^{ki}_{t} \\ \multicolumn{2}{c}{A^{ki}_{AL}} \end{bmatrix}
\begin{bmatrix} \xi^{ki}_{SP} \\ \xi^{i}_{t} \end{bmatrix}
= \left[\, y^{ki} \,\right] \tag{5.1}
$$

where

$$
A^{ki}_{SP} =
\begin{bmatrix}
M^{1}_{i7} & M^{1}_{i8} & \cdots & M^{1}_{i18} \\
\vdots & & & \vdots \\
M^{m}_{i7} & M^{m}_{i8} & \cdots & M^{m}_{i18} \\
N^{1}_{i7} & N^{1}_{i8} & \cdots & N^{1}_{i18} \\
\vdots & & & \vdots \\
N^{m}_{i7} & N^{m}_{i8} & \cdots & N^{m}_{i18}
\end{bmatrix},
\qquad
A^{ki}_{t} =
\begin{bmatrix}
M^{1}_{i19} & M^{1}_{i20} & \cdots & M^{1}_{i18+n} \\
\vdots & & & \vdots \\
M^{m}_{i19} & M^{m}_{i20} & \cdots & M^{m}_{i18+n} \\
N^{1}_{i19} & N^{1}_{i20} & \cdots & N^{1}_{i18+n} \\
\vdots & & & \vdots \\
N^{m}_{i19} & N^{m}_{i20} & \cdots & N^{m}_{i18+n}
\end{bmatrix},
$$

$$
A^{ki}_{AL} =
\begin{bmatrix}
A^{k1}_{i1} & A^{k1}_{i2} & \cdots & A^{k1}_{i20} \\
\vdots & & & \vdots \\
A^{km}_{i1} & A^{km}_{i2} & \cdots & A^{km}_{i20}
\end{bmatrix},
$$

$$
\xi^{ki}_{SP} = \begin{bmatrix} da_{i0} & da_{i1} & da_{i2} & da_{i3} & db_{i0} & db_{i1} & db_{i2} & db_{i3} & dc_{i0} & dc_{i1} & dc_{i2} & dc_{i3} \end{bmatrix}^{T},
$$

$$
\xi^{i}_{t} = \begin{bmatrix} dt_{i1} & dt_{i2} & \cdots & dt_{in} \end{bmatrix}^{T},
$$

$$
y^{ki} = \begin{bmatrix} x^{k}_{ip} + f\,\dfrac{u^{0}}{w^{0}} & \;\; y^{k}_{ip} + f\,\dfrac{v^{0}}{w^{0}} & \;\; Arc(t)^{k}_{ip} - Arc(t)^{0} \end{bmatrix}^{T},
$$

with

$$
Arc(t)^{0} = \frac{t^{0}_{2} - t^{0}_{1}}{6}\left[ f^{0}(t^{0}_{1}) + 4 f^{0}\!\left(\frac{t^{0}_{1}+t^{0}_{2}}{2}\right) + f^{0}(t^{0}_{2}) \right],
$$

m the number of images, n the number of points on a spline segment, k the kth image and i the ith spline segment. The partial derivatives behind the symbolic representations (M, N, A) of the extended collinearity model are given in appendix A.

The estimated unknown parameters are obtained by the least squares solution which minimizes the sum of squared deviations. The integrated and linearized Gauss-Markov model is

$$
y^{ki} = A_{IM}\,\xi_{IM} + e, \qquad
A_{IM} = \begin{bmatrix} A^{ki}_{SP} & A^{ki}_{t} \\ \multicolumn{2}{c}{A^{ki}_{AL}} \end{bmatrix}, \qquad
\xi_{IM} = \begin{bmatrix} \xi^{ki}_{SP} & \xi^{ki}_{t} \end{bmatrix}^{T},
$$

$$
\hat{\xi}_{IM} = (A^{T}_{IM} P A_{IM})^{-1} A^{T}_{IM} P\, y^{ki}, \qquad
D(\hat{\xi}_{IM}) = \sigma^{2}_{o}\,(A^{T}_{IM} P A_{IM})^{-1}, \tag{5.2}
$$

with e ∼ (0, σ²ₒP⁻¹) the error vector with zero mean and cofactor matrix P⁻¹, σ²ₒ the variance component (which may or may not be known), ξ̂_IM the least squares estimated parameter vector, and D(ξ̂_IM) its dispersion matrix. In all experiments, normally distributed random noise with zero mean and standard deviation σ = ±5 µm is added to the points in the image space coordinate system.
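The estimate in (5.2) is a standard weighted least squares step; in the iterated, linearized setting, A_IM is the Jacobian evaluated at the current approximations and y^ki the misclosure vector. A minimal numpy sketch of one such step (illustrative only; the function name and array shapes are assumptions, not the dissertation's software):

```python
import numpy as np

def gauss_markov_step(A: np.ndarray, y: np.ndarray, P: np.ndarray,
                      sigma0_sq: float = 1.0):
    """One weighted least squares step for the linearized model
    y = A @ xi + e,  e ~ (0, sigma0_sq * inv(P)).

    Returns xi_hat = inv(A'PA) A'P y and D(xi_hat) = sigma0_sq * inv(A'PA).
    """
    N = A.T @ P @ A                    # normal-equation matrix
    xi_hat = np.linalg.solve(N, A.T @ P @ y)
    D = sigma0_sq * np.linalg.inv(N)   # dispersion of the estimates
    return xi_hat, D
```

In the iterated scheme, the corrections xi_hat are added to the current parameter approximations and the Jacobian and misclosures are re-evaluated until the corrections become negligible.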


Generally, the larger the noise level, the more accurate the initial approximations must be to achieve convergence. The worst case for the estimation is a noise level so large that the proposed model no longer converges to specific estimates, since the convergence radius is proportional to the noise level. The parameter estimation is sensitive to the noise of the image measurements, and the propagation of this observation noise is one of the most important elements of the estimation theory. The proposed bundle block adjustment can be evaluated statistically using the variances and covariances of the parameters: a small variance indicates that an estimate is tightly constrained, while a large variance indicates that it is not properly determined. The parameter variance ranges from zero, for error-free parameters, to infinity, for completely unknown parameters. The result for one spline segment is given in table 5.4, with ξ0 the initial values and ξ̂ the estimates. The estimated spline and spline location parameters, along with their standard deviations, are obtained without knowledge of the point-to-point correspondence.

If no random noise is added to the image points, the estimates converge to the true values. The quality of the initial estimates is important in a least squares system, since it determines the number of iterations and the accuracy of the convergence. Under the assumption that two points on one spline segment are measured in each image, the total number of equations is 2 × 6 (images) × 2 (points) + 6 (arc-length) = 30 and the total number of unknowns is 12 (spline parameters) + 12 (spline location parameters) = 24. The redundancy (the number of equations minus the number of parameters), i.e. the degrees of freedom, is 6. While some geometric constraints, such as slope and distance observations, are dependent on the extended collinearity equations using splines, other constraints, such as slope and arc-length, add non-redundant information to the adjustment and reduce the overall rank deficiency of the system.

    Spline location parameters
          Image 1           Image 2           Image 3
          t1       t7       t2       t8       t3       t9
    ξ0    0.02     0.33     0.09     0.41     0.16     0.47
    ξ̂    0.0415   0.3651   0.0917   0.4158   0.1412   0.4617
    ±     0.0046   0.0016   0.0017   0.0032   0.0043   0.0135
          Image 4           Image 5           Image 6
          t4       t10      t5       t11      t6       t12
    ξ0    0.18     0.51     0.28     0.52     0.33     0.57
    ξ̂    0.2174   0.4974   0.2647   0.5472   0.3133   0.6157
    ±     0.0098   0.0079   0.0817   0.0317   0.0127   0.1115

    Spline parameters
          a10        a11      a12       a13      b10        b11
    ξ0    3322.17    72.16    -45.14    27.15    4377.33    69.91
    ξ̂    3335.0080  70.4660  -48.8529  16.5634  4343.0712  63.0211
    ±     0.0004     0.0585   0.8310    1.2083   0.0004     0.0258
          b12        b13      c10       c11      c12        c13
    ξ0    -17.49     13.68    48.82     10.15    -27.63     21.90
    ξ̂    -28.7770   9.8893   51.9897   8.1009   -39.3702   13.3904
    ±     0.2193     0.2067   0.0006    0.0589   0.7139     1.0103

    Table 5.4: Spline parameter and spline location parameter recovery

The coplanarity approach is another mathematical model of the perspective relationship between image and object space features: the projection plane defined by the perspective center in the image space must be identical to the plane containing the straight line in the object space. Since the coplanarity condition holds only for straight lines, the coplanarity approach cannot be extended to curves.

Object space knowledge about the starting point of a spline can also be introduced into bundle block adjustment. Since this control information covers only three of the twelve unknown parameters of a spline, a spline with a known starting point is called a partial control spline. The three spline parameters of the starting point are set as stochastic constraints, and the result is given in table 5.5. The total number of equations is 2 × 6 (images) × 2 (points) + 6 (arc-length) = 30 and the total number of unknowns is 9 (partial spline parameters) + 12 (spline location parameters) = 21, so the redundancy is 9. Convergence of the partial spline and spline location parameters has been achieved with a partial control spline.

    Spline location parameters
          Image 1           Image 2           Image 3
          t1       t7       t2       t8       t3       t9
    ξ0    0.04     0.36     0.09     0.40     0.14     0.45
    ξ̂    0.0525   0.3547   0.1128   0.4158   0.1575   0.4543
    ±     0.0067   0.0020   0.0047   0.0091   0.0028   0.0083
          Image 4           Image 5           Image 6
          t4       t10      t5       t11      t6       t12
    ξ0    0.21     0.50     0.27     0.54     0.31     0.61
    ξ̂    0.1916   0.5128   0.2563   0.5319   0.2961   0.6239
    ±     0.0037   0.0087   0.0044   0.0056   0.0139   0.1115

    Spline parameters
          a11      a12       a13       b11      b12       b13
    ξ0    75.14    -52.87    30.71     70.05    -40.33    10.98
    ξ̂    71.7099  -47.2220  -15.8814  62.3703  -28.7260  7.1137
    ±     0.0795   0.6872    2.6439    0.0579   0.6473    1.7699
          c11      c12       c13
    ξ0    0.82     -30.72    10.51
    ξ̂    7.1198   -35.3841  8.1557
    ±     0.9483   1.3403    3.5852

    Table 5.5: Partial spline parameter and spline location parameter recovery

In the next experiment, the spline location parameters are estimated with known EOPs and a full control spline. Since the spline parameters and spline location parameters are mutually dependent, the unknowns are obtained from an observation-equation model with stochastic constraints; here the spline parameters are set as stochastic constraints, and the result is given in table 5.6. The total number of equations is 2 × 6 (images) × 3 (points) = 36 and the total number of unknowns is 18 (spline location parameters), so the redundancy is 18. Since the spline location parameters are independent of each other, the arc-length parameterization is not required.

    Spline location parameters
          Image 1                    Image 2
          t1       t7       t13      t2       t8       t14
    ξ0    0.01     0.37     0.63     0.09     0.44     0.71
    ξ̂    0.0589   0.3570   0.6712   0.1134   0.4175   0.7069
    ±     0.0015   0.0076   0.0197   0.0072   0.0054   0.0080
          Image 3                    Image 4
          t3       t9       t15      t4       t10      t16
    ξ0    0.17     0.46     0.74     0.21     0.49     0.81
    ξ̂    0.1757   0.4784   0.7631   0.2039   0.4869   0.8122
    ±     0.0031   0.0071   0.0095   0.0102   0.0030   0.0044
          Image 5                    Image 6
          t5       t11      t17      t6       t12      t18
    ξ0    0.26     0.53     0.84     0.29     0.61     0.89
    ξ̂    0.2544   0.5554   0.8597   0.3151   0.6284   0.9013
    ±     0.0050   0.0069   0.0089   0.0095   0.0052   0.0086

    Table 5.6: Spline location parameter recovery

The result shows that convergence of the spline location parameters is achieved with the spline parameters held fixed as stochastic constraints. The proposed model is robust with respect to the initial approximations of the spline parameters, and the uncertainty of the representation of the natural cubic spline is described by the dispersion matrix.
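The standard deviations reported in the tables and the correlations discussed in the next section are read directly off this dispersion matrix; a small helper (an illustrative sketch, not from the dissertation) makes the extraction explicit:

```python
import numpy as np

def std_and_correlation(D: np.ndarray):
    """Standard deviations and correlation matrix from a dispersion
    (variance-covariance) matrix: sigma_i = sqrt(D_ii),
    rho_ij = D_ij / (sigma_i * sigma_j)."""
    sigma = np.sqrt(np.diag(D))
    rho = D / np.outer(sigma, sigma)
    return sigma, rho
```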


5.3 Recovery of EOPs and spline parameters

Object space knowledge about splines can be used to recover the exterior orientation parameters in bundle block adjustment. A full control spline and a partial control spline are applied to verify the feasibility of spline control information. In both cases, the arc-length parameterization equations are not necessary if enough equations are available to solve the system, since the spline parameters are independent of each other. In the experiment with a full control spline, the total number of equations is 2 × 6 (images) × 4 (points) + 3 (arc-length) × 6 (images) = 66 and the total number of unknowns is 36 (EOPs) + 24 (spline location parameters) = 60. The redundancy is 6.

In the case of the partial control spline with one spline segment, the total number of equations is 2 × 6 (images) × 4 (points) + 3 (arc-length) × 6 (images) = 66, while the total number of unknowns is 36 (EOPs) + 9 (partial spline parameters) + 24 (spline location parameters) = 69. Thus one more segment is required to solve the otherwise underdetermined system. With two spline segments, the total number of equations is 2 × 6 (images) × 4 (points) × 2 (segments) + 3 (arc-length) × 6 (images) × 2 (segments) = 132 and the total number of unknowns is 36 (EOPs) + 9 (partial spline parameters) × 2 (segments) + 24 (spline location parameters) × 2 (segments) = 102, so the redundancy is 30. Convergence of the EOPs of the image block and of the spline parameters has been achieved in both experiments.
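The bookkeeping behind these counts is mechanical. The helper below is a sketch written for this text (the generalized signature is my own abstraction); it reproduces the three configurations just described:

```python
def budget(m: int, pts: int, segs: int, spline_unknowns_per_seg: int,
           arc_per_image_seg: int = 3, eops_unknown: bool = True):
    """Equation/unknown/redundancy counts for one block configuration.

    m: images; pts: measured points per segment per image; segs: segments;
    spline_unknowns_per_seg: 0 (full control), 9 (partial control), 12 (tie).
    """
    eqs = 2 * m * pts * segs + arc_per_image_seg * m * segs
    unk = (6 * m if eops_unknown else 0) \
        + spline_unknowns_per_seg * segs + pts * m * segs
    return eqs, unk, eqs - unk

print(budget(6, 4, 1, 0))   # full control spline:         (66, 60, 6)
print(budget(6, 4, 1, 9))   # partial control, 1 segment:  (66, 69, -3)
print(budget(6, 4, 2, 9))   # partial control, 2 segments: (132, 102, 30)
```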


    EOPs
    Parameter      X_c [m]    Y_c [m]    Z_c [m]    ω [deg]   ϕ [deg]   κ [deg]
    Image 1  ξ0    3007.84    4001.17    501.81     8.7090    -9.7976   -12.5845
             ξ̂    3001.5852  4001.2238  503.2550   -0.8909   0.3252    6.0148
             ±     0.0154     0.0215     0.1386     0.3895    0.1351    0.8142
    Image 2  ξ0    3308.17    4001.17    497.52     10.2300   8.3144    -5.5004
             ξ̂    3305.1962  4004.9827  501.2641   -0.1247   -0.5497   -5.2858
             ±     0.3804     0.1785     0.2489     0.0308    0.0798    0.4690
    Image 3  ξ0    3612.68    3993.37    506.32     5.2731    7.2581    -10.135
             ξ̂    3611.8996  3995.7891  505.1299   0.1486    0.1192    2.3372
             ±     0.1226     0.0695     0.0337     0.4467    0.0168    0.0794
    Image 4  ξ0    3619.75    4612.78    506.88     6.2571    -5.3482   183.6600
             ξ̂    3612.7128  4613.0145  507.0654   -0.0921   -0.152    184.5016
             ±     0.0258     0.1895     0.0251     0.7485    0.4505    0.2289
    Image 5  ξ0    3301.84    4618.63    497.61     -6.1731   7.5182    187.7145
             ξ̂    3302.8942  4617.0538  492.9424   -0.6347   0.2662    171.9808
             ±     0.0467     0.0817     0.0704     0.1413    0.8006    0.6445
    Image 6  ξ0    2999.59    4615.74    508.49     -7.1651   -4.8427   185.1057
             ξ̂    2997.9827  4610.1432  509.2952   -0.1360   -0.1279   183.1789
             ±     0.0513     0.0249     0.0401     0.5659    0.6225    0.2271

    Spline location parameters
          Image 1                             Image 2
          t1       t7       t13      t19      t2       t8       t14      t20
    ξ0    0.04     0.28     0.52     0.76     0.08     0.32     0.56     0.80
    ξ̂    0.0432   0.2980   0.5176   0.7705   0.0813   0.3338   0.5715   0.8136
    ±     0.0033   0.0012   0.0039   0.0077   0.0082   0.0041   0.0039   0.0069
          Image 3                             Image 4
          t3       t9       t15      t21      t4       t10      t16      t22
    ξ0    0.12     0.36     0.60     0.84     0.16     0.40     0.64     0.88
    ξ̂    0.1294   0.3578   0.6024   0.8437   0.1594   0.4112   0.6418   0.9783
    ±     0.0036   0.0092   0.0046   0.0079   0.0115   0.0057   0.0029   0.0037
          Image 5                             Image 6
          t5       t11      t17      t23      t6       t12      t18      t24
    ξ0    0.20     0.44     0.68     0.92     0.24     0.48     0.72     0.96
    ξ̂    0.2039   0.4461   0.6713   0.9264   0.2483   0.4860   0.7181   0.9613
    ±     0.0057   0.0125   0.0080   0.0061   0.0085   0.0073   0.0084   0.0079

    Table 5.7: EOP and spline location parameter recovery


Table 5.7 shows the convergence of the EOPs and spline location parameters. In the dispersion matrix, the correlation coefficient between X_c and ϕ is high (ρ ≈ 1); these two parameters are the most strongly correlated among the EOPs. The correlation coefficient between Y_c and ω is approximately 0.85. Lee [39] reported that X_c and ϕ, and Y_c and ω, are highly correlated in line camera orientation parameters, and that in general the correlation between X_c and ϕ is higher than that between Y_c and ω. In addition, Lee demonstrated that, for a line camera, image coordinates in the x direction are a function of X_c, Z_c, ω, ϕ and κ, while image coordinates in the y direction are a function of Y_c, Z_c, ω, ϕ and κ.

Since a control spline provides the object space information for a coordinate system with seven datum defects, tie spline parameters and EOPs can be recovered simultaneously. In the experiment with combined splines, the total number of equations is 2 × 6 (images) × 3 (points) × 2 (splines) + 12 (arc-length) × 2 (splines) = 96 and the total number of unknowns is 36 (EOPs) + 12 (tie spline parameters) + 18 (tie spline location parameters) + 18 (control spline location parameters) = 84. The object space information for the spline referred to as a full control spline is available prior to aerial triangulation; the control spline is therefore treated as a stochastic constraint in the proposed adjustment model, and its representation is the same as that of a tie spline. The result of the combined-splines experiment, which demonstrates the feasibility of tie splines and control splines for bundle block adjustment, is given in table 5.8.


    EOPs
    Parameter      X_c [m]    Y_c [m]    Z_c [m]    ω [deg]   ϕ [deg]   κ [deg]
    Image 1  ξ0    3010.87    4007.18    500.79     0.9740    -8.6517   7.2155
             ξ̂    3000.5917  4001.8935  503.2451   -0.0974   0.4297    6.6005
             ±     0.0011     0.0059     0.1572     0.1432    0.0974    0.2807
    Image 2  ξ0    3315.37    4008.57    503.31     -8.4225   -3.3232   7.2766
             ξ̂    3305.1237  4005.0571  498.8916   -0.5214   -0.1948   -6.1421
             ±     0.0057     0.0043     0.0784     0.3610    0.1375    0.5558
    Image 3  ξ0    3613.85    3991.17    508.37     -1.3751   5.3783    4.3148
             ξ̂    3609.5400  3995.1419  505.1791   4.5378    1.1746    2.2288
             ±     0.1576     0.0803     0.0428     5.4947    0.3610    0.4870
    Image 4  ξ0    3618.46    4617.61    503.18     8.5541    2.4287    182.7735
             ξ̂    3613.1988  4612.8281  507.2056   1.1803    -0.4068   185.7014
             ±     0.0599     0.0206     0.0472     0.2578    0.2979    0.1089
    Image 5  ξ0    3305.71    4620.37    491.17     -8.7148   -5.1487   183.1114
             ξ̂    3302.9716  4617.0808  492.9357   -0.6990   1.0485    172.8671
             ±     0.0718     0.0592     0.0660     0.1087    0.1437    0.2137
    Image 6  ξ0    3002.72    4613.63    491.22     8.5475    5.0124    178.2353
             ξ̂    2996.9737  4610.8773  509.3724   -3.2888   0.5672    182.2693
             ±     0.0315     0.0672     0.0027     0.0688    0.3837    0.2478

    Control spline location parameters
          Image 1                    Image 2                    Image 3
          t1       t7       t13      t2       t8       t14      t3       t9       t15
    ξ0    0.04     0.36     0.66     0.11     0.41     0.71     0.16     0.46     0.76
    ξ̂    0.0597   0.3495   0.6518   0.0982   0.4085   0.7087   0.1494   0.4499   0.7564
    ±     0.0173   0.0085   0.0065   0.0074   0.0096   0.0067   0.0094   0.0089   0.0156
          Image 4                    Image 5                    Image 6
          t4       t10      t16      t5       t11      t17      t6       t12      t18
    ξ0    0.19     0.51     0.79     0.24     0.54     0.86     0.29     0.59     0.91
    ξ̂    0.2018   0.4984   0.8065   0.2573   0.5586   0.8553   0.3172   0.6137   0.8958
    ±     0.0043   0.0078   0.0096   0.0086   0.0068   0.0110   0.0088   0.0078   0.0085

    Tie spline location parameters
          Image 1                    Image 2                    Image 3
          t1       t7       t13      t2       t8       t14      t3       t9       t15
    ξ0    0.03     0.34     0.67     0.09     0.39     0.71     0.14     0.47     0.73
    ξ̂    0.0680   0.3577   0.6694   0.1141   0.4032   0.6937   0.1495   0.4599   0.7618
    ±     0.0073   0.0067   0.0033   0.0116   0.0073   0.0054   0.0124   0.0075   0.0054
          Image 4                    Image 5                    Image 6
          t4       t10      t16      t5       t11      t17      t6       t12      t18
    ξ0    0.21     0.49     0.81     0.26     0.56     0.83     0.31     0.58     0.92
    ξ̂    0.1975   0.5109   0.8068   0.2488   0.5733   0.8527   0.3308   0.6142   0.9018
    ±     0.0026   0.0019   0.0216   0.0773   0.0027   0.0138   0.0034   0.0115   0.0317

    Tie spline parameters
          a10        a11      a12       a13       b10        b11
    ξ0    3341.44    73.13    -48.32    -20.72    4337.49    56.97
    ξ̂    3335.0147  71.2914  -47.5124  -14.8527  4342.0369  62.4762
    ±     0.0012     0.0478   0.7959    1.8668    0.0009     0.0804
          b12        b13      c10       c11       c12        c13
    ξ0    -36.55     2.57     44.16     3.65      -28.22     7.99
    ξ̂    -28.0982   6.8679   51.5228   6.8220    -36.9681   13.0338
    ±     0.4851     1.4219   0.0008    0.0421    1.9215     2.0048

    Table 5.8: EOP, control and tie spline parameter recovery


Iterating with an incorrect spline segment, one whose image spline does not lie on the projection of the corresponding 3D spline in the object space, results in divergence of the system. A control spline is assumed to be error-free, but in reality this assumption does not hold: the accuracy of the control splines propagates into the proposed bundle block adjustment algorithm, since source data such as GIS databases, maps or orthophotos cannot be free of error.

5.4 Tests with real data

In this section, experiments with real data are carried out to verify the feasibility of the proposed bundle block adjustment algorithm using splines for the recovery of EOPs and spline parameters.

Medium scale aerial images covering the area of Jakobshavn Isbrae in West Greenland are employed for this study. The aerial photographs were obtained by Kort and Matrikelstyrelsen (KMS: Danish National Survey and Cadastre) in 1985. KMS established aerial triangulation using GPS ground control points with ±1 pixel RMS (root mean square) error under favorable circumstances, and the images were oriented to the WGS84 reference frame. Technical information on the aerial images is given in table 5.9. The diapositive films were scanned with the RasterMaster photogrammetric precision scanner, which has a maximum image resolution of 12 µm and a scan dimension of 23 cm × 23 cm, to obtain digital images for the softcopy workstation; the test images are shown in figure 5.4.

[Figure 5.4: Test images. (a) Image 762, (b) Image 764, (c) Image 766, (d) target area]

    Vertical aerial photograph
    Date                         9 Jul 1985
    Origin                       KMS
    Focal length                 87.75 mm
    Photo scale                  1:150,000
    Pixel size                   12 µm
    Scanning resolution          12 µm
    Ground sampling distance     1.9 m

    Table 5.9: Information about aerial images used in this study

The first experiment is the recovery of spline parameters with known EOPs, obtained by manual operation on the softcopy workstation. The spline consists of four parts, and the parameters of the second segment are recovered. The total number of equations is 2 × 3 (images) × 3 (points) + 2 (arc-length) × 3 (images) = 24 and the total number of unknowns is 12 (spline parameters) + 9 (spline location parameters) = 21, so the redundancy is 3. Table 5.10 shows the convergence of the spline and spline location parameters.

Estimation of the spline parameters, including the location parameters, is established through the relationship between splines in the object space and their projections in the image space, without knowledge of the point-to-point correspondence. Since bundle block adjustment using splines does not require conjugate points generated from point-to-point correspondences, a more robust and flexible matching algorithm can be adopted. Table 5.11 presents the result when the object space information without point-to-point correspondence (the full control spline) is available. However, as mentioned in chapter 3, the correspondence between an image location and the object spline segment is not established.


    Spline location parameters
          Image 762                  Image 764
          t1       t4       t7       t2       t5       t8
    ξ0    0.08     0.38     0.72     0.22     0.53     0.82
    ξ̂    0.0844   0.4258   0.6934   0.2224   0.5170   0.8272
    ±     0.0046   0.0058   0.0072   0.0175   0.0104   0.0156
          Image 766
          t3       t6       t9
    ξ0    0.32     0.59     0.88
    ξ̂    0.3075   0.6176   0.9158
    ±     0.0097   0.0148   0.0080

    Spline parameters
          a10          a11       a12        a13
    ξ0    535000.00    830.00    -150.00    50.00
    ξ̂    535394.1732  867.6307  -173.1357  24.3213
    ±     0.1273       0.7142    7.6540     21.3379
          b10           b11       b12       b13
    ξ0    7671000.00    150.00    140.00    -300.00
    ξ̂    7672048.3173  143.1734  130.8147  -290.1270
    ±     0.2237        1.6149    10.9058   26.7324
          c10      c11      c12       c13
    ξ0    0.00     -10.00   -50.00    50.00
    ξ̂    2.1913   -3.7669  -39.8003  27.7922
    ±     0.0547   0.1576   9.1572    19.6787

    Table 5.10: Spline parameter and spline location parameter recovery


All locations are assumed to lie on the second spline segment, and the second spline segment measured on the softcopy workstation is used as control information.

    Spline location parameters
          Image 762         Image 764         Image 766
          t1       t4       t2       t5       t3       t6
    ξ0    0.15     0.60     0.30     0.75     0.45     0.90
    ξ̂    0.1647   0.6177   0.2872   0.7481   0.4362   0.9249
    ±     0.0084   0.0091   0.0034   0.0093   0.0155   0.0087

    Table 5.11: Spline location parameter recovery

The next experiment is the recovery of EOPs with a control spline. The spline control points are (534415.91, 7671993.05, -18.97), (535394.52, 7672045.02, 2.127), (536110.66, 7672024.29, -13.897), and (536654.04, 7671016.20, -2.51). Even though edge detectors are common in digital photogrammetry and remote sensing software, the control points were extracted manually, since edge detection is not our main goal. Among the three segments, the second spline segment is used for the EOP recovery. The control spline information was obtained by manual operation on the softcopy workstation with an estimated accuracy of ±1 pixel; the convergence radius of the proposed iterative algorithm is proportional to this accuracy level. The image coordinate system is converted into the photo coordinate system using the interior orientation parameters from KMS. The association between a point on the 3D spline segment and a point in a 2D image is not established in this study. Of course, 3D spline measurement in the stereo model on the softcopy workstation cannot be error-free, so the accuracy of the control spline propagates into the recovery of the EOPs. The result is given in table 5.12.

    EOPs
    Image        X_c [m]    Y_c [m]     Z_c [m]   ω [deg]   ϕ [deg]   κ [deg]
    762    ξ0    547000.00  7659000.00  14000.00  3.8471    2.1248    91.8101
           ξ̂    547465.37  7658235.41  13700.25  0.3622    0.5124    91.5124
           ±     15.0911    13.0278     5.4714    0.8148    0.1784    0.1717
    764    ξ0    546500.00  7670000.00  13500.00  0.1125    0.6128    90.7015
           ξ̂    546963.22  7672016.87  13708.82  -0.3258   -0.5217   91.1612
           ±     12.5460    17.1472     7.1872    0.6913    0.8632    1.1004
    766    ξ0    546000.00  7680000     13700     1.4871    5.9052    92.0975
           ξ̂    546547.58  7685836.75  13712.20  1.2785    0.5469    92.9796
           ±     13.8104    12.1486     8.4854    1.4218    1.1957    0.6557

    Spline location parameters
          Image 762
          t1       t4       t7       t10
    ξ0    0.08     0.32     0.56     0.80
    ξ̂    0.0865   0.3192   0.5701   0.8167
    ±     0.0097   0.0159   0.0072   0.0088
          Image 764
          t2       t5       t8       t11
    ξ0    0.16     0.40     0.64     0.88
    ξ̂    0.1759   0.4167   0.6557   0.8685
    ±     0.0067   0.0085   0.0131   0.0092
          Image 766
          t3       t6       t9       t12
    ξ0    0.24     0.48     0.72     0.96
    ξ̂    0.2471   0.4683   0.7251   0.9713
    ±     0.0086   0.0069   0.0141   0.0089

    Table 5.12: EOP and spline location parameter recovery

The control information about the spline is utilized as stochastic constraints in the adjustment model. Since adding stochastic constraints removes the rank deficiency of the Gauss-Markov model associated with the spline parameters, which are dependent on the spline location parameters, bundle block adjustment can be implemented using only the extended collinearity equations for natural cubic splines.
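One standard way to realize such stochastic constraints is to append weighted pseudo-observations (direct observations of the constrained parameters) to the linearized system. The sketch below is an illustration under that formulation, with names and shapes of my own choosing; in the experiments above, the constrained indices would be those of the control spline coefficients:

```python
import numpy as np

def augment_with_constraints(A, y, P, idx, prior_values, current, weights):
    """Add stochastic constraints xi[idx] ~ prior_values as weighted
    pseudo-observations.

    A, y, P      : linearized design matrix, misclosures, observation weights
    idx          : indices of the constrained parameters
    prior_values : their a-priori values; current: current approximations
    weights      : diagonal weights (1/sigma^2 of the prior information)
    """
    k, n = len(idx), A.shape[1]
    C = np.zeros((k, n))
    C[np.arange(k), idx] = 1.0                 # one identity row per constraint
    y_c = np.asarray(prior_values) - np.asarray(current)  # prior misclosure
    A_aug = np.vstack([A, C])
    y_aug = np.concatenate([y, y_c])
    P_aug = np.block([[P, np.zeros((len(y), k))],
                      [np.zeros((k, len(y))), np.diag(weights)]])
    return A_aug, y_aug, P_aug
```

Tightening the constraint weights toward infinity approaches the fixed-parameter (full control) case, while finite weights reflect the ±1 pixel accuracy of the manually measured control spline.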


CHAPTER 6

CONCLUSIONS AND FUTURE WORK

In this work, the traditional least squares formulation of bundle block adjustment has been augmented to support splines instead of conventional point features. Estimation of the EOPs and the spline parameters, including the location parameters, is established through the relationship between splines in the object space and their projections in the image space, without knowledge of the point-to-point correspondence. Since bundle block adjustment using splines does not require conjugate points generated from point-to-point correspondences, a more robust and flexible matching algorithm can be adopted. Point-based aerial triangulation with experienced human operators works well in traditional photogrammetric practice, but not in the autonomous environment of digital photogrammetry. Feature-based aerial triangulation is more suitable for developing robust and accurate techniques for automation, and if linear features are employed as control features, they provide advantages over point features for the automation of aerial triangulation. Point-based aerial triangulation, relying on the manual measurement and identification of conjugate points, is less reliable than feature-based aerial triangulation because of the limitations of visibility (occlusion), ambiguity (repetitive patterns), and semantic information with respect to robust automation. Automation of aerial triangulation and of pose estimation is hindered by the correspondence problem, and employing splines is one way to overcome the occlusion and ambiguity problems, since the manual identification of corresponding entities in two images is a major obstacle to the automation of photogrammetric tasks. Another problem of point-based approaches is their weak geometric constraints compared to feature-based methods, so accurate initial values for the unknown parameters are required. Feature-based aerial triangulation can be implemented without conjugate points, since the measured points in each image are not conjugate points in the proposed adjustment model; thus a tie spline which does not appear in all overlapping images together can still be employed. A further advantage of employing splines is that adopting high-level features increases the usable geometric information and provides an analytical and suitable way to increase the redundancy of aerial triangulation.

3D linear features expressed as 3D natural cubic splines are employed as the mathematical model of linear features in the object space and of their counterparts in the projected image space for bundle block adjustment. To resolve the over-parameterization of 3D natural cubic splines, an arc-length parameterization using Simpson's rule is developed, and in the case of straight lines and conic sections, the tangents of the spline provide additional equations for the over-parameterized system. Photogrammetric triangulation with the proposed model, comprising the extended collinearity equations and the arc-length parameterization equation, demonstrates the feasibility of tie splines and control splines for the estimation of the exterior orientation of multiple images together with the spline and spline location parameters.

Useful stochastic constraints for a spline segment were examined for the full and partial control cases: known EOPs with a tie, partial control, or full control spline, and unknown EOPs with a partial or full control spline. In addition, the information content of an image spline was calculated, and the feasibility of tie splines and control splines for a block adjustment was described.

A simulation of bundle block adjustment was carried out prior to the actual experiment with real data to evaluate the performance of the proposed algorithms. A simulation can control the measurement errors, so that random noise affects the overall geometry of the block only slightly. The individual observations were generated to reflect the general situation of bundle block adjustment, and the simulation allowed the adjustment to be tested on various geometric problems and conditions.

The future research topics can be summarized as:

• In the proposed bundle block adjustment algorithm with 3D natural cubic splines, splines have been employed as control features. Future work should include an extension of the proposed model to triangulation and other feature-based photogrammetric tasks using linear features with an arbitrary mathematical representation.

• More investigation is needed to find additional analytical observations or stochastic constraints to resolve the over-parameterization of the 3D natural cubic spline. Other constraints, such as higher-order derivatives of image space features, could add non-redundant information to bundle block adjustment and reduce the overall rank deficiency of the system.

• Future work should concentrate on elaborated testing of complete bundle block adjustment, including camera interior orientation parameters, with splines. Interior orientation defines the transformation to the 3D image coordinate system with respect to the camera's perspective center, while the pixel coordinate system is the reference system for a digital image, using the geometric relationship between the photo coordinate system and the instrument coordinate system. Since the camera calibration data provided with the aerial photography have been used here, the uncertainty of the interior orientation parameters was not considered in the bundle block adjustment.

• The proposed model can be extended from traditional frame aerial imagery to line scan imagery to recover orientation parameters for every scan line, since splines can provide independent observations for the EOPs of all scan lines. Because every scan line has its own orientation parameters, point features alone are not sufficient for the pose estimation.

• The automatic extraction of the most informative features in the image and object space, together with feature matching, can be considered in order to recover orientation parameters automatically. This would be the next logical step towards automatic bundle block adjustment using natural cubic splines.


APPENDIX A

DERIVATION OF THE PROPOSED MODEL

A.1 Derivation of extended collinearity equation

The partial derivatives of the symbolic representation of the extended collinearity model for curves are introduced as follows.

$$
\frac{\partial R^{T}}{\partial\omega} = R^{T}
\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{bmatrix}
$$

$$
\frac{\partial R^{T}}{\partial\varphi} =
\begin{bmatrix}
-\sin\varphi\cos\kappa & \sin\omega\cos\varphi\cos\kappa & -\cos\omega\cos\varphi\cos\kappa \\
\sin\varphi\sin\kappa & -\sin\omega\cos\varphi\sin\kappa & \cos\omega\cos\varphi\sin\kappa \\
\cos\varphi & \sin\omega\sin\varphi & -\cos\omega\sin\varphi
\end{bmatrix}
$$

$$
\frac{\partial R^{T}}{\partial\kappa} =
\begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} R^{T}
$$

$$
\begin{aligned}
M_{1} &= \frac{f}{w}r_{11} - \frac{fu}{w^{2}}r_{31}, \qquad
M_{2} = \frac{f}{w}r_{12} - \frac{fu}{w^{2}}r_{32}, \qquad
M_{3} = \frac{f}{w}r_{13} - \frac{fu}{w^{2}}r_{33},\\
M_{4} &= -\frac{f}{w}\{r_{12}(Z_i(t)-Z_C) - r_{13}(Y_i(t)-Y_C)\}
 + \frac{fu}{w^{2}}\{r_{32}(Z_i(t)-Z_C) - r_{33}(Y_i(t)-Y_C)\},\\
M_{5} &= -\frac{f}{w}\{s_{11}(X_i(t)-X_C) + s_{12}(Y_i(t)-Y_C) + s_{13}(Z_i(t)-Z_C)\}\\
&\quad + \frac{fu}{w^{2}}\{s_{31}(X_i(t)-X_C) + s_{32}(Y_i(t)-Y_C) + s_{33}(Z_i(t)-Z_C)\},\\
M_{6} &= -\frac{fv}{w},
\end{aligned}
$$

where the $s_{ij}$ are the elements of $\partial R^{T}/\partial\varphi$;

$$
\begin{aligned}
M_{7} &= -\frac{f}{w}r_{11} + \frac{fu}{w^{2}}r_{31}, & M_{8} &= M_{7}\,t, & M_{9} &= M_{7}\,t^{2}, & M_{10} &= M_{7}\,t^{3},\\
M_{11} &= -\frac{f}{w}r_{12} + \frac{fu}{w^{2}}r_{32}, & M_{12} &= M_{11}\,t, & M_{13} &= M_{11}\,t^{2}, & M_{14} &= M_{11}\,t^{3},\\
M_{15} &= -\frac{f}{w}r_{13} + \frac{fu}{w^{2}}r_{33}, & M_{16} &= M_{15}\,t, & M_{17} &= M_{15}\,t^{2}, & M_{18} &= M_{15}\,t^{3},
\end{aligned}
$$

$$
\begin{aligned}
M_{19} &= -\frac{f}{w}\{(a_{i1}+2a_{i2}t+3a_{i3}t^{2})r_{11} + (b_{i1}+2b_{i2}t+3b_{i3}t^{2})r_{12} + (c_{i1}+2c_{i2}t+3c_{i3}t^{2})r_{13}\}\\
&\quad + \frac{fu}{w^{2}}\{(a_{i1}+2a_{i2}t+3a_{i3}t^{2})r_{31} + (b_{i1}+2b_{i2}t+3b_{i3}t^{2})r_{32} + (c_{i1}+2c_{i2}t+3c_{i3}t^{2})r_{33}\}.
\end{aligned}
$$

The $N_{j}$ terms are structured identically, with the second image coordinate:

$$
\begin{aligned}
N_{1} &= \frac{f}{w}r_{21} - \frac{fv}{w^{2}}r_{31}, \qquad
N_{2} = \frac{f}{w}r_{22} - \frac{fv}{w^{2}}r_{32}, \qquad
N_{3} = \frac{f}{w}r_{23} - \frac{fv}{w^{2}}r_{33},\\
N_{4} &= -\frac{f}{w}\{r_{22}(Z_i(t)-Z_C) - r_{23}(Y_i(t)-Y_C)\}
 + \frac{fv}{w^{2}}\{r_{32}(Z_i(t)-Z_C) - r_{33}(Y_i(t)-Y_C)\},\\
N_{5} &= -\frac{f}{w}\{s_{21}(X_i(t)-X_C) + s_{22}(Y_i(t)-Y_C) + s_{23}(Z_i(t)-Z_C)\}\\
&\quad + \frac{fv}{w^{2}}\{s_{31}(X_i(t)-X_C) + s_{32}(Y_i(t)-Y_C) + s_{33}(Z_i(t)-Z_C)\},\\
N_{6} &= \frac{fu}{w},\\
N_{7} &= -\frac{f}{w}r_{21} + \frac{fv}{w^{2}}r_{31}, & N_{8} &= N_{7}\,t, & N_{9} &= N_{7}\,t^{2}, & N_{10} &= N_{7}\,t^{3},\\
N_{11} &= -\frac{f}{w}r_{22} + \frac{fv}{w^{2}}r_{32}, & N_{12} &= N_{11}\,t, & N_{13} &= N_{11}\,t^{2}, & N_{14} &= N_{11}\,t^{3},\\
N_{15} &= -\frac{f}{w}r_{23} + \frac{fv}{w^{2}}r_{33}, & N_{16} &= N_{15}\,t, & N_{17} &= N_{15}\,t^{2}, & N_{18} &= N_{15}\,t^{3},\\
N_{19} &= -\frac{f}{w}\{(a_{i1}+2a_{i2}t+3a_{i3}t^{2})r_{21} + (b_{i1}+2b_{i2}t+3b_{i3}t^{2})r_{22} + (c_{i1}+2c_{i2}t+3c_{i3}t^{2})r_{23}\}\\
&\quad + \frac{fv}{w^{2}}\{(a_{i1}+2a_{i2}t+3a_{i3}t^{2})r_{31} + (b_{i1}+2b_{i2}t+3b_{i3}t^{2})r_{32} + (c_{i1}+2c_{i2}t+3c_{i3}t^{2})r_{33}\}.
\end{aligned}
$$
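These rotation derivatives can be sanity-checked numerically. The sketch below assumes the factorization R = R_x(ω) R_y(ϕ) R_z(κ), which is the convention consistent with the ω-derivative above, and compares the analytic expression against a central finite difference:

```python
import numpy as np

def rot(omega, phi, kappa):
    """R = R_x(omega) @ R_y(phi) @ R_z(kappa); an assumed factorization,
    consistent with the omega-derivative given above."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

w, p, k = 0.002, 0.001, 0.1               # sample angles in radians
S = np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0]])
analytic = rot(w, p, k).T @ S             # dR^T/d(omega) = R^T S

h = 1e-7                                  # central finite difference
numeric = (rot(w + h, p, k).T - rot(w - h, p, k).T) / (2 * h)
print(np.allclose(analytic, numeric, atol=1e-6))  # True if convention matches
```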


A.2 Derivation of arc-length parameterization

The partial derivatives of the symbolic representation of the arc-length parameterization are introduced as follows.

$$
\begin{aligned}
Du(t) &= 2x'_p(t)\,\frac{fw'}{w^{2}}, & Du'(t) &= -2x'_p(t)\,\frac{f}{w},\\
Dv(t) &= 2y'_p(t)\,\frac{fw'}{w^{2}}, & Dv'(t) &= -2y'_p(t)\,\frac{f}{w},\\
Dw(t) &= 2x'_p(t)\,\frac{u'w^{2} - 2w(u'w - uw')}{w^{4}} - 2y'_p(t)\,\frac{v'w^{2} - 2w(v'w - vw')}{w^{4}},\\
Dw'(t) &= 2x'_p(t)\,\frac{u}{w^{2}} + 2y'_p(t)\,\frac{v}{w^{2}}.
\end{aligned}
$$

Each of the derivatives $A_{j}$, $j = 1,\dots,18$, has the common Simpson-rule form

$$
A_{j} = \frac{t_{2}-t_{1}}{6}\left[\tfrac{1}{2}f(t_{1})^{-\frac{1}{2}}g_{j}(t_{1})
 + 2f(t_{m})^{-\frac{1}{2}}g_{j}(t_{m})
 + \tfrac{1}{2}f(t_{2})^{-\frac{1}{2}}g_{j}(t_{2})\right],
\qquad t_{m} = \frac{t_{1}+t_{2}}{2},
$$

with the kernels $g_{j}$ given by

$$
\begin{aligned}
g_{1}(t) &= -Du(t)r_{11} - Dv(t)r_{21} - Dw(t)r_{31},\\
g_{2}(t) &= -Du(t)r_{12} - Dv(t)r_{22} - Dw(t)r_{32},\\
g_{3}(t) &= -Du(t)r_{13} - Dv(t)r_{23} - Dw(t)r_{33},\\
g_{4}(t) &= Du(t)\{r_{12}(Z_i(t)-Z_C) - r_{13}(Y_i(t)-Y_C)\} + Du'(t)\{r_{12}Z'_i(t) - r_{13}Y'_i(t)\}\\
&\quad + Dv(t)\{r_{22}(Z_i(t)-Z_C) - r_{23}(Y_i(t)-Y_C)\} + Dv'(t)\{r_{22}Z'_i(t) - r_{23}Y'_i(t)\}\\
&\quad + Dw(t)\{r_{32}(Z_i(t)-Z_C) - r_{33}(Y_i(t)-Y_C)\} + Dw'(t)\{r_{32}Z'_i(t) - r_{33}Y'_i(t)\},\\
g_{5}(t) &= Du(t)\{s_{11}(X_i(t)-X_C) + s_{12}(Y_i(t)-Y_C) + s_{13}(Z_i(t)-Z_C)\}
 + Du'(t)\{s_{11}X'_i(t) + s_{12}Y'_i(t) + s_{13}Z'_i(t)\}\\
&\quad + Dv(t)\{s_{21}(X_i(t)-X_C) + s_{22}(Y_i(t)-Y_C) + s_{23}(Z_i(t)-Z_C)\}
 + Dv'(t)\{s_{21}X'_i(t) + s_{22}Y'_i(t) + s_{23}Z'_i(t)\}\\
&\quad + Dw(t)\{s_{31}(X_i(t)-X_C) + s_{32}(Y_i(t)-Y_C) + s_{33}(Z_i(t)-Z_C)\}
 + Dw'(t)\{s_{31}X'_i(t) + s_{32}Y'_i(t) + s_{33}Z'_i(t)\},\\
g_{6}(t) &= Du(t)\{r_{21}(X_i(t)-X_C) + r_{22}(Y_i(t)-Y_C) + r_{23}(Z_i(t)-Z_C)\}
 + Du'(t)\{r_{21}X'_i(t) + r_{22}Y'_i(t) + r_{23}Z'_i(t)\}\\
&\quad - Dv(t)\{r_{11}(X_i(t)-X_C) + r_{12}(Y_i(t)-Y_C) + r_{13}(Z_i(t)-Z_C)\}
 - Dv'(t)\{r_{11}X'_i(t) + r_{12}Y'_i(t) + r_{13}Z'_i(t)\},\\
g_{7}(t) &= Du(t)r_{11} + Dv(t)r_{21} + Dw(t)r_{31},\\
g_{8}(t) &= g_{7}(t)\,t + Du'(t)r_{11} + Dv'(t)r_{21} + Dw'(t)r_{31},\\
g_{9}(t) &= g_{7}(t)\,t^{2} + 2t\{Du'(t)r_{11} + Dv'(t)r_{21} + Dw'(t)r_{31}\},\\
g_{10}(t) &= g_{7}(t)\,t^{3} + 3t^{2}\{Du'(t)r_{11} + Dv'(t)r_{21} + Dw'(t)r_{31}\},
\end{aligned}
$$

and $g_{11},\dots,g_{14}$ (respectively $g_{15},\dots,g_{18}$) defined as $g_{7},\dots,g_{10}$ with $(r_{12}, r_{22}, r_{32})$ (respectively $(r_{13}, r_{23}, r_{33})$) in place of $(r_{11}, r_{21}, r_{31})$. The derivatives with respect to the two location parameters involve only the corresponding end point and the midpoint:

$$
\begin{aligned}
A_{19} &= \frac{t_{2}-t_{1}}{6}\Big[\tfrac{1}{2}f(t_{1})^{-\frac{1}{2}}
\{Du(t_{1})r_{11}X'_i(t_{1}) + Dv(t_{1})r_{12}Y'_i(t_{1}) + Dw(t_{1})r_{13}Z'_i(t_{1})\\
&\qquad + Du'(t_{1})r_{11}X''_i(t_{1}) + Dv'(t_{1})r_{23}Y''_i(t_{1}) + Dw'(t_{1})r_{33}Z''_i(t_{1})\}\\
&\quad + 2f(t_{m})^{-\frac{1}{2}}
\{Du(t_{m})r_{11}X'_i(t_{m}) + Dv(t_{m})r_{23}Y'_i(t_{m}) + Dw(t_{m})r_{33}Z'_i(t_{m})\\
&\qquad + Du'(t_{m})r_{13}X''_i(t_{m}) + Dv'(t_{m})r_{23}Y''_i(t_{m}) + Dw'(t_{m})r_{33}Z''_i(t_{m})\}\Big],
\end{aligned}
$$

and $A_{20}$ is the same expression with $t_{1}$ replaced by $t_{2}$.

A.3 Derivation of the tangent of a spline

The partial derivatives of the symbolic representation of the tangent of the spline between image and object space are illustrated as follows. To shorten the expressions let

$$
D_{1} = \frac{v'w - w'v}{(u'w - w'u)^{2}},\quad
D_{2} = \frac{1}{u'w - w'u},\quad
D_{3} = \frac{v'(u'w - w'u) - u'(v'w - vw')}{(u'w - w'u)^{2}},\quad
D_{4} = \frac{v(u'w - w'u) - u(v'w - vw')}{(u'w - w'u)^{2}}.
$$

Then

$$
\begin{aligned}
L_{1} &= -w'D_{1}r_{11} + w'D_{2}r_{21} - \frac{w'(u'v - v'u)}{(u'w - w'u)^{2}}r_{31},\\
L_{2} &= -w'D_{1}r_{12} + w'D_{2}r_{22} - \frac{w'(u'v - v'u)}{(u'w - w'u)^{2}}r_{32},\\
L_{3} &= -w'D_{1}r_{13} + w'D_{2}r_{23} - \frac{w'(u'v - v'u)}{(u'w - w'u)^{2}}r_{33},\\
L_{4} &= D_{1}\{w'[r_{12}(Z_i(t)-Z_C) - r_{13}(Y_i(t)-Y_C)] - w[r_{12}Z'_i(t) - r_{13}Y'_i(t)]\}\\
&\quad - D_{2}\{w'[r_{22}(Z_i(t)-Z_C) - r_{23}(Y_i(t)-Y_C)] - w[r_{22}Z'_i(t) - r_{23}Y'_i(t)]\}\\
&\quad + D_{2}\Big\{\Big[v' - \frac{u'(v'w - vw')}{u'w - w'u}\Big][r_{32}(Z_i(t)-Z_C) - r_{33}(Y_i(t)-Y_C)]
 - \Big[v - \frac{u(v'w - vw')}{u'w - w'u}\Big][r_{32}Z'_i(t) - r_{33}Y'_i(t)]\Big\},\\
L_{5} &= D_{1}\{w'[s_{11}(X_i(t)-X_C) + s_{12}(Y_i(t)-Y_C) + s_{13}(Z_i(t)-Z_C)] - w[s_{11}X'_i(t) + s_{12}Y'_i(t) + s_{13}Z'_i(t)]\}\\
&\quad - D_{2}\{w'[s_{21}(X_i(t)-X_C) + s_{22}(Y_i(t)-Y_C) + s_{23}(Z_i(t)-Z_C)] - w[s_{21}X'_i(t) + s_{22}Y'_i(t) + s_{23}Z'_i(t)]\}\\
&\quad + D_{2}\Big\{\Big[v' - \frac{u'(v'w - vw')}{u'w - w'u}\Big][s_{31}(X_i(t)-X_C) + s_{32}(Y_i(t)-Y_C) + s_{33}(Z_i(t)-Z_C)]\\
&\qquad\qquad - \Big[v - \frac{u(v'w - vw')}{u'w - w'u}\Big][s_{31}X'_i(t) + s_{32}Y'_i(t) + s_{33}Z'_i(t)]\Big\},\\
L_{6} &= D_{1}\{w'[r_{21}(X_i(t)-X_C) + r_{22}(Y_i(t)-Y_C) + r_{23}(Z_i(t)-Z_C)] - w[r_{21}X'_i(t) + r_{22}Y'_i(t) + r_{23}Z'_i(t)]\}\\
&\quad + D_{2}\{w'[r_{11}(X_i(t)-X_C) + r_{12}(Y_i(t)-Y_C) + r_{13}(Z_i(t)-Z_C)] - w[r_{11}X'_i(t) + r_{12}Y'_i(t) + r_{13}Z'_i(t)]\},\\
L_{7} &= w'D_{1}r_{11} - w'D_{2}r_{21} + D_{3}r_{31},\\
L_{8} &= D_{1}(w'r_{11}t - wr_{11}) - D_{2}(w'r_{21}t - wr_{21}) + D_{3}r_{31}t - D_{4}r_{31},\\
L_{9} &= D_{1}(w'r_{11}t^{2} - 2wr_{11}t) - D_{2}(w'r_{21}t^{2} - 2wr_{21}t) + D_{3}r_{31}t^{2} - 2D_{4}r_{31}t,\\
L_{10} &= D_{1}(w'r_{11}t^{3} - 3wr_{11}t^{2}) - D_{2}(w'r_{21}t^{3} - 3wr_{21}t^{2}) + D_{3}r_{31}t^{3} - 3D_{4}r_{31}t^{2},
\end{aligned}
$$

with $L_{11},\dots,L_{14}$ (respectively $L_{15},\dots,L_{18}$) obtained from $L_{7},\dots,L_{10}$ by replacing $(r_{11}, r_{21}, r_{31})$ with $(r_{12}, r_{22}, r_{32})$ (respectively $(r_{13}, r_{23}, r_{33})$); and finally

$$
\begin{aligned}
L_{19} &= D_{1}\{w'[r_{11}X'_i(t) + r_{12}Y'_i(t) + r_{13}Z'_i(t)] - w[r_{11}X''_i(t) + r_{12}Y''_i(t) + r_{13}Z''_i(t)]\}\\
&\quad - D_{2}\{w'[r_{21}X'_i(t) + r_{22}Y'_i(t) + r_{23}Z'_i(t)] - w[r_{21}X''_i(t) + r_{22}Y''_i(t) + r_{23}Z''_i(t)]\}\\
&\quad + D_{3}\,w'[r_{31}X'_i(t) + r_{32}Y'_i(t) + r_{33}Z'_i(t)] - D_{4}\,w[r_{31}X''_i(t) + r_{32}Y''_i(t) + r_{33}Z''_i(t)].
\end{aligned}
$$

