v2010.10.26 - Convex Optimization

direction matrix W

Denote by Z⋆ an optimal composite matrix from semidefinite program (752). Then for Z⋆ ∈ S^{N+n} whose eigenvalues λ(Z⋆) ∈ R^{N+n} are arranged in nonincreasing order, (Ky Fan)

\[
\sum_{i=n+1}^{N+n} \lambda(Z^\star)_i
\;=\;
\begin{array}{cl}
\displaystyle\operatorname*{minimize}_{W\,\in\,\mathbb{S}^{N+n}} & \langle Z^\star ,\, W \rangle \\
\text{subject to} & 0 \preceq W \preceq I \\
& \operatorname{tr} W = N
\end{array}
\tag{1700a}
\]

which has an optimal solution that is known in closed form (p.658, §4.4.1.1). This eigenvalue sum is zero when Z⋆ has rank n or less.

Foreknowledge of optimal Z⋆, to make possible this search for W, implies iteration; id est, semidefinite program (752) is solved for Z⋆ initializing W = I or W = 0. Once found, Z⋆ becomes constant in semidefinite program (1700a) where a new normal direction W is found as its optimal solution. Then this cycle (752) (1700a) iterates until convergence. When rank Z⋆ = n, solution via this convex iteration solves sensor-network localization problem (745) and its equivalent (750).

numerical solution

In all examples to follow, number of anchors

\[
m = \sqrt{N}
\tag{754}
\]

equals square root of cardinality N of list X. Indices set I identifying all measurable distances • is ascertained from connectivity matrix (746), (747), (748), or (749). We solve iteration (752) (1700a) in dimension n = 2 for each respective example illustrated in Figure 86 through Figure 89.

In presence of negligible noise, true position is reliably localized for every standardized example; noteworthy insofar as each example represents an incomplete graph. This implies that the set of all optimal solutions having lowest rank must be small.

To make the examples interesting and consistent with previous work, we randomize each range of distance-square that bounds ⟨G, (e_i − e_j)(e_i − e_j)^T⟩ in (752); id est, for each and every (i, j) ∈ I

\[
\begin{aligned}
\overline{d}_{ij} &= d_{ij}\,(1 + \sqrt{3}\,\eta\,\chi_{l})^{2} \\
\underline{d}_{ij} &= d_{ij}\,(1 - \sqrt{3}\,\eta\,\chi_{l+1})^{2}
\end{aligned}
\tag{755}
\]

where η = 0.1 is a constant noise factor, χ_l is the l-th sample of a noise process realization uniformly distributed in the interval (0, 1) like rand(1) from Matlab, and d_ij is actual distance-square from i-th to j-th sensor. Because of distinct function calls to rand(), each range of distance-square [ \underline{d}_{ij}, \overline{d}_{ij} ] is not necessarily centered on actual distance-square d_ij. Unit stochastic variance is provided by factor √3.
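The bound randomization (755) is simple to realize numerically. The fragment below is a minimal Matlab sketch, not drawn from the text; it assumes a vector dsq holding actual distance-square d_ij for each pair in I:

    eta = 0.1;                                % constant noise factor
    L   = numel(dsq);                         % number of measurable pairs in I
    chi = rand(2*L, 1);                       % noise samples, uniform on (0,1) like rand(1)
    dsq_up = dsq.*(1 + sqrt(3)*eta*chi(1:2:end)).^2;   % upper bounds, first line of (755)
    dsq_lo = dsq.*(1 - sqrt(3)*eta*chi(2:2:end)).^2;   % lower bounds, second line of (755)

Likewise, the cycle (752) (1700a) admits a short skeleton. The sketch below is an illustration under stated assumptions, not the author's code: solve752() stands for a hypothetical routine returning an optimal composite matrix Z⋆ of (752) for a given direction matrix W (via any general-purpose semidefinite program solver); N, n, and maxIter are assumed defined; and the direction matrix is taken as the projector onto the eigenspace of the N smallest eigenvalues of Z⋆, the closed-form optimal solution of (1700a) cited above.

    W = zeros(N+n);                    % initialize W = 0  (W = eye(N+n) also admissible)
    for k = 1:maxIter
        Z = solve752(W);               % hypothetical: solve (752) with direction matrix W held constant
        [Q, D]   = eig((Z + Z')/2);    % eigendecomposition of symmetric Z
        [lam, p] = sort(diag(D), 'descend');   % eigenvalues arranged in nonincreasing order
        U = Q(:, p(n+1:end));          % eigenvectors of the N smallest eigenvalues
        W = U*U';                      % closed-form optimal solution of (1700a)
        if sum(lam(n+1:end)) < 1e-9    % eigenvalue sum vanishes  <=>  rank Z <= n
            break                      % convex iteration has converged
        end
    end

The stopping test mirrors the statement above: the eigenvalue sum in (1700a) is zero precisely when Z⋆ has rank n or less, at which point the localization problem is solved.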

Figure 91 through Figure 94 each illustrate one realization of numerical solution to the standardized lattice problems posed by Figure 86 through Figure 89 respectively. Exact localization, by any method, is impossible because of measurement noise. Certainly, by inspection of their published graphical data, our results are better than those of Carter & Jin. (Figure 95, 96, 97) Obviously our solutions do not suffer from those compaction-type errors (clustering of localized sensors) exhibited by Biswas' graphical results for the same noise factor η.

localization example conclusion

Solution to this sensor-network localization problem became apparent by understanding geometry of optimization. Trace of a matrix, to a student of linear algebra, is perhaps a sum of eigenvalues. But to us, trace represents the normal I to some hyperplane in Euclidean vector space. (Figure 84)

Our solutions are globally optimal, requiring: 1) no centralized-gradient postprocessing heuristic refinement as in [44] because there is effectively no relaxation of (750) at optimality, 2) no implicit postprojection on rank-2 positive semidefinite matrices induced by nonzero G − XᵀX denoting suboptimality as occurs in [45] [46] [47] [72] [216] [224]; indeed, G⋆ = X⋆ᵀX⋆ by convex iteration.

Numerical solution to noisy problems, containing sensor variables well in excess of 100, becomes difficult via the holistic semidefinite program we proposed. When problem size is within reach of contemporary general-purpose semidefinite program solvers, then the convex iteration we presented inherently overcomes limitations of Carter & Jin with respect to both noise performance and ability to localize in any desired affine dimension.

The legacy of Carter, Jin, Saunders, & Ye [72] is a sobering demonstration of the need for more efficient methods for solution of semidefinite programs, while that of So & Ye [320] forever bonds distance geometry to semidefinite programming. Elegance of our semidefinite problem statement (752), for constraining affine dimension of sensor-network localization, should
