Metrics of curves in shape optimization and analysis - Andrea Carlo ...

Since $h$ is arbitrary, using the de la Vallée-Poussin Lemma, we obtain that
$$ f = c - v\,, \qquad \lambda L^2 D_s f = D_s c\,\langle (c - v), (c - \bar c)\rangle + \alpha\,, \tag{10.25} $$
where the constant $\alpha$ is the unique $\alpha \in \mathbb{R}^n$ such that the rightmost term is in $D_c M$. In conclusion we obtain (10.24).

We similarly compute the gradient of the active contour energy.

Proposition 10.21 (Gradient of geodesic active contour model) We consider once again the geodesic active contour model [7, 27] (that was presented in Section 2.2), where the energy is
$$ E(c) = \int_c \phi(c(s))\,ds $$
with $\phi : \mathbb{R}^2 \to \mathbb{R}^+$ appropriately designed. The gradient with respect to $\tilde H^1$ is
$$ \nabla_{\tilde H^1} E(c) = L\,\operatorname{avg}_c(\nabla\phi(c)) - \frac{1}{\lambda L}\,P_c \Pi_c(\nabla\phi(c)) + \frac{1}{\lambda L}\,P_c \Pi_c(\phi(c) D_s c)\,. \tag{10.26} $$

Proof. Let us note^23 (recalling eqn. (2.3)) that
$$ \nabla_{H^0} E = L \nabla\phi(c) - L D_s(\phi(c) D_s c)\,, \tag{10.27} $$
so the above equation (10.26) can be obtained by the relation in Theorem 10.17.

Alternatively, we know that
$$ DE(c; h) = \int_c L \nabla\phi(c) \cdot h + \phi(c)\,(D_s h \cdot D_s c)\,ds\,; $$
let $f = \nabla_{\tilde H^1} E(c)$ be the $\tilde H^1$ gradient of $E$; the equality
$$ DE(c; h) = \langle h, f \rangle_{\tilde H^1} \quad \forall h $$
becomes
$$ \int_c \big(L \nabla\phi(c) - \operatorname{avg}_c(f)\big) \cdot h + D_s h \cdot \big(L \phi(c) D_s c - \lambda L^2 D_s f\big)\,ds = 0\,; $$
imitating the proof of the previous proposition, we obtain (10.26).

We remark a few things.
- Note that the first term in the formula (10.26) is in $\mathbb{R}^n$ while the other two are in $D_c M$.
- Using the kernel $\tilde K_\lambda$ that was defined in (10.10), we can rewrite
$$ \nabla_{\tilde H^1} E(c) = L\,\tilde K_\lambda \star (\nabla\phi(c)) + \frac{1}{\lambda L}\,P_c \Pi_c(\phi(c) D_s c)\,. \tag{10.28} $$
- The formula for the gradient does not require that the curve be twice differentiable: we will use this fact to prove an existence result for the gradient flow, in Theorem 10.31.

^23 We use the definition (10.4) of $H^0$.
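Equation (10.27) gives the $H^0$ gradient in a form that is straightforward to discretize. The following is a minimal numerical sketch, not from the text: it assumes a closed polygonal curve sampled uniformly in arc length, approximates the arc-length derivative $D_s$ by periodic centered differences, and takes the potential `phi` and its gradient `grad_phi` as caller-supplied functions (hypothetical names).

```python
import numpy as np

def h0_gradient(c, phi, grad_phi):
    """Discrete H^0 gradient of E(c) = \int_c phi(c(s)) ds for a closed curve,
    following eqn. (10.27): grad = L grad_phi(c) - L D_s(phi(c) D_s c).

    c: (N, 2) array of vertices, assumed uniformly spaced in arc length.
    phi: maps (N, 2) -> (N,);  grad_phi: maps (N, 2) -> (N, 2).
    """
    # approximate uniform arc-length step and total length L
    seg = np.roll(c, -1, axis=0) - c
    ds = np.linalg.norm(seg, axis=1).mean()
    L = ds * len(c)

    # periodic centered-difference approximation of D_s
    def Ds(u):
        return (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2.0 * ds)

    Dsc = Ds(c)                      # approximate unit tangent
    phic = phi(c)[:, None]           # phi evaluated along the curve
    return L * grad_phi(c) - L * Ds(phic * Dsc)
```

As a sanity check: on the unit circle with $\phi(x) = |x|^2/2$ one has $\phi(c) \equiv 1/2$ and $D_s^2 c = -c$, so (10.27) reduces to the closed form $\nabla_{H^0} E = \tfrac{3}{2} L\, c$, which the discretization reproduces to second order in the grid spacing.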
